Planet Fellowship (en)

Sunday, 20 April 2014

FreeBSD Port of luminance-hdr updated

emergency exit | 00:24, Sunday, 20 April 2014

I finally managed to update graphics/luminance to 2.3.1. Thanks to lme@freebsd and pawel@freebsd for feedback and commit.

This is the last version that runs with Qt4. As we recently got Qt5 into the ports, I will try to update the port again soon. I also have some new ports coming and hope to get around to fixing/updating my other ones. Stay tuned.

I am now making heavy use of RedPorts. In case you don’t know the service, I really recommend it.

Saturday, 19 April 2014

Another new Free Software machine: the Acer C720 Chromebook

the_unconventional's blog » English | 22:59, Saturday, 19 April 2014

It’s been nearly a year since I bought my last main laptop. Don’t worry, I’m still very happy with it. But when I’m travelling, it isn’t an ideal machine. It’s too large to fit in my regular bag, it’s too heavy to carry comfortably on my bike, and the battery life of 3:30 hours isn’t exactly spectacular, so I always have to bring the charger, cranking up the weight and volume even more.

That’s why I decided that I wanted to get a secondary, small and light “away laptop”, as opposed to the other one I’m planning to keep as a “home laptop”. My demands weren’t exactly large. I just wanted to be able to check my e-mails and calendars and send some XMPP messages while away. Basically what other people use phones for, which I luckily don’t have.

Obviously, I’m not interested in buying anything that comes with a Microsoft OS pre-installed, especially now that boot loaders are becoming more locked down every day. Another option would have been to bring my Lemote YeeLoong, but that thing is simply too slow to use a GUI on, aside from the fact that its battery life is even worse than my other laptop’s.

I’ve been reading a lot of good things about the latest generation of Chromebooks, especially now that Google has enabled a SeaBIOS stack within their Coreboot images. I decided to go for the Acer Aspire C720, which meets all my demands and only costs €269.

I went to pick it up at the post office the next day, and started with a visual inspection. After all, I was going to void the warranty right away, so I’d have to make sure that I wouldn’t be needing it.

The Chrome logo isn’t exactly something I was looking for, but luckily I have a huge load of Free Software-related stickers lying around.

Quite uncommon for low-priced machines, the C720 comes with a matte 11,6″ LCD panel.

Unfortunately, I had to accept Google’s terms and conditions in order to reach a Bash shell. I don’t really like the way that’s engineered, and I’m wondering if there would be a workaround for it. Perhaps I could have tried to reach a tty, but I completely forgot about testing that. Sorry.

After the visual inspection was completed, I was confident enough to open it up. There was one of those lame “warranty void if removed” stickers on the bottom, which I had to peel off. I don’t think that my warranty is actually void though, because I remember something about the EU ruling those stickers invalid.

Anyway, it’s only a matter of removing a bunch of screws, and then using a credit card shaped object to click the case open. It can be a bit fiddly, and I would recommend that you start near the left USB port and slowly move your way around from there. There’s no need to use a lot of force. Just wiggle until you hear and feel the bottom plate coming off.

Once you’ve done that, you’ll have to remove the ROM chip’s write-protect screw, which is number 7 in this picture. Once it’s out, place the bottom back, but do not close it. You only have to put back one screw now: the battery lock screw, which is number 6, or the second from the right on the second row. Then gently turn the machine around (remember it’s not completely assembled) and power it up again.

After the machine is powered up, hit Ctrl + D to enter Developer mode. Then confirm to wipe the SSD contents by pressing Enter.

The machine will reboot and wipe some Chrome OS partitions.

After the reboot is complete, you’ll get a warning about Chrome OS being damaged. Just hit Ctrl + D again to boot into developer mode. Then set up networking and select the guest account. Once the Chrome browser pops up, press Ctrl + Alt + T to open a terminal. Then run the following commands:

:~$ shell
:~$ sudo bash
:~# crossystem dev_boot_usb=1 dev_boot_legacy=1
:~# set_gbb_flags.sh 0x489

Once set_gbb_flags.sh has finished flashing the ROM chip (it uses Flashrom internally), turn off the Chromebook by holding the power button, remove the battery lock screw, take off the bottom again, and put the write-protect screw back in place. Now you may click the bottom plate back into place and put all the screws back where they belong. (They’re all identical, so it doesn’t really matter.)

As you can see, the Chromebook now boots to SeaBIOS. Hit Esc to enter the boot menu, and you’ll be able to boot any regular GNU/Linux ISO from any regular USB flash drive. Proceed with the installation of your favorite distribution as you would on any x86 computer.

You can simply wipe the entire partition table and create a new MBR or GPT. Unlike older Chromebooks, there is no need to keep anything from Chrome OS around. Remember, it’s Coreboot + SeaBIOS we’re using here. Once you’re done with the installation, remove the USB flash drive, reboot, and say hello to GRUB.

 

Some potential issues

The Debian installer was unable to set up a network connection due to the WiFi chip failing to exchange keys with my access points. Don’t worry about it. You can just finish the installation without networking, and then set up something in /etc/network/interfaces after the reboot. The WiFi will work within “regular” Debian; just not in the installer. (Don’t forget to edit /etc/hostname and /etc/hosts to your desires, as this step is skipped when you install without networking.)
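
For reference, here is a minimal sketch of such an /etc/network/interfaces entry, assuming a WPA2 network, an interface named wlan0, and the wpasupplicant package installed (the SSID and passphrase are placeholders):

# /etc/network/interfaces (sketch)
auto wlan0
iface wlan0 inet dhcp
    wpa-ssid "MyAccessPoint"
    wpa-psk  "MyPassphrase"

After saving, sudo ifup wlan0 should bring the interface up.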

One other thing that really had me going for a while was unexplainable GPG signature expirations for Debian’s APT archives. It took me a while to figure out what caused them, but it turned out to be the hardware clock being set to 2052. In case you have the same problem, install the ntp package (don’t worry about the GPG warning) and wait a few seconds for the time to be corrected. If you want to be sure that the hardware clock is synchronized with the system time, run sudo hwclock --systohc.
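
In short, the same two commands collected as a block:

sudo apt-get install ntp    # NTP corrects the system time within a few seconds
sudo hwclock --systohc      # write the corrected system time to the hardware clock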

Due to the incorrect hardware clock, the ext4 filesystems are also inaccurately timestamped, which causes the system to go into emergency mode on the next boot (due to the last mount time being in the future). This can easily be fixed by entering your root password and remounting your root partition as read-only. In my case, that’s /dev/sda2, so run sudo mount -o remount,ro /dev/sda2 / and top it off with sudo fsck /dev/sda2. Fix the incorrect timestamp by pressing y, and then do the same thing for your /home partition, which is /dev/sda3 for me. The /home partition should not be mounted at all, so you only have to run sudo fsck /dev/sda3 without (re)mounting anything.
This will happen every time you remove the battery power (i.e. remove the bottom plate), so if you’re planning on regularly opening your machine, you might want to memorize these things.
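
Collected as one sequence (the partition layout here matches my install, with /dev/sda2 as / and /dev/sda3 as /home; adjust to yours):

# from the emergency shell, after entering the root password:
sudo mount -o remount,ro /dev/sda2 /   # root must be read-only before checking
sudo fsck /dev/sda2                    # answer 'y' to fix the future timestamps
sudo fsck /dev/sda3                    # /home is not mounted, so check it directly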

 

Getting the touchpad to work

The C720 features a Cypress APA touchpad, which does not have mainline kernel support yet. It’s only supported by the Chromium OS kernels, whereas I’m running Debian’s stock 3.13 kernel at the time of writing. Chromium developer Benson Leung has written patches for the required kernel modules, but all the scripts that implement them seem to be targeted at either Ubuntu or Arch Linux, neither of which works on Debian.

I decided to figure something out myself, and I finally managed to get everything to compile. You can download this script, save it as debian-peppy-touchpad.sh, make it executable, and run it as a regular user. You’ll need Debian jessie for it to work, but I doubt that wheezy would even boot up on the C720 anyway.

BEWARE: If you have also enabled the unstable and/or experimental repositories (for APT pinning or whatever), make sure to disable source downloads from unstable/experimental by commenting out the deb-src lines in /etc/apt/sources.list.
The linux package automatically links to the latest available version, which would be 3.14-trunk from experimental. You need the same kernel source as the kernel you’re running, which is most likely the one from testing. I can’t find any other way for APT to honour that; not even by trying the usual suspects such as -t testing or linux/testing. If anyone has any suggestions, I’m happy to hear them. Otherwise, just make sure your sources.list is in order.

Once you’ve compiled the patched kernel modules and rebooted your machine, you’ll notice that you have a working touchpad. It’s still kind of flimsy, though, because it’s just not very well built. However, you can pass some arguments to the X server to make it a bit more bearable:

Create the file /etc/X11/xorg.conf.d/50-cros-touchpad.conf and paste the following lines into it:

Section "InputClass"
    Identifier      "touchpad peppy cyapa"
    MatchIsTouchpad "on"
    MatchDevicePath "/dev/input/event*"
    MatchProduct    "cyapa"
    Option          "FingerLow" "10"
    Option          "FingerHigh" "10"
EndSection

Log out and back in, and your touchpad should work reasonably well from now on. Just beware that every kernel update requires you to run the debian-peppy-touchpad.sh script again, at least until the patches have been merged upstream. Perhaps it could be run automatically as a kernel postinstall script; I’d have to look into that (see the sketch below).
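
A minimal sketch of what such a hook could look like, assuming the script is stored in /usr/local/sbin and can cope with being invoked for a kernel other than the one currently running (which it may well not do yet); the hook name and path are hypothetical:

#!/bin/sh
# /etc/kernel/postinst.d/zz-peppy-touchpad (hypothetical)
# Debian runs every executable in this directory after installing
# a kernel package, passing the new kernel version as the first argument.
/usr/local/sbin/debian-peppy-touchpad.sh "$@" || true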

 

Fixing suspend

By default, suspending the C720 causes quite some issues, but they’re not very hard to fix. First, create the file /etc/modprobe.d/tpm_tis.conf and paste options tpm_tis force=1 into it.
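
Or, equivalently, from the shell:

echo "options tpm_tis force=1" | sudo tee /etc/modprobe.d/tpm_tis.conf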

After that, you’ll have to add some things to /etc/rc.local:

# give sysfs/procfs a moment to settle at boot
sleep 2
# set an initial panel brightness
echo 100 > /sys/class/backlight/intel_backlight/brightness
# writing a device name to /proc/acpi/wakeup toggles its wakeup flag;
# these toggles stop the devices from disturbing suspend/resume
echo EHCI > /proc/acpi/wakeup
echo HDEF > /proc/acpi/wakeup
echo XHCI > /proc/acpi/wakeup
echo LID0 > /proc/acpi/wakeup
echo TPAD > /proc/acpi/wakeup
echo TSCR > /proc/acpi/wakeup
exit 0

Run sudo chmod +x /etc/rc.local afterwards.

Finally, you’ll have to create the file /etc/pm/sleep.d/05_Sound, and paste the following lines into it:

#!/bin/sh
# File: "/etc/pm/sleep.d/05_Sound".
case "${1}" in
hibernate|suspend)
# Unbind ehci to prevent errors on suspend
echo -n "0000:00:1d.0" | tee /sys/bus/pci/drivers/ehci-pci/unbind
# Unbind snd_hda_intel for sound
echo -n "0000:00:1b.0" | tee /sys/bus/pci/drivers/snd_hda_intel/unbind
echo -n "0000:00:03.0" | tee /sys/bus/pci/drivers/snd_hda_intel/unbind
;;
resume|thaw)
# Rebind ehci after resume
echo -n "0000:00:1d.0" | tee /sys/bus/pci/drivers/ehci-pci/bind
# Rebind snd_hda_intel for sound
echo -n "0000:00:1b.0" | tee /sys/bus/pci/drivers/snd_hda_intel/bind
echo -n "0000:00:03.0" | tee /sys/bus/pci/drivers/snd_hda_intel/bind
;;
esac

This file also has to be executable, so run sudo chmod +x /etc/pm/sleep.d/05_Sound for it to work.

Then reboot, and suspend should work fine from now on.

 

WiFi issues

Similar to the RAM, the C720’s WiFi module is soldered to the mainboard. It uses an Atheros chip, so there’s no need to install any non-free firmware to /lib/firmware in order to use WiFi. The Bluetooth part does require firmware-atheros to be installed though, but I wasn’t planning on using that anyway.

Unfortunately, the AR9462 chip does not come without issues. Quite a lot of people complain about intermittent latency spikes and/or packet drops, which doesn’t even seem to be limited to GNU/Linux users: I’ve read about MS Windows users observing the same symptoms. The problem mostly appears on 802.11n networks, and mine is no exception.

The sad part is that I have a box full of working mini PCI Express WiFi cards, which I could have easily swapped in if the C720 had a socketed wireless module.

EDIT: I seem to have fixed this problem by entering my access point’s BSSID manually in Network Manager. Apparently, the wireless chip checks for other access points with the same SSID every 120 seconds, causing the latency spikes. The silly thing is that I actually did the same thing on my other computers, due to hijacking concerns. It never crossed my mind that I hadn’t done so on my C720 yet, but now that I have, it solved all my wireless issues.
By the way, I did this in conjunction with disabling hardware decryption by creating /etc/modprobe.d/ath9k.conf and adding options ath9k nohwcrypt=1 to it.
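
To confirm the option is active after a reboot, you can read the module’s parameter back from sysfs:

cat /sys/module/ath9k/parameters/nohwcrypt   # prints 1 when hardware crypto is disabled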

I have pinged my Raspberry Pi from my C720 over a thousand times just for testing purposes, and I never got anything higher than 8ms, with an average of 1,2ms. Seems decent enough for WiFi to me.

 

Remapping the keys

Unlike regular laptops, Chromebooks don’t have any F-keys on the top row. At least, there are no F-keys printed on them, but they are in fact mapped as F1 to F10. The search key is mapped as a Super key, by the way.

The two most interesting things to get to work are the three volume keys and the two brightness keys. The volume keys are easy. Just open Settings > Keyboard > Shortcuts > Sound and Media, and map F8 to Volume mute, F9 to Volume down, and F10 to Volume up.

Mapping the brightness keys can be done in many ways, but I chose to use xbacklight. You’ll have to create a custom shortcut and map F6 to xbacklight -dec 10 and F7 to xbacklight -inc 10.

I also used the GNOME keyboard settings to map F5 as the Print Screen key. I simply changed Print to F5 everywhere in Keyboard > Shortcuts > Screenshots.

The other keys (back, forward, refresh, full screen, et cetera) are a bit trickier. You’ll need xvkbd to create the complex mappings required to use them properly. I used the following list of commands:

Browser back (F1 key)*
xvkbd -xsendevent -text "\[Alt_L]\[Left]"

Browser forward (F2 key)
xvkbd -xsendevent -text "\[Alt_L]\[Right]"

Browser refresh (F3 key)
xvkbd -xsendevent -text "\[F5]"

Browser fullscreen (F4 key)
xvkbd -xsendevent -text "\[F11]"

Press ‘Del’ key (Shift + Backspace)
xvkbd -xsendevent -text "\[Delete]"

Press ‘Home’ key (Ctrl + Arrow Left)
xvkbd -xsendevent -text "\[Home]"

Press ‘End’ key (Ctrl + Arrow Right)
xvkbd -xsendevent -text "\[End]"

Press ‘Page Up’ key (Ctrl + Arrow Up)
xvkbd -xsendevent -text "\[Prior]"

Press ‘Page Down’ key (Ctrl + Arrow Down)
xvkbd -xsendevent -text "\[Next]"

* You’ll notice that you can’t map anything to the Back/F1 key, because it pops up Yelp. Still, you can map the Back command to the Forward key temporarily, fire up dconf-editor, navigate to /org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/ and look for the Back command. Change the binding value to F1, and it will work. You can then properly map the Forward/F2 key to the Forward command, and so on.
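
The same change can also be made from a terminal with dconf; custom0 here is an assumption, so first check which customN entry dconf-editor shows for the Back command:

dconf write /org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/binding "'F1'"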

 

Hardware and privacy

One of the first things I usually do when I buy new computing devices is ripping out the cameras and microphones. Unfortunately, the microphone is soldered directly to the main board (as if it were a capacitor), and the webcam is attached to the display cable. I would have to rip apart the plastic screen completely and cut through cables to take the camera out, and removing the microphone is likely impossible without bricking the main board (as I would have to take a soldering iron to it).
Instead, I decided to put pieces of felt tape over the webcam and the microphone hole. Additionally, I blacklisted the uvcvideo kernel module and disabled PulseAudio’s input profiles.

The build quality of the C720 obviously isn’t great. But what can you expect from a €269 device? The screen has a very slight flicker at certain brightness levels, and the touchpad is.. well.. a POS. And unlike my other laptop, the fan control isn’t very smooth. It’s a classic on/off system, which – in a very quiet room – sounds like some dude turning a leaf blower on and off somewhere very distant. It’s not noisy at all, but it would be less annoying if the sound were constant rather than intermittent. But then again, this is a machine I’m planning to use when I’m *not* in my silent home environment.

Other than that, I don’t have any complaints about the hardware. The keyboard is all right, the screen bezel is tight enough, the USB ports are sturdy enough, there’s barely any heat generated, and the battery life is incredible. You can easily get 7 hours of screen-on time on a machine that really isn’t slow. The only things I’m really missing are an ethernet port (although that’s not really important for a travelling machine) and keyboard LEDs. (The latter due to my addiction to tleds displaying RX/TX activity on my keyboard.)

As always, I’ll add a list of hardware specifications:

Chassis: Acer Aspire C720
Chipset: Intel HM87 (Lynx Point)
BIOS: Coreboot + SeaBIOS
CPU: Intel Celeron 2955U @ 1,40GHz (Haswell)
GPU: Intel HD Graphics GT1 @ 1GHz
RAM: 2GB DDR3 (onboard)
SSD: 16GB Kingston SNS4151
WLAN: Atheros AR9462 (onboard)
Audio: Realtek ALC283
Touchpad: Cypress APA
Display: 11,6″ 16:9 (1366×768) matte with LED backlight
Connectors: 1x USB 3.0, 1x USB 2.0, 1x HDMI, 1x 3,5mm audio

 

And, of course, a list of working/problematic hardware:

USB: xhci and ehci working out of the box
Ethernet: not available
WiFi: enter your BSSID manually and it will work perfectly
Bluetooth: requires non-free microcode, won’t test
Speakers: working out of the box
Headphones: working out of the box
Keyboard: working out of the box
Touchpad: requires compilation of staging drivers, but works fine afterwards
Hotkeys: brightness and volume require remapping, but work fine
DVD drive: not available
Card reader: working out of the box
HDMI video: working out of the box
HDMI audio: working out of the box
VGA output: not available
Webcam: not tested (disabled)
Microphone: not tested (disabled)
Suspend/resume: requires extensive configuration, but works fine afterwards
Hibernate/resume: not tested yet
ACPI: it’s coreboot :)
Sensors: coretemp working out of the box; fan regulation managed by firmware, but could be better

 

Final verdict

If you’re looking for an amazing laptop, the C720 obviously isn’t for you. But then again, you can’t expect any laptop within this price range to be perfect. Obviously, that’s not the reason I bought this machine anyway.
But if you’re looking for a relatively easy way to get a GNU/Linux-friendly, lightweight, portable, sub-€300 laptop, the C720 is probably one of your best choices.

The best feature obviously is the largely free firmware stack. You’ll get SeaBIOS on top of Coreboot, which is all Free Software. The Intel Management Engine, however, is not.
Another great thing is the lack of devices requiring proprietary firmware or microcode, aside from the Bluetooth chip. Atheros WiFi and Intel graphics have always been the best bet for Free Software users, so I’m glad that this machine contains both.
Finally, the very generous battery life is truly amazing, which is exactly what you’d be looking for in a laptop to use on the road.

Naturally, there are also a couple of downsides to the C720. For starters, the touchpad really is a problematic piece of plastic. It’s taking months for the drivers to be merged into the mainline kernel, and from the looks of it, they won’t be available until Linux 3.15 or perhaps even 3.16. So you’ll be stuck compiling your own patched modules for a bit longer. And aside from the driver issues, the touchpad itself just isn’t great. It’s flimsy, not very accurate, and sometimes the buttons get stuck. It’s not really a deal breaker, but you’ll benefit from not being opposed to using a mouse.
The other downside is the fan control, which really could have been designed in a less annoying way. But maybe I’ve been spoiled by my Clevo’s perfectly engineered fan controller, which is probably my other laptop’s best asset.

In the end, the upsides outweigh the downsides, and I’m very satisfied with my C720. It does everything I expect it to do, for a very reasonable price, without having to resort to purchasing a device with proprietary software pre-installed.

I’ve collected the output of some useful analytic commands for your information. At this point, I’ve run lshw, lspci, lsusb, dmesg, dmidecode, and glxinfo. If you’d like to see anything else, please let me know.

Friday, 18 April 2014

Participating in the 1st International Festival for Technoshamanism

agger's Free Software blog | 20:02, Friday, 18 April 2014

This Monday, I’ll be boarding a plane for Brazil in order to attend the First International Festival of Technoshamanism, which will take place from April 23 to April 30 in Arraial d’Ajuda, Bahia.

Which kind of raises the question: What is “technoshamanism”?

It can best be described as an attempt to unite science with religion, and to integrate the worldview of indigenous peoples like the South American Indians with modern technology. It is also about finding a new way for humanity in the era we could call the anthropocene, where not only indigenous people all over the world, but practically speaking all of us are threatened with impending destruction.

In that respect, and in integrating the indigenous worldview, technoshamanism is inspired by the perspectivism introduced by the Brazilian anthropologist Eduardo Viveiros de Castro. This includes an epistemological inversion, where the split between living, conscious human beings and the “dead” Nature inherent in European thought is replaced with a more general view of the world, where animals and things can be considered “living” and “conscious” as well, albeit usually in another way. In that sense, Viveiros’ perspectivism could be considered a formalization and generalization of an Amerindian philosophy.

There’s a more comprehensive explanation available in an article by the Brazilian writer and psychologist Fabiane Borges, which was the basis of her presentation at Transmediale 2014 (in Berlin). I’ve translated this article from Portuguese to English, and it’s now available as a PDF here.

Borges describes Viveiros’ perspectivism as follows:

The difference between the evolutionary and the Amerindian perspective is that the former believes that there is one nature and many cultures, while the latter thinks of it as many natures and one culture. For the Indian, the only culture that exists is human culture. Everything that exists is human. A stone, the moon, a river, a jaguar, the deceased – all of these are human, but they are dressed in different clothes, behave differently and have different views on reality. For the Indians, a meeting of shamans may mean the same thing as that of a congregation of tapirs in a mudhole – each group is performing its own rituals.

Of course, if we delve into the differences between groups, we will find different priorities for each species and a particular creation myth for each of them, but the important thing here is to understand that the human foundation shared by all beings also serves to connect them and keeps them in a state of constant communication. This understanding is very important: behind the nature of a stone lies a human culture which is also the basis for inter-species communication. (…)

The shaman is a kind of diplomat who has the ability to assume several of these points of view. He is able to contact all those different forms; he can change his clothes and visit the points of view of many different beings. There may be a pact between him and those beings, a mutual affinity but also a repulsion. He is able to leave his own point of view behind and see himself from the outside and see the Indians of his tribe from the point of view of the tree or of the birds, the moon, the stars, or any other object or material. This ability means that the shaman has a deeper insight into the nature of things than most Indians, because he has improved this technique by intense training. That is why his madness, his schizophrenia and his perceptual deviation are considered to be wisdom.

Such a worldview, however, doesn’t always match modern society very well. Borges discusses the French sociologist Bruno Latour and his distinction between “humans” and “earthbound”, where the “earthbound” are those who are more bound to our planet and its well-being, while the “humans” are more dedicated to human society, not least its financial aspects:

On one side we have the poor, dirty bums: lazy, retarded, subjectivist infantile hippies, losers, misfits, spiritualists, barbarians. On the other side the urban people, committed to modernity, growth, development, enrichment, security, productivity, objectivity, and expansionism. These opposed camps are, in spite of not being very clearly defined, disputing modes of existence and ways of relating to Earth and to Life itself.

The point here is that in the overall economic management of our Western societies (or of all the world’s societies, if we want to tell the truth), the “earthbound” are losing or being neglected, while the “humans” are dominating; “financial responsibility” dictates constant “growth”, i.e., we must burn down the planet in order to preserve it. But if we want to survive in the long run, we could do worse than to start listening in earnest to the earthbound, or at least to the scientists from the IPCC.

Technoshamanism, by following this thread, becomes a kind of spiritual search for everything for which there is no room in the harsh realities of modern industrial societies. It thus becomes a philosophy of garbage – of all the things we routinely throw away: madness, hallucinations, nonconformity, the compassion for the unemployed and the sick and the poor in general, if and when they are perceived as obstructive to the juggernaut of growth. This means that even though the refuse of society is not necessarily healthy, we are obliged to search for our lost humanity precisely on the garbage heap.

Borges summarizes this position as follows:

This is equivalent to saying that technoshamanism, apart from arising directly from a transversal shamanism, is also dirty and noiseocratic. It belongs in the garbage dump, is unclean. A significant part of what technoshamanism affirms originates in the leftovers of scientific thinking, from precarious laboratories, uncertain knowledge, hacking, electronic garbage, workarounds, cats, from the recycling of materials, from the duplication of already thoroughly tested scientific results.

To this we may add particular questions from social movements related to feminism, to the movements of queers, of blacks, for free software, of the landless, of indigenous people, of river communities, of homeless people and the unemployed among countless others who also perceive through their own noises, their own dissidency, their own garbage.

The last paragraph also tells us what this has to do with free software. In fact, the festival is arranged in close collaboration with the local hacklab Bailux, whose volunteers have for several years now been working with the Indians from the nearby Aldeia Velha, among other things helping them preserve their ancestral knowledge using free software. The Brazilian hacker bus, one of the “crown jewels” of a local hacker movement which is completely dedicated to political change through free software, will be driving down to the festival from São Paulo. So, while the overall political and philosophical ideas behind the festival are not related to free software as such, they have everything to do with a culture where free software is completely ingrained. And that, one might add, is not without its own significance.

Tuesday, 15 April 2014

Optimal Sailfish SDK workflow with QML auto-reloading

Seravo | 08:37, Tuesday, 15 April 2014

SailfishOS IDE open. Just press Ctrl+S to save and see app reloading!

Sailfish is the Linux-based operating system used in Jolla phones. Those who develop apps for Jolla use the Sailfish SDK (software development kit), which is basically a customized version of Qt Creator. Sailfish OS apps are written using the Qt libraries, typically in the C++ programming language. The user interfaces of Sailfish apps, however, are written in a declarative language called QML. QML is a custom markup language that includes a subset of CSS and JavaScript to define style and actions. QML files are not compiled; they are distributed with the app binaries as plain text files and interpreted at run-time.

While SailfishOS IDE (Qt Creator) is probably pretty good for C++ programming with the Qt libraries, and the Sailfish flavour comes nicely bundled with complete Sailfish OS instances as virtual machines (one for building binaries and one for emulating running them), the overall workflow is not very optimal from a QML development point of view. Each time a developer presses the Play button to launch their app, Qt Creator builds the app from scratch, packages it, deploys it on the emulator (or a real device if set up to do so) and only then actually runs the app. After making some changes to the source code, the developer needs to remember to press Stop and then press Play to build, deploy and start the app again. Even on a super fast machine this cycle takes at least 10 seconds.

It would be a much better workflow if relaunching the app after a QML source code change took only one second or even less. Using Entr, it is possible.

Enter Entr

Entr is a multi-platform app which uses the operating system’s file-watching facilities to run a command the instant a watched file changes. To install Entr on a Sailfish OS emulator or device, ssh into the emulator or device, add the community repository chum and install the package entr (note that the chum repository for 1.0.5.16 also exists, but it is empty):

ssh nemo@xxx.xxx.xxx.xxx
ssu ar chum http://repo.merproject.org/obs/sailfishos:/chum:/1.0.4.20/1.0.4.20_armv7hl/
pkcon refresh
pkcon install entr

After this, change to the directory where your app and its QML files reside and run entr:

cd /usr/share/harbour-seravo-news/qml/
find . -name '*.qml' | entr -r /usr/bin/harbour-seravo-news

The find command will make sure all QML files in the current directory or any subdirectory are watched (the quotes around *.qml keep the shell from expanding the pattern itself). Running entr with the parameter -r makes sure it kills the program before running it again. The name of our app in this example is seravo-news (available in the Jolla store if you are interested).

With this, the app will automatically reload whenever any of its QML files change. To edit the files conveniently, mount the app directory on the emulator (or device) into your local system using SSH:

mkdir mountpoint
sshfs nemo@xxx.xxx.xxx.xxx:/usr/share/harbour-seravo-news mountpoint/

Then finally open Qt Creator, point it to the files in the mountpoint directory and start editing. Every time you’ve edited QML files and want to see what the result looks like, simply press Ctrl+S to save and watch the magic! It’s even easier than the press-F5-to-reload cycle web developers are used to, because on the emulator (or device) there is nothing you need to do; just watch while the app auto-restarts directly.

Remember to copy, or directly git commit, your files from the mountpoint directory when you’ve finished writing the QML files.

Entr has been packaged for SailfishOS by Seravo staff. Subscribe to our blog to get notified when we post about how to package and build RPM packages using the Open Build System and submit them for inclusion in SailfishOS Chum repository.

Sunday, 13 April 2014

What the Heartbleed bug revealed to me

gollo's blog » English | 09:55, Sunday, 13 April 2014

Today, I had a really negative experience with the StartSSL certificate authority. This is the first time that this has happened to me. The problem is that it damages StartSSL’s reputation, because it reveals that they value money much higher than security. Security, however, should be a CA’s primary concern. So, what happened?

It all started when I checked which of the certificates that were issued to me by StartSSL were potentially compromised by the OpenSSL Heartbleed bug. Fortunately, there were only a few of them, and those that were possibly affected were all private ones (i.e. no FSFE certificates were affected). Since Hanno Böck stated in an article [1] that he was able to revoke a StartSSL Class 2 certificate for free and a friend of mine confirmed this [2], I immediately went ahead and sent revocation requests for the affected certificates. The first thing I realised was that the StartSSL website was under heavy load, which was not surprising given the severity of the Heartbleed bug and the number of certificates that StartSSL has probably issued. Nonetheless, I managed to send the revocation requests and received confirmation e-mails about them. Of course I stated the CVE number of the Heartbleed bug as the reason for the revocation. Not much later, I was informed that one of the revocation requests had succeeded and I was able to create a new certificate. So far, so good. The trouble started when I – after not having heard back from StartSSL about the other revocation requests for more than a day – contacted StartSSL to ask why those requests had not succeeded. I was advised to check the e-mail address behind the account I had formerly used for paying my fees to StartSSL. I followed the advice, and there they were: three requests for USD 24.90 each with the note “revocation fee”. Quite a surprise for me after what I had read and heard. So I asked back why I had to pay when others didn’t have to. Eddy Nigg’s answer came promptly:

First revocation is not charged in the Class 2 level – successive
revocations carry the fee as usual.

It’s obviously an unfortunate situation, but the revocation fee is
clearly part of the business model otherwise we couldn’t provide the
certificates as freely as we did and would have to charge for every
certificate a fee as all other issuers do.

This was rather shocking for me. This statement clearly reveals that StartSSL only cares about money, not security. A responsible CA would try to revoke as many compromised certificates as possible. It definitely doesn’t help a CA’s reputation if they do not support their customers in a situation where the customer did not make a mistake but was affected by something I’d call “higher power”. The problem I see is the following: there are most probably many people like me who care about security in their private life, but also want everything to be convenient. Unfortunately, CAcert has not managed to become part of the major browsers so far [3], and thus StartSSL is pretty much the only way to get cheap certificates for things like a private blog if you are not particularly rich – which is true both for me and for FSFE, for whose certificates I am responsible too. So my gut feeling is that many people who saw StartSSL as their logical choice will think like me and rather not pay an amount of money that is higher than the fee you have to pay to become Class 2 validated just to revoke a certificate. They will rather stop using the compromised certificate and simply create a new one with a different CN field (which doesn’t cost them extra). The logical result is that there will be loads of possibly compromised certificates out there that are not on StartSSL’s certificate revocation list. Would *you* trust a CA that doesn’t care about such an issue? Well, I don’t.

So what should I make out of all this? First of all, it seems that all the people who distrust commercial CAs have a good point. Second, CAcert becomes more important than ever. I have been a CAcert assurer for years, but made the mistake of going the convenient way for my private blog and such. Knowing quite a few things about CAcert, I can assure you that they *do* care about security. They care about it quite a bit. I will definitely have to increase my participation in this organisation – the problem is that my involvement in FSFE, my job and my family do not leave me with a particularly big amount of spare time. Maybe those of you who read this will also jump on the train for a Free (as in Freedom and free beer) and secure CA. But even with CAcert in the major browsers, the whole CA system should actually be questioned. For the whole certificate chain, a CA will always be a single point of failure, no matter if it is called CAcert, StartSSL, VeriSign or you name it. Maybe it’s time for something new to replace or complement what we have now. For example, I have been pointed to TACK [4] by Hanno, which really sounds interesting.

Ah, and of course the rumors that StartSSL is part of Mozilla’s products solely because they paid for it sound much more reasonable to me than a week ago.

For now, I will stop using StartSSL certificates and will recommend the same to FSFE. I will also remove StartSSL from the trust store in my browser. It seems that others agree with me [5-6]. And of course, I will stop recommending StartSSL immediately.

[1] http://www.golem.de/news/openssl-wichtige-fragen-und-antworten-zu-heartbleed-1404-105740.html
[2] StartSSL usually charges USD 24.90 for certificate revocations, which is understandable because it normally only becomes necessary when the certificate owner makes a mistake and StartSSL certificates are really cheap.
[3] Even the opposite: Debian dropped them from their ca-certificates package, a choice which I am still not sure what to think about.
[4] http://tack.io/
[5] https://www.mirbsd.org/permalinks/wlog-10_e20140409-tg.htm#e20140409-tg_wlog-10
[6] https://bugzilla.mozilla.org/show_bug.cgi?id=994033

Saturday, 12 April 2014

KDE software on Mac OS X

Mario Fux | 13:29, Saturday, 12 April 2014

As I probably already mentioned somewhere there is currently quite some energy going into the work of bringing more and better KDE applications to the Mac platform.

So it’s a perfect time to get involved, help to solve the problems or test our software on another platform and thus make another step in {our,a} plan to konquer the world. And don’t hesitate to do this for other platforms as well and/or come to Randa to help with this.

PS: There is still the little poll open about KDE, families and Randa. Please participate and share your anonymous opinion. Currently seven people filled it in.


2014-04-12

Hugo - FSFE planet | 08:24, Saturday, 12 April 2014

Wall Street Journal: The encryption flaw that punctured the heart of the Internet this week underscores a weakness in Internet security: A good chunk of it is managed by four European coders and a former military consultant in Maryland.

To follow up on the astonished comments I made yesterday: the lack of contributors to the project is baffling. So the whole Internet relied on 10 volunteers and 1 employee, and nobody helped them?

I guess this sort of comes back to one of the essential questions in Free Software: how do you get the users to fund it? For some kinds of software this can be difficult; but in the case of OpenSSL I would have thought it an easy thing, since so many banks and web companies rely on it intensively.

But apparently, they didn’t care at all whether this major piece of security they were using was able to keep up with security standards or not. Considering the number of people involved with the project, I don’t see how it can apply enough scrutiny and effort to ensure the code gets the best security review.

(Now, I have to wonder if the WSJ piece is actually correct in the way it counts the contributors to the project, because it’s fairly possible that lots of companies making use of OpenSSL actually had security experts and developers in-house test the code and send patches and bug reports upstream; a bit like Google and that other security firm did when they found out about Heartbleed…)

According to Brent Simmons, That pretty much wraps it up for C.

The whole heartbleed bug also reminds me that OpenSSL is an example of a bad idea when it comes to licensing.

Friday, 11 April 2014

Get ArkOS up and running on Ubuntu in a virtual machine

Sam Tuke's blog | 23:47, Friday, 11 April 2014

So you’ve heard about the plucky new all-in-one host-it-yourself Linux distribution that’s turning Raspberry Pis into Freedom Boxes? ArkOS is a nifty little Arch Linux spin-off with slick marketing and a granny-friendly interface. Yes, it runs ownCloud, Dovecot, XMPP, Transmission, and many more. Fortunately you don’t need a Raspberry Pi to give it a spin: here’s how to run it on Ubuntu machines.

[Figure: ArkOS project logo, by Sam Tuke, CC BY-SA]

Note: Installing the guest additions package, and including rules for forwarding ports 80 and 443, may not actually be necessary, but hey ho, they can’t hurt either, and may avoid hiccups. And in case you’re wondering, my favourite Fedora laptop is currently out for repair, so my KXStudio machine has stepped into the breach. Guides should return to trusty Fedora shortly.

  1. On your host Ubuntu machine, install dependencies:
    sudo apt-get install python-setuptools libxslt1-dev python-dev libevent-dev
  2. Install the latest version of the Vagrant virtual machine configurator by downloading a .deb package direct from their website (repo versions are too old):
    http://www.vagrantup.com/downloads.html
  3. Download the ArkOS ‘genesis’ image for Vagrant via web browser:
    https://arkos.io/downloads/ (look for "genesis testing and development")
  4. $ cd into the directory containing the fresh image, then run this to generate a config file called ‘Vagrantfile’:
    vagrant init [image filename]
  5. Add these configuration lines to the newly generated Vagrantfile to enable connectivity from within new virtual machines to the wider internet, and to forward the necessary ports to your host machine so you can browse ArkOS-hosted pages from the comfort of your host machine’s web browser. Paste the entirety of this code before the existing final line containing ‘end’ (a complete example Vagrantfile is shown after this list):
    # Allow the client Internet connectivity through the host
    config.vm.provider "virtualbox" do |v|
    v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
    v.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
    end
    # Allow virt host to access pages hosted on the client
    config.vm.network :forwarded_port, host: 1080, guest: 80
    config.vm.network :forwarded_port, host: 8000, guest: 8000
    config.vm.network :forwarded_port, host: 1443, guest: 443
  6. Start Vagrant for the first time, watch it try to set up Genesis:
    vagrant up
  7. Watch out for git errors stating “host unreachable”. If there are none, you’re good. If there are, you have a connectivity problem.
  8. If all is good, ssh into your new machine:
    vagrant ssh
  9. Double check for connectivity; if you get pings back, all is good. If not, you have connectivity problems:
    ping google.com
  10. If all is good, proceed to the next step. If you have connectivity problems, $ exit to return to your host machine, fix your Vagrantfile config options, then restart the virtual machine, auto-reconfigure Genesis successfully, and ssh back in:
    vagrant halt
    vagrant up
    vagrant provision
    vagrant ssh
  11. Install virtual box guest additions and lynx command line browser in the virtual machine:
    sudo pacman -S virtualbox-guest-utils virtualbox-guest-modules virtualbox-guest-dkms lynx
  12. Restart the virtual machine:
    exit
    vagrant halt
    vagrant up
    vagrant ssh
  13. Enter the genesis folder and start up Genesis:
    cd genesis
    sudo python2 genesis-panel
  14. Wait for the additional packages to install, and look for the success message indicating that Genesis is running: “INFO Starting server”.
  15. Now, on your host machine (Ubuntu), visit the below address and you should see your ArkOS control panel staring back at you, ready for play!:
    localhost:8000
  16. Start configuration via the web interface, add database credentials for Genesis, follow the official instructions.
  17. Let us know how you get on in the comments :)
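
For reference, here is a sketch of what the complete Vagrantfile could look like after step 5; the box name arkos-genesis is a placeholder for whatever name you used with vagrant init:

# Vagrantfile (sketch): the generated file plus the lines from step 5
Vagrant.configure("2") do |config|
  config.vm.box = "arkos-genesis"   # placeholder: the name given to 'vagrant init'
  # Allow the client Internet connectivity through the host
  config.vm.provider "virtualbox" do |v|
    v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
    v.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
  end
  # Allow virt host to access pages hosted on the client
  config.vm.network :forwarded_port, host: 1080, guest: 80
  config.vm.network :forwarded_port, host: 8000, guest: 8000
  config.vm.network :forwarded_port, host: 1443, guest: 443
end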


2014-04-11

Hugo - FSFE planet | 08:34, Friday, 11 April 2014

The heartbleed vulnerability is not only a catastrophic security issue, it also spans other interesting topics.

The first obvious lesson, is that the communication around the vulnerability was brilliant marketing.

The other, less satisfying lesson is the question of why the majority of the internet is relying on a very poorly funded project.

The Washington Post published an article that misses the real issue. The heartbleed debacle is not an issue with the fact that OpenSSL is Free Software (the Apple goto fail bug shows it’s even worse when the software is proprietary; all Apple users had to wait several days before a patch was shipped), nor with the fact that the Internet has no single authority (if anything, the OpenSSL library is a single point of failure).

I find it astonishing that OpenSSL is so poorly funded and apparently lacks a governance strategy that includes large stakeholders such as the major websites making use of the library, which instead are essentially all irresponsible free-riders.

The real issue here is one of responsibility.

XKCD has an amazingly simple explanation of how the vulnerability works.

Thursday, 10 April 2014

Speaking about Open Data and Hacktivism

agger's Free Software blog | 21:08, Thursday, 10 April 2014

Hacktivist: Christian Panton

Tonight, I did a talk on “Open Data and Hacktivism” at Open Space Aarhus, the local hackerspace. The event was hosted in collaboration with “Aarhus Data Drinks”, which is a social initiative associated with Open Data Aarhus, the municipal portal for publication of open data.

I covered a number of subjects but focused on hacker ethics and on the inherent contradiction between government interests (in Denmark: innovation and productivity, i.e. mainly business-related values) and a hacktivist agenda (transparency, accountability, freedom).

After a detour around Aaron Swartz and Danish hacktivism, I spoke a lot about the Brazilian “Transparência Hacker” – a bunch of cool people who are completely dedicated to free software. I expect to see their hacker bus on my upcoming trip to Brazil.

Link: Slides (PDF)

Wednesday, 09 April 2014

What Heartbleed means for Free Software

Sam Tuke's blog | 16:18, Wednesday, 09 April 2014

The bug in OpenSSL nicknamed “heartbleed” that was discovered this week has been labelled “catastrophic“, “11 out of 10” for seriousness, and credited with “undoing web encryption“. It reached the height of mainstream press yesterday with dedicated front page articles in The Guardian and The New York Times. This Free Software bug is now known worldwide and is set to remain infamous for years to come. So what does all this mean for the reputation of Free Software?

Software has bugs

Heartbleed is ostensibly a programming oversight, the result of a “missing bounds check“, which basically means the program will accept more data input than it should, because the developers didn’t anticipate or test an appropriate scenario. This sort of bug is extremely common in all kinds of software, and Free Software is not immune. Because Free Software makes its source code available to independent audit and review, such bugs are more likely to be found in important apps like OpenSSL. And because high profile Free Software projects are more apt to use automated code testing tools and bug detecting equipment, such bugs are more likely to be blocked from introduction in the first place.

Heartbleed proves that software has bugs, and that Free Software is no exception.

The fix was fast

The Codenomicon security researchers who discovered the bug notified the OpenSSL team days before making the vulnerability public. The problem was fixed, with updates available for the most important GNU/Linux servers, before the news even broke, as is the custom with security critical issues. The fix was therefore extremely fast. Compare that to the track records of leading proprietary software companies. Apple’s infamous “goto fail” bug waited four days after public disclosure for a fix to appear, and when it did, the patch concealed its real purpose, making no mention of the critical flaw that it addressed. Microsoft last year admitted to sharing details of vulnerabilities in their software in secret before they were fixed, leaving their own customers exposed to exploitation.

Heartbleed shows that important Free Software can react quickly to pre-empt exposure to publicly known vulnerabilities.

Access enabled discovery

What prevented this bug from going undetected for another two years? Heartbleed’s discovery took place during review of source code that wouldn’t have been possible had OpenSSL been proprietary. Vulnerabilities can be found with or without source code access, but the chances that they’ll be identified by good guys and reported, and not by bad guys who’ll exploit them, are higher when independent auditing of the code is made possible.

Heartbleed demonstrates that Free Software encourages independent review that gets problems fixed.

Tracing the problem

Was the heartbleed bug introduced by the NSA? Is the problem deliberate, or a mistake? We need not wait for a public statement from OpenSSL to sate our curiosity – the full history of the bug is a matter of public record. We know who introduced the problem, when, who approved the change, and the original explanation as to why. We know exactly who to ask about the problem – their names and email addresses are listed beside their code. We can speculate about hidden agendas behind the work in question, but the history of the problem is fundamentally transparent, meaning investigators both inside and outside of OpenSSL can ask the right questions faster and immediately rule out a slew of possibilities that initially suggest themselves in such a case.

Heartbleed shows the value of Free Software transparency and accountability.

Catastrophic success

Despite the understandable consternation surrounding heartbleed’s discovery, its impact is actually very encouraging. The shock resulting from the flaw reflects how widely OpenSSL is relied upon for mission critical security, and how well it serves us the rest of the time. 66% of all webservers have OpenSSL installed via Apache or Nginx, according to Netcraft. The list of top shelf security apps using OpenSSL in the back-end is a long one, including PHP (running on 39% of all servers). The fact that heartbleed has become front page news is a good thing for raising public awareness of the ubiquity of Free Software crypto across business sectors, and for reminding us how much we take its silent and steady protection for granted.

Heartbleed exposes Free Software’s importance and historical trustworthiness to a global audience.

Impact on Free Software?

Many commentators on the heartbleed bug believe it demonstrates weaknesses and flaws in Free Software as a concept and method. But I see the contrary: heartbleed demonstrates how well Free Software is working to deliver the security we need, to identify problems with it, and to fix them when they’re found. As a crypto lover and developer it only remains for me to thank the OpenSSL team for their dedication, and the sterling Free Software they provide to us all.


Free Software in Education News – March

Being Fellow #952 of FSFE » English | 11:58, Wednesday, 09 April 2014

Here’s what we collected in March. If you come across anything that might be worth mentioning in this series, please drop me a note, dump it in this pad, or post it on the edu-eu mailing list!

FSFE Edu-Team activities

We are still working on our edu pages

Former FSFE vice president Henrik Sandklef on education in general

I already covered it in the last edu post, but as it happened in March, I feel happy to mention Kevin’s blog post about the NLEdu campaign again.

Community

Nice post by Luis Ibanez on “Testing and tinkering with the Arduino Starter Pack”

A crowd-sourced article by Greg Hislop about a workshop exploring FOSS in universities

The “Gymnasium Leoninum” in Handrup, Germany describes why they use Free Software (in German)

Government

Free Software opening educational doors in Appalachia. Unfortunately, license costs were the main reason to use Kdenlive and Blender…

edu software

Germán Arias started a campaign to support GNU FisicaLab, an educational application for solving physics problems. It allows students to focus on physics concepts when solving problems, leaving aside the mathematical details. FisicaLab has extensive documentation, with a lot of examples, to help students familiarize themselves quickly with the application.

distro news

OLPC/olpc dead or alive: Survey on the status of olpc

future events

  • April 11-13: EduCamp Frankfurt (M), Germany
  • April 12-13: Sugar Camp #3 in Carrefour Numérique – Cité des Sciences, Paris. Hosted by Bastien Guerry (via Walter Bender)
  • April 22-24: Teckids e.V. organizes a three-day basecamp in Bonn, Germany
  • May 15: Youth Future Project invites you to write an article for their booklet about the youth movement on sustainability in Europe. It’s called: “TAKING RESPONSIBILITY FOR OUR FUTURE”. Call for Papers ends on May 15th
  • August 9-15: KDE edu Randa meeting (please register!)
  • August 23-24: This year’s FrOSCon will again have the Froglabs (https://www.teckids.org/froglabs_2014_froscon.htm), a two-day workshop for kids with Freedroidz, PyGame, Blender and much more Free Software!

Thanks to all contributors!

Double whammy for CACert.org users

DanielPocock.com - fsfe | 05:47, Wednesday, 09 April 2014

If you are using OpenSSL (or ever did use it with any of your current keypairs in the last 3-4 years), you are probably in a rush to upgrade all your systems and replace all your private keys right now.

If your certificate authority is CACert.org, then there is an extra surprise in store for you. CACert.org has recently changed their signature hash to SHA-512, and some client/server connections silently fail to authenticate with this hash. Any replacement certificates you obtain from CACert.org today are likely to be signed using the new hash. Amongst other things, if you use CACert.org as the CA for a distributed LDAP authentication system, you will find users unable to log in until you upgrade all SSL client code or change all clients to trust an alternative root.
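
If you are unsure which hash a given certificate is signed with, OpenSSL can tell you. A quick check along these lines should work, with cert.pem standing in for your actual certificate file:

# Shows e.g. "Signature Algorithm: sha512WithRSAEncryption"
openssl x509 -in cert.pem -noout -text | grep 'Signature Algorithm'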

Tuesday, 08 April 2014

reConServer for easy SIP conferencing

DanielPocock.com - fsfe | 16:24, Tuesday, 08 April 2014

In the lead-up to the release of Debian wheezy, there was quite some debate about the Mumble conferencing software, which uses the deprecated and unsupported CELT codec. Although Mumble has very recently added Opus support, it is still limited by the fact that it is a standalone solution without any support for distributed protocols like SIP or XMPP.

Making SIP conferencing easy

Of course, people could always set up SIP conferences by installing Asterisk but for many use cases that may be overkill and may simply introduce alternative security and administration overheads.

Enter reConServer

The popular reSIProcate SIP stack includes a high-level programming API, the Conversation Manager, dubbed librecon. It was developed and contributed to the open source project by Scott Godin of SIP Spectrum. In very simple terms, a Conversation object with two Participants is a phone call. A Conversation object with more than two Participants is a conference.

The original librecon includes a fully functional demo app, testUA, that allows you to control conferences from the command line.

As part of Google Summer of Code 2013, Catalin Constantin Usurelu took the testUA.cxx code and converted it into a proper daemon process. It is now available as a ready-to-run SIP conferencing server package in Debian and Ubuntu.

The quick and easy way to try it

Normally, a SIP conferencing server will be linked to a SIP proxy and other infrastructure.

For trying it out quickly, however, no SIP proxy is necessary.

Simply install the package with the default settings and then configure a client to talk to the reConServer directly by dialing the IP address of the server.

For example, set the following options in /etc/reConServer/reConServer.config:

UDPPort = 5062
EnableAutoAnswer = true

and it will happily accept all SIP calls sent to the IP address where it is running.

Now configure Jitsi to talk to it directly in serverless SIP configuration:

Notice here that we simply put a username without any domain part; this tells Jitsi to create an account that can operate without a SIP proxy or registration server:

Calling in to the conference

Notice in the screenshot below we simply dial the IP address and port number of the reConServer process, sip:192.168.1.100:5062. When the first call comes in, reConServer will answer and place the caller on hold. When the next caller arrives, the hold should automatically finish and audio will be heard.

Next steps

To make it run as part of a proper SIP deployment, set the necessary configuration options (username, password, proxy) to make reConServer register to the SIP proxy. Users can then call the conference through the proxy.

To discuss any problems or questions, please come and join the recon-users mailing list or the Jitsi users list.

Consider using Wireshark to observe the SIP packets and learn more about the protocol.
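
For example, with tshark (the command-line tool shipped with Wireshark) running on the server, something like the following should display the SIP dialogs arriving at the port configured above; eth0 is an assumption here, so substitute your actual network interface:

tshark -i eth0 -f "udp port 5062" -Y sip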

KDE, families and Randa

Mario Fux | 10:30, Tuesday, 08 April 2014

First and foremost I’d like to thank the KDE e.V. for inviting me to the extended board meeting in Berlin two weeks ago. I got some more insights into the board’s work and could participate in the fundraising workshop on Saturday. So what did we learn?

“Ask, ask again and ask for more” and “KISS – Keep it simple and smart”. I hope to be able to apply this and the other things we learned to the fundraising campaign for the Randa Meetings 2014 which we’re going to launch in the next weeks.

Another thing where I was quite active in the last weeks is the “recruitment” of people who should come to Randa this summer. As you of course already know, two of the topics this year are the KDE SDK and the porting of apps to KF5 and other platforms. Thus I tried to get in contact with KDE-Mac people, which also brought me in contact with people from MacPorts. I’m currently working on bringing the technical parts of the discussion back to the KDE-Mac mailing list.

And I’m working further on bringing Windows, Android and the aforementioned Mac people to Randa. So if you’re interested and I did not yet get in contact with you (under which rock were you hiding? ;-) ), please get in contact. One of my personal goals, by the way, is to get some “foreign” machines into our CI park, namely Windows, Mac, Android and Co ;-) . There, e.g., the Macports.org CI people could be of valuable help.

On another topic, or actually the middle one in the title above: I’m happy to tell you that this year we already have three or four participants registered for the Randa Meetings who will bring their families with them to Randa. Don’t fear, none of the money of the KDE e.V. will be used to pay for their accommodation, travel or food costs. They will pay for their families’ stay. But why do I think that this is so nice?

Because I think this is an important step in the right direction. A huge problem of many free software communities is that contributors leave after they graduate or start families. So it’s (IMNSHO) only in the best interest of KDE if there are possibilities for KDE contributors to bring their families to KDE meetings. It is nice if you can hack on KDE software during the day and eat lunch and dinner with your family and spend the evening with them. And who knows, perhaps we will need to organize a day nursery in the coming years.

But what about the coming years and my family? First and foremost I’d like to write here a huge and humongous thank you to my family, the small and the big one, and even some more distant relatives. Without them I couldn’t organize these meetings in Randa. As you may have read some time back, I decided to found an association for the Randa Meetings, and each year since the founding I have been searching for some local sponsors for an expense allowance for me and some other helpers. Do you have any idea what amount of work it is to cook for this crowd for a whole week? You won’t believe how much KDE and free software people eat ;-) .

And to be honest, for the coming years I plan to stabilize this expense allowance, or even a small wage, even more. But don’t fear (again ;-) ): none of the money of the KDE e.V. or the planned fundraising campaign will land in my wallet! I just want to be able to keep the Randa Meetings alive for the next years (I roughly estimate that I work one to one and a half months on the organization of a single edition of the Randa Meetings) and thus look for new opportunities. So if you have some ideas, tell me, or at least participate in this short and tiny (takes around a minute to fill in) survey about this topic. It would be nice to have it widespread…

But what’s next for the Randa Meetings besides the fundraising campaign? In the coming days I plan to poke and email the people and groups that are already registered for the sprints in Randa so that they check their data, check their groups and see who is missing and who needs to be poked. We need to fix a more or less final budget by the end of April.

So stay tuned when we launch the fundraising campaign for the Randa Meetings and help us to spread the word. Thanks for reading and don’t forget to flattr me below ;-) .

PS: This blog post already got a bit longer than planned, but here is another PS ;-) :
PPS: In the coming days I also plan to check the wiki pages for the Randa Meetings and add some information about some of the hardware present at this year’s meetings (e.g. touch screens, WeTabs, etc.) which you can use, and I will add some additional information for families.

The best replacement for Windows XP: Linux with LXDE

Seravo | 08:34, Tuesday, 08 April 2014

As of today, Microsoft has officially ended support for Windows XP, and it will no longer receive any security updates. Even with the updates, XP has never been a secure platform, and by now users should really stop using it. But what should people install instead? Our recommendation is Lubuntu.

Windows XP has finally reached its end of life. Most people have probably already bought new computers that come preinstalled with a newer version of Windows, and others had the wisdom to move to another operating system long ago. Considering the licensing model, performance and ease of use of newer Windows versions, it is completely understandable that a large number of people decided to stick with XP for a long time. But now they must upgrade, and the question is: to what?

It is obvious that the solution is to install Linux. It is the only option when the requirement is having a usable desktop environment on the same hardware XP was used on. But the hard choice is which Linux distribution to choose. For this purpose Seravo recommends Lubuntu version 14.04 (currently in beta 2, with the final release coming in just a few weeks).

Why? First of all, the underlying Linux distribution is Ubuntu, the world’s third most popular desktop operating system (after Windows and Mac). The big market share guarantees that there are plenty of peer users, support and expertise available. Most software publishers have easy-to-install packages for Windows, Mac and Ubuntu. All major pre-installed Linux desktop offerings are based on Ubuntu. And when you count in Ubuntu’s parent distribution Debian and all of the derivative Linux distributions, it is certainly the most widely used Linux desktop platform. There is safety in numbers, and a platform with lots of users is most likely to stay maintained, so it is a safe choice. Ubuntu 14.04 is also a long-term support (LTS) release, and the publishing company Canonical promises that the base of this Ubuntu release will receive updates and security fixes until 2019.

However, we don’t recommend the default desktop environment Ubuntu provides. Instead we recommend the Ubuntu flavour Lubuntu, which comes with the LXDE desktop environment. This is a very lightweight graphical user interface, meaning it will be able to run on machines that have just 128 MB of RAM. On better machines LXDE will simply be lightning fast to use, and it will leave more unused memory for other applications (e.g. Firefox, Thunderbird, LibreOffice). The default applications in Lubuntu are also chosen to be lightweight, so the file manager and image viewer are fast. Some productivity software is included too, like AbiWord and Sylpheed, but most users will rather want the heavier but more popular equivalents like LibreOffice Writer and Mozilla Thunderbird. These can easily be installed using the Lubuntu software center.
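
For those who prefer the command line, the same two packages can be installed in one go with apt-get:

sudo apt-get install libreoffice-writer thunderbird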

Note that even though Lubuntu is able to run on very old machines, since it gets by with so few resources, you might still have some difficulties installing it if your machine does not support the PAE feature or if there is other hardware that is not supported by the Linux device drivers Ubuntu ships with by default. If you live in Finland, you can buy professional support from Linux-tuki.fi and have an expert do the installation for you.
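
To check in advance whether a processor supports PAE, you can inspect the CPU flags on any running Linux system (for example from a live USB session):

# Prints "pae" once per CPU core if PAE is supported; no output means no PAE.
grep -o pae /proc/cpuinfo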

Why is Lubuntu the best Win XP replacement?

Classic menu

First of all, Lubuntu is very easy to install and maintain. It has all the ease of use and out-of-the-box security and usability enhancements that Ubuntu has engineered, including the installer, encrypted home folders, auto-updates, readable fonts, stability and other features which make for a good-quality experience.

The second reason to use Lubuntu is that it is very easy to learn, use and support. Instead of Ubuntu’s default Unity desktop environment, Lubuntu has LXDE, which looks and behaves much like the classic desktop. LXDE has a panel at the bottom, a tree-like application launcher in the lower left corner, a clock and notification area in the lower right corner and a panel for window visualization and switching in the middle. Individual windows have their manipulation buttons in the upper right corner, and application menus sit right inside the application windows and are always visible. Anybody who has used Windows XP will immediately feel comfortable: applications are easy to discover and launch, and there is no need to know their name or category in advance. It is easy to see which applications are open and to switch between them with classic mouse actions or the simple Alt+Tab shortcut. From a support perspective it is easy to ask users by phone to open the File menu and choose Save as, and so on, as users can easily see and choose the correct menu items for the application in question.

The third reason is that while LXDE is visually simple, users can always install any application available in the Ubuntu repositories and get productive with whatever complex productivity software they want. A terminal can be spawned in under a second with the shortcut Ctrl+Alt+T. Even though LXDE itself is simple, it won’t hamper anybody’s ability to be productive and do complex things.

The fourth reason is that when using Lubuntu, switching to a more modern desktop UI is easy. On top of a Lubuntu installation an admin can install the gnome-session package, and then users will be able to choose another session type on the login screen to get into GNOME 3.
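
For example, a single command should be enough to make the GNOME session selectable at the next login:

sudo apt-get install gnome-session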

Some might criticize LXDE for not having enough features. Yes, in LXDE pressing the Print Screen key will not automatically launch a screenshot tool, nor will dragging a window to the side of the screen automatically turn it into a perfectly aligned, half-screen-sized window. But it is still possible to achieve the same end results by other means in LXDE, and all the important features, like system settings, changing the resolution, attaching an external screen, and attaching a USB key with easy mounting and unmounting, are part of the standard feature set. In fact, the lead developer of Lubuntu has said he will not add any new features and will only do bug fixes. It could be said that LXDE is feature complete, and the next development effort is a rewrite in Qt instead of the current GTK2 toolkit, a move that will open new technological horizons under the hood, but not necessarily change anything in the features visible to end users.

Another option with similar design ideas is XFCE and the Ubuntu flavour Xubuntu that is built around that desktop environment. Proponents of XFCE say it has more features than LXDE, but most of those features are not needed by average users, and some components, like the file manager and image viewer, are more featureful in LXDE than in XFCE, and the features in those apps are more likely to actually be needed. However, the biggest and most striking difference is that XFCE isn’t actually that lightweight: to run smoothly it needs a computer more than twice as powerful as what LXDE needs.

Our fifth and final reason to recommend LXDE and Lubuntu is speed. It is simply fast. And fast is always better. Have you ever wondered how come computers feel sluggish year after year even though processor speed doubles every 18 months according to Moore’s law? Switch to LXDE and you’ll have an environment that is lightning fast on any reasonably modern hardware.

Getting Linux and LXDE

LXDE is also available in many other Linux distributions like Debian and openSUSE, but for the reasons stated above we recommend downloading Lubuntu 14.04, making a bootable USB key of it (following the simple installation instructions) and installing it on all of your old Windows XP machines. Remember though to copy your files from XP to an external USB key first, so that you can later put them back on your computer once Lubuntu is installed.
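
On an existing Linux machine, one way to write the downloaded image to a USB key is dd. A sketch: the ISO file name below is the one current at the time of writing, and /dev/sdX must be replaced with the device of your USB key after double-checking it, since dd overwrites the target without asking:

# WARNING: this wipes the target device; make sure /dev/sdX is the USB key.
sudo dd if=lubuntu-14.04-desktop-i386.iso of=/dev/sdX bs=4M
sync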

(Screenshots: Lubuntu desktop, Lubuntu menu, Lubuntu windows)

Sunday, 06 April 2014

Multitouch gestures with the Dell XPS 13 on Arch Linux

Hugo - FSFE planet | 18:29, Sunday, 06 April 2014

  1. Install the linux-xps13-archlinux kernel (soon in the AUR)

  2. Install xf86-input-synaptics and, from AUR, touchegg and touchegg-gce-git (this last one is needed to configure gestures with the graphical interface).

  3. Edit /etc/X11/xorg.conf.d/50-synaptics.conf

    Section "InputClass"
        Identifier "touchpad catchall"
        Driver "synaptics"
        MatchIsTouchpad "on"
        Option "TapButton1" "1"
        Option "TapButton2" "2"
        Option "TapButton3" "0"
        Option "ClickFinger3" "0"
    
  4. Configure your gestures with Touchégg

    Here’s my ~/.config/touchegg/touchegg.conf:

    <touchégg>
    <settings>
    <property name="composed_gestures_time">0</property>
    </settings>
    <application name="All">
        <gesture type="DRAG" fingers="3" direction="RIGHT">
            <action type="SEND_KEYS">Super+Right</action>
        </gesture>
        <gesture type="PINCH" fingers="5" direction="OUT">
            <action type="SEND_KEYS">Control+Shift+equal</action>
        </gesture>
        <gesture type="DRAG" fingers="3" direction="LEFT">
            <action type="SEND_KEYS">Super+Left</action>
        </gesture>
        <gesture type="PINCH" fingers="5" direction="IN">
            <action type="SEND_KEYS">Control+minus</action>
        </gesture>
        <gesture type="DRAG" fingers="3" direction="UP">
            <action type="MAXIMIZE_RESTORE_WINDOW"></action>
        </gesture>
        <gesture type="DRAG" fingers="4" direction="UP">
            <action type="SEND_KEYS">Super</action>
        </gesture>
        <gesture type="DRAG" fingers="4" direction="DOWN">
            <action type="SEND_KEYS">Escape</action>
        </gesture>
        <gesture type="TAP" fingers="3" direction="">
            <action type="MOUSE_CLICK">BUTTON=2</action>
        </gesture>
        <gesture type="DRAG" fingers="3" direction="DOWN">
            <action type="SEND_KEYS">Super+Down</action>
        </gesture>
    </application>
    <application name="Evince">
        <gesture type="DRAG" fingers="4" direction="LEFT">
            <action type="SEND_KEYS">Control+Left</action>
        </gesture>
        <gesture type="DRAG" fingers="4" direction="RIGHT">
            <action type="SEND_KEYS">Control+Right</action>
        </gesture>
    </application>
    <application name="Firefox">
        <gesture type="DRAG" fingers="4" direction="LEFT">
            <action type="SEND_KEYS">Alt+Left</action>
        </gesture>
        <gesture type="DRAG" fingers="4" direction="RIGHT">
            <action type="SEND_KEYS">Alt+Right</action>
        </gesture>
    </application>
    </touchégg>
    
    (an up-to-date version is kept at https://gist.github.com/hugoroy/10009770)
  5. Add to your session (using gnome-session-properties for instance):

    • touchegg
    • synclient TapButton3=0

The real improvement is that I can use three-finger tapping to simulate the middle mouse button, which is used for quick pasting or for opening links in a new tab.

As far as “pinching” is concerned, it does not work reliably at all for me.

Saturday, 05 April 2014

Certificate Pinning for GNU/Linux and Android

Jens Lechtenbörger » English | 13:09, Saturday, 05 April 2014

Previously, I described the dismal state of SSL/TLS security and explained how certificate pinning protects against man-in-the-middle (MITM) attacks; in particular, I recommended GnuTLS with its command line tool gnutls-cli for do-it-yourself certificate pinning based on trust-on-first-use (TOFU). In this post, I explain how I apply those ideas on my Android phone. In a nutshell, I use gnutls-cli in combination with socat and shell scripting to create separate, certificate-pinned TLS tunnels for every target server; then, I configure my apps to connect into such a tunnel instead of the target server, which protects my apps against MITM attacks with “trusted” and other certificates. Note that nothing in this post is specific to Android beyond the setup: on my phone, I installed the app Lil’ Debi, which provides a Debian GNU/Linux system as a prerequisite for the following.

Prepare Debian Environment

Lil’ Debi by default uses the DNS servers of a US-based search engine company, as configured in /etc/resolv.conf. I don’t want that company to learn when I access my e-mail (and more). Instead, I’d like to use the “normal” DNS servers of the network to which I’m connected, which get configured automatically via DHCP on Android. However, I don’t know how to inject that information reliably into Debian (automatically, upon connectivity changes). Hence, I’m currently manually running something like dhclient -d wlan0, which updates the network configuration. I’d love to hear about better solutions.

Next, the stable version of gnutls-bin does not support the option --strict-tofu. More recent versions are available among “experimental” packages. To install those, I switched to Debian Testing (replace stable with testing in /etc/apt/sources.list; do apt-get update, apt-get dist-upgrade, apt-get autoremove). Then, I installed gnutls-cli:
apt-get -t experimental install gnutls-bin

Afterwards, I created a non-root user gnutls with directories to be used in shell scripts below:
useradd -s /bin/false -r -d /var/lib/gnutls gnutls
mkdir /var/{lib,log}/gnutls
chown gnutls /var/{lib,log}/gnutls
chmod 755 /var/{lib,log}/gnutls

For network access on Android, I also needed to assign gnutls to a special group, as follows. (Before that, I got “net.c:142: socket() failed: Permission denied” or “socket: Permission denied” for network commands.)
groupadd -g 3003 aid_inet
usermod -G aid_inet gnutls

Finally, certificates need to be pinned with GnuTLS. I did that on my PC as described previously and copied the resulting file ~/.gnutls/known_hosts to /var/lib/gnutls/.gnutls/known_hosts.
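
For a single server, that pinning step amounts to connecting once in TOFU mode and accepting the key; a minimal sketch, with host and port as examples:

# Connect once; answer "y" to store the server's key in ~/.gnutls/known_hosts.
gnutls-cli --tofu --port 993 imap.example.org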

Certificate Pinning via Tunnels/Proxies

I use socat to create (encrypting) proxies (i.e., local servers that relay received data towards the real destinations). In my case, socat relays received data via a shell script into GnuTLS, which establishes the TLS connection to the real destination and performs certificate checking with the option --strict-tofu. Thus, the combination of socat, shell script, and gnutls-cli creates a TLS tunnel with certificate pinning against MITM attacks. Clearly, none of this is necessary for apps that pin certificates themselves. (On my phone, ChatSecure, a chat app implementing end-to-end encryption with Off-The-Record (OTR) Messaging, pins certificates, but other apps such as K-9 Mail, CalDAV Sync Adapter, and DAVdroid do not.) For the sake of an example, suppose that I send e-mail via the server smtp.example.org at port 25, which I would normally enter in my e-mail app along with the setting to use SSL/TLS for every connection, which leaves me vulnerable to MITM attacks with “trusted” certificates. Let’s see how to replace that setting with a secure tunnel. First, the following command starts a socat proxy that listens on port 1125 for incoming network connections from my phone. For every connection, it executes the script gnutls-tunnel.sh and relays all network traffic into that script:
$ socat TCP4-LISTEN:1125,bind=127.0.0.1,reuseaddr,fork \
EXEC:"/usr/local/bin/gnutls-tunnel.sh -s -t smtp.example.org 25"

Second, the script is invoked with the options -s -t smtp.example.org 25. Thus, the script invokes gnutls-cli to open an SMTP (e-mail delivery) connection with TLS protection (some details of gnutls-tunnel.sh are explained below, details of gnutls-cli in my previous article). If certificate verification succeeds, this establishes a tunnel from the phone’s local port 1125 to the mail server. (There is nothing special about the number 1125; I prefer numbers ending in “25” for SMTP.)

Third, I configure my e-mail app to use the server named localhost at port 1125 (without encryption). Then, the app sends e-mails into socat, which forwards them into the script, which in turn relays them via a GnuTLS secured connection to the mail server smtp.example.org.

Shell Scripting

To setup my GnuTLS tunnels, I use three scripts, which are contained in this tar archive. (One of those scripts contains the following text: “I don’t like shell scripts. I don’t know much about shell scripts. This is a shell script. Use at your own risk. Read the license.”)

First, instead of the invocation of socat shown above, I’m using the following wrapper script, start-tls.sh, whose first argument needs to be the local port to be used by socat, while the other arguments are passed on. Moreover, the script redirects log messages to a file.
#!/bin/sh
umask 0022
LPORT=$1
shift
LOG=/var/log/gnutls/socat-$LPORT.log
socat TCP4-LISTEN:$LPORT,bind=127.0.0.1,reuseaddr,fork EXEC:"/usr/local/bin/gnutls-tunnel.sh $*" >> $LOG 2>&1 &

Second, gnutls-tunnel.sh embeds the invocation of gnutls-cli --strict-tofu, parses its output, and writes log messages. That script is too long to reproduce here, but I’d like to point out that it sends data through a background process as described by Marco Maggi. Moreover, it uses a “delayed encrypted bridge.” Currently, the script knows the following essential options (a minimal sketch of the core idea follows the list):

  • -t: Use option --starttls for gnutls-cli; this starts with a protocol-specific plaintext connection, which switches to TLS later on.
  • -h: Try to talk HTTP in case of errors.
  • -i: Try to talk IMAP in case of errors.
  • -s: Try to talk SMTP, possibly before STARTTLS and in case of errors.
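
To make the core idea concrete, here is a minimal sketch of what such a tunnel script boils down to. This is an illustration under simplifying assumptions, not the author’s actual script: it omits the logging, the -h/-i/-s protocol handling and the delayed encrypted bridge:

#!/bin/sh
# Relay stdin/stdout (supplied by socat) into a strictly pinned TLS
# connection; informational messages are diverted to keep the relayed
# data stream clean.
HOST=$1
PORT=$2
exec gnutls-cli --strict-tofu --logfile /dev/null --port "$PORT" "$HOST"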

Third, I use a script called start-tls-tunnels.sh to start my TLS tunnels, essentially as follows:
#!/bin/sh
TLSSHELL=/bin/sh
TLSUSER=gnutls
run_as () {
su -s $TLSSHELL -c "$1" $TLSUSER
}
# SMTP (-s) with STARTTLS (-t) if SMTPS is not supported, typically to
# port 25 or 587:
run_as "/usr/local/bin/socat-tls.sh 1125 -t -s smtp.example.org 587"
# Without -t if server supports SMTPS at port 465:
run_as "/usr/local/bin/socat-tls.sh 1225 -s mail.example.org 465"
# IMAPS (-i) at port 993:
run_as "/usr/local/bin/socat-tls.sh 1193 -i imap.example.org 993"
run_as "/usr/local/bin/socat-tls.sh 1293 -i imap2.example.org 993"
# HTTPS (-h) at port 443:
run_as "/usr/local/bin/socat-tls.sh 1143 -h owncloud.example.org 443"

Once the Debian system is running (via Lil’ Debi), I invoke start-tls-tunnels.sh in the Debian shell. (This could be automated in the app’s startup script start-debian.sh.) Then, I configure K-9 Mail to use localhost with the local ports defined in the script (without encryption).

(You may want to remove the log files under /var/log/gnutls from time to time.)

Certificate Expiry, MITM Attacks

Whenever certificate verification fails because the presented certificate does not match the pinned one, gnutls-tunnel.sh logs an error message and reports an error condition to the invoking app. Clearly, it is up to the app whether and how to inform the user. For example, K-9 Mail fails silently for e-mail retrieval via IMAP (which is an old bug) but triggers a notification when sending e-mail via SMTP. The following screen shot displays notifications of K-9 Mail and CalDAV Sync Adapter.

MITM Notifications of K-9 Mail and CalDAV Sync Adapter

The screenshot shows that in case of certificate failures for HTTPS connections, I’m using error code 418. That number was specified in RFC 2324 (updated a couple of days ago in RFC 7168). If you see error code 418, you know that you are in deep trouble without coffee.

In any case, the user needs to decide whether the server was equipped with a new certificate, which needs to be pinned, or whether a MITM attack takes place.

What’s Next

SSL/TLS is a mess, and the above is far more complicated than I’d like it to be. I hope to see more apps pinning certificates themselves. Clearly, users of such apps need some guidance on how to identify the correct certificates that should be pinned.

If you develop apps, please implement certificate pinning. As I wrote previously, I believe the papers cited in that post to be good starting points.

You may also want to think about the consequences if “trust” in some CA is discontinued, as just happened with CAcert for Debian and its derivatives. Recall that CAcert was a “trusted” CA in Debian, which implied that lots of software “trusted” any certificate issued by that CA without needing to ask users any complicated questions at all. Now that this “trust” has been revoked (see this bug report for details), users will see warnings concerning those same, unchanged (!), previously “trusted” certificates; depending on the client software, they may even experience an unrecoverable error, rendering them unable to access the server at all. Clearly, this is far from desirable.

However, if your app supports certificate pinning, then such a revocation of “trust” does not matter at all. The app will simply continue to be usable. It is high time to distinguish trust from “trust.”

Friday, 04 April 2014

Best real-time communication (RTC / VoIP) softphone on the Linux desktop?

DanielPocock.com - fsfe | 18:24, Friday, 04 April 2014

The Debian community has recently started discussing the way to choose the real-time communications (RTC/VoIP) desktop client for Debian 8 (jessie) users.

Debian 7 (wheezy), like Fedora, ships GNOME as the default desktop, and the GNOME Empathy client is installed by default with it. Simon McVittie, the Empathy package maintainer, has provided a comprehensive response to the main discussion points, indicating that the Empathy project comes from an Instant Messaging (IM) background (it is extremely easy to set up and use for XMPP chat) but is not a strong candidate for voice and video.

Just how to choose an RTC/VoIP client then?

One question that is not answered definitively is just who should choose the default RTC client. Some people have strongly argued that the maintainers of individual desktop meta-packages should choose as they see fit.

Personally, I don't agree with this viewpoint and it is easy to explain why.

Just imagine the maintainers of GNOME choose one RTC application and the maintainers of XFCE choose an alternative, and these two RTC applications don't talk to each other. If a GNOME user wants to call an XFCE user, do they have to go to extra effort to get an extra package installed? Do they even have to change their desktop? For power users these questions seem trivial, but for the many friends and family members we would like to reach with free software, it is not amusing.

When the goal of the user is to communicate freely, and if they are to remain free to choose any of the desktops, then a higher-level choice of RTC client (or at least a set of protocols that all default clients must support) becomes essential.

Snail mail to the rescue?

There are several friends and family I want to be able to call with free software. The only way I could make it accessible to them was to burn self-booting Debian Live desktop DVDs with an RTC client pre-configured.

Once again, as a power-user maybe I have the capability to do this - but is this an efficient way to overcome those nasty proprietary RTC clients, burning one DVD at a time and waiting for it to be delivered by snail mail?

A billion browsers can't be wrong

WebRTC has been in the most recent stable releases of Firefox/Iceweasel and Chrome/Chromium for over a year now. Many users already have these browsers thanks to automatic updates. It is even working well on the mobile versions of these browsers.

In principle, WebRTC relies on existing technologies such as the use of RTP as a transport for media streams. For reasons of security and call quality, the WebRTC standard mandates the use of several more recent standards and existing RTC clients simply do not interoperate with WebRTC browsers.

It really is time for proponents of free software to decide if they want to sink or swim in this world of changing communications technology. Browsers will not dumb-down to support VoIP softphones that were never really finished in the first place.

Comparing Empathy and Jitsi

There are several compelling RTC clients to choose from, and several of them are now being compared on the Debian wiki. Only Jitsi stands out as offering the features needed for a world with a billion WebRTC browser users.

Feature: Internet Connectivity Establishment (ICE) and TURN (relay)
  Empathy: only for gmail XMPP accounts, and maybe not for much longer
  Jitsi: for all XMPP users with any standards-based TURN server, soon for SIP too
  WebRTC requirement? Mandatory
  Comments: enables effective discovery of NAT/firewall issues and refusal to place a call when there is a risk of one-way audio. Some legacy softphones support STUN, which is only a subset of ICE/TURN.

Feature: AVPF
  Empathy: no
  Jitsi: yes
  WebRTC requirement? Mandatory
  Comments: enables more rapid feedback about degrading network conditions, packet loss, etc., helping variable bit rate codecs adapt and maximise call quality. Most legacy VoIP softphones support AVP rather than AVPF.

Feature: DTLS-SRTP
  Empathy: no
  Jitsi: yes
  WebRTC requirement? Mandatory for Firefox, soon for Chrome too
  Comments: DTLS-based peer-to-peer encryption of the media streams. Most legacy softphones support no encryption at all; some support the original SRTP mechanism based on SDES keys exchanged in the signalling path.

Feature: Opus audio codec
  Empathy: no
  Jitsi: yes
  WebRTC requirement? Strongly recommended; G.711 can also be used but does not perform well on low-bandwidth/unreliable connections
  Comments: Opus is a variable bit rate codec that supersedes codecs like Speex, SILK, iLBC, GSM and CELT. It is the only advanced codec browsers are expected or likely to implement. Most legacy softphones support only the earlier codecs (such as GSM), and some are coded in such a way that they cannot support any variable bit rate codec at all.

Retrofitting legacy softphones with all of these features is no walk in the park. Some of them may be able to achieve compliance more easily by simply throwing away their existing media code and rebuilding on top of the WebRTC media stack used by the browsers.

However, the Jitsi community have already proven that their code can handle all of these requirements by using their media processing libraries to power their JitMeet WebRTC video conferencing server.

Dreams are great, results are better

Several people have spoken out to say they want an RTC client that has good desktop integration (just like Empathy), but I've yet to see any of them contribute any code to such an effort.

Is this type of desktop integration the ultimate priority, and stubbornly non-negotiable, though? Or is it more an example of zealous idealism that may snuff out hope of bringing the optimum communications tools into the hands of users?

As for solving all the other problems facing free communications software, the Jitsi community have been at it for more than 10 years. Just have a look at their scorecard on Github to see what I mean. Jitsi lead developer Emil Ivov has a PhD in multimedia and is a regular participant in the IETF, taking on some of the toughest questions, like how to make a world with two protocols (SIP and XMPP) friendly for real users.

A serious issue for all Linux distributions

Communications technology is one of the most pervasive applications and also one of the least forgiving.

Users have limited patience with phones that don't work, as the Australian Russell Crowe demonstrated in his infamous phone-throwing incident.

Maximizing the number of possible users is the key factor that determines whether networks fail or succeed. It is a knife that cuts both ways: as the free software community struggles with this issue, it undermines our credibility on other issues and makes it harder to bring free desktops to our friends, families and workplaces. Do we really want to see the rest of our work in these areas undermined, especially when there is at least one extremely viable option knocking at the door?

Thursday, 03 April 2014

LogAnalyzer and rsyslog MongoDB support now in wheezy-backports and Ubuntu

DanielPocock.com - fsfe | 17:12, Thursday, 03 April 2014

LogAnalyzer is a powerful but simple log file analysis tool. The upstream web site gives an online demo.

It is developed in PHP, runs in Apache and has no other dependencies such as databases - it can read directly from the log files.

For efficiency, however, it is now trivial to make it work with MongoDB on Debian.

Using a database (including MongoDB and SQL backends) also means that severity codes (debug/info/notice/warn/error/...) are retained. These are not available from many log files. The UI can only colour-code and filter the messages by severity if it has a database backend.

Package status

The packages entered Debian only recently. They have now been migrated to wheezy-backports and Ubuntu, so anybody on wheezy or Ubuntu can use them.

Quick start with MongoDB

The version of rsyslog in Debian wheezy does not support MongoDB output. It is necessary to grab 7.4.8 from backports.

Some versions, up to 7.4.4 in backports, had bugs with MongoDB support - if you tried those, please try again now.

The backported rsyslog is a drop-in replacement for the standard rsyslog package and for users with a default configuration it is unlikely you will notice any difference. For users who customized the configuration, as always, make a backup before trying the new version.

  • Install all the necessary packages: apt-get install rsyslog-mongodb php5-mongo mongodb-server
  • Add the following to /etc/rsyslog.conf:
    module(load="ommongodb")
    *.* action(type="ommongodb" server="127.0.0.1" db="logs" collection="syslog")

  • Look for the MongoDB settings in /etc/loganalyzer/config.php and uncomment them. Comment out the stuff for disk log access.
  • Restart rsyslog and then browse your logs at http://localhost/loganalyzer (a quick smoke test is sketched below)
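
A quick smoke test, assuming the database and collection names from the configuration above:

# Send a test message into syslog...
logger -p user.err "loganalyzer test"
# ...then confirm it reached the "syslog" collection of the "logs" database:
mongo logs --eval 'db.syslog.find().sort({$natural:-1}).limit(1).forEach(printjson)'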

Monday, 31 March 2014

Open source for office workers

Seravo | 10:56, Monday, 31 March 2014

Open source software is great, and it’s not only great for developers who can code and use the source directly. Open source is a philosophy. Open source is for technology what democracy is for society: it isn’t magically superior right away, but it enables a process which over time leads to the best results – or at least avoids the worst ones. A totalitarian regime might be efficient and benevolent, but there is a big risk it will become corrupt and turn bad. And when a totalitarian regime turns bad, it can be really, really ugly.

Because of this philosophy, even regular office workers should strive to maximize their use of open source software. To help ordinary non-technical people, Seravo has contributed to the VALO-CD project, which in 2008-2013 created a collection of the best Free and Open Source Software for Windows, available both in Finnish and English. The CD (with contents also suitable for a USB stick) and related materials are still available for download.

We have also participated in promoting open standards. Most recently we helped the Free Software Foundation Europe publish a press release in Finland regarding Document Freedom Day. The theme of our latest Seravo-salad was also the OpenDocument Format. Open standards are essential in making sure users can access their own data and open their files in different programs. Open standards are also about programs being able to communicate with each other directly using publicly defined protocols and interfaces.

Information technology is rather challenging, and understanding abstract principles like open source and open standards does not happen in one go. Seravo is proud to support the oldest open source competence center in Europe, the Finnish Center for Open Systems and Solutions COSS ry which has promoted open technologies in Finland since 2003.

When it comes down to details, training is needed. This year and last, we have cooperated with the Visio educational centre in Helsinki to provide courses on how to utilize open source software in non-profit organizations.

Learn more

We have recently published the following presentations in Finnish so people can learn more by themselves:

  • http://www.slideshare.net/slideshow/embed_code/32708853
  • http://www.slideshare.net/slideshow/embed_code/32811305
  • http://www.slideshare.net/slideshow/embed_code/32811445
  • http://www.slideshare.net/slideshow/embed_code/32811521

Saturday, 29 March 2014

My new bedroom HTPC: a Gigabyte BRIX XM14

the_unconventional's blog » English | 17:42, Saturday, 29 March 2014

As I wrote two weeks ago, I recently bought a new TV for my living room. Because of that, I ended up with a spare TV collecting dust in the attic. I’ve never really been a “multiple TVs in my home” kind of guy, primarily because I rarely watch live television. The only reason I even have cable TV is because you can’t get DOCSIS without DVB-C in my country.

But now that technology has evolved beyond the era of having to watch whatever junk the broadcaster wants to feed you, having a TV in the bedroom might even be worth it. I already have a Raspberry Pi with a 3TB hard drive which I use as a NAS to stream all kinds of downloaded TV shows to my HTPC in the living room, so it would be trivial to do the same in the bedroom.

I didn’t have to invest a lot of money in doing so: I already had most things laying around. A TV, an HDMI cable, and an ethernet cable were boxed up and ready, so all I really needed was a small computer, a wireless mouse and keyboard, and a Debian installer flash drive.

Because the demands of a simple HTPC aren’t exactly huge, I wanted to buy a machine as small as possible, while also trying to limit the power consumption. Peripheral connectors weren’t really important: all it needed was gigabit ethernet, HDMI, and one or two USB ports.

The Gigabyte BRIX GB-XM14-1037 barebone was one of the machines that caught my eye, due to its small footprint, energy efficient CPU, and very friendly price (€160). I only had to add two 2GB DDR3 SO-DIMMs and a 30GB mSATA solid-state drive, adding €70 to the total price. The BRIX unit comes with a 19V DC power supply and a VESA monitor mount included.

It’s a simple Intel Ivy Bridge barebone with only a few connectors, but it does everything I want it to do.

Because I expected that using a mouse on a bed sheet would be quite tricky, I bought a wireless keyboard with a touchpad. It isn’t ideal, but it gets the job done.

I installed Debian 7.4 without any issues using one of my USB3 flash drives. The BIOS is – unfortunately – UEFI-enabled, but it has support for legacy boot and a simple toggle to disable the Restricted Boot anti-functionality. Needless to say, I installed Debian using the classic BIOS, but I did use a GPT partition table, just because I can. As always, I did a netinstall and then ran a script to install only the packages I want. The result is a minimal GNOME 3.4 desktop (it’s an HTPC after all) with kernel 3.13 from Debian’s wheezy-backports archive. The stock 3.2 kernel also works well on this machine, but Ivy Bridge support is generally better post 3.2.x.

All the hardware is fully compatible with GNU/Linux, although the included WiFi card does require proprietary firmware (firmware-realtek) to be installed. It’s not really an issue to me, because I have no intention of ever using WiFi on this machine. Nevertheless, there is a new batch of Atheros WiFi cards coming from Taiwan sometime soon, so I’ll most likely swap it out for a Free Software friendly card anyway.

I’ll also add a quick overview of hardware parts, confirming their good behavior:

USB: ehci working out of the box
Ethernet: working out of the box
WiFi: requires firmware-realtek
HDMI video: working out of the box (some EDID issues may appear on older kernels)
HDMI audio: working out of the box
DisplayPort video: not tested (but it is recognized by X)
DisplayPort audio: not tested
Suspend/resume: working out of the box
Hibernate/resume: working out of the box
ACPI: I haven’t found any bugs or issues
Sensors: ACPI thermal zone, southbridge, and coretemp working out of the box

Because it’s technically a laptop board with a mobile CPU, the fan doesn’t spin all the time; it only spins up when needed. The fan regulation seems decent, and the noise level is more than acceptable. Even at full load, the fan doesn’t annoy me.

For the sake of completeness, I’ll also provide a list of hardware parts:

Chassis: Gigabyte GB-XM14-1037
Chipset: Intel HM70 (Panther Point)
BIOS: American Megatrends v2.15
CPU: Intel Celeron 1037U @ 1,8GHz (Ivy Bridge)
GPU: Intel GMA HD2500 @ 350MHz
RAM: 2x 2GB Crucial CT25664BF160 DDR3-1600 SO-DIMM
SSD: 30GB Kingston SSDNow mS200 mSATA
LAN: Realtek RTL8111E PCI-E Gigabit Ethernet
WLAN: Realtek RTL8188CE
Audio: HDMI/DisplayPort only (no analog or IEC958 outputs)
Connectors: 2x USB 2.0, 1x HDMI, 1x Mini DisplayPort, 1x Gigabit Ethernet

All in all, this is a great HTPC. I can enjoy all my downloaded shows from my bed thanks to this machine combined with my Raspberry Pi NAS. There are very few downsides to it, because it has very decent GNU/Linux support. The only thing I can think of is the WiFi module. Realtek does not provide FOSS firmware, so loading a proprietary binary is required to use it. Also, the antennas do not appear to work very well, because I rarely get a decent signal, whereas my laptop works perfectly when I put it on the same table. So, generally speaking, you’re much better off using ethernet. But that really goes for every computer.

I’ve collected the output of some useful analytic commands for your information. At this point, I’ve run lshw, lspci, lsusb, dmesg, dmidecode, and glxinfo. If you’d like to see anything else, please let me know.

Friday, 28 March 2014

GNU GPL, JS and BS

Hugo - FSFE planet | 23:08, Friday, 28 March 2014


This is a long-overdue post, in response to a thread about a new JS outliner released under the GPL. I just did not take the time to write something about it until now – sorry!

It’s quite outstanding, but trying to find a good resource online about this issue is nearly impossible. If you try to read about how the GNU GPL impacts JavaScript web apps, you will find so much nonsense that will make you believe the GPL is going to “infect” everything. (In spite of the fact that some of the most important JavaScript libraries out there are licensed under the GPL, like jQuery IIRC.)

First, let’s get things straight: the GNU GPL does not infect anything nor has any “viral” effect. You don’t catch the GPL like the flu. In order for GPL obligations to kick in and apply to you, you must either:

  1. distribute GPL-licensed Javascript files; or

  2. write something that is based on GPL-licensed Javascript files.

In the first case, it’s no surprise that if you download and distribute GPL-licensed software, you must respect the conditions of the GPL.

In the second case, it’s a little bit more difficult to grasp, because you need to understand what constitutes a work based on the GPL program. And for this, you need a basic understanding of copyright law.

If you build other Javascript parts which will work with the GPL Javascript, there’s a fair chance that the whole is based on the GPL Javascript and thus is subject to the conditions of the GPL. (That’s the intended effect: it’s copyleft!) So for instance, if you write Javascript in which you re-use the functions of the GPL JS library, that will be covered by the GPL as well.

But pretty much everything else is not covered. So basically, just adding a line of script to interact with the DOM is not going to make the entire website subject to the GPL. That would be like saying using LibreOffice forces you to distribute all documents produced with it under the GPL. It’s just nonsense. Keep in mind that this is a legal matter, this is copyright law; this is not software development.

So in the case of the Concord outliner it’s pretty obvious: if you put the outliner in your web app, it’s not going to make the whole web app covered by the GPL. However, if you integrate the outliner and build your web app on top of it, expanding it, then yes, that’s covered. But hey, that’s what the GPL is for.

Otherwise, write your own from scratch or try an alternative license, like the MPL-2.0.

Innovation policy and Internet liability in courts–beyond advertising

Hugo - FSFE planet | 15:15, Friday, 28 March 2014

Morozov on innovation policy:

But why assume that innovation—and, by extension, economic growth—should be the default yardstick by which we measure the success of technology policy? One can easily imagine us living with a very different “Internet” had the regulators of the 1990s banned websites from leaving small pieces of code—the so-called “cookies”—on our computers. Would this slow down the growth of the online advertising industry, making everyday luxuries such as free e-mail unavailable? Most likely. But advertising is hardly the only way to support an e-mail service: It can also be supported through fees or even taxes. Such solutions might be bad for innovation, but the privacy they afford to citizens might be good for democratic life.

We should note that this ultimate goal of innovation is also what drives most of the debate around internet business liability. There’s so much fear at the European Union that the next law would stifle innovation and nip in the bud the “next Google” that any sane debate is almost impossible.

Of course, nobody’s asking if we even want the next Google to happen. I certainly don’t want another Google, nor did I want Facebook to come into existence. I’d much prefer having something like unhosted web apps. This is what true beneficial innovation is about.

This way of thinking about innovation and “the Internet” has also spread to the courts.

Consider this piece about how the Paris Court interprets the LCEN (the French transposition of the EU E-Commerce Directive of 2000):

Considering that, by the same criterion, the operation of the site through the sale of advertising space, insofar as it does not give the service any capacity to act on the content put online, is likewise not of such a nature as to justify qualifying the service in question as a publisher;

That it is important to observe in this regard that the LCEN provides that a hosting service may be supplied even free of charge, in which case it is necessarily financed by advertising revenue, and that it lays down, in any event, no prohibition in principle on the commercial operation of a hosting service by means of advertising;

The crux of the argument is whether a service on the web like YouTube is merely a hosting provider (in which case it enjoys a derogatory liability regime) or whether it is something other than merely a hosting provider, in which case it could be found liable for copyright infringement.

The argument was that since Dailymotion (a YouTube competitor) displays advertisements next to the allegedly infringing videos, it must be considered an advertiser instead of a hosting provider. But the Court was not convinced by this argument.

Instead, and here’s the absolute nonsense, the Court says that:

  • the law says a hosting provider can provide its service without charge;
  • which necessarily means the service is financed through advertising,
  • thus the law does not forbid qualifying an advertiser as a hosting provider.

See what happened? The problem in this reasoning is that the second step is totally flawed. Since when is a web service provided free of charge necessarily financed through advertising? Have the judges ever heard of something called Wikipedia, one of the largest websites worldwide, hosting millions of encyclopedia articles and media files – all this without advertising and without tracking its users?

The fact that Wikipedia is run by the not-for-profit Wikimedia is completely irrelevant; the point is not the commercial nature of the provider, but really the nature of the activity of the service provider.

Thanks to this kind of ill-advised ruling, almost nothing is done to shape what qualifies as a hosting provider deserving of a derogatory liability regime.

We need to take back control of innovation and technology policy to foster privacy and freedom; more than ever.

Thursday, 27 March 2014

Document Freedom Day in Vienna 2014

FSFE Fellowship Vienna » English | 22:08, Thursday, 27 March 2014

(Photos: our borrowed bike for heavy loads; our desk in front of the memorial after removing our posters; our booth in direct sunlight; in the afternoon, different Fellows joined us for some time)

On Wednesday the 26th of March, the Viennese Fellowship group of the FSFE celebrated the yearly Document Freedom Day with an information stall in our main shopping street again. We started at 10am and stayed until 7pm. Even if it wasn’t very warm, at least there was no strong wind or rain. Occasionally we could even enjoy direct sunshine. At dusk, people could no longer easily scan our leaflets to decide whether they wanted one, so we dismantled our stall after it went completely dark.

Besides a huge package of the official DFD information material provided by the FSFE, we spread additional leaflets that our regional group had designed especially for this event. Even though we also used the leftover small A6 leaflets from last year, having even more leaflets was quite a good idea, since we ran out of this year’s official DFD folders by lunchtime.

Instead of discs with copies of GNU/Linux distributions, this time we provided a leaflet with basic information about the differences between 10 of the most popular free software distributions. It also contains download links. This not only made our preparation less time-consuming, but cut the costs for our stall considerably without making our material less useful.

Another positive aspect of not providing distribution discs anymore is the smaller environmental impact of our material. We are not convinced that our discs had much effect on people’s readiness to try out free software – after all, who installs software obtained from strangers on the street? Of much greater value might be some easy-to-understand information on which distribution people new to free software should choose. At least in Austria, most people use broadband Internet connections and own computers capable of downloading and burning their own installation media. Besides, nowadays many people cannot use discs, since their devices lack an optical drive.

Spread over the day, we were visited by four very friendly police officers. Two of them were quite interested and talked with us about free software and open standards for quite a while, and also took some information material with them.

Surprisingly many tourists were interested. Unfortunately, we didn’t have much English material to share with them. Generally, most people instantly got our main argument for open standards and free software. We talked about independence on our own personal computers: we shouldn’t depend on companies when we want to access and share data.

The artist Ulrike Truger created the stone monument against police brutality behind our stall. A friend of hers dropped by and made a big fuss about the fact that we had used its plain surface to temporarily stick up our DFD posters. Via mobile phone, the artist herself demanded that we remove our posters at once. This is very strange, since Truger has, more than once, erected huge monuments illegally without public consent. Why does she think she can permanently occupy public space for her cause with a five-ton stone but demand that we stay away at all times? Our cause – liberating all of us from hostile power concentrations – is no less worthy, and we would have removed our posters a few hours later without leaving any trace anyway. I think it is a good thing to have such a monument, but the artist shouldn’t consider her causes worthier and more important than all other possible concerns.

Luckily most people were very happy to find us there, and we got a lot of vocal support for our work – even after we had removed our posters from the huge stone. The stall lasted for 9 hours, and I thank all supporters for their help and patience. Together we can change the world for the better, no matter how strangely some people might behave.

How’s (battery) life with Jolla?

Seravo | 12:20, Thursday, 27 March 2014

Some years ago Nokia conducted a large survey among its customers on what people most liked about Nokia phones. One of the top features turned out to be their unrivaled battery life. Despite the hype around screen resolutions, processor performance and software versions, one of the most important features of a mobile device is simply how long it runs before you have to charge it again.

Jolla phone uptime showing the device has been continuously on for 8 days and 13 hours

Back in 2012 we wrote about how to make an Android phone last for a week without recharging. On a mobile phone, the single most significant power hog is the display. With the display turned off, on the other hand, the biggest energy hogs are the wireless radios. The Android phone in our example lasted for an entire week after being locked to 2G-only mode, thus using only the most basic GSM network connection with all other forms of connectivity disabled.

The Nokia Maemo series devices and the MeeGo N9 smartphone already sported a feature whereby, if the device was not in active use, it would automatically downgrade the network connections or disable them. When the user then opened an application that required network access, the network was re-enabled automatically without any extra user intervention. This feature is also present in Jolla phones, and it is the reason why Jolla users every now and then see the “Connecting” notification: the connections are disabled, but are automatically brought up again upon a request for network access.

We tested this feature by enabling all networks (3G, WLAN, Bluetooth, GPS) in the Jolla phone settings and keeping e-mail updates and instant message presence active, but without any further active use of the device. The result: the Jolla phone’s battery lasted for over 8 days! The screenshot attached is from SailTime by Mikko Ahlroth, which visualises the output of the uptime Linux command.
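
On Linux, the value such tools display ultimately comes from /proc/uptime. A minimal Python sketch of the same computation (purely illustrative – not how SailTime is actually implemented):

    # Read the system uptime in seconds from the kernel (Linux only).
    with open("/proc/uptime") as f:
        seconds = float(f.read().split()[0])

    # Break it down the way the uptime command displays it.
    days, rest = divmod(int(seconds), 86400)
    hours, minutes = divmod(rest // 60, 60)
    print("up %d days, %d:%02d" % (days, hours, minutes))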

Keeps on running…

But wait, that was not all! The unique hardware feature of the Jolla phone is, of course, The Other Half (abbreviated TOH), an exchangeable back cover for the device. One of the TOH connectors is an I2C connection, which can transfer both data and power. This makes possible TOHs that supplement the main battery of the device. In fact, the main battery is also located on the back side, so it could even be replaced entirely by a TOH that connects directly to the connectors the original battery would use.
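
To give a feel for the software side of an I2C connection, here is a purely hypothetical Python sketch using the python-smbus bindings and the Linux i2c-dev interface. The bus number, device address and register are all invented for illustration; an actual TOH defines its own layout.

    import smbus  # python-smbus, shipped with the i2c-tools project

    TOH_ADDR = 0x48    # invented I2C address of a battery TOH
    REG_CHARGE = 0x04  # invented "charge level" register

    bus = smbus.SMBus(1)  # /dev/i2c-1; the bus number is platform-specific
    level = bus.read_byte_data(TOH_ADDR, REG_CHARGE)
    print("TOH battery level: %d %%" % level)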

First examples of such battery TOHs have already emerged. Olli-Pekka Heinsuo has created two battery-supplementing Other Halves: the Power Half, which holds an extra battery for increased capacity, and the Solar Half, which hosts a solar panel that directly charges the device. Olli-Pekka is attending the Seravo-sponsored Jolla and Sailfish Hack Day next Saturday. If you wish to attend, please register swiftly, as the attendance capacity is limited!

Pictured: the Solar Half and the Power Half.

Wednesday, 26 March 2014

How Kolab is using Open Standards for Interoperability

Torsten's FSFE blog » english | 16:18, Wednesday, 26 March 2014

Today is Document Freedom Day, which started out being about documents back in the OOXML days but is now much more generally about Open Standards. This is a great opportunity to show you how Kolab uses Open Standards all the way down to the storage layer.

Since Kolab is a lot about email, it uses SMTP (RFC 821) and IMAP (RFC 1730) to send and store emails, which is by itself not overly exciting, since at least in the free world most email software does that. But Kolab goes further and uses IMAP as a NoSQL storage engine, thereby getting all the scalability, ACLs and sharing of IMAP for free. It uses the IMAP METADATA extension (RFC 5464) to store other content and even configuration in IMAP folders. Since Kolab is a groupware solution, it stores contacts, events and tasks in there, and of course it does so using Open Standards as well: contacts are stored in the xCard format (RFC 6351), while tasks and events are stored in the xCal format (RFC 6321). All those objects are then encapsulated in MIME messages according to RFC 2822.
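
As a rough illustration of that encapsulation, here is a Python sketch using only the standard library to wrap a made-up xCard contact in a MIME message; the exact headers and part layout Kolab uses differ in detail:

    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText
    from email.mime.application import MIMEApplication

    # A made-up xCard (RFC 6351) payload for a single contact.
    xcard = ('<vcards xmlns="urn:ietf:params:xml:ns:vcard-4.0">'
             '<vcard><fn><text>Jane Doe</text></fn></vcard></vcards>')

    msg = MIMEMultipart("mixed")
    msg["Subject"] = "object-uid-placeholder"  # Kolab keeps an object UID here
    msg.attach(MIMEText("This is a Kolab groupware object.", "plain"))
    msg.attach(MIMEApplication(xcard.encode("utf-8"), "vcard+xml",
                               name="kolab.xml"))
    print(msg.as_string())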

The advantage of this is that your personal data is not scattered all over the place but kept in one central IMAP database, which you can easily back up (e.g. with offlineimap) or move to another server (e.g. with imapsync). The new version of Kolab supports file storage, and the default is to store files in IMAP as well.

Unfortunately, not every IMAP client understands the METADATA extension and displays that additional data the way you would like. Therefore, Kolab offers further protocols to access it: calendars via CalDAV (RFC 4791), address books with contacts via CardDAV (RFC 6352), and files via WebDAV (RFC 2518).
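
As a small taste of the DAV side, here is a hedged Python sketch that lists a WebDAV collection with a PROPFIND request via the requests library; the host, path and credentials are placeholders that depend on the actual installation:

    import requests

    # Placeholder endpoint and credentials, purely for illustration.
    url = "https://kolab.example.org/iRony/files/"
    resp = requests.request("PROPFIND", url,
                            auth=("jane@example.org", "secret"),
                            headers={"Depth": "1"})  # list immediate children

    print(resp.status_code)  # a WebDAV server answers 207 Multi-Status
    print(resp.text[:500])   # ...with an XML multistatus body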

For a proper groupware system, it is important that you can invite people to events or assign tasks to them even if those people are using other systems on other servers. To achieve this, Kolab uses iTIP invitations (RFC 5546), and sometimes it has to implement workarounds because other clients do not fully respect the standard. Unfortunately, this is a problem Kolab faces with all standards that exist for interoperating with other software.
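
For concreteness, this is roughly what such an invitation payload looks like: a minimal iTIP REQUEST of the kind carried inside an invitation e-mail, built as a plain string in Python (all addresses, times and the UID are invented):

    # A minimal iTIP (RFC 5546) REQUEST for a single event.
    itip_request = "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//itip-sketch//EN",
        "METHOD:REQUEST",
        "BEGIN:VEVENT",
        "UID:1234@example.org",
        "DTSTAMP:20140326T120000Z",
        "DTSTART:20140402T100000Z",
        "DTEND:20140402T110000Z",
        "SUMMARY:Planning meeting",
        "ORGANIZER:mailto:jane@example.org",
        "ATTENDEE;PARTSTAT=NEEDS-ACTION;RSVP=TRUE:mailto:john@example.net",
        "END:VEVENT",
        "END:VCALENDAR",
    ])
    print(itip_request)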

To make sure all those different standards interact well together in one system, and to be able to further enhance the functionality, the Kolab community uses a defined KEP process. KEP is short for Kolab Enhancement Proposal, and it works similarly to Python’s PEPs or XMPP’s XEPs.

Happy Document Freedom Day! :)

Tuesday, 25 March 2014

Comic and quotes: why you should use Open Standards

I love it here » English | 16:48, Tuesday, 25 March 2014

In a few hours, Document Freedom Day will start with the event in Tokyo organised by the Japanese LibreOffice team. Throughout the day, people around the world will explain why Open Standards matter. This year we can offer you a comic to make the topic a bit catchier:

Comic: “Next Time Choose Open Standards” by Jamie Casley, licensed CC BY-SA.

Furthermore, during the last few days I had another look at our testimonials page and enjoyed reading the quotes again. Here is a small selection, with highlighting by me, but have a look at the others, too.

Build so the whole world — and all of history to come — can build upon what you make. (Lawrence Lessig)

I know a smart business decision when I see one – choosing open standards is a very smart business decision indeed. (Neelie Kroes, Vice-President, European Commission)

People have grown used to the idea that files can only be opened with the same program that saved it, as if it were the natural state of affairs. And it was, long ago, in the dark ages of processor- and memory-starved eight-bit computers. Today, a program that doesn’t use open standards to exchange data with whichever other program you wish to use should be considered as defective as one that crashes on startup. (Federico Heinz, President, Fundación Vía Libre)

Distribute those quotes: use them in your presentations, e-mail signatures, or wherever else you think they help spread the message. And for tomorrow I wish you all a good Document Freedom Day!

Monday, 24 March 2014

The Free Software pact for the European elections!

With/in the FSFE » English | 15:16, Monday, 24 March 2014

In case you did not notice, the Free Software Pact is now available in even more languages.

As explained in our press release, FSFE officially supports the Free Software Pact drafted by April. The aim is to get candidates to this year’s European Parliament elections to take a stand for free software by signing this little text.

Thus, it’s important to get translations so that you can contact your local politicians and inform them about free software and why it’s important!

A lot of the translating efforts have happened on our mailing list, so go subscribe there if you want to help proofread ongoing translations before uploading them on the wiki.

The elections are drawing near!


Sunday, 23 March 2014

Note to self: Backup bottlenecks

Inductive Bias | 18:26, Sunday, 23 March 2014

I learnt the following relations the hard way ten years ago when trying to back up a rather tiny amount of data, and went through the computation again three years ago. Still, I had to redo the computation this morning when trying to pull a final full backup from my old MacBook, so I’m posting it here for future reference. Note 1: some numbers, like 10BASE-T, are included only for historic reference. Note 2: the Alice DSL uplink speed is excluded – if included, the left-hand chart would no longer be particularly helpful or readable...
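
The arithmetic behind those charts is simply data size divided by link throughput. A minimal Python sketch (the rates are nominal line speeds, so real transfers take noticeably longer, and 250 GB is just an example size):

    # Hours needed to push a backup of the given size over a nominal link rate.
    def hours_to_transfer(gigabytes, mbit_per_s):
        bits = gigabytes * 8e9            # decimal gigabytes to bits
        return bits / (mbit_per_s * 1e6) / 3600

    links_mbit = [("10BASE-T", 10), ("802.11g WLAN", 54),
                  ("100BASE-TX", 100), ("USB 2.0", 480),
                  ("Gigabit Ethernet", 1000)]

    for name, rate in links_mbit:
        print("%18s: %7.1f h for 250 GB" % (name, hours_to_transfer(250, rate)))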

