Planet Fellowship (en)

Thursday, 27 August 2015

Hardware Experiments with Fritzing

Paul Boddie's Free Software-related blog » English | 23:42, Thursday, 27 August 2015

One of my other interests, if you can even regard it as truly separate to my interests in Free Software and open hardware, involves the microcomputer systems of the 1980s that first introduced me to computing and probably launched me in the direction of my current career. There are many aspects of such systems that invite re-evaluation of their capabilities and limitations, leading to the consideration of improvements that could have been made at the time, as well as more radical enhancements that unashamedly employ technology that has only become available or affordable in recent years. Such “what if?” thought experiments and their hypothetical consequences are useful if we are to learn from the strategic mistakes once made by systems vendors, to have an informed perspective on current initiatives, and to properly appreciate computing history.

At the same time, people still enjoy actually using such systems today, writing new software and providing hardware that makes such continuing usage practical and sustainable. These computers and their peripherals are certainly “getting on”, and acquiring or rediscovering such old systems does not necessarily mean that you can plug them in and they still work as if they were new. Indeed, the lifetime of magnetic media and the devices that can read it, together with issues of physical decay in some components, mean that alternative mechanisms for loading and storing software have become attractive for some users, having been developed to complement or replace the cassette tape and floppy disk methods that those of us old enough to remember would have used “back in the day”.

My microcomputer of choice in the 1980s was the Acorn Electron – a cut-down, less expensive version of the BBC Microcomputer hardware platform – which supported only cassette storage in its unexpanded form. However, some expansion units added the disk interfaces present on the BBC Micro, while others added the ability to use ROM-based software. On the BBC Micro, one would plug ROM chips directly into sockets, and some expansion units for the Electron supported this method, too. The official Plus 1 expansion chose instead to support the more friendly expansion cartridge approach familiar to users of other computing and console systems, with ROM cartridges being the delivery method for games, applications and utilities in this form, providing nothing much more than a ROM chip and some logic inside a convenient-to-use cartridge.

The Motivation

A while ago, my brother, David, became interested in delivering software on cartridge for the Electron, and a certain amount of discussion led him to investigate various flash memory integrated circuits (ICs, chips), notably the AMD Am29F010 series. As technological progress continues, such devices provide a lot of storage in comparison to the ROM chips originally used with the Electron: the latter having only 16 kilobytes of capacity, whereas the Am29F010 variant chosen here has a capacity of 128 kilobytes. Meanwhile, others chose to look at EEPROM chips, notably the AT28C256 from Atmel.

Despite the manufacturing differences, both device types behave in a very similar way: a good idea for the manufacturers who could then sell products that would be compatible straight away with existing products and the mechanisms they use. In short, some kind of de-facto standard seems to apply to programming these devices, and so it should be possible to get something working with one and then switch to the other, especially if one kind becomes too difficult to obtain.

Now, some people realised that they could plug such devices into their microcomputers and program them “in place” using a clever hack: writes to the addresses that correspond to the memory provided by the EEPROM (or, indeed, flash memory device) in the computer’s normal memory map can be trivially translated into addresses that have significance to the EEPROM itself. But since I don’t routinely use such microcomputers myself, and since I wanted more flexibility in programming such devices (not to mention avoiding the issue of getting software onto such computers so that it could then be written to the non-volatile memory), it seemed like a natural course of action to try to do the programming with the help of some more modern conveniences.

And so I considered the idea of getting a microcontroller solution like the Arduino to do the programming work. Since an Arduino can be accessed over USB, a ROM image could be conveniently transferred from a modern computer and, with a suitable circuit wired up, programmed into the memory chip. ROM images can thus be obtained in the usual modern way – say, from the Internet – and then written straight to the memory chip via the Arduino, rather than having to be written first to some other medium and transferred through a more convoluted sequence of steps.
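
To illustrate the idea, here is a minimal sketch of the Arduino side of that flow. It is only a sketch under assumptions of my own (the baud rate, the absence of any handshaking) and not the actual firmware from this project; programByte is a placeholder for the circuit-driving logic that the rest of this post effectively describes.

// A minimal sketch (not the real firmware) of the host-to-chip flow:
// bytes arrive over the USB serial link and are written to successive
// flash addresses.

const unsigned long ROM_SIZE = 131072UL;  // 128 kilobytes for the Am29F010

// Placeholder: the real routine has to drive the address latches, the
// data bus and the chip's control lines, as discussed below.
void programByte(unsigned long address, byte value) {
  // ... latch the address, present the data, pulse the write line ...
}

void setup() {
  Serial.begin(115200);
}

void loop() {
  static unsigned long address = 0;
  if (address < ROM_SIZE && Serial.available() > 0) {
    programByte(address, (byte) Serial.read());
    address++;
  }
}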

Breadboarding

Being somewhat familiar with Arduino experimentation, the first exercise was to make the circuit that can be used to program the memory device. Here, the first challenge presented itself: the chip employs 17 address lines, 8 data lines, and 3 control lines. Meanwhile, the Arduino Duemilanove only provides 14 digital pins and 6 analogue pins, with 2 of the digital pins (0 and 1) being unusable if the Arduino is communicating with a host, and another (13) being connected to the LED and being seemingly untrustworthy. Even with the analogue pins in service as digital output pins, only 17 pins would be available for interfacing.

The pin requirements
Arduino Duemilanove          Am29F010
11 digital pins (2-12)       17 address pins (A0-A16)
6 analogue pins (0-5)        8 data pins (DQ0-DQ7)
                             3 control pins (CE#, OE#, WE#)
17 total                     28 total

So, a way of multiplexing the Arduino pins was required, where at one point in time the Arduino would be issuing signals for one purpose, these signals would then be “stored” somewhere, and then at another point in time the Arduino would be issuing signals for another purpose. Ultimately, these signals would be combined and presented to the memory device in a hopefully coherent fashion. We cannot really do this kind of multiplexing with the control signals because they typically need to be coordinated to act in a timing-sensitive fashion, so we would be concentrating on the other signals instead.

So which signals would be stored and issued later? Well, with as many address lines needing signals as there are available pins on the Arduino, it would make sense to “break up” this block of signals into two. So, when issuing an address to the memory device, we would ideally be issuing 17 bits of information all at the same time, but instead we take approximately half of them (8 bits) and issue the necessary signals for storage somewhere. Then, we would issue the other half or so (8 bits) for storage. At this point, we need only a maximum of 8 signal lines to communicate information through this mechanism. (Don’t worry, I haven’t forgotten the other address bit! More on that in a moment!)

How would we store these signals? Fortunately, I had considered such matters before and had ordered some 74-series logic chips for general interfacing, including 74HC273 flip-flop ICs. These can be given 8 bits of information and will then, upon command, hold that information while other signals may be present on their input pins. If we take two of these chips and attach their input pins to those 8 Arduino pins we wish to use for communication, we can “program” each 74HC273 in turn – one with 8 bits of an address, the other with another 8 bits – and then the output pins will be presenting 16 bits of the address to the memory chip. At this point, those 8 Arduino pins could even be doing something else because the 74HC273 chips will be holding the signal values from an earlier point in time and won’t be affected by signals presented to their input pins.

With 16 of the non-control signals out of the way, that leaves only the 8 signals for the memory chip’s data lines and that other address signal to deal with. But since the Arduino pins used to send address signals are free once the addresses are sent, we can re-use those 8 pins for the data signals. So, with our signal storage mechanism, we get away with only using 8 Arduino pins to send 24 pieces of information! We can live with allocating that remaining address signal to a spare Arduino pin.
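
To make this concrete, here is a rough sketch of how the 17 bits might be split up and latched, seen from the Arduino’s side. The pin numbers and the bit ordering on the shared bus are illustrative assumptions rather than a copy of my actual wiring (the connection table later on shows some of the real assignments), but the mechanism is the same: present a byte on the shared pins, then clock the appropriate 74HC273.

// Illustrative pin choices only: eight shared bus pins, the two 74HC273
// clock (CP) inputs, and a spare pin for the seventeenth address line.
const int busPins[8]    = {4, 5, 6, 7, 8, 9, 10, 11};  // shared address/data bus
const int LATCH_LOW_CP  = 2;   // clocks the 74HC273 holding A0-A7
const int LATCH_HIGH_CP = 3;   // clocks the 74HC273 holding A8-A15
const int A16_PIN       = 12;  // the remaining address line, driven directly

void setupBusPins() {
  for (int i = 0; i < 8; i++) pinMode(busPins[i], OUTPUT);
  pinMode(LATCH_LOW_CP, OUTPUT);
  pinMode(LATCH_HIGH_CP, OUTPUT);
  pinMode(A16_PIN, OUTPUT);
}

// Present an 8-bit value on the shared bus pins (bit i on busPins[i]).
void setBus(byte value) {
  for (int i = 0; i < 8; i++) {
    digitalWrite(busPins[i], (value >> i) & 1);
  }
}

// The 74HC273 captures its inputs on a rising clock edge, so a low-to-high
// pulse on CP stores whatever happens to be on the bus at that moment.
void latchInto(int clockPin, byte value) {
  setBus(value);
  digitalWrite(clockPin, LOW);
  digitalWrite(clockPin, HIGH);  // rising edge: the flip-flops now hold the value
  digitalWrite(clockPin, LOW);
}

// Split a 17-bit address into 8 + 8 + 1 and distribute it.
void setAddress(unsigned long address) {
  latchInto(LATCH_LOW_CP, address & 0xFF);          // A0-A7
  latchInto(LATCH_HIGH_CP, (address >> 8) & 0xFF);  // A8-A15
  digitalWrite(A16_PIN, (address >> 16) & 1);       // A16
}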

Address and data pins
Arduino Duemilanove          74HC273                            Am29F010
8 input/output pins          8 output pins                      8 address pins (A0-A7)
                             8 output pins                      8 address pins (A8-A15)
                                                                8 data pins (DQ0-DQ7)
1 output pin                                                    1 address pin (A16)
9 total                                                         25 total

That now leaves us with the task of managing the 3 control signals for the memory chip – to make it “listen” to the things we are sending to it – but at the same time, we also need to consider the control lines for those flip-flop ICs. Since it turns out that we need 1 control signal for each of the 74HC273 chips, we need to allocate 5 additional interfacing pins on the Arduino for sending control signals to the different chips.

The final sums
Arduino Duemilanove          74HC273                            Am29F010
8 input/output pins          8 output pins                      8 address pins (A0-A7)
                             8 output pins                      8 address pins (A8-A15)
                                                                8 data pins (DQ0-DQ7)
1 output pin                                                    1 address pin (A16)
3 output pins                                                   3 control pins (CE#, OE#, WE#)
2 output pins                2 control pins (CP for both ICs)
14 total                                                        28 total

In the end, we don’t even need all the available pins on the Arduino, but the three going spare wouldn’t be enough to save us from having to use the flip-flop ICs.
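
With the pins allocated, the shape of a single write cycle also becomes apparent. The sketch below builds on the latching functions shown earlier and again uses assumed pin assignments (the analogue pins as the control outputs, matching the connection notes further down); the datasheet’s timing requirements are glossed over, and a raw cycle like this is only half the story for a flash device, which expects command sequences rather than bare writes, as we will see later on.

// Illustrative control pin assignments (all three signals are active-low).
const int CE_PIN = A5;  // chip enable
const int OE_PIN = A4;  // output enable
const int WE_PIN = A3;  // write enable

void setupControlPins() {
  pinMode(CE_PIN, OUTPUT);
  pinMode(OE_PIN, OUTPUT);
  pinMode(WE_PIN, OUTPUT);
  digitalWrite(CE_PIN, HIGH);  // deselect the chip until needed
  digitalWrite(OE_PIN, HIGH);
  digitalWrite(WE_PIN, HIGH);
}

// One raw write cycle: the chip samples the address around the falling edge
// of WE# and the data around its rising edge, with CE# held low and OE# held
// high. (Reading back would instead involve turning the bus pins into inputs
// and pulsing OE# while WE# stays high.)
void writeByte(unsigned long address, byte value) {
  setAddress(address);   // via the two 74HC273 latches and the A16 pin
  setBus(value);         // the shared bus now carries the data byte
  digitalWrite(OE_PIN, HIGH);
  digitalWrite(CE_PIN, LOW);
  digitalWrite(WE_PIN, LOW);   // address sampled
  digitalWrite(WE_PIN, HIGH);  // data sampled
  digitalWrite(CE_PIN, HIGH);
}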

With this many pins in use, and the need to connect them together, there are going to be a lot of wires in use:

The breadboard circuit with the Arduino and ICs

The result is somewhat overwhelming! Presented in a more transparent fashion, and with some jumper wires replaced with breadboard wires, it is slightly easier to follow:

An overview of the breadboard circuit

The orange wires between the two chips on the right-hand breadboard indicate how the 8 Arduino pins are connected beyond the two flip-flop chips and directly to the flash memory chip, which would sit on the left-hand breadboard between the headers inserted into that breadboard (which weren’t used in the previous arrangement).

Making a Circuit Board

It should be pretty clear that while breadboarding can help a lot with prototyping, things can get messy very quickly with even moderately complicated circuits. And while I was prototyping this, I was running out of jumper wires that I needed for other things! Although this circuit is useful, I don’t want to have to commit my collection of components to keeping it available “just in case”, but at the same time I don’t want to have to wire it up when I do need it. The solution to this dilemma was obvious: I should make a “proper” printed circuit board (PCB) and free up all my jumper wires!

It is easy to be quickly overwhelmed when thinking about making circuit boards. Various people recommend various different tools for designing them, ranging from proprietary software that might be free-of-charge in certain forms but which imposes arbitrary limitations on designs (as well as curtailing your software freedoms) through to Free Software that people struggle to recommend because they have experienced stability or functionality deficiencies with it. And beyond the activity of designing boards, the act of getting them made is confused by the range of services in various different places with differing levels of service and quality, not to mention those people who advocate making boards at home using chemicals that are, shall we say, not always kind to the skin.

Fortunately, I had heard of an initiative called Fritzing some time ago, initially in connection with various interesting products being sold in an online store, but whose store then appeared to be offering a service – Fritzing Fab – to fabricate individual circuit boards. What isn’t clear, or wasn’t really clear to me straight away, was that Fritzing is also some Free Software that can be used to design circuit boards. Conveniently, it is also available as a Debian package.

The Fritzing software aims to make certain tasks easy that would perhaps otherwise require a degree of familiarity with the practice of making circuit boards. For instance, having decided that I wanted to interface my circuit to an Arduino as a shield which sits on top and connects directly to the connectors on the Arduino board, I can choose an Arduino shield PCB template in the Fritzing software and be sure that if I then choose to get the board made, the dimensions and placement of the various connections will all be correct. So for my purposes and with my level of experience, Fritzing seems like a reasonable choice for a first board design.

Replicating the Circuit

Fritzing probably gets a certain degree of disdain from experienced practitioners of electronic design because it seems to emphasise the breadboard paradigm, rather than insisting that a proper circuit diagram (or schematic) acts as the starting point. Here is what my circuit looks like in Fritzing:

The breadboard view of my circuit in Fritzing

You will undoubtedly observe that it isn’t much tidier than my real-life breadboard layout! Having dragged a component like the Arduino Uno (mostly compatible with the Duemilanove) onto the canvas along with various breadboards, and then having dragged various other components onto those breadboards, all that remains is that we wire them up like we managed to do in reality. Here, Fritzing helps out by highlighting connections between things, so that breadboard columns appear green as wires are connected to them, indicating that an electrical connection is made and applies to all points in that column on that half of the breadboard (the upper or lower half as seen in the above image). It even highlights things that are connected together according to the properties of the device, so that any attempt to modify a connection that leads to one of the ground pins on the Arduino also highlights the other ground pins as the modification is being done.

I can certainly understand criticism of this visual paradigm. Before wiring up the real-life circuit, I typically write down which things will be connected to each other in a simple table like this:

Example connections
Arduino     74HC273 #1     74HC273 #2     Am29F010
A5                                        CE#
A4                                        OE#
A3                                        WE#
2           CP
3                          CP
4           D3             D3             DQ3

If I were not concerned with prototyping with breadboards, I would aim to use such information directly and not try and figure out which size breadboard I might need (or how many!) and how to arrange the wires so that signals get where they need to be. When one runs out of points in a breadboard column and has to introduce “staging” breadboards (as shown above by the breadboard hosting only incoming and outgoing wires), it distracts from the essential simplicity of a circuit.

Anyway, once the circuit is defined (and here it really does help that clicking on a terminal or pin highlights the connected terminals or pins), we can move on to the schematic view and try to produce something that makes a degree of sense. Here is what that looks like in Fritzing:

The schematic for the circuit in Fritzing

Now, the observant amongst you will notice that this doesn’t look very tidy at all. First of all, there are wires going directly between terminals without any respect for tidiness whatsoever. The more observant will notice that some of the wires end in the middle of nowhere, although on closer inspection they appear to be aimed at a pin of an IC but are shifted to the right on the diagram. I don’t know what causes this phenomenon, but it would seem that as far as the software is concerned, they are connected to the component. (I will come back to how components are defined and the pitfalls involved later on.)

Anyway, one might be tempted to skip over this view and try and start designing a PCB layout directly, but I found that it helped to try and tidy this up a bit. First of all, the effects of the breadboard paradigm tend to manifest themselves with connections that do not really reflect the logical relationships between components, so that an Arduino pin that feeds an input pin on both flip-flop ICs as well as a data pin on the flash memory IC may have its connectors represented by a wire first going from the Arduino to one of the flip-flop ICs, then to the other flip-flop IC, and finally to the flash memory IC in some kind of sequential wiring. Although electrically this is not incorrect, with a thought to the later track routing on a PCB, it may not be the best representation to help us think about such subsequent problems.

So, for my own sanity, I rearranged the connections to “fan out” from the Arduino as much as possible. This was at times a frustrating exercise, as those of you with experience with drawing applications might recognise: trying to persuade the software that you really did select a particular thing and not something else, and so on. Again, selecting the end of a connection causes some highlighting to occur, and the desired result is that selecting a terminal highlights the appropriate terminals on the various components and not the unrelated ones.

Sometimes that highlighting behaviour provides surprising and counter-intuitive results. Checking the breadboard layout tends to be useful because Fritzing occasionally thinks that a new connection between certain pins has been established, and it helpfully creates a “rats nest” connection on the breadboard layout without apparently saying anything. Such “rats nest” connections are logical connections that have not been “made real” by the use of a wire, and they feature heavily in the PCB view.

PCB Layout

For those of us with no experience of PCB layout who just admire the PCBs in everybody else’s products, the task of laying out the tracks so that they make electrical sense is a daunting one. Fritzing will provide a canvas containing a board and the chosen components, but it is up to you to combine them in a sensible way. Here, the circuit board actually corresponds to the Arduino in the breadboard and schematic views.

But however confusing the depiction of the Arduino in the breadboard view may be, the pertinent aspects of it are merely the connectors on that device, not the functionality of the device itself, which we obviously aren’t intending to replicate. So, instead of the details of an actual Arduino or its functional equivalent, we merely see the connection points required by the Arduino. And by choosing a board template for an Arduino shield, those connection points should appear in the appropriate places, as well as the board itself having the appropriate size and shape to be an Arduino shield.

Here’s how the completed board looks:

The upper surface of the PCB design in Fritzing

Of course, I have spared you a lot of work by just showing the image above. In practice, the components whose outlines and connectors feature above need to be positioned in sensible places. Then, tracks need to be defined connecting the different connection points, with dotted “rats nest” lines directly joining logically-connected points needing to be replaced with physical wiring in the form of those tracks. And of course, tracks do not enjoy the same luxury as the wires in the other views, of being able to cross over each other indiscriminately: they must be explicitly routed to the other side of the board, either using the existing connectors or by employing vias.

The lower surface of the PCB design in Fritzing

Hopefully, you will get to the point where there are no more dotted lines and where, upon selecting a connection point, all the appropriate points light up, just as we saw when probing the details of the other layouts. To reassure myself that I probably had connected everything up correctly, I went through my table and inspected the pin-outs of the components and did a kind of virtual electrical test, just to make sure that I wasn’t completely fooling myself.

With all this done, there isn’t much more to do before building up enough courage to actually get a board made, but one important step that remains is to run the “design checks” via the menu to see if there is anything that would prevent the board from working correctly or from otherwise being made. It can be the case that tracks do cross – the maze of yellow and orange can be distracting – or that they are too close and might cause signals to go astray. Fortunately, the hours of planning paid off here and only minor adjustments needed to be done.

It should be noted that the exercise of routing the tracks is certainly not to be underestimated when there are as many connections as there are above. Although an auto-routing function is provided, it failed to suggest tracks for most of the required connections and produced some bizarre routing as well. But clinging onto the memory of a working circuit in real three-dimensional space, along with the hope that two sides of a circuit board are enough and that there is enough space on the board, can keep the dream of a working design alive!

The Components

I skipped over the matter of components earlier on, and I don’t really want to dwell on the matter too much now, either. But one challenge that surprised me given the selection of fancy components that can be dragged onto the canvas was the lack of a simple template for a 32-pin DIP (dual in-line package) socket for the Am29F010 chip. There were socket definitions of different sizes, but it wasn’t possible to adjust the number of pins.

Now, there is a parts editor in Fritzing, but I tend to run away from graphical interfaces where I suspect that the matter could be resolved in more efficient ways, and it seems like other people feel the same way. Alongside the logical definition of the component’s connectors, one also has to consider the physical characteristics such as where the connectors are and what special markings will be reproduced on the PCB’s silk-screen for the component.

After copying an existing component, ransacking the Fritzing settings files, and editing various files (including those telling Fritzing about my new parts), I achieved my modest goals. But I would regard this as perhaps the weakest part of the software. I didn’t resort to doing things the behind-the-scenes way immediately, but the copy-and-edit paradigm was incredibly frustrating and doesn’t seem to be readily documented in a way I could usefully follow. There is a Sparkfun tutorial which describes things at length, but one cannot help feeling that a lot of this should be easier, especially for very simple component changes like the one I needed.

The Result

With some confidence and only modest expectations of success, I elected to place an order with the Fritzing Fab service and to see what the result would end up like. This was straightforward for the most part: upload the file created by Fritzing, fill out some details (albeit not via a secure connection), and then proceed to payment. Unfortunately, the easy payment method involves PayPal, and unfortunately PayPal wants random people like myself to create an account with them before they will consider letting me make a credit card payment, which is something that didn’t happen before. Fortunately, the Fritzing people are most accommodating and do support wire transfers as an alternative payment method, and they were very responsive to my queries, so I managed to get an order submitted even more quickly than I thought might happen (considering that fabrication happens only once a week).

Just over a week after placing my order, the board was shipped from Germany, arriving a couple of days later here in Norway. Here is what it looked like:

The finished PCB from Fritzing

Now, all I had to do was to populate the board and to test the circuit again with the Arduino. First, I tested the connections using the Arduino’s 5V and GND pins with an LED in series with a resistor in an “old school” approach to the problem, and everything seemed to be as I had defined it in the Fritzing software.

Given that I don’t really like soldering things, the act of populating the board went about as well as expected, even though I could still clean up the residue from the solder a bit (which would lead me onto a story about buying the recommended chemicals that I won’t bother you with). Here is the result of that activity:

The populated board together with the Arduino

And, all that remained was the task of getting my software running and testing the circuit in its new form. Originally, I was only using 16 address pins, holding the seventeenth low, and had to change the software to handle these extended addresses. In addition, the issuing of commands to the flash memory device probably needed a bit of refinement as well. Consequently, this testing went on for a bit longer than I would have wished, but eventually I managed to successfully replicate the programming of a ROM image that had been done some time ago with the breadboard circuit.
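
For the curious, the commands in question follow the JEDEC-style sequences described in the Am29F010 datasheet: each programmed byte is preceded by a short “unlock” sequence written to specific addresses. Below is a hedged sketch of how that can look on top of the writeByte routine from earlier, filling in the programByte placeholder from the first sketch; the addresses and command values are those I understand this device family to use, but anyone replicating this should defer to the datasheet and use proper status polling rather than a fixed delay.

// A simplified byte-programming sequence for the Am29F010, built on the
// writeByte() bus cycle sketched earlier. Real code should poll the chip's
// status bits (DQ7/DQ6) instead of relying on a fixed delay, and the chip
// must already have been erased (erasing uses a similar command sequence).
void programByte(unsigned long address, byte value) {
  writeByte(0x555, 0xAA);   // first unlock cycle
  writeByte(0x2AA, 0x55);   // second unlock cycle
  writeByte(0x555, 0xA0);   // program command
  writeByte(address, value);
  delayMicroseconds(30);    // crude stand-in for data polling
}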

The outcome did rely on a certain degree of good fortune: the template for the Arduino Uno is not quite compatible with the Duemilanove, but this was rectified by clipping two superfluous pins from one of the headers I soldered onto the board; two of the connections belonging to the socket holding the flash memory chip touch the outside of the plastic “power jack” socket, but not enough to cause a real problem. But I would like to think that a lot of preparation dealt with problems that otherwise might have occurred.

Apart from liberating my breadboards and wires, this exercise has provided useful experience with PCB design. And of course, you can find the sources for all of this in my repository, as well as a project page for the board on the Fritzing projects site. I hope that this account of my experiences will encourage others to consider trying it out, too. It isn’t as scary as it would first appear, after all, although I won’t deny that it was quite a bit of work!

Connecting to a server’s web interface over SSH

the_unconventional's blog » English | 13:00, Thursday, 27 August 2015

Sometimes, I have to remotely administer servers. And sometimes, those servers run a daemon that has to be configured using a web interface – e.g. CUPS and ejabberd.

In order to connect to such a web interface, you need a browser, an IP address and a port. In essence, the web interface has to be publicly accessible for that – which is not something you’d usually want. (Firewall configuration, access control, security risks, possibly needing port forwarding, and so on.)

Now there are many ways to remotely administer servers using a GUI. All of which have one thing in common: they suck. Either they’re free software and they suck (VNC, X forwarding, …) or they’re proprietary and they suck even more (RDP, NX, TeamViewer, …)
And then there’s the whole issue of how it’s outright ridiculous to install a GUI on a server in the first place.

 

Using SSH to connect to web interfaces

Fortunately, one can easily bind a server’s local port to a client’s local port using nothing but SSH. This means you only need to use port 22, and you can use SSH pubkey authentication and encryption for everything you do.

All it takes is this command:

ssh [username]@[hostname] -T -L [random-local-port]:localhost:[desired-server-port]

For example, what if I wanted to access the CUPS admin page as user raspberry-pi on my Raspberry Pi, with hostname lithium and IP 192.168.0.2, without having to allow TCP traffic on port 631?

The username would be raspberry-pi
The hostname would be lithium.local (or 192.168.0.2)
The random local port can be anything, but I used 63789
The server port would be 631

That would mean:

ssh raspberry-pi@lithium.local -T -L 63789:localhost:631

I had already set up public key access years ago, so all I had to do was press Enter, open localhost:63789 on the client, and enjoy.

Killing a tunnel is also easy: just hit Ctrl + C and close the terminal.

 

Using SSH to connect to web interfaces on other servers in your server’s LAN

So what if you want to connect to the web interface of a daemon running on another server in the same LAN as the server you have SSH access to, while the server in question is not remotely accessible? Even that is possible!

Say, for example, your server is in a LAN and has 192.168.0.50 as its internal IP address. 192.168.0.50:22 will be forwarded to a random port of the external IP address (let’s say 90.70.60.50:4444). You have a local user account called cindy. Another server in the LAN has 192.168.0.60 as its internal IP address, has chatserver as its hostname, and let’s say it runs the ejabberd web interface on the default port 5280.

You can once again use the familiar command:

ssh [username]@[external-ip] -p [external-port] -T -L [random-local-port]:[desired-server-in-the-lan]:[desired-server-port]

The username would be cindy
The external IP would be 90.70.60.50
The external port would be 4444
The random local port can be anything, but I used 5599
The other server in the LAN would be chatserver.local (or 192.168.0.60)
The other server’s port would be 5280

That would mean:

ssh cindy@90.70.60.50 -p 4444 -T -L 5599:192.168.0.60:5280

 

The local admin is mean! They refuse to forward a port!

Sometimes, the local admin is mean and refuses to forward a port. In that case, you won’t be able to connect to any SSH server behind NAT. But fear not! SSH can still help you if the local admin cannot.

However, you’re going to need some infrastructure on your side. At least a publicly accessible server running SSH and preferably nothing else. Sure, if you have a static IP or something like DynDNS you could use your own computer at home, but I really wouldn’t recommend it.

In the best case scenario, you rent a very basic VPS somewhere, and do a minimal GNU/Linux server install on it. Install openssh-server and perhaps a firewall that blocks all traffic except port 22. Create a user account on that VPS with as few rights as possible. (Just its own home directory, and keep it out of the sudoers file.)

In this example, let’s say that I have a VPS somewhere with the IP address 88.55.22.90, and that I bought a domain name for it: theunconventional.xxx. I made a user account called adminsaremean.

Anyone can connect to this server if they’d guess the password (so choose a strong one). But there’s literally nothing interesting on the server, and the user account can’t do anything interesting either. However, it can be very valuable as a reverse SSH proxy server.

So how does one use a reverse SSH proxy server? Well, from the client side (the machine that wasn’t allowed to have its port forwarded), you connect to the publicly accessible proxy server on a random port. Let’s say the client’s user account is called hippopotamus. Then, you connect to the server from your computer. From that shell, you connect to localhost with the random port you chose.

First things first: connecting to your proxy. You’ll need to do this:

ssh -N -R [random-port]:localhost:22 [username]@[proxy-server]

The random port can be anything, but I chose 1234
The username will be adminsaremean
The proxy server will be theunconventional.xxx

That would mean:

hippopotamus@client-somewhere-far-away:~$
ssh -N -R 1234:localhost:22 adminsaremean@theunconventional.xxx

Running this command will show nothing interesting. That’s exactly what we want.

Now, on to the server. The client, with user account hippopotamus, will be connected to port 1234, and you want to open a secure shell from your server to that client. This can be done with the following command (from the proxy server):

ssh [username]@localhost -p [random-port]

The username will be hippopotamus
The random port will be 1234

That would mean:

adminsaremean@proxy-server:~$
ssh hippopotamus@localhost -p 1234

Now you’ll be logged in to the remote client, even though it’s behind NAT and maybe even a firewall, no matter how mean the local admin is.

 

So how do I connect to a web interface now? This is really hard!

Connecting to a web interface this way is really hard. You’ll need to create two tunnels: one from the client to the server, and one from your computer to the server.

So, we’re still connected from the client to port 1234 on the server. And we want, for instance, access to the CUPS admin page on the client.

In order to do so, you’ll first need a tunnel from port 631 on the client to a random port on the proxy server. Let’s use 5678 in this example:

adminsaremean@proxy-server:~$
ssh hippopotamus@localhost -p 1234 -T -L 5678:localhost:631

If you’re hardcore, you can now use Lynx or w3m to connect to localhost:5678 on your proxy server. But that’s not ideal, of course.

So from your computer, you connect to the proxy server and bind port 5678 (which is forwarded to 631 on the client) to another random local port. Let’s use 6070 in this example:

kevin@thinkpad-with-libreboot:~$
ssh adminsaremean@theunconventional.xxx -T -L 6070:localhost:5678

This would mean that port 6070 on your computer is forwarded to port 5678 on the proxy server, which in turn is forwarded to port 631 on localhost:1234 on the proxy server, which is actually port 22 on the remote client, meaning that the localhost:631 on the proxy server actually wasn’t the local host! Think about this right before you go to bed!

 

I have the remote client set up to only accept the public key on my laptop. Will this still work securely?

Because you’re using reverse SSH to connect to the server, this connection will be outgoing rather than incoming from the remote client’s end. However, it will treat any connection to the random port on the proxy server as an incoming SSH connection. So you will have to create an SSH key pair on the proxy server (protected by a very strong passphrase) and add its public key to ~/.ssh/authorized_keys on the remote client.

 

I don’t want my proxy server to allow password authentication. Will that work?

Sure. But you’ll need to create SSH keys for every remote client you ever want to manage and every computer you ever want to use to remotely manage those clients, and add those pubkeys to ~/.ssh/authorized_keys on your proxy server.

 

But if I use pubkey authentication, any clever user on the client side could log in to any other client, which may not even be theirs!

First off, there is no such thing as a clever user. ;)
Jokes aside: they’d still need your pubkey password, and you should of course never cache that on the proxy server.

But if this really bothers you, you can also create separate user accounts on the proxy server, each with their own SSH keys and authorized_keys files. They won’t have access to each other’s home directories, but you will have a lot of work storing all those passwords and adding all those keys.

Wednesday, 26 August 2015

MAGIX: Rescue Your Videotapes!

PB's blog » en | 22:48, Wednesday, 26 August 2015

Recently, I have been getting an increasing number of requests asking whether the package offered by MAGIX for the digitization and archiving of old video tapes is any good.

As a technician at the National Video Archive, I probably have certain demands regarding the digitization quality and archival suitability of video material which might seem like “overkill” to most end users (e.g. lossless codecs).

For producing digital copies of analog videos, this product can hardly be beaten on price.
Yet I strongly question the long-term archiving properties (and quality) of the output formats of the MAGIX suite.

I’ve done some research regarding this MAGIX product, and I’ve encountered several things that one should know or consider before buying it.

Summary / Overview

  • Questionable quality of the analog/digital (A/D) converter
  • Exclusively lossy output formats
  • WMV, as well as optical carriers (DVD, Blu-Ray, etc.), is absolutely inadequate as an archive format
  • Unclear number of generation losses during recording/editing/export

Possible alternative hybrid solution:
Use the MAGIX A/D converter stick with VirtualDub (see below) and FFV1 or DV as the video codec, with uncompressed PCM for audio.
Store the original capture files on hard disks and use DVD/Blu-Ray only for access copies.
This enables you to re-create DVDs/Blu-Rays later on in case they decay, or to convert your videos in the future to “then-common” video formats for viewing, without additional generation loss.

Supported audio/video formats:
Under “Technical Data > Data Formats”, the supported video/audio/image formats are listed.

The formats there are listed only by their file suffixes (e.g. AVI, MOV, MP3, OGG). This may seem simple(r) at first sight, but it lacks concrete information about which codecs are actually supported.

A video file always consists of (at least) 3 components:

  1. container
  2. video-codec
  3. audio-codec

Leaving aside the fact that the list of video formats given by MAGIX is a mixture of container formats (AVI/MOV) and codecs (DV, MPEG-1, MPEG-2, WMV), the only format available for file export is “WMV(HD)”.
Additionally, no information is given about how the audio is stored.

The list of audio formats only lists lossy (!) codecs like MP3/WMA/Vorbis for import.

Analog-digital (A/D) converter:
The analog video signal is converted using a sweet little USB stick with video inputs.
I was not able to find any publicly accessible information about the technical details of this converter.

Open questions:

  • Does the A/D converter provide the uncompressed digital signal – or only the already lossy compressed version?
  • Same question for audio…
  • Does it preserve video fields accurately?
  • Does it preserve the color information, or does it reduce the chroma subsampling (e.g. from 4:2:2 to 4:2:0)?

Although the A/D converter stick seems to be usable by other video applications (e.g. VirtualDub), it’s unclear whether recording to another codec already involves a generation loss.
This would be very relevant for any later editing (e.g. cropping, color corrections, audio corrections, etc.), since one would have to accept at least 3 generation losses:

  • Loss #1: Lossy compression in A/D converter
  • Loss #2: Image-/audio-recording in lossy codec (WMV/WMA?)
  • Loss #3: Export to a lossy format (DVD, Blu-Ray, etc)

UPDATE (26.Aug.2015):
I was told by a user that the program provided an export to MPEG-2 in recent versions.
Unfortunately, this “program bug” was “fixed” by MAGIX.

Quote MAGIX support (Translated from German to English):

“If MPEG-2 was listed as export option, that is a bug which was corrected automatically with the next program start.
So there is no possibility to export the video as MPEG-2. Additionally, this function can also not be activated.”

MAGIX support also gave us the tip that the MPEG stream generated by the A/D converter stick would be stored temporarily in the “My Record” folder.
At least with this option, you would have had only one generation loss.

Unfortunately, this “program bug” was also “fixed”:
The original MPEG-2 is not accessible any more – the video is now transcoded directly to a 16:9 WMV/WMA format.

So in the current version, this adds another 2 quality losses:

  • Loss #4: Interpolation by upscaling from SD to HD (720×576 to 1920×1080)
  • Loss #5: If there are no black borders added left/right, then cropping occurs

I hope that at least the audio is stored losslessly (e.g. as uncompressed PCM) before export.
Yet this is uncertain.

For those who would (still) like to copy and preserve their videos in a safe(r) way, I’ve written down some options and background information here:

Formats (more) suitable for archiving:
The best option, of course, is if you can store audio/video uncompressed or in a mathematically lossless codec (e.g. FFV1).
Currently, this might still be an issue for end-users, due to the rather huge data size (compared to lossy).
For example, FFV1/PCM in AVI requires about 90 GB for 4h VHS (~370 MB/min).

Of course it’s tempting to have smaller files – but that has its price.

If one decides to use compression that is not lossless after all, any codec other than Windows Media is to be preferred. Due to its Microsoft origin, WMV/WMA is strongly bound to Windows, and due to this format’s licensing and patent obstacles it’s unclear whether (and under which conditions) one will be able to open these formats in the future.
For non-Windows environments, the license cost for creators of applications/devices that can play (or convert) WMV is currently 300,000 USD per year.
See: “Windows Media Components Product Agreement”, page 12.

The best compromise would probably be “DV” (= lossy, but a widespread open standard) as the video codec and PCM (= uncompressed, quasi “WAV”) for audio, in an AVI container. That would be approximately 55 GB for 4h VHS (~230 MB/min).
Within a reasonable value-for-money range, an A/D converter like the ADVC55 might make sense.

“VirtualDub” could be used as the recording program.
Audio should be recorded uncompressed (PCM) – and also stored in that format in the video container file.
Presets for recording DV in the most exact way can be downloaded here.
These settings are part of DVA-Profession, and are used at the Austrian Mediathek (the National Audio/Video Archive).

A general rule of thumb for the long-term preservation of media formats is that an implementation of an open format/standard under a Free Software license (e.g. GPL) has the highest chance of being “virtually immortal”.

For example, if a media format is supported by the tool “FFmpeg”, your chances are very good :)

DVD/Blu-ray as physical carrier:
Here is a short quote from the product page (translated from German to English):

“Digital is better: Advantages of DVDs & Blu-ray Discs In addition to the large disk space, long service life, and small size, they do not have any sensitive mechanical components, making them ideal for archiving!”

Of course, the part about the mechanical components is correct, but is it really “ideal for archiving”?
Theoretically “yes” – practically “no”.

The times when archives stored everything on optical carriers are long gone.
Mainly because it quickly became apparent that self-burned optical carriers are far more fragile and short-lived than analog material, hard disks or magnetic tapes.
Burned discs that are no longer readable without errors after 2 years are not the exception. And the higher the density of the carrier, the more fragile, of course, is the data stored on it…

Furthermore, one should distinguish between a “data disc” and a “video disc” – this applies to CD as well as DVD and Blu-Ray.
If one stores videos on a video disc (e.g. a Video-DVD), the audio/video format – including resolution and aspect ratio – is mostly fixed.
Currently, these are exclusively lossy video codecs:

  • CD: MPEG-1
  • DVD: MPEG-2
  • Blu-Ray: MPEG-4 (H.264)

At the moment, there is no perfect carrier. Especially not for digital data.
For the time being, I would suggest storing the originally captured files on hard disks – and a copy on DVD/Blu-Ray only as an access copy.
This allows the archiving format to be chosen separately from the access format, increasing one’s chances of converting the videos for viewing more easily and without additional generation loss.

Aspect ratio:
Analog video is stored in “standard definition” (SD) resolution, and was always recorded with the aspect ratio of 4:3 – and that’s also the way the image is stored on the tape.

In the screenshots on the MAGIX website, however, the video is exclusively displayed in 16:9:

Even if you have black borders at the top/bottom (= “letterboxing”: https://en.wikipedia.org/wiki/Letterboxing_(filming)), the information on the tape originally isn’t widescreen at all.

In Europe we have PAL as the TV/video standard.
If you digitize PAL SD video, that usually results in a pixel resolution of 720×576. With a square pixel aspect ratio (PAR), that corresponds to a 5:4 storage aspect ratio (SAR).

Even if one originally recorded 16:9 on e.g. DV (= “Digital Video”) or Digital Betacam, it is stored anamorphically with 720×576 pixels (= 5:4) – also not widescreen.

If 4:3 is stored as 16:9 full screen, information is always lost.

How does MAGIX Rescue Your Videotapes handle that?
Does it automatically crop, or could one have black borders added left/right (= “pillarbox”) instead?
The latter would at least preserve the image without loss for archiving, in case one later wants to convert to a 16:9 aspect ratio for viewing (e.g. Blu-Ray/HD).

I don’t even dare to ask how fields (half-images of a frame) and/or deinterlacing are handled…

MXV:
Just a short remark regarding the “MXV” format for video:
In my work at the national video archive, we encounter a wide variety of video formats as source material. Until today, MXV was unknown to me.

During my research, I wasn’t able to find technical details about it, except for these:

  • It’s a MAGIX-internal format. Probably a container or project format
  • There are probably no tools (except for MAGIX’) that can open/convert it
  • It seems to store video in lossy-only formats (MPEG-2)
  • Which audio format it uses is completely unclear. PCM? MP3? WMA? MXA?

If you have stored your videos in this format, I suggest exporting/converting them to an open format as soon as possible. It is absolutely unclear whether (and with what) one will be able to open MXV in the future at all.
Unfortunately, it is not impossible that quality will be lost during that conversion (due to additional lossy compression during export).

Please don’t send complaints to me, but to MAGIX ;)

A Long Voyage into Groupware

Paul Boddie's Free Software-related blog » English | 15:25, Wednesday, 26 August 2015

A while ago, I noted that I had moved on from attempting to contribute to Kolab and had started to explore other ways of providing groupware through existing infrastructure options. Once upon a time, I had hoped that I could contribute to Kolab on the basis of things I mostly knew about, whilst being able to rely on the solution itself (and those who made it) to take care of the things I didn’t really know very much about.

But that just didn’t work out: I ultimately had to confront issues of reliably configuring Postfix, Cyrus, 389 Directory Server, and a variety of other things. Of course, I would have preferred it if they had just worked so that I could have got on with doing more interesting things.

Now, I understand that in order to pitch a total solution for someone’s groupware needs, one has to integrate various things, and to simplify that integration and to lower the accompanying support burden, it can help to make various choices on behalf of potential users. After all, if they don’t have a strong opinion about what kind of mail storage solution they should be using, or how their user database should be managed, it can save them from having to think about such things.

One certainly doesn’t want to tell potential users or customers that they first have to go off, read some “how to” documents, get some things working, and then come back and try and figure out how to integrate everything. If they were comfortable with all that, maybe they would have done it all already.

And one can also argue about whether Kolab augments and re-uses or merely replaces existing infrastructure. If the recommendation is that upon adopting Kolab, one replaces an existing Postfix installation with one that Kolab provides in one form or another, then maybe it is possible to re-use the infrastructure that is already there.

It is harder to make that case if one is already using something else like Exim, however, because Kolab doesn’t support Exim. Then, there is the matter of how those components are used in a comprehensive groupware solution. Despite people’s existing experiences with those components, it can quickly become a case of replacing the known with the unknown: configuring them to identify users of the system in a different way, or to store messages in a different fashion, and so on.

Incremental Investments

I don’t have such prior infrastructure investments, of course. And setting up an environment to experiment with such things didn’t involve replacing anything. But it is still worthwhile considering what kind of incremental changes are required to provide groupware capabilities to an existing e-mail infrastructure. After all, many of the concerns involved are orthogonal:

  • Where the mail gets stored has little to do with how recipients are identified
  • How recipients are identified has little to do with how the mail is sent and received
  • How recipients actually view their messages and calendars has little to do with any of the above

Where components must talk to one another, we have the benefit of standards and standardised protocols and interfaces. And we also have a choice amongst these things as well.

So, what if someone has a mail server delivering messages to local users with traditional Unix mailboxes? Does it make sense for them to schedule events and appointments via e-mail? Must they migrate to another mail storage solution? Do they have to start using LDAP to identify each other?

Ideally, such potential users should be able to retain most of their configuration investments, adding the minimum necessary to gain the new functionality, which in this case would merely be the ability to communicate and coordinate event information. Never mind the idea that potential users would be “better off” adopting LDAP to do so, or whichever other peripheral technology seems attractive for some kinds of users, because it is “good practice” or “good experience for the enterprise world” and that they “might as well do it now”.

The difference between an easily-approachable solution and one where people give you a long list of chores to do first (involving things that are supposedly good for you) is more or less equivalent to the difference between you trying out a groupware solution or just not bothering with groupware features at all. So, it really does make sense as someone providing a solution to make things as easy as possible for people to get started, instead of effectively turning them away at the door.

Some Distractions

But first, let us address some of the distractions that usually enter the picture. A while ago, I had the displeasure of being confronted with the notion of “integrated e-mail and calendar” solutions, and it turned out that such terminology is coined as a form of euphemism for adopting proprietary, vendor-controlled products that offer some kind of lifestyle validation for people with relatively limited imagination or experience: another aspirational possession to acquire, and with it the gradual corruption of organisations with products that shun interoperability and ultimately limit flexibility and choice.

When standards-based calendaring has always involved e-mail, such talk of “integrated calendars” can most charitably be regarded as a clumsy way of asking for something else, namely an e-mail client that also shows calendars, and in the above case, the one that various people already happened to be using that they wanted to impose on everyone else as well. But the historical reality of the integration of calendars and e-mail has always involved e-mails inviting people to events, and those people receiving and responding to those invitation e-mails. That is all there is to it!

But in recent times, the way in which people’s calendars are managed and the way in which notifications about events are produced has come to involve “a server”. Now, some people believe that using a calendar must involve “a server” and that organising events must also involve doing so “on the server”, and that if one is going to start inviting people to things then they must also be present “on the same server”, but it is clear from the standards documents that “a server” was never a prerequisite for anything: they define mechanisms for scheduling based on peer-to-peer interactions through some unspecified medium, with e-mail as a specific medium defined in a standards document of its own.

Having “a server” is, of course, a convenient way for the big proprietary software vendors to sell their “big server” software, particularly if it encourages the customer to corrupt various other organisations with which they do business, but let us ignore that particular area of misbehaviour and consider the technical and organisational justifications for “the server”. And here, “server” does not mean a mail server, with all the asynchronous exchanges of information that the mail system brings with it: it is the Web server, at least in the standards-adhering realm, that is usually the kind of server being proposed.

Computer components

The Superfluous Server

Given that you can send and receive messages containing events and other calendar-related things, and given that you can organise attendance of events over e-mail, what would the benefit of another kind of server be, exactly? Well, given that you might store your messages on a server supporting POP or IMAP (in the standards-employing realm, that is), one argument is that you might need somewhere to store your calendar in a similar way.

But aside from the need for messages to be queued somewhere while they await delivery, there is no requirement for messages to stay on the server. Indeed, POP server usage typically involves downloading messages rather than leaving them on the server. Similarly, one could store and access calendar information locally rather than having to go and ask a server about event details all the time. Various programs have supported such things for a long time.

Another argument for a server involves it having the job of maintaining a central repository of calendar and event details, where the “global knowledge” it has about everybody’s schedules can be used for maximum effect. So, if someone is planning a meeting and wants to know when the potential participants are available, they can instantly look such availability information up and decide on a time that is likely to be acceptable to everyone.

Now, in principle, this latter idea of being able to get everybody’s availability information quickly is rather compelling. But although a central repository of calendars could provide such information, it does not necessarily mean that a centralised calendar server is a prerequisite for convenient access to availability data. Indeed, the exchange of such data – referred to as “free/busy” data in the various standards – was defined for e-mail (and in general) at the end of the last century, although e-mail clients that can handle calendar data typically neglect to support such data exchanges, perhaps because it can be argued that e-mail might not obtain availability information quickly enough for someone impatiently scheduling a meeting.

But then again, the routine sharing of such information over e-mail, plus the caching of such data once received, would eliminate most legitimate concerns about being able to get at it quickly enough. And at least this mechanism would facilitate the sharing of such data between organisations, whereas people relying on different servers for such services might not be able to ask each other’s servers about such things (unless they have first implemented exotic and demanding mechanisms to do so). Even if a quick-to-answer service provided by, say, a Web server is desirable, there is nothing to stop e-mail programs from publishing availability details directly to the server over the Web and downloading them over the Web. Indeed, this has been done in certain calendar-capable e-mail clients for some time, too, and we will return to this later.

And so, this brings us to perhaps the real reason why some people regard a server as attractive: to have all the data residing in one place where it can potentially be inspected by people in an organisation who feel that they need to know what everyone is doing. Of course, there might be other benefits: backing up the data would involve accessing one location in the system architecture instead of potentially many different places, and thus it might avoid the need for a more thorough backup regime (that organisations might actually have, anyway). But the temptation to look and even change people’s schedules directly – invite them all to a mandatory meeting without asking, for example – is too great for some kinds of leadership.

With few truly-compelling reasons for a centralised server approach, it is interesting to see that many Free Software calendar solutions do actually use the server-centric CalDAV standard. Maybe it is just the way of the world that Web-centric solutions proliferate, requiring additional standardisation to cover situations already standardised in general and for e-mail specifically. There are also solutions, Free Software and otherwise, that may or may not provide CalDAV support but which depend on calendar details being stored in IMAP-accessible mail stores: Kolab does this, but also provides a CalDAV front-end, largely for the benefit of mobile and third-party clients.

Decentralisation on Demand

Ignoring, then, the supposed requirement of a central all-knowing server, and just going along with the e-mail infrastructure we already have, we do actually have the basis for a usable calendar environment already, more or less:

  • People can share events with each other (using iCalendar)
  • People can schedule events amongst themselves (using iTIP, specifically iMIP)
  • People can find out each other’s availability to make the scheduling more efficient (preferably using iTIP but also in other ways)

Doing it this way also allows users to opt out of any centralised mechanisms – even if only provided for convenience – that are coordinating calendar-related activities in any given organisation. If someone wants to manage their own calendar locally and not have anything in the infrastructure trying to help them, they should be able to take that route. Of course, this requires them to have capable-enough software to handle calendar data, which can be something of an issue.
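
To make the first two items in the list above a little more concrete, here is a rough sketch – using only Python’s standard library, with made-up addresses, identifiers and times – of how an invitation can travel as an iMIP message: an iCalendar VEVENT inside a text/calendar MIME part whose “method” parameter marks it as a scheduling request. Real clients take care of details such as time zones and genuinely unique identifiers, so treat this purely as an illustration of the shape of such a message.

    from email.message import EmailMessage

    # One VEVENT wrapped as an iTIP REQUEST; addresses and UID are invented.
    ICS_LINES = [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//Example//Sketch//EN",
        "METHOD:REQUEST",
        "BEGIN:VEVENT",
        "UID:20150830T120000-0001@example.org",
        "DTSTAMP:20150830T120000Z",
        "DTSTART:20150901T090000Z",
        "DTEND:20150901T100000Z",
        "SUMMARY:Project meeting",
        "ORGANIZER:mailto:alice@example.org",
        "ATTENDEE;PARTSTAT=NEEDS-ACTION;RSVP=TRUE:mailto:bob@example.org",
        "END:VEVENT",
        "END:VCALENDAR",
    ]

    message = EmailMessage()
    message["From"] = "alice@example.org"
    message["To"] = "bob@example.org"
    message["Subject"] = "Invitation: Project meeting"
    message.set_content("Please see the attached invitation.")
    # The method parameter on the text/calendar part is what tells a
    # calendar-aware client that this is a scheduling request (iMIP).
    message.add_attachment("\r\n".join(ICS_LINES) + "\r\n",
                           subtype="calendar",
                           params={"method": "REQUEST"},
                           disposition="inline")

    # smtplib.SMTP("localhost").send_message(message) would then hand the
    # message over to the local mail system for delivery.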

That Availability Problem Mentioned Earlier

For instance, finding an e-mail program that knows how to send requests for free/busy information is a challenge, even though there are programs (possibly augmented with add-ons) that can send and understand events and other kinds of objects. In such circumstances, workarounds are required: one that I have implemented for the Lightning add-on for Thunderbird (or the Iceowl add-on for Icedove, if you use Debian) fetches free/busy details from a Web site, and it is also able to look up the necessary location of those details using LDAP. So, the resulting workflow looks like this:

  1. Create or open an event.
  2. Add someone to the list of people invited to that event.
  3. View that person’s availability.
  4. Lightning now uses the LDAP database to discover the free/busy URL.
  5. It then visits the free/busy URL and obtains the data.
  6. Finally, it presents the data in the availability panel.

Without LDAP, the free/busy URL could be obtained from a vCard property instead. In case you’re wondering, all of this is actually standardised or at least formalised to the level of a standard (for LDAP and for vCard).
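
For the curious, the client side of this workaround is little more than a Web fetch plus some light parsing. Here is a rough, standard-library-only Python sketch – with an invented URL – of what steps 5 and 6 amount to once the free/busy URL has been discovered via LDAP or a vCard property: fetch the published data and pull out the busy periods.

    import urllib.request

    def busy_periods(freebusy_url):
        """Fetch published free/busy data and return (start, end) pairs."""
        with urllib.request.urlopen(freebusy_url) as response:
            data = response.read().decode("utf-8", errors="replace")
        # Unfold continuation lines (a line starting with a space continues
        # the previous property) before looking for FREEBUSY properties.
        unfolded = data.replace("\r\n ", "").replace("\n ", "")
        periods = []
        for line in unfolded.splitlines():
            if line.startswith("FREEBUSY"):
                # e.g. FREEBUSY;FBTYPE=BUSY:20150901T090000Z/20150901T100000Z
                for period in line.split(":", 1)[1].split(","):
                    start, end = period.split("/")
                    periods.append((start, end))
        return periods

    # The URL here is made up; in practice it would come from the LDAP
    # lookup or from the person's vCard.
    print(busy_periods("https://www.example.org/freebusy/someone.ifb"))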

If only I had the patience, I would add support for free/busy message exchange to Thunderbird, just as RFC 6047 would have had us do all along, and then the workflow would look like this:

  1. Create or open an event.
  2. Add someone to the list of people invited to that event.
  3. View that person’s availability.
  4. Lightning now uses the cached free/busy data already received via e-mail for the person, or it could send an e-mail to request it.
  5. It now presents any cached data. If it had to send a request, maybe a response is returned while the dialogue is still open.

Some people might argue that this is simply inadequate for “real-world needs”, but they forget that various calendar clients are likely to ask for availability data from some nominated server in an asynchronous fashion, anyway. That’s how a lot of this software is designed these days – Thunderbird uses callbacks everywhere – and there is no guarantee that a response will be instant.

Moreover, a request over e-mail to a recipient within the same organisation, which is where one might expect to get someone’s free/busy details almost instantly, could be serviced relatively quickly by an automated mechanism providing such information for those who are comfortable with it doing so. We will look at such automated mechanisms in a moment.

So, there are plenty of acceptable solutions that use different grades of decentralisation without needing to resort to that “big server” approach, if only to help out clients which don’t have the features one would ideally want to use. And there are ways of making the mail infrastructure smarter as well, not just to provide workarounds for clients, but also to provide genuinely useful functionality.

Public holidays

Agents and Automation

Groupware solutions offer more than just a simple means for people to organise things with each other: they also offer the means to coordinate access to organisational resources. Traditionally, resources such as meeting rooms, but potentially anything that could be borrowed or allocated, would have access administered using sign-up sheets and other simple mechanisms, possibly overseen by someone in a secretarial role. Such work can now be done automatically, and if we’re going to do everything via e-mail, the natural point of integrating such work is also within the mail system.

This is, in fact, one of the things that got me interested in Kolab to begin with. Once upon a time, back at the end of my degree studies, my final project concerned mobile software agents: code that was transmitted by e-mail to run once received (in a safe fashion) in order to perform some task. Although we aren’t dealing with mobile code here, the notion still applies that an e-mail is sent to an address in order for a task to be performed by the recipient. Instead of some code sent in the message performing the task, it is the code already deployed and acting as the recipient that determines the effect of the transaction by using the event information given in the message and combining it with the schedule information it already has.

Such agents, acting in response to messages sent to particular e-mail addresses, need knowledge about schedules and policies, but once again it is interesting to consider how much information and how many infrastructure dependencies they really need within a particular environment:

  • Agents can be recipients of e-mail, waiting for booking requests
  • Agents will respond to requests over e-mail
  • Agents can manage their own schedule and availability
  • Other aspects of their operation might require some integration with systems having some organisational knowledge

In other words, we could implement such agents as message handlers operating within the mail infrastructure. Can this be done conveniently? Of course: things like mail filtering happen routinely these days, and many kinds of systems take advantage of such mechanisms so that they can be notified by incoming messages over e-mail. Can this be done in a fairly general way? Certainly: despite the existence of fancy mechanisms involving daemons and sockets, it appears that mail transport agents (MTAs) like Postfix and Exim tend to support the invocation of programs as the least demanding way of handling incoming mail.
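
As a sketch of how little is needed, the following Python skeleton could be wired up as such a handler: a Postfix or sendmail-style alias such as room101: "|/usr/local/bin/booking-agent" (or an equivalent Exim pipe transport) would hand each incoming message to the program on its standard input, and the program merely has to find the text/calendar part. The address and path are, of course, invented for the purpose of illustration.

    #!/usr/bin/env python3
    # Skeleton of a mail handler invoked by the MTA: the raw message arrives
    # on standard input, and any text/calendar part is picked out for the
    # scheduling logic to act upon. Error handling is omitted.

    import sys
    from email import policy
    from email.parser import BytesParser

    def main():
        message = BytesParser(policy=policy.default).parse(sys.stdin.buffer)
        for part in message.walk():
            if part.get_content_type() == "text/calendar":
                itip_method = part.get_param("method", "")  # REQUEST, REPLY, ...
                calendar_text = part.get_content()
                print("Received iTIP %s from %s" % (itip_method, message["From"]),
                      file=sys.stderr)
                # ...hand calendar_text over to the scheduling logic here...

    if __name__ == "__main__":
        main()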

The Missing Pieces

So what specifically is needed to provide calendaring features for e-mail users in an incremental and relatively non-invasive way? If everyone is using e-mail software that understands calendar information and can perform scheduling, the only remaining obstacles are the provision of free/busy data and, for those who need it, the provision of agents for automated scheduling of resources and other typically-inanimate things.

Since those agents are interesting (at least to me), and since they may be implemented as e-mail handler programs, let us first look at what they would do. A conversation with an agent listening to mail on an address representing a resource would work like this (ignoring sanity checks and the potential for mischief):

  1. Someone sends a request to an address to book a resource, whereupon the agent provided by a handler program examines the incoming message.
  2. The agent figures out which periods of time are involved.
  3. The agent then checks its schedule to see if the requested periods are free for the resource.
  4. Where the periods can be booked, the agent replies indicating the “attendance” of the resource (that it reserves the resource). Otherwise, the agent replies “declining” the invitation (and does not reserve the resource).
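
The decision in steps 3 and 4 is, at heart, just an overlap test against the resource’s existing schedule. Here is a rough sketch of what such an agent might do once a handler like the one shown earlier has extracted the requested period and the organiser’s address; the schedule is kept in memory purely for illustration, whereas a real agent would persist it and send a proper iTIP REPLY via the local mail system.

    from datetime import datetime

    RESOURCE = "room101@example.org"   # the agent's own (invented) address

    # Existing reservations for the resource, as (start, end) pairs.
    booked = [
        (datetime(2015, 9, 1, 9, 0), datetime(2015, 9, 1, 10, 0)),
    ]

    def overlaps(start, end):
        return any(start < b_end and end > b_start for b_start, b_end in booked)

    def handle_booking_request(start, end, organiser):
        if overlaps(start, end):
            partstat = "DECLINED"      # conflict: do not reserve the resource
        else:
            partstat = "ACCEPTED"      # free: reserve the resource
            booked.append((start, end))
        # The reply to the organiser would be an iTIP REPLY in which the
        # resource appears as an ATTENDEE carrying this participation status.
        attendee_line = "ATTENDEE;PARTSTAT=%s:mailto:%s" % (partstat, RESOURCE)
        return organiser, attendee_line

    print(handle_booking_request(datetime(2015, 9, 1, 10, 0),
                                 datetime(2015, 9, 1, 11, 0),
                                 "alice@example.org"))
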

With the agent needing to maintain a schedule for a resource, it is relatively straightforward for that information to be published in another form as free/busy data. It could be done through the sending of e-mail messages, but it could also be done by putting the data in a location served by a Web server. And so, details of the resource’s availability could be retrieved over the Web by e-mail programs that elect to retrieve such information in that way.

But what about the people who are invited to events? If their mail software cannot prepare free/busy descriptions and send such information to other people, how might their availability be determined? Well, using similar techniques to those employed during the above conversation, we can seek to deduce the necessary information by providing a handler program that examines outgoing messages:

  1. Someone sends a request to schedule an event.
  2. The request is sent to its recipients. Meanwhile, it is inspected by a handler program that determines the periods of time involved and the sender’s involvement in those periods.
  3. If the sender is attending the event, the program notes the sender’s “busy” status for the affected periods.

Similarly, when a person responds to a request, they will indicate their attendance and thus their “busy” status for the affected periods. By inspecting the outgoing response, the handler gets to know whether the person is going to be busy during those periods, and it is therefore in a position to publish this data, either over e-mail or on the Web.
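
What the handler records, and what it later publishes, can be very modest indeed. A rough sketch, with invented file locations: each time an outgoing invitation or acceptance is seen, the affected period is noted as busy and a small VFREEBUSY document is regenerated in a place from which a Web server (or an e-mail auto-responder) could serve it.

    import json

    BUSY_DB = "/var/lib/freebusy/alice.json"     # invented locations, purely
    PUBLISHED = "/var/www/freebusy/alice.ifb"    # for the sake of the sketch

    def note_busy(start_utc, end_utc):
        try:
            with open(BUSY_DB) as f:
                periods = json.load(f)
        except FileNotFoundError:
            periods = []
        periods.append([start_utc, end_utc])
        with open(BUSY_DB, "w") as f:
            json.dump(periods, f)
        publish(periods)

    def publish(periods):
        lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "METHOD:PUBLISH",
                 "BEGIN:VFREEBUSY", "ORGANIZER:mailto:alice@example.org"]
        lines += ["FREEBUSY;FBTYPE=BUSY:%s/%s" % (s, e) for s, e in periods]
        lines += ["END:VFREEBUSY", "END:VCALENDAR"]
        with open(PUBLISHED, "w") as f:
            f.write("\r\n".join(lines) + "\r\n")

    note_busy("20150901T090000Z", "20150901T100000Z")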

Mail handler programs can be used to act upon messages received by individuals, too, just as is done for resources, and so a handler could automatically respond to e-mail requests for a person’s free/busy details (if that person chose to allow this). Such programs could even help support separate calendar management interfaces for those people whose mail software doesn’t understand anything at all about calendars and events.

Lifting materials for rooftop building activities

Building on Top

So, by just adding a few handler programs to a mail system, we can support the following:

  • Free/busy publishing and sharing for people whose software doesn’t support it already
  • Autonomous agents managing resource availability
  • Calendar interfaces for people without calendar-aware mail programs

Beyond some existing part of the mail system deciding who can receive mail and telling these programs about it, they do not need to care about how an organisation’s user information is managed. And through standardised interfaces, these programs can send messages off for storage without knowing what kind of system is involved in performing that task.

With such an approach, one can dip one’s toe into the ocean of possibilities and gradually paddle into deeper waters, instead of having to commit to the triathlon that multi-system configuration can often turn out to be. There will always be configuration issues, and help will inevitably be required to deal with them, but they will hopefully not be bound up in one big package that leads to the cry for help of “my groupware server doesn’t work any more, what has changed since I last configured it?” that one risks with solutions that try to solve every problem – real or imagined – all at the same time.

I don’t think things like calendaring should have to be introduced with a big fanfare, lots of change, a new “big box” product introducing different system components, and a stiff dose of upheaval for administrators and end-users. So, I intend to make my work available around this approach to groupware, not because I necessarily think it is superior to anything else, but because Free Software development should involve having a conversation about the different kinds of solutions that might meet user needs: a conversation that I suspect hasn’t really been had, or which ended in jeering about e-mail being a dead technology, replaced by more fashionable “social” or “responsive” technologies; a bizarre conclusion indeed when you aren’t even able to get an account with most fancy social networking services without an e-mail address.

It is possible that no-one but myself sees any merit in the strategy I have described, but maybe certain elements might prove useful or educational to others interested in similar things. And perhaps groupware will be less mysterious and more mundane as a result: not something that is exclusive to fancy cloud services, but something that is just another tiny part of the software available through each user’s and each organisation’s chosen operating system distribution, doing the hard work and not making a fuss about it. Which, like e-mail, is what it probably should have been all along.

Monday, 24 August 2015

Kolab Now: Learn, live, adapt in production

freedom bits | 08:13, Monday, 24 August 2015

Kolab Now was first launched in January 2013 and we were anxious to find out: if someone offered a public cloud service that put people’s privacy and security first – a service that would not just re-sell someone else’s platform with some added marketing, but did things right – would there be a demand for it? Would people choose to pay with money instead of with their privacy and data? These past two and a half years have provided a very clear answer: demand for a secure and private collaboration platform has grown in ways we could only have hoped for.

To stay ahead of demand we have undertaken a significant upgrade to our hosted solution that will allow us to provide reliable service to our community of users both today and in the years to come. This is the most significant set of changes we’ve ever made to the service, and it has been months in the making. We are very excited to unveil these improvements to the world as we complete the roll-out in the coming weeks.

From a revamped and simplified sign-up process to a more robust directory service design, the improvements will be visible to new and existing users alike. Everyone can look forward to a significantly more robust and reliable service, along with faster turnaround times on technical issues. We have even managed to add some long-sought improvements many of you have been asking for.

The road travelled

Assumptions are the root of all evil. Yet in the absence of knowledge of the future, sometimes informed assumptions need to be made. And sometimes the world just changes. It was February 2013 when MyKolab was launched into public beta.

Our expectation was that a public cloud service oriented towards full business collaboration and focused on privacy and security would primarily attract small and medium enterprises of between 10 and 200 users, while others would largely elect to use the available standard domains. So we expected most domains to be in the realm of 30 users, plus a handful of very large ones.

That had implications for the way the directory service was set up.

In order to provide the strongest possible insulation between tenants, each domain would exist in its own zone within the directory service. You can think of this as dedicated installations on shared infrastructure instead of the single-domain public clouds that are the default in most cases. Or, to use a slightly less technical analogy, it is the difference between terraced houses and apartments in a large apartment block.

So we expected some moderate growth, for which we planned to deploy some older hardware that would provide adequate redundancy and resources, and serve as a steady show-case of how to deploy Kolab for the needs of Application and Internet Service Providers (ASP/ISP).

Literally on the very day we carried that hardware into the data centre, Edward Snowden and his revelations became visible to the world. It is a common quip that assumptions and strategies usually do not outlive their contact with reality. Ours did not even make it that far.

After nice, steady growth during the early months, MyKolab.com took us on a wild ride.

Our operations team managed to work miracles with the old hardware, in ways that often made me think this would be interesting learning material for future administrators. But efficiency only gets you so far.

Within a couple of months, however, we ended up replacing it in its entirety. For the most part, all of this happened without disruption to the production systems: new hardware was installed, services switched over, old hardware removed, and our team also managed to add a couple of urgently sought features to Kolab and deploy them onto MyKolab.com as well.

What we did not manage to make time for was re-working the directory service in order to adjust some of the underlying assumptions to reality. In particular, the number of domains in relation to the number of users ended up dramatically different from what we initially expected. The result is a situation where the directory service has become the bottleneck for the entire installation – with a complete restart easily taking in the realm of 45 minutes.

In addition, that degree of separation translated into more restrictions on sharing data with other users, sometimes to the extent that users felt this was the lack of a feature rather than a feature in and of itself.

Re-designing the directory service, however, carries implications for the entire service structure, including the user self-administration software and much more. And you want to be able to deploy this within a reasonable time interval and ensure the service comes back up better than before for all users.

On the highway to future improvements

So there is the re-design, the adaptation of all components, the testing, the migration planning, the migration testing and ultimately also the actual roll-out of the changes. That’s a lot of work. Most of which has been done by this point in time.

The last remaining piece of the puzzle was to increase hardware capacity in order to ensure there is enough reserve to build up an entire new installation next to existing production systems, and then switch over, confirm successful switching, and then ultimately retire the old setup.

That hardware was installed last week.

So now the roll-out process will go through the stages and likely complete some time in September. That’s also the time when we can finally start adding some features we’ve been holding back to ensure we can re-adjust our assumptions to the realities we encountered.

For all users of Kolab Now that means you can look forward to a much improved service resilience and robustness, along with even faster turnaround times on technical issues, and an autumn of added features, including some long-sought improvements many of you have been asking for.

Stay tuned.

Saturday, 22 August 2015

Can NGOs benefit more from Innovation and Technology?

Think. Innovation. » Blog | 09:44, Saturday, 22 August 2015

Recently we applied for UNICEF’s Wearables for Good Challenge and I read some interesting pieces on NGOs and innovation. Since these gave me the impression that NGOs could benefit more from innovation and new technologies, I decided to offer my expertise and part of my time to help NGOs innovate, or at least to assist in evaluating if innovation and technology could benefit them.

Below is the open application I am sending to a number of NGOs that are (also) operating in The Netherlands. I am curious to find out if it will stick and if I get any response. In case you are also working in the innovation field and interested in offering help to NGOs, feel free to use my application as a template or contact me to see how we can join forces.


Dear Sir, Madam,

How could [ORGANIZATION NAME] benefit more from established innovation techniques and technologies such as Open Innovation, Open Data, Internet of Things and Crowdfunding?

My name is Diderik van Wingerden and as a consultant I help my corporate clients innovate. I am also an entrepreneur and with our company we develop innovative products which we release under an ‘open source’ license.

The reason you are receiving this e-mail from me is that I am offering my expertise and part of my time as a volunteer for your organization, with the intent of assisting in an innovative initiative.

If my offer is of interest to you, then please let me know and we could set up a telephone conversation or meeting to exchange ideas.

This recent article in the Huffington Post, our participation in the UNICEF Wearables for Good Challenge and UNICEF’s Do It Yourself Guide for Innovation Labs inspired me to send you this e-mail.

For more information about me, see my LinkedIn Profile and Website.

Besides your organization these organizations also received my offer: UNICEF NL, Oxfam Novib, Cordaid, Amnesty International, Greenpeace, Hivos and PAX. This initiative is also published on my website (LINK) and shared on social media.

Warm regards, hartelijke groet,

Diderik van Wingerden
+31621639148
http://www.think-innovation.com/

“Do what is right.”


 

photo credit: _D3S9849 via photopin (license)

Saturday, 15 August 2015

Getting hands-on with FireFox OS and Geeksphone

Think. Innovation. » Blog | 09:09, Saturday, 15 August 2015

By: Diderik van Wingerden and Mirjam Bekker

While we are developing our own Open Products we also always keep an eye out for cool Open Products that are out there. And once in a while we like to get our hands on one of these products, test it and learn what we can.

A few weeks ago we thought it was time to finally get our hands on one of the most “open” smartphones currently available, one running Mozilla FireFox OS, and see what that is all about. Sure, we ran into the Mozilla guys a couple of times, talking to them about the phones they brought, but we never actually took the opportunity to check it out thoroughly.

In short, FireFox OS is Mozilla’s “Free and Open Source” Operating System (OS) for smartphones. The community version of FireFox OS is called Boot2Gecko, but for simplicity, and since everyone calls it that anyway, we will refer to both as “FireFox OS”.

By the way: you may think that Google’s Android OS is also “Free and Open Source”. Well, even if technically this is the case, Android is looking more like Openwashing.

Is FireFox OS fit for everyday use?

So the question we were asking ourselves was:

“How far has FireFox OS come being a practical alternative to Android and iOS for everyday use?”

To answer this question, we needed to get our hands on a smartphone running FireFox OS. Unfortunately there is not too much choice there (yet), and we quickly decided on the Revolution smartphone by Geeksphone: a European startup that has received quite some media attention, is all about “leading the mobile revolution” and has the “passion […] to give you the choice”.

The remainder of this article contains all we learned from getting our hands on the Geeksphone Revolution, testing FireFox OS and doing some research into the Geeksphone initiative.

In short: keep an eye on FireFox OS for now and forget about Geeksphone

We are jumping to conclusions now. Well, not really: you can read all our findings and arguments, but for those of you only interested in the main message, we put that right here:

FireFox OS is not yet a realistic alternative to Android or iOS, at least from our perspective and probably from most of yours as well. The reason is simple: too many of our favorite apps are missing from the Mozilla Marketplace. Being able to add bookmarks of websites to the home screen is a nice fall-back feature, but it does not solve that.

However, the idea of using a smartphone with a real free and open source community-supported OS is just so cool that we are keeping an eye on developments and hope that in the near future FireFox OS can be a practical alternative. Hopefully with an Open Hardware phone to go with it!

Surprisingly, as we were testing the Revolution phone, Geeksphone published a press release stating that they would cease to produce phones. Since we found that software support for the Revolution was already falling short, that Geeksphone has made community-made software upgrades impossible, and that the founders are in fact walking away from their start-up, we say: forget about buying the Revolution and remember Geeksphone as having been a really sympathetic startup with the best of intentions, striving for a good cause. Learn from it and try again.

Geeksphone: “Join us leading the mobile revolution”

Geeksphone is a Spanish startup founded in 2009 by Javier Agüera and Rodrigo Silva-Ramos. The promise of Geeksphone was to offer a smartphone for a “mobile revolution”: a smartphone that gives users the option to switch seamlessly between Operating Systems of their choice.

To that end, Geeksphone has specialized in the development and promotion of smartphones that run Open Source Operating Systems, shipping devices with both Android and FireFox OS.

In the six years that Geeksphone has operated, the company has been able to bring five different smartphones to the market. This in itself is an incredible achievement! We have not been able to figure out exactly how many phones Geeksphone has sold: some forum members talk about numbers below 10,000, while the recent press release mentions “several thousands”. That is not a whole lot compared to, for example, the independent phone manufacturer FairPhone, which says it sold 60,000 units of the FairPhone 1 and today has already pre-sold 7,500, with a goal of 15,000.

We have not been able to figure out if Geeksphone designed the phones themselves, or if they licensed existing phone designs. Our guess is they licensed existing ones, looking at the speed and number of products released.

As for current progress: it looks like Geeksphone is going to be a dead end. The company recently published a press release stating that it would stop developing new phone models. Also, the founders are off to their next challenges. The press release does say that support for current phones will continue. However, we can read in the forums that support has not been that great for a while. For example, even though FireFox OS 2.2 has been out for quite some time, it is not available on the Revolution. Also, Mozilla does not officially support the Geeksphone Revolution and recommends against getting the phone.

Furthermore, the company has locked the device in such a way that it is not possible for the community to take on the task by itself. Geeksphone states that it has “done all it can” by releasing all materials it is legally allowed to given their contracts, but it looks like that is not going to be good enough.

Reading the comments on the Geeksphone forum, there is a small but active group of Revolution owners who understandably are not happy about the company’s recent behavior and are constructively trying to find a way for the community to take over. We sincerely hope they can, and we will follow the conversation.

Getting hands-on with the Revolution phone and FireFox OS

Let’s continue with some actual product reviewing! What can we learn from the way Geeksphone put the Revolution out there, and from the FireFox OS running on it?

First of all, the website with the Revolution information, photos and specifications looks just great! Exactly as expected for a consumer oriented mass market product: simple design, sleek, beautiful photos and an easy on-line purchasing process.

The on-line purchase set us back EUR 141.25 including shipping. This is far less than the original price of the phone and even though the specs are a bit outdated, it seems like a more than reasonable price given what you get. We ordered the Revolution on a Sunday and it arrived on Tuesday in the Netherlands. That was surprisingly fast and already made an excellent start.

The unboxing experience of the Geeksphone is very “Appley” and therefore amazing: a sturdy white box, everything neatly organized inside, a minimalistic and “less is more” approach. Also, even though tastes vary, we think the phone just looks slick! It has a slim design, with a black front and a white back.

The Revolution comes with Android pre-installed and it is very easy to switch to FireFox OS, by selecting an option in the settings menu. We did that immediately and then it takes about 30 minutes to let the process finish. We have not tried switching back to Android, but understand that is not so easy and involves installing Google’s Android tools and doing command line stuff.

The phone: solid hardware, some problematic details

At first sight, the phone’s hardware is just great. For the price you get a really nice-looking, solid-feeling phone that is rather thin and has a large screen. The hardware buttons feel good and the screen shows everything sharp and bright, albeit not at the resolution of more modern phones.

However, we did encounter some issues: it proved rather hard to remove the battery cover, and we messed up our nails in the process, at some point thinking we might break the cover just as it snapped open. Swiping the screen also has a bit of a weird feeling to it. It is difficult to explain, but it is as if the surface is not soft or smooth enough, which gives a peculiar experience. Or maybe the surface just needs to become more greasy over time ;).

Furthermore, the touch screen tended to ‘forget’ swipe movements, so that you could move your finger over the screen without anything actually happening. At other times it would register unintended taps and open links on a website. These may seem like small issues, but they make the whole touchscreen experience rather frustrating.

Also, every once in a while the phone just froze. It was then still possible to turn the screen on and off with the power button, but no touch action would register. Resetting the phone, which was possible with the power button, was the only solution. And we experienced ‘drag’ when scrolling through web pages. Perhaps these two issues are not hardware related, but have something to do with FireFox OS.

FireFox OS: too many shortcomings and glitches in version 2.0

When starting FireFox OS for the first time you get a setup wizard much like you get with Android. It is straightforward and simple, just as it should be. The home screen looks attractive and playful, with many colors and large icons.

After we had looked around the OS for a few minutes, it presented us with an update for the Mozilla Marketplace. After the update the Marketplace only showed a blank screen and was not usable anymore. We had to do a factory reset to get it to work again, only for the problem to repeat itself, until we found that it is also possible to open the Marketplace website from the web browser and install apps from there. FireFox OS allows putting bookmarks on the home screen as if they were apps, so that was the solution for quickly accessing the Marketplace. However, it is not possible to remove the original, non-working Marketplace app icon.

And not being able to remove some apps/icons from the home screen is part of a more generic shortcoming: you do not have a lot of freedom in arranging the home screen. Basically you can just change the order of icons and create new icon groups, but it is not possible to add additional home screens or to remove the icons of apps you are not using.

Browsing around the Marketplace, we were missing many apps that we use every day, like OpenStreetMap, Whatsapp, Facebook, Evernote, TextSecure and the apps of our Dutch banks. For Whatsapp two alternative clients exist; however, we noticed that these do not offer complete functionality, and messages often arrive much later or out of order. This is to be expected, as both apps are in a “beta” phase, but that did not help us.

To overcome some of the limitations of the missing apps, it is nice that FireFox OS allows creating bookmark buttons for websites on the home screen. For websites which have a good mobile version this works quite well: FireFox OS makes switching between websites and apps seamless and transparent. However, many websites are not well optimized for mobile, especially when they have good apps for iOS and Android, and this brings extra hurdles, for example with banks. Also, FireFox OS uses the favicon of the website, blown up as the app icon on the home screen, which looks rather ugly.

FireFox OS comes with a number of standard apps, obviously including a web browser and also an e-mail client. Using the e-mail client we found some glitches: after setting up e-mail accounts, it still displays the “set up new account” screen for a second when opening the app before going to the inbox. It also always opens the inbox of the last account you configured, not the first one or the one you had open the previous time.

We also noticed that during the day the data connection seems to drop from time to time: many messages come in at once after some hours, or no messages come in for a long time and then restarting the phone or toggling airplane mode results in a message flood.

Another thing we tested as we use it frequently on our normal phones is the sharing functionality. Especially Android is good at this: pretty much every app has the ‘share’ icon which brings up many options for sending content to other apps. We did not find this kind of sharing in FireFox OS. For example, it is not possible to share the link of an interesting web page from the browser to the e-mail or messaging app. As an alternative it is not even possible to copy the URL and paste it into an e-mail or message, as FireFox OS 2.0 does not have a copy/paste function!

Final words

You already read our conclusions earlier in this article, so we are not repeating them here. We would like to add that, even though our conclusions about FireFox OS and the Geeksphone Revolution are not positive regarding everyday use, the idea of a truly open and community-based Operating System for smartphones is awesome, and the work that all the people in the Mozilla community and at Geeksphone have been doing was and is revolutionary!

Which ‘open’ product should we put up for review next?

Friday, 14 August 2015

New Fairphone, New Features, Same Old Software Story?

Paul Boddie's Free Software-related blog » English | 23:41, Friday, 14 August 2015

I must admit that I haven’t been following Fairphone of late, so it was a surprise to see that vague details of the second Fairphone device have been published on the Fairphone Web site. One aspect that seems to be a substantial improvement is that of hardware modularity. Since the popularisation of the notion that such a device could be built by combining functional units as if they were simple building blocks, with a lot of concepts, renderings and position statements coming from a couple of advocacy initiatives, not much else has actually happened in terms of getting devices out for people to use and develop further. And there are people with experience of designing such end-user products who are sceptical about the robustness and economics of such open-ended modular solutions. To see illustrations of a solution that will presumably be manufactured takes the idea some way along the road to validation.

If it is possible to, say, switch out the general-purpose computing unit of the Fairphone with another one, then it can be said that even if the Fairphone initiative fails once again to deliver a software solution that is entirely Free Software, perhaps because the choice of hardware obliges the initiative to deliver opaque “binary-only” payloads, then the opportunity might be there for others to deliver a bottom-to-top free-and-open solution as a replacement component. But one might hope that it should not be necessary to “opt in” to getting a system whose sources can be obtained, rebuilt and redeployed: that the second Fairphone device might have such desirable characteristics out of the box.

Now, it does seem that Fairphone acknowledges the existence and the merits of Free Software, at least in very broad terms. Reading the support site provides us with an insight into the current situation with regard to software freedom and Fairphone:

Our goal is to take a more open source approach to be able to offer owners more choice and control over their phone’s OS. For example, we want to make the source code available to the developer community and we are also in discussions with other OS vendors to look at the possibility of offering alternative operating systems for the Fairphone 2. However, at the moment there are parts of the software that are owned or licensed by third parties, so we are still investigating the technical and legal requirements to accomplish our goals of open software.

First of all, ignoring vague terms like “open software” that are susceptible to “openwashing” (putting the label “open” on something that really isn’t), it should be noted that various parts of the deployed software will, through their licensing, oblige the Fairphone initiative to make the corresponding source code available. This is not a matter that can be waved away with excuses about people’s hands being tied, about it being difficult to coordinate, or whatever else the average GPL-violating vendor might come up with. If copyleft-licensed code ships, the sources must follow.

Now there may also be proprietary software on the device (or permissively-licensed software bearing no obligation for anyone to release the corresponding source, which virtually amounts to the same thing) and that would clearly be against software freedom and should be something Fairphone should strongly consider avoiding, because neither end-users nor anyone who may wish to help those users would have any control over such software, and they would be completely dependent on the vendor, who in turn would be completely dependent on their suppliers, who in turn might suddenly not care about the viability of that software or the devices on which it is deployed. So much for sustainability under such circumstances!

As I noted before, having control over the software is not a perk for those who wish to “geek out” over the internals of a product: it is a prerequisite for product viability, longevity and sustainability. Let us hope that Fairphone can not only learn and apply the lessons from their first device, which may indeed have occurred with the choice of a potentially supportable chipset this time around, but that the initiative can also understand and embrace their obligations to those who produced the bulk of their software (as well as to their customers) in a coherent and concrete fashion. It would be a shame if, once again, an unwillingness to focus on software led to another missed opportunity, and the need for another version of the device to be brought to market to remedy deficiencies in what is otherwise a well-considered enterprise.

Now, if only Fairphone could organise their Web site in a more coherent fashion, putting useful summaries of essential information in obvious places instead of leaving them buried in some random forum post…

A (or the) secret about the Randa Meetings

Mario Fux | 21:05, Friday, 14 August 2015

This year we hold the sixth edition of the Randa Meetings, and over the years some really important and far-reaching events (for KDE and the users of our software and products) have happened in the middle of the Swiss Alps.

One good example is a huge step towards, and a big foundation of, what is today known as the KDE Frameworks in their 5.x versions: a big collection of addons for Qt and its users (aka developers). Another event was the discussion with Qt Brisbane back in 2010 about the decision on how to continue with Phonon. As you can see today, it was a good decision, as our Phonon still exists and applications using it didn’t need to be ported to something else – while the Phonon in Qt is (afaik) deprecated. Yet another thing is the new design and ideas for the KDE education apps and their new logo, which you can see on the website edu.kde.org. And a last one to mention here (and I’m sure I forgot a lot of other important events and decisions) is a big part of the new energy put into one (if not the one) of the best non-linear video editors in the Free Software world: our Kdenlive.

So what’s the secret behind these Meetings that was teased in the title? Mind you, it all started six years ago when I organized the first edition of the Randa Meetings – back then not under this name, and unaware of the coming editions, which grew much in size and range. In 2009 I invited the Plasma crowd to come to Randa. In my family’s holiday house I would (and did) cook for them, gave them a place to sleep, some electricity and an internet connection, but most important of all a place to meet, be creative and prosper in work and ideas. It was a huge success to say the least, and people loved the family-like feeling.

Then the next year we needed a bigger place and it became a bit more professional (I didn’t cook myself anymore), and there was already a group interested in coming to Randa: Amarok and KDE Multimedia with Myriam and Markey. But there were some other groups, and here the secret starts: these groups didn’t really come to Randa because they needed a place to sprint and we offered it, but because I (or we?) thought it would be great to have a KDE edu sprint in 2010 as well, and thus that it made sense to push some more energy and ideas into the KDE edu group.

So is it about the fact that I decide or invite who should come to Randa? Yes, I think that’s a big part of the success of the Randa Meetings. For certain, there are still groups that ask if they could come to Randa, as it makes sense to participate and use the opportunity of a small, organized location and its sprints, but it’s about bringing the right (IMHO) groups to Randa and pushing some energy into them. Not directly by deciding what they should work on, but by offering them a creative and productive environment and letting them work on this for a whole week. I wouldn’t have time to really direct their development during this week (as I’m mostly too busy with organizational stuff and would really like to develop more myself), and it’s not really only up to me whom to invite to Randa (I always discuss my ideas beforehand with a lot of other people), but in the end it’s this thing that (IMHO) makes the Randa Meetings so successful and thus important for KDE itself.

And these Meetings are even more important than ever if you look at the decline of KDE Sprints on sprints.kde.org.

Oh, and you might now think: but hey, the developers, documentation writers, translators, artists etc. do the work in Randa in the end. And that’s of course right. Their great minds, ideas and hacking hands are what culminate in great art, documentation and software, and combined with the great place and the good organization we get a great end result. So the perfect combination is, in the end, the secret of the Randa Meetings.

So support us in doing more of these Meetings and other KDE Sprints by clicking the banner above and donating!

PS: I don’t want to say that only I can and should do this, but I do it currently, I like it and I think I do it quite well.


Ubuntu Phone and Unity vs Jolla and SailfishOS

Seravo | 07:28, Friday, 14 August 2015

With billions of devices produced, Android is by far the most common Linux-based mobile operating system to date. Of the less known competitors, Ubuntu phone and Jolla are the most interesting. Both are relatively new and neither one has quite yet all the features Android provides, but they do have some areas of innovation where they clearly lead Android.

Jolla phone and Ubuntu phone (Bq Aquaris 4.5 model)

Jolla is the name of the company behind SailfishOS. Their first device entered stores in the fall of 2013; since then SailfishOS has received many updates, and SailfishOS 2.0 is supposed to be released soon together with the new Jolla device. A review of the Jolla phone can be read in the Seravo blog article from 2013. Most of the Jolla staff are former Nokia employees with lots of experience from Maemo and Meego, which SailfishOS inherits a lot from.

Ubuntu phone is the name of the mobile operating system by Canonical, famous for the desktop and server operating system Ubuntu. The first Ubuntu phones entered stores in the winter of 2015. Even though Ubuntu, and also Ubuntu phone, have been developed for many years, they can still be considered runners-up in comparison to Jolla, because they have much less of the production usage experience that brings bug fixes and incremental improvements. A small review of the Ubuntu phone can also be read in the Seravo blog.

In comparison to Android, both of these have the following architectural benefits:

  • are based on full-stack Linux environments, which are much more generic and universal than Android’s somewhat limited flavour of Linux
  • utilize Qt and QML technologies to deliver a modern user experience with smooth and fast graphics, instead of a Java virtual machine environment like Android does
  • are more open in their development model and provide better opportunities for third parties to customize and contribute
  • are not tied to the Google ecosystem, which to some user groups is a vital security and policy benefit

The last point about not being tightly knit to an ecosystem can also be a huge drawback. Users have learned to expect that their computing is an integrated experience. The million dollar question here is: will either one grow big enough to form its own ecosystem? Even though there are billions of people in the world who want to use a mobile phone, there probably isn’t enough mindshare to support big ecosystems around both of these mobile operating systems, so it all boils down to which of these two is better – which one is more likely to please a bigger share of users?

To find an answer to that we did some basic comparisons.

Ease of use

Both of these fulfill the basic requirements of a customer grade product. They are localized to multiple languages, well packaged, include interactive tutorials to help users learn the new system and they include all the basic apps built-in, including phone, messages, contacts, email, camera, maps, alarm clock etc.

The Ubuntu phone UI is somewhat familiar to anyone who has used Ubuntu on the desktop, as it uses the Unity user interface by Canonical. On phones the Unity version is 8, while the latest Ubuntu 15.04 for desktops still ships the Unity 7 series. In Unity there is a vertical bar with favourite apps that appears at the left of the screen. Instead of a traditional home screen there is the Dash, with search-based views and also notification-type views. To save screen real estate, most menus and bars only appear when swiping across one of the edges. Swiping is also used to switch between apps and to return to the Dash screen.

The UI in the Jolla phone is mostly unlike anything most people have ever seen. The general look is cool and futuristic, with ambient themes. The UI interaction is completely built around swiping, much like it was on the Nokia N9 (Meego variant). Once you’ve used the device a little and got familiar with the gestures, it becomes incredibly effortless and fast to use.

The Ubuntu phone UI looks crisp and clean, but it requires quite a lot of effort to do basic things. After using both devices for a few months, Jolla and SailfishOS simply feel better to use. Most of the criticism of Ubuntu’s Unity on the desktop also applies to Unity on the Ubuntu phone:

  • In Ubuntu the app bar only fits a few favourite apps nicely. If you want to browse the list of all apps, you need to click and swipe many times until you arrive at the app listing. In Gnome 3 on the desktop and on Jolla phones, by comparison, the list of installed applications is just one action away and very fast to access.
  • Switching between open apps in Ubuntu is slow. The deck of apps looks nice, but it only fits four apps at a time, while in Gnome 3 opening the shell immediately shows all open windows, and in Jolla the main view also shows all open apps. In Jolla there are additionally so-called cover actions, so you can control some features of the running apps directly from the overview without even opening them.
  • Search as the primary interaction model in a generic device does not work. Ubuntu on the desktop has shown that it is too much to ask of users to always know what they want by name. On the Ubuntu phone, search is a little bit less dominant, but searches and scopes are still quite central. The Unity approach is suboptimal, as users need to remember all kinds of names by heart. The Nokia Z launcher is a much better implementation of a search-based UI, as it can anticipate what the user might want to search for in the first place, and the full list of apps is just one touch gesture away.

Besides having a fundamentally better UI, the Jolla phone seems to have got most of the details right as well. For example, if the user does not touch the screen for a while, it will dim a bit before turning off, and if the user quickly does some action, the screen wakes up again to the situation where it was. In Ubuntu, the screen will simply shut off after some time of inactivity, and it requires the user to open the lock screen, possibly entering a PIN code, even if the screen was off for only a second. Another example is that in Jolla, if the user rotates the device but does not want the screen orientation to change, the user only needs to touch the screen while turning it. In Ubuntu the user needs to go to the settings and lock the rotation, and only then can they return to the app they were using and turn the device without an undesired change in rotation. A third example is that in Jolla you can “go back” in most views by swiping back. That can be done easily with either thumb; in fact the whole of SailfishOS can be used with just one hand, be it the right or the left. In Ubuntu navigating backwards requires the user to press an arrow icon in the upper left corner, which is impossible to do with your thumb if you hold the device in your right hand, so you often end up needing two hands while interacting with the Ubuntu phone UI.

To be fair, the Ubuntu phone is quite new and they might not have discovered these kinds of shortcomings yet, as they haven’t had much real end-user feedback. On the other hand, Unity on the Ubuntu desktop has not improved much over time despite all the criticism received. Jolla and SailfishOS got most things right from the start, which maybe means it was simply designed by more competent UI designers.

Screenshots: app switching, the apps list and the settings view (a Jolla ambient theme image is visible in the background)

Browser experience

Despite all the cool native apps and the things they can do, our experience says that the single most used app on any smart device is still the Internet browser. Therefore it is essential that the browser in a mobile device is nothing less than perfect.

Both Ubuntu and Jolla have their own browser implementations instead of using something like Google Chrome as such. As the screenshot below shows, both have quite similar look and feel in their browsers and there is also support for multiple tabs.

Screenshots: the built-in browser and browser tabs

Performance and battery life

As both Ubuntu phone with Unity and Jolla phone with SailfishOS are built using Qt and QML it is no surprise both have very fast and responsive UIs that render smoothly. This is a really big improvement over average Android devices, which often suffer from lagging rendering.

The Ubuntu phone has, however, one big drawback. Many of the apps use HTML5 inside the Qt view, and those HTML5 apps load lots of external assets without prefetching or caching them properly, like well-made HTML5 apps with offline manifests should do. In practice this means, for example, that browsing the Ubuntu app store is very fast, but the app icons and screenshots in the active view load more slowly than anyone could be expected to wait – that is, for longer than tens of seconds. This phenomenon is visible in the Ubuntu app store screenshot below.

The Jolla battery life has been measured and documented in our blog previously. When we started using the Ubuntu phone the battery life was terrible: it ran out within a day even with the screen off all the time. Later upgrades seem, however, to have fixed some of the drain, as the battery life is now much better. We have not measured and documented it properly yet, though.

App ecosystem, SDK and developer friendliness

Both Ubuntu and SailfishOS have their own SDK and QML-based native apps. The Jolla phone, however, includes its own implementation of a Java virtual machine, so it also supports Android apps (though not always all the features in them). Ubuntu has chosen not to run Android apps at all. On the other hand, the focus of Ubuntu seems to be on HTML5 apps: at least the maps app in Ubuntu is a plain HTML version of Google Maps, the Ubuntu store is filled with mostly HTML5 apps, and real native apps are hard to find. In the Jolla store, real native apps and Android apps are easy to tell apart, as Android apps have a green icon next to their entry.

Both platforms include features to let advanced users get a root shell on them. In Jolla one can go to the settings and enable developer mode, which includes activating remote SSH access so that developers can easily access their device’s command line interface. In Ubuntu it is simply a matter of opening the command prompt app and entering the screen lock PIN code as the password to get access.

SailfishOS package management uses Zypper and RPM packages. In Ubuntu phone Snappy and Deb packages are used.

The interesting thing with Ubuntu is its potential to be integrated with the Ubuntu desktop experience. So far in our testing we didn’t notice any particular integration. In fact, we even failed to get the Ubuntu phone connected to any of our Ubuntu laptops and desktops, while a Jolla attached to a Linux desktop machine immediately registers as a USB device with the mount point name “Jolla”. To our knowledge this is, however, a dimension that is under heavy development at Ubuntu, and they should soon reveal some big news regarding the convergence of the Ubuntu desktop and mobile.

For a company like Seravo, the openness of the technology is important. SailfishOS is at some disadvantage here, because it includes closed source parts, although much of SailfishOS is upstreamed into the fully open source projects Mer and Nemo. Ubuntu seems to promise that Ubuntu Phone is open source and developed in public, with opportunities for external contributions.

Conclusion

Both of these Linux-based mobile operating systems are interesting, and they share many pieces of their technology stack, most notably Qt. There really should be more competition for Android. Based on our experience, Jolla and SailfishOS are the superior alternative in terms of technology and usability, but then again Ubuntu may be able to leverage its position as the most popular Linux distribution on desktops and servers. The competition is tight, which can have both negative and positive effects. We hope that the competition will fuel innovation on all fronts.

Wednesday, 12 August 2015

Recording live events like a pro (part 2: video)

DanielPocock.com - fsfe | 14:55, Wednesday, 12 August 2015

In the first blog in this series, we looked at how to capture audio effectively for a range of different events. While recording audio appears less complicated than video, it remains fundamental to a good recording. For some types of event, like a speech or a debate, you can have quite a bad video with poor lighting and other defects but people will still be able to watch it if the audio is good. Therefore, if you haven't already looked at the previous blog, please do so now.

As mentioned in the earlier blog, many people now have high quality equipment for recording both audio and video and a wide range of opportunities to use it, whether it is a talk at a conference, a wedding or making a Kickstarter video.

The right camera for video recording

The fundamental piece of equipment is the camera itself. You may have a DSLR camera that can record video or maybe you have a proper handheld video camera. The leading DSLR cameras, combined with a good lens, make higher quality recordings than many handheld video cameras.

Unfortunately, although you pay good money to buy a very well engineered DSLR that could easily record many hours of video, most DSLRs are crippled to record a maximum of 30 minutes in one recording. This issue and some workarounds are discussed later in this blog.

If you don't have any camera at all you need to think carefully about which type to buy. If you are only going to use it once you may want to consider renting or borrowing, or asking other people attending the event to bring whatever cameras they have to help make multiple recordings (the crowdsourcing solution). If you are a very keen photographer then you will probably have a preference for a DSLR.

Accessories

Don't just look at the cost of your camera and conclude that is all the budget you need. For professional quality video recording, you will almost certainly need some accessories. You may find they are overpriced at the retail store where you bought your camera, but you still need some of them, so have a look online.


Recording a talk at a free software event with a Nikon D800 on a very basic tripod with Rode VideoMic Pro, headphones (white cable) and external power (black cable)

If you want to capture audio with the camera and record it in the video file (discussed in more detail below), you will need to purchase a microphone that mounts on the camera. The built-in microphones on cameras are often quite bad, even on the most expensive cameras. If you are just using the built-in microphone for reference audio (to help with time synchronization when you combine different audio files with the video later) then the built-in microphone may be acceptable. Camera audio is discussed in more detail below.

If your camera has a headphone socket, get some headphones for listening to the audio.

Make sure you have multiple memory cards. Look carefully at the speed of the memory cards: slow ones are cheaper, but they can't keep up with the data rate of 1080p video. At a minimum, you should aim to buy memory cards that can handle one or two days' worth of data for whatever it is you do.
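
To put rough numbers on the card requirements, here is a minimal sketch in Python; the 50 Mbit/s bitrate and six hours of footage are only illustrative assumptions, not figures for any particular camera.

    # Rough memory card sizing for video recording.
    def card_requirements(bitrate_mbps, hours):
        """Return (minimum sustained write speed in MB/s, storage needed in GB)."""
        write_speed_mb_s = bitrate_mbps / 8.0              # megabits -> megabytes per second
        storage_gb = write_speed_mb_s * 3600 * hours / 1000
        return write_speed_mb_s, storage_gb

    # Example: ~50 Mbit/s footage, six hours of recording in a day (assumed values)
    speed, size = card_requirements(50, 6)
    print("about %.1f MB/s sustained and %.0f GB of cards" % (speed, size))
    # -> about 6.2 MB/s sustained and 135 GB of cards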

A tripod is essential for most types of video. If you use a particularly heavy camera or lens or if you are outdoors and it may be windy you will need a heavier tripod for stability. For video, it is very useful to have a tripod with a handle for panning left and right but if the camera will be stationary for the whole recording then the handle is not essential.

Carrying at least one spare battery is another smart move. On one visit to the Inca Trail in Peru, we observed another member of our group hiking up and down the Andes with a brand new DSLR that they couldn't use because the battery was flat.

For extended periods of recording, batteries will not be sufficient and you will need to purchase a mains power supply (PSU). These are available for most types of DSLR and video camera. The camera vendors typically design cameras with unusual power sockets so that you can only use a very specific and heavily overpriced PSU from the same company. Don't forget a surge protector too.

There are various smartphone apps that allow you to remotely control the camera from the screen of your phone, such as the qDslrDashboard app. These often give a better preview than the screen built into the camera and may even allow you to use the touch screen to focus more quickly on a specific part of the picture. A regular USB cable is not suitable for this type of app; you need a USB On-The-Go (OTG) cable.


Screenshot of qDslrDashboard app on a smartphone, controlling a DSLR camera

If you plan to copy the video from the camera to a computer at the event, you will need to make sure you have a fast memory card reader. The card readers built into some laptops are quite slow while others are quite fast, so you may not need to buy an extra card reader.

Camera audio

Most cameras, including DSLRs, have a built-in microphone and a socket for connecting an external microphone.

The built-in microphones capture very poor quality sound. For many events, it is much better to have independent microphones, such as a lapel microphone attached to a smartphone or wireless transmitter. Those solutions are described in part one of this blog series.

Nonetheless, there are still some benefits of capturing audio in the camera. The biggest benefit is the time synchronization: if you have audio recordings in other devices, you will need to align them with the video using post-production software. If the camera recorded an audio stream too, even if the quality is not very good, you can visualize the waveform on screen and use it to align the other audio recordings much more easily and precisely.
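
Post-production tools normally let you line the tracks up by eye, but the same idea can be automated. The following is only a sketch, assuming SciPy is available and that both recordings have already been loaded as mono NumPy arrays at the same sample rate; it is not part of any particular editing package.

    import numpy as np
    from scipy import signal

    def estimate_offset(camera_audio, external_audio, sample_rate):
        """Estimate the offset (in seconds) between two recordings of the same event.

        A positive result means the external recorder started later than the
        camera, so its track should be shifted forward by that amount.
        """
        # Normalise so different recording levels don't dominate the correlation
        a = (camera_audio - camera_audio.mean()) / (camera_audio.std() + 1e-9)
        b = (external_audio - external_audio.mean()) / (external_audio.std() + 1e-9)

        # FFT-based correlation keeps this fast even for long recordings
        corr = signal.correlate(a, b, mode="full", method="fft")
        lags = signal.correlation_lags(len(a), len(b), mode="full")
        return lags[np.argmax(corr)] / float(sample_rate)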

If the camera will be very close to the people speaking then it may be acceptable to use a microphone mounted on the camera. This is convenient for post-production because the audio is already synchronized with the video. It may still not be as good as a lapel microphone, but the quality of these camera-mounted microphones is far higher than that of the built-in microphones. I've been trying the Rode VideoMic Pro; it is definitely better than recording with the built-in microphone on the camera and also better than the built-in microphone on my phone.

One problem that most people encounter is the sound of the lens autofocus mechanism being detected by the microphone. This occurs with both the built-in microphone and any other microphone you mount on the camera. A microphone mounted on top of the camera doesn't detect this noise with the same intensity as the built-in microphone but it is still present in the recordings.

If using a camera-mounted microphone to capture audio from an audience, you may need an omnidirectional microphone. Many camera-mounted microphones are directional and will not pick up much sound from the sides or behind the camera.

When using any type of external microphone with the camera, it is recommended to disable automatic gain control (AGC) in the camera settings and then manually adjust the microphone sensitivity/volume level.

Use headphones

A final word on audio - most good cameras have an audio output socket. Connect headphones and wear them, to make sure you are always capturing audio. Otherwise, if the microphone's battery goes flat or if a wireless microphone goes out of range you may not notice.

Choosing a lens

The more light you get, the better. Bigger and more expensive lenses allow more light into the camera. Many of the normal lenses sold with a DSLR camera are acceptable but if it is a special occasion you may want to rent a more expensive lens for the day.

If you already have a lens, it is a very good idea to test it in conditions similar to those you expect for the event you want to record.

Recording duration limits

Most DSLR cameras with video capability impose a 30 minute maximum recording duration

This is basically the result of a friendly arrangement between movie studios and politicians to charge an extra tax on video recording technology and potentially make the movie studio bosses richer, supposedly justified by the fact that a tiny but exaggerated number of people use their cameras to record movies at the cinema. As a consequence, most DSLR manufacturers limit the duration of video recording so their product won't be subject to the tax, ensuring the retail price is lower and more attractive.

On top of this, many DSLR cameras also have a 4GB file size limit if they use the FAT filesystem. Recording 1080p video at a high frame rate may hit the file size limit in 10 minutes, well before you encounter the 30 minute maximum recording duration.
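
As a back-of-the-envelope check, the time available before hitting the 4GB boundary is simply the file size divided by the bitrate. The bitrates below are assumptions for illustration only; check your camera's manual for real figures.

    def minutes_until_4gb(bitrate_mbps):
        """Minutes of recording before a 4 GB file size limit is reached (roughly)."""
        four_gb_in_megabits = 4 * 1024 * 8
        return four_gb_in_megabits / float(bitrate_mbps) / 60.0

    print(minutes_until_4gb(50))   # ~10.9 minutes at an assumed 50 Mbit/s (1080p, high frame rate)
    print(minutes_until_4gb(20))   # ~27.3 minutes at an assumed 20 Mbit/s (720p at 24fps)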

To deal with the file size issue, you can record at 720p instead of 1080p and use a frame rate of 24fps.

For events longer than 30 minutes or where you really want 1080p or a higher frame rate, there are some other options you can consider:

  • Buy or rent a proper video camera instead of using a DSLR camera.
  • Use multiple cameras that can be stopped and restarted at different times.
  • Manually stop and restart the camera if there are breaks in the event where it is safe to do so.
  • Use an app to control the camera and program it to immediately restart the recording each time it stops.
  • Extract the raw output from the camera's HDMI socket and record it into some other device or computer. There are several purpose-built devices that can be used this way with an embedded SSD for storage.
  • Install one of the unofficial/alternative firmware images distributed by third parties that remove the artificial 30 minute recording limit.

Camera settings

There are many online tutorials and demonstration videos on YouTube that will help you optimize the camera settings for video.

You may have already made recordings using the automatic mode. Adjusting some or all of the settings manually may help you create a more optimal recording. You will need to spend some time familiarizing yourself with the settings first.

The first thing to check is white balance. This tells the camera the type of lighting in the location. If you set this incorrectly then the colours will be distorted. Many cameras have the ability to automatically set the white balance.

For video, you may be able to change one or more settings that control the recording quality. These settings control the file compression ratio and image size. Typical image size settings are 720p and 1080p. Compression ratio may be controlled by a high/medium/low quality setting. Choosing the highest quality and biggest picture requires more space on the memory card and also means you reach the 4GB file size limit more quickly. A higher quality setting also implies a faster memory card is required, because the rate of megabytes per second written to the memory card is higher.

Next you need to think about the frame rate. Events that involve fast moving subjects, such as sports, typically benefit from a higher frame rate. For other events it is quite acceptable to use 24 frames per second (fps). Higher frame rates also imply bigger file size and a requirement for a faster memory card.

Once you have decided on the frame rate, the next thing to do is set the shutter speed. Use a shutter speed that is double the frame rate. For example, if using 24fps or 25fps, use a 1/50 shutter speed.
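
Written out as a formula, this rule of thumb (often called the 180 degree shutter rule) is just a doubling; the snippet below is a trivial sketch that makes it explicit.

    def shutter_speed_for(frame_rate):
        """Rule of thumb: shutter speed denominator = 2 x frame rate."""
        return "1/%d" % round(2 * frame_rate)

    print(shutter_speed_for(24))   # 1/48 -- in practice pick the nearest available setting, 1/50
    print(shutter_speed_for(30))   # 1/60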

The final two settings you need to adjust are the ISO and aperture. Set these based on the lighting conditions and extent to which the subjects are moving. For example, if the setting is dark or if you are trying to record fast moving subjects like athletes, vehicles or animals, use an ISO value of 800 or higher. Once you have chosen ISO, adjust the aperture to ensure the picture is sufficiently illuminated. Aperture also has a significant impact on the depth of field.

Operating the camera: zoom and focus

Many people use zoom lenses. It is not always necessary to change the zoom while recording a video; you can use software to zoom in and out on individual parts of the picture when editing it in post-production. If you do change the zoom while recording, it may be more difficult to maintain focus.

Almost all lenses support manual focus (turning the focus ring by hand) and many support automatic focus.

When shooting photographs with a DSLR, the mirror is down and the camera can use dedicated sensors for focus and light sensing.

When shooting video, the mirror is up and the camera can not use the same focus sensors that are used in photography. Video recording uses a digital focussing algorithm based on contrast in the picture. If you take a lot of photos you are probably quite accustomed to the fast and precise autofocus for photography and you will notice that keeping a video in focus is more challenging.

As mentioned already, one of the first things you can do to keep focus simple is to avoid zooming while recording. Record in a higher resolution than you require and then zoom with software later. Some people record using 4k resolution even when they only want to produce a 720p video, as they can digitally zoom in to different parts of the 4k recording without losing detail.

If the subject is very stationary (people sitting at a desk for an interview is a typical example), you may be able to set the camera to manual focus and not change it at all while recording.

If you choose to enable autofocus while recording, any built-in camera microphone or microphone mounted on the camera is likely to detect sounds from the motorized focus system.

Ultimately, the autofocus mechanism is not accurate for all subjects and you may be unable to stop them moving around, so you will need to change the focus manually while recording. It requires some practice to be able to do this quickly without overshooting the right focus. To make life more tricky, Nikon and Canon focus rings rotate in opposite directions, so if you are proficient with one brand you may feel awkward if you ever have to use the other. A good way to build this skill is to practice in the car or on the train, pointing the camera at different subjects outside the window and trying to stay in focus as you move from one subject to the next.

Make a trial run

Many events, from weddings right up to the Olympic Games opening ceremony, have a trial run the day before. One reason for that is to test the locations and settings of all the recording and broadcasting equipment.

If a trial run isn't possible for your event, you may find some similar event to practice recording and test your equipment. For example, if you are planning to record a wedding, you could try and record a Sunday mass in the same church.

Backup and duplicate the recordings before leaving the event

If you only have one copy of the recordings and the equipment is stolen or damaged you may be very disappointed. Before your event, make a plan to duplicate the raw audio and video recordings so that several people can take copies away with them. Decide in advance who will be responsible for this, ensure there will be several portable hard disks and estimate how much time it will take to prepare the copies and factor this into the schedule.

Conclusion

All the products described can be easily purchased from online retailers. You may not need every accessory that is mentioned, as it depends on the type of event you record. The total cost of buying or renting the necessary accessories may be as much as the cost of the camera itself, so if you are new to this you may want to make a budget in a spreadsheet before committing to anything.

Becoming familiar with the camera controls and practicing the techniques for manual focus and zoom can take weeks or months. If you enjoy photography this can be time well spent but if you don't enjoy it then you may not want to commit the time necessary to make good quality video.

Don't rule out options like renting equipment instead of buying it or crowdsourcing, asking several participants or friends to help make recordings with their own equipment.

For many events, audio is far more indispensable than video and, as emphasized at the beginning of this article, you should be one hundred percent confident in your strategy for recording audio before you start planning to record video.

UK wakes up to airport tax and data grab

DanielPocock.com - fsfe | 14:50, Wednesday, 12 August 2015

The UK has just woken up to the fact that retailers at airports have been playing the VAT system.

The focus of the media coverage has been on the VAT money, failing to give much emphasis to the fact this is also a sophisticated and coordinated attack on personal privacy.

This situation has been obvious to me for years and it doesn't just occur in the UK. Whenever one of these retailers asks me for a boarding pass, I always refuse. Sometimes they lie to me and tell me that it is mandatory to let them scan my boarding pass: I tell them that a travel companion has the boarding passes and at this point they usually find some way to complete the transaction without further comment.

It is only necessary to pay the correct amount

With the rise of payment cards, people seem to be forgetting that you can pay for things in cash. If I have the right change to pay for something in this scenario, I typically put it on the counter and walk away. Why should I have to waste my time helping a poorly trained cashier understand VAT? I already deal with enough computer problems in the job I'm paid to do, why should I lose my time when on vacation explaining to a cashier that their computer is failing to correctly deduct 20% VAT?

Whenever showing a boarding pass or passport

It is far too common for people to try and scan or copy documents these days. When somebody in an airport shop or hotel asks to see a document, I don't let it out of my hands. It is not hard to cover the barcode with your fingers too. This prevents them making an unauthorized copy or using a barcode scanner to extract personal data. Some of these staff literally try to snatch the documents out of people's hands and drop them onto a scanner next to their till. In some countries hotels are obliged to look at your passport and make a record of the name but they often have no obligation to photocopy or scan it and it is often a huge risk if they do.

If the airports were genuinely concerned about the security of passengers, they would be just as thorough in protecting data as they are in the hunt for terrorists. For example, they could give VAT-free passengers colour-coded boarding passes or some other vouchers without any personal information on them.

VAT diversion funds customer data grab

Shops pocket an extra 20% of the price of a product and they condition everybody, staff and customers alike, to having all customer data expropriated from the boarding pass at the point of sale; even the customers not eligible for a tax refund are unnecessarily having their data hoovered up in this way. The VAT money diverted away from both the tax man and the customer is rolled back into the system to fund the data grab. UK law promises customers significant protection from unlawful use of personal data, so how can these retailers lie to passengers and tell them that scanning the boarding pass is mandatory? Why not ask the Information Commissioner's Office to check up on this?

Paying the correct amount and walking away

It is not hard for people to add up the amount of VAT included in a price, deduct it themselves, give the correct and exact amount of cash to the cashier and walk away. Just type the price into your smart phone, divide by 1.2 and you can see the amount to pay. For example, if a product costs £24.95, just type 24.95 ÷ 1.2 into the calculator on your phone and you find that you have to pay £20.79.
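
The same arithmetic as a tiny helper, assuming the UK's 20% standard rate; the rate is a parameter because it differs between countries.

    def price_without_vat(gross, vat_rate=0.20):
        """Amount to pay once VAT is deducted from a VAT-inclusive price."""
        return round(gross / (1 + vat_rate), 2)

    print(price_without_vat(24.95))   # 20.79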

It may only be necessary to show a boarding pass or ID for purchasing duty-free alcohol or tobacco.

Not just for airports

Carrying a correct amount of cash doesn't just help navigate the data grab in airports. Consider some of the other scenarios:

  • A long queue at the cashier in a petrol station, typically on a Sunday afternoon. Did you ever notice somebody who just walks past the queue and puts the exact change (or a cheque) on the counter and drives away while everybody else is waiting to pay with some payment card and scan their loyalty card?
  • Restaurant in a tourist spot, you receive a bill and notice they have added VAT and service charges or something you didn't ask for like bread. In European countries and Australia, the price next to each dish on the menu is the only price you have to pay, just like the price on the shelf in a shop or the price on a web page. If you have the right change you can just pay that amount and walk away without a discussion. In the US the taxes are added later and some tourist hot spots in Europe try this with English speaking customers, thinking that if they are American they won't complain.
  • Hotel tries to insist on a currency conversion when charging your credit card. Maybe you've already realized that dynamic currency conversion (DCC) used by retailers often adds at least 3% to a purchase, but some hotels try to go way beyond this. The booking confirmation page you printed from their web site gives you a total price in one currency, perhaps USD or EUR, and they use some arbitrary exchange rate, with a whopping 30% margin or more, to convert to the local currency and ask you to enter your PIN or sign a credit card authorization. The best answer to this practice is usually to carry banknotes in the currency used for the booking, paying that exact amount in cash and, if they try to argue, just keep counting out the bank notes to prove it matches the confirmation page. Hotels only get away with this trick because few people check the rate, even fewer carry sufficient cash and you may not be able to use local cash machines to get the currency specified in your booking. This is a situation I've encountered in eastern Europe and South America, where it is common to quote in EUR and USD even if that isn't the domestic currency.

Tuesday, 11 August 2015

Software sucks – put simply

Computer Floss | 10:03, Tuesday, 11 August 2015

Be very worried. Software is eating the world, and it sucks.

This is a quote from a great article called Why the Great Glitch of July 8th Should Scare You by Zeynep Tufekci. You should go and read it (but finish this one first).

It’s one of several articles that I’ve noticed recently explaining the sorry state of software quality. It’s nice finally to see some prominent articles emerging which set the record straight. I’m glad a few people are doing their best to reveal the dirty secret of programming: software mostly sucks. It’s full of bugs and it’s insecure.

This article (and others besides) goes into great detail. If you want the quick and easy read, here’s my attempt:

Speaking as a software engineer, the problem is:

  • We write huge amounts of software and software is extraordinarily complex by nature.
  • At the same time, our software has to integrate with an even greater mass of existing software, which is made up of programs layered on top of one another, most of which also barely work.
  • And all the time we are pressured into doing this as quickly as we possibly can.

It is possible to do software well and write programs that are stable and secure. For example, read how NASA do it. They work with extreme care at a relatively glacial pace; single changes to the code result in committee meetings and can throw up specifications running to hundreds of pages. But it’s the kind of working atmosphere most programmers and start-ups would baulk at. (From the article: “The culture is intolerant of creativity, the individual coding flourishes and styles that are the signature of the all-night software world.”) That kind of quality requires spending a lot of money. Of course, NASA, with their huge government budget, can do that. The rest of the world has to turn a profit.

So software sucks – and it’ll probably stay that way for a good long while, not because of any technical problem, but more a cultural one. Fast, cheap or high-quality? You can pick only two out of those three. Guess which two our culture ends up choosing time and again…

Monday, 10 August 2015

Going 4K

DanielPocock.com - fsfe | 15:59, Monday, 10 August 2015

I recently started using a 4K monitor. I've generally had a positive experience and thought I would share some comments about it.

What can you do with 4K resolution?

  • View the entire London Underground map or Swiss railway map without having to zoom or scroll
  • Run a mail client like Thunderbird/Icedove and an IRC client side-by-side
  • Run Eclipse and the Android emulator side-by-side
  • Run Eclipse and some other program you are debugging side-by-side
  • When reading a PDF, show a whole page at a time but only using half the screen

Are there hardware issues?

Originally the hardware was expensive and people would have to do things like using multiple cables to join their video card to their 4K screen. The latest generation of video cards and monitors don't have this problem.

I chose to use NVIDIA's entry-level Quadro K420 card, the smallest and least expensive Quadro supporting 4K. The PC is an HP Z800 workstation. HP suggests they only certify the K420 with their latest workstations (e.g. Z840) but I have observed no problem using it in a Z800.

The monitor is a 32" BenQ display. After reading comments from other 4K users, I felt that any monitor less than 30" would not help me get the most from the world of 4K.

The monitor connects to the video card using a DisplayPort 1.2 cable that was in the box with the monitor.

I've been able to get sound through the monitor but I found the volume is always a bit low and at one point sound stopped completely.

What about software issues?

Everything just appeared to work although the fonts are a bit smaller than what I'm used to. This appears to be related to DPI issues. DPI is not automatically detected and several applications have their own way of handling non-standard DPI. Discussion about this started on the debian-devel mailing list, given that non-standard DPI will be an issue with many newer monitors, handheld devices and more exotic displays such as televisions.
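
For reference, the physical pixel density follows directly from the resolution and the diagonal size; the quick calculation below assumes a 32" 3840x2160 panel like the one described and shows why unscaled fonts look small.

    import math

    def dpi(width_px, height_px, diagonal_inches):
        """Physical pixels per inch of a display."""
        return math.hypot(width_px, height_px) / diagonal_inches

    print(round(dpi(3840, 2160, 32)))   # ~138 DPI, versus the 96 DPI many toolkits assume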

Some web sites just don't look so good when the browser window is maximized to 4K resolution, while others use the extra space really well. For example, the AirBNB search maps show more detail, but on another hotel booking site I found that it was still trying to render a map within a constant-size region using less than ten percent of the page.

Friday, 07 August 2015

post-vacations map editing

nikos.roussos | 11:48, Friday, 07 August 2015

the view

Greek islands are a great place for summer vacations. This year I visited Amorgos, part of the Cyclades island group, and had a short visit at Pano Koufonisi.

Being in a new place means you need some kind of map to guide you through the endless number of beaches, paths and villages. I've been using OpenStreetMap as a map source for a long time and occasionally I contribute back. OpenStreetMap is a collaborative project to create a free, editable map of the world. Many people call it the Wikipedia of maps, and to some extent it is. In contrast to the major industry map services, which utilize free labor from volunteer contributors and give nothing back, OpenStreetMap data is freely distributed to be used by anyone for any purpose.

You can find many places where OpenStreetMap has richer data than other sources, or read stories of how targeted mapping during specific incidents saved thousands of lives. But there are also many places where it lacks reliable data. Amorgos (and unfortunately many other Greek islands) is one of these cases.

During my vacations there I used the only equipment available (my phone) to keep notes that would help me later to enrich OpenStreetMap. I extensively use Osmand as my main navigation tool, so this was my first option for keeping notes. You can either add favorites to mark any POIs or use the notes plugin to take photos or record audio notes. Osmand also has an editing plugin that can help you edit data on the fly, but I prefer to do this later. If you are searching for a simpler app, OSMtracker is a better choice for tracking routes and keeping notes. If you don't have a smartphone during your vacations you can just use paper and pen. Field Papers will help you print the map area you are interested in, and you can keep notes with a pen.

Getting back home I had many notes and plenty of work to do. OpenStreetMap has a great in-browser editor and the Map Features (really long) wiki page can guide you through the supported map elements. I added/changed around 90 map elements (beaches, paths, roads, buildings, etc) and it took me about an hour. Less than a day later the changes were rendered on the live website and I could feel proud of my contributions :)

So, did you enjoy your vacation? Now start contributing to OpenStreetMap so more people can enjoy travelling to all the places you visited. Happy mapping :)

OSM editing

Wednesday, 05 August 2015

Endocode is hiring: Linux and Systemd Engineer

Creative Destruction & Me » FLOSS | 13:09, Wednesday, 05 August 2015

Endocode is looking to add skilled engineers to its existing team of Linux and systemd experts. We want engineers who are excited to contribute to projects that form the basis of modern Linux systems and have the experience and skills to do so.

Our engineers work at the cutting edge of Linux kernel development. Kernel features like cgroups and namespaces introduce exciting new capabilities like containers and lightweight Linux distros ideal for clustered environments, and these are areas we focus heavily on.

Another technology that Endocode focuses on is systemd, which makes use of many features that are unique to the Linux kernel, often driving the development of new kernel features or improvements to existing ones. Its adoption has accelerated rapidly over the past couple of years, and this has driven increased demand for systemd expertise, a demand that Endocode is well positioned to meet. We work closely with upstream developers to make sure that we can provide the best support possible for our clients and improve systemd for everyone.

Considering all this, an ideal candidate would be someone who describes themselves as comfortable in both user and kernel space.

You’ll be joining a team of experienced, motivated engineers and have the chance to work with and/or on open source software on a daily basis. You’ll have the chance to do this in Berlin, a city with a vibrant technology scene, excellent nightlife, and ideal conditions for families.

Deadline for applications:

28th Aug 2015



Your opportunity for a front row seat: The economics of the Roundcube Next Indiegogo Campaign

freedom bits | 08:51, Wednesday, 05 August 2015

Bringing together an alliance that will liberate our future web and mobile collaboration was the most important motive behind our launching the Roundcube Next campaign at the 2015 Kolab Summit. This goal we reached fully.

There is now a group of some of the leading experts for messaging and collaboration in combination with service providers around the world that has embarked with us on this unique journey:

bHosted

Contargo

cPanel

Fastmail

Sandstorm.io

sys4

Tucows

TransIP

XS4ALL

The second objective for the campaign was to gain enough momentum to allow two or three people to focus on Roundcube Next over the coming year. That goal we reached partially. There is enough to get us started and work through the groundwork, but not enough for all the bells and whistles we would have loved to go for. To a large extent that's because we have plenty of imagination when it comes to bells and whistles.

Roundcube Next - The Bells and Whistles

But perhaps it is a good thing that the campaign did not complete all the way into the stretch goals.

Since numbers are part of my responsibility, allow me to share some with you to give you a first-hand perspective of being inside an Indiegogo Campaign:

 

Roundcube Next Campaign Amount    $103,541.00    100.00%
Indiegogo Cost                     -$4,141.64      4.00%
PayPal Cost                        -$4,301.17      4.15%
Remaining Amount                   $95,098.19     91.85%

So by the time the money was in our PayPal account, we were already down 8.15%.

The reason for that is simple: instead of transferring the complete amount in one transaction, which would have incurred only a single transaction fee, they transferred it individually per contribution, which means PayPal gets to extract a per-transaction fee each time. I assume the rationale behind this is that PayPal may have acted as the escrow service and would have credited users back in case the campaign goal had not been reached. Given our transactions were larger than average for crowdfunding campaigns, the percentage for other campaigns is likely to be higher. It would seem this can easily go beyond the 5% that you see quoted on a variety of sites about crowdfunding.
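
To see why the number of transactions matters, consider a simple "percentage plus fixed fee per transaction" model. The rates and the transaction count below are placeholder assumptions for illustration, not PayPal's actual tariff or the real number of contributions.

    def transfer_fees(total, transactions, percent=0.029, fixed=0.30):
        """Fees under an assumed percentage-plus-fixed-fee-per-transaction model."""
        return total * percent + transactions * fixed

    total = 103541.00
    print(transfer_fees(total, 1))     # one lump-sum transfer: the fixed fee is paid only once
    print(transfer_fees(total, 870))   # 870 individual transfers (made-up count): fixed fees alone add 261.00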

But it does not stop there. Indiegogo did not allow us to run the campaign in Swiss Francs, and PayPal forces transfers into our primary currency, resulting in another fee for conversion. On the day the Roundcube Next Campaign funds were transferred to PayPal, XE.com listed the exchange rate as 0.9464749579 CHF per USD.

                                          USD               CHF      % of total
Roundcube Next Campaign Amount    $103,541.00    SFr. 97,998.96         100.00%
Remaining at PayPal                $95,098.19    SFr. 90,008.06          91.85%
Final at bank in CHF               $92,783.23    SFr. 87,817.00          89.61%

So now we’re at 10.39% in fees, of which 4% go to Indiegogo for their services. A total of 6.39% went to PayPal. Not to mention this is before any t-shirt is printed or shipped, and there is of course also cost involved in creating and running a campaign.

The $4,141.64 we paid to Indiegogo are not too bad, I guess. Although their service was shaky and their support non-existent. I don’t think we ever got a response to our repeated support inquiries over a couple of weeks. And we experienced multiple downtimes of several hours which were particularly annoying during the critical final week of the campaign where we can be sure to have lost contributions.

PayPal’s overhead was $6,616.27 – the equivalent of another Advisor to the Roundcube Next Campaign. That’s almost 60% more than the cost for Indiegogo. Which seems excessive and is reminding me of one of Bertolt Brecht’s more famous quotes.

But of course you also need to add the effort for the campaign itself, including preparation, running and perks. Considering that, I am no longer surprised that many of the campaigns I see appear to be marketing instruments to sell existing products that are about to be released, and less focused on innovation.

In any case, Roundcube Next is going to be all about innovation. And Kolab Systems will continue to contribute plenty of its own resources, as we have been doing for Roundcube and Roundcube Next, including a world class Creative Director and UI/UX expert who is going to join us a month from now.

We also remain open to others to come aboard.

The advisory group is starting to constitute itself now, and will be taking some decisions about requirements and underlying architecture. Development will then begin and continue up until well into next year. So there is time to engage even in the future. But many decisions will be made in the first months, and you can still be part of that as Advisor to Roundcube Next.

It’s not too late to be part of the Next. Just drop a message to contact@kolabsystems.com.

Friday, 31 July 2015

Interview with Fellow Neil McGovern

FSFE Fellowship Interviews | 16:05, Friday, 31 July 2015

Neil McGovern

Neil McGovern is a Fellow of the FSFE from the United Kingdom and was recently elected as Debian Project Leader, starting his term of office in April. He has previously participated in local government and has served on the board of the Open Rights Group: a digital rights organisation operating in the UK.

Paul Boddie: Congratulations on your recent election as Debian Project Leader (DPL)! Looking at your election platform, you have been engaged in a number of different activities for quite some time. How do you manage to have time for everything you want to do? Is your employer supportive of your new role or does your free time bear most of the burden?

Neil McGovern: I’d say it’s a mix of both. My employer, Collabora is hugely supportive of the role, Debian and Free Software in general. However, being DPL isn’t just a 9 to 5 job – the timezones that all our contributors work in mean that there’s always work to be done.

Paul Boddie: You appear to be fortunate enough to work for an employer that promotes Free Software solutions. For many people interested in Free Software who have to squeeze that interest in around their job, that sounds like the perfect opportunity to combine your own interests with your professional objectives. But what started you off in the direction of Free Software and your current position? And, once on the path, did you deliberately seek out Free Software opportunities, or was it just a happy accident that you ended up where you are today?

Neil McGovern: My first exposure to free software was from a friend at secondary school who started selling CDs of Linux distributions. He initially introduced me to the concept. Before that, I’d mostly used Mac OS, in the olden days before OS X came along. When I went to university to study computer science, I joined the University of Sheffield IT Committee. At the time, there weren’t any facilities offered for students to host web pages. This was originally running Mandrake. In my second year, I moved in with a housemate, who was a Debian Developer, and I started packaging a client for LiveJournal called Drivel.

Since then, I guess it’s less that I’ve sought out opportunities and more that the opportunities out there have been very much geared towards people who understand Free Software well and can help with them. My current job however is much more than just using and developing Free Software – it’s about enabling companies out there to use Free Software successfully, both for the immediate gain they can get, but also making sure that they understand the benefits of contributing back. A pretty ideal job for a Free Software enthusiast!

Paul Boddie: Your DPL platform states that you intend to “support and enable” the volunteers that get the work of Debian done. One of the challenges of volunteer-run organisations is that of keeping people happy and productive, knowing that they can walk away at any time. What lessons from your history of involvement with Debian and in other areas can you share with us about keeping volunteers happy, productive and, crucially, on board?

Neil McGovern: I think the key issue is about communications. You need to make sure that you actively listen to people, and understand their view point. Given the globally distributed nature of Debian, it’s easy for people to have disagreements – remembering that another human is at the other end of an email address isn’t the easiest thing in the world. Face to face meetings and conferences are essential for countering this – every year when I go to DebConf, I come back reinvigorated to continue working on Debian. :)

Paul Boddie: Especially in recent years, there has been a lot of discussion about Free Software solutions and platforms losing out to proprietary rivals, with special attention given to things like smartphones, “app” marketplaces, and so on. That some of these proprietary offerings build on Free Software makes the situation perhaps even more unpalatable. How do you see Free Software in general, and Debian in particular, having more of a visible role to play in delivering these solutions and services all the way to the end-user and perhaps getting more of the credit?

Neil McGovern: The key issue is trust – when Debian distributes a package, you know that it’s met various quality and stability standards. There’s a risk in moving to an entire container based model that people will simply download random applications from the internet. If a security problem is found in a shared library in Debian, we can fix it once. If that library is embedded in hundreds of different ‘apps’, then they’ll all need fixing independently. This would certainly be a challenge to overcome. Mind you, in our latest release we had over 45,000 binary packages, so I don’t think that there’s a lack of choice of software in Debian!

Debian logo

Paul Boddie: I see you were involved in local government for a while. Would you say that interacting with the Free Software and Debian communities led you to explore a broader political role, or did your political beliefs lead you instead to Debian and Free Software?

Neil McGovern: Well, secretly, the real reason I got involved in politics was that I had had quite a few beers in a local pub with some friends for a 30th birthday, and one of them asked if I wanted to get involved. The next day I woke up with a hangover, and a knocking on the door with said friend holding a bundle of leaflets for me to deliver. :) I don’t think it’s really that one led to another, but more that it stems from my desire to try and help people; be it through representing constituents, or helping create software that everyone can use for free.

Paul Boddie: One of the articles on your blog deals with an awkward situation in local government where you felt you had to support the renewal of proprietary software licences, specifically Microsoft Office. Given your interests in Free Software and open standards, this must have been a bittersweet moment, knowing that the local bureaucracy had fallen into the trap of macros and customisations that make migration to open standards and interoperable products very challenging. Was this a case of knowing which battles are the ones that are worth fighting? And how do you see organisations escaping from these vicious cycles of vendor lock-in? Do you perceive an appetite in public institutions for embracing interoperability and choice or do people just accept the “treadmill” of upgrades as inevitable, perhaps as not even perceiving it as a problem at all?

Neil McGovern: It wasn’t the most pleasant decision I’ve had to make, but it was a case of weighing up what was available and what wasn’t. Since then, ODF-supporting programs have improved greatly, and there’s even commercial support for these. (Disclosure: my company is one of these now.) Additionally, the UK Government announced that it would use ODF as the standard for distributing documents, which is a big win, so I think there is a change that’s happening – Free Software is something that is now being recognised as a real force compared to 5 years ago.

Paul Boddie: A lot of attention has been focused on the next generation of software developers, particularly in the UK education sector, with initiatives like the Raspberry Pi and BBC “Micro Bit” as well as a mainstream awareness of “apps” and “app” development. Do you think there might be a risk of people becoming tempted by arenas of activity where the lessons of Free Software are not being taught or learned, where the vendor’s products are the focus, and where people no longer experience or understand a sustainable and independent community like Debian? Is computing at risk of being dragged back to an updated version of the classic 1980s consumer-producer relationship? Or worse: a rehashed version of something like the “walled garden” networked computing visions of Apple and Microsoft from the early 1990s where the vendor even sets the terms of participation and interaction?

Neil McGovern: There’s certainly a renewed focus on computing education in the UK, but that’s mostly because it’s been so poor for the past 15 or so years! We’ve been teaching students how to use a spreadsheet, or a word processor in the guise of ICT, but no efforts have gone in to actual computing. Thankfully, this is actually changing. I do have the inner suspicions that the focus on “apps” is a civil servant somewhere thinking “Kids like apps, right? And they can sell them and everything, so that’s good. Apps! Teach them to make apps!”

Paul Boddie: Finally, noticing your connections with Cambridge, and having an appreciation myself for the role of the Cambridge area and its university in founding technology companies directly or indirectly, I cannot help but ask about the local attitudes to things like Free Software, open standards, and notions of openness in general. Once upon a time, there seemed to be a degree of remorse that the rather proprietary microcomputing-era companies had failed to dominate in the marketplace, and this led to companies like ARM that have done quite well licensing their technologies, albeit in a restrictive way. Do you sense that the Cambridge technology scene has been (or become) more accepting of notions of openness and freedoms around software and technology? Or are there still prominent local opinions about the need to make money by routing around such freedoms? How do you view your involvement in Debian and the Open Rights Group as a way of bringing about changes in attitudes and in wider society towards such issues?

Neil McGovern: Nothing really opened my eyes to the importance of Debian until I turned up to my running group, and got approximately 6 people offer to buy me a pint as they heard I’d been elected DPL. I’m not sure that would have happened in many other cities. I do think that it is a reflection on the main-streaming of Free Software within large companies. We’re now seeing that not only is Free Software being accepted, but experience with it is seen as an advantage. This is perhaps best highlighted by Microsoft throwing a birthday party for the release of Debian 8, a sight I never thought I’d see.

Paul Boddie is a Free Software developer currently residing in Norway, cultivating interests in open hardware, photography and retrocomputing. He joined the FSFE in 2008 and occasionally publishes his own opinions on his blog.

Thursday, 30 July 2015

MOOC about Free Software

Being Fellow #952 of FSFE » English | 23:22, Thursday, 30 July 2015

It’s been a few months already since Vitaly Repin pointed us to a project of his: a MOOC about Free Software and I still haven’t mentioned it here.

As they realized that most people are not aware of the complexities of the digital world we live in, the idea of creating a MOOC (Massive Open Online Course) dedicated to these important questions appeared. Richard Stallman liked it and suggested re-using parts of his video tapes in order to create the course. That has been done, but some of the parts had to be redone. They made recordings in Helsinki for this, as well as the intro video to the course.

I haven’t seen much of it yet, but I can already tell that they put a lot of effort into it to get it done.

So, on May 4, 2015 (May the Forth be with you!) the course was released at Eliademy. The course contains videos, quizzes and forum discussions. Its contents are released under CC BY-NC-SA.

There’s already been some interesting discussion on FSFE’s mailing list about platforms like Eliademy. However, regardless of the platform where it is published, the videos are released under CC BY-ND license and the material is also available on Vitaly’s personal website.

I’m personally more concerned about the ND clause in the videos and the NC clause for the content than any proprietary platform that may use the content.

Vitaly plans to publish it in the Common Cartridge format soon, now that the first course iteration has been completed. This format is supported by various LMSs (e.g., Moodle) and will allow local teachers to use the course materials in their own educational activities.

He would be more than happy if anybody decides to use the videos they made on other platforms and can spread the word about this course. And there is much more you can do: provide feedback on the content, provide translations, improve it, add quizzes, create subtitles, etc.


Free Real-time Communications (RTC) at DebConf15, Heidelberg

DanielPocock.com - fsfe | 09:23, Thursday, 30 July 2015

The DebConf team have just published the first list of events scheduled for DebConf15 in Heidelberg, Germany, from 15 - 22 August 2015.

There are two specific events related to free real-time communications and a wide range of other events related to more general topics of encryption and privacy.

15 August, 17:00, Free Communications with Free Software (as part of the DebConf open weekend)

The first weekend of DebConf15 is an open weekend aimed at a wider audience than the traditional DebConf agenda. The open weekend includes some keynote speakers, a job fair and various other events on the first Saturday and Sunday.

The RTC talk will look at what solutions exist for free and autonomous voice and video communications using free software and open standards such as SIP, XMPP and WebRTC as well as some of the alternative peer-to-peer communications technologies that are emerging. The talk will also look at the pervasive nature of communications software and why success in free RTC is so vital to the health of the free software ecosystem at large.

17 August, 17:00, Challenges and Opportunities for free real-time communications

This will be a more interactive session; people are invited to come and talk about their experiences and the problems they have faced deploying RTC solutions for professional or personal use. We will try to look at some RTC/VoIP troubleshooting techniques as well as more high-level strategies for improving the situation.

Try the Debian and Fedora RTC portals

Have you registered for rtc.debian.org? It can successfully make federated SIP calls with users of other domains, including Fedora community members trying FedRTC.org.

You can use rtc.debian.org for regular SIP (with clients like Empathy, Jitsi or Lumicall) or WebRTC.

Can't get to DebConf15?

If you can't get to Heidelberg, you can watch the events on the live streaming service and ask questions over IRC.

To find out more about deploying RTC, please see the RTC Quick Start Guide.

Did you know?

Don't confuse Heidelberg, Germany with Heidelberg in Melbourne, Australia. Heidelberg down under was the site of the athletes' village for the 1956 Olympic Games.

Saturday, 25 July 2015

Announcing WikiFM

Riccardo (ruphy) Iaconelli - blog | 13:45, Saturday, 25 July 2015

Announcing WikiFM!

Earlier today I gave a talk at Akademy 2015 about WikiFM. Videos of the talk should shortly become available. Based on the feedback that I received during and after the talk, I have written a short summary of the points which raised the most interest. It is aimed at the general KDE developer community, which doesn’t seem completely aware of the project and its scope.

You can find my slides here (without some SVG images).

What is WikiFM?

The WikiFM Logo

WikiFM is a KDE project which aims to bring free and open knowledge to the world, in the form of textbooks and course notes. It aims to train students, researchers and continuous learners, with manuals and content ranging from basic calculus to “Designing with QML”. We want to revolutionize the way higher education is created and delivered.

What does it offer more than $randomproject?

The union of these three key elements: students, collaboration in the open and technology. This combination has proven invaluable for creating massive amounts of high quality content.
All other projects usually feature just two of these elements, or concentrate on other material (e.g. video).

Additionally, we have virtual machines that can be instantiated on the fly, on which you can start to develop immediately: check out http://en.wikifm.org/Special:DockerAccess (for logged-in users). By opening that link we instantiate a machine in the blink of an eye, with all the software you need already pre-installed. We support compositing and OpenGL. Your home directory is persistent across reboots and you always get a friendly hacking environment. Try it out yourself to properly appreciate it. ;-)

Is it already used somewhere? Do you have some success stories?

The project started in Italy for personal use. In spite of this, in just a few months we got national visibility and thousands of extremely high quality pages written. Students from other universities started to use the website and, despite content that was never planned for wide dissemination, we get around 200 unique users/day.

In addition to this, the High Energy Physics Software Foundation (a scientific group created by key people at institutions such as CERN, Fermilab, Princeton University, the Stanford Linear Accelerator, …) has decided to use WikiFM for their official training.

Moreover, we have been invited to CERN, Fermilab and universities in Santiago (Chile) to deliver seminars about the project.

How can this help my KDE $existing_application if I am not a student?

This fits in with the idea of the Evolving KDE project that started at this year’s Akademy.

Hosting developer documentation, together with a pre-built developer environment which lets library users and students test your technology and get hacking within a few seconds, is an invaluable feature. It is possible to demonstrate features or provide complicated tutorial showcases while giving the option of trying out the code immediately, without having to perform complicated procedures or wait for big downloads to finish.

For existing developers it also provides a clean development environment, where testing of new applications can happen without hassle.

Want to know more?

This is meant to be a brief list just to give you a taste of what we are doing.

I am at Akademy 2015 until the 29th. A strong encouragement: please come and speak to me in person! :-)

I will be happy to answer any questions and, if you like, give you a shortened version of my talk again.
Or, if you prefer, we are having a BoF on Monday at 11:30, in room 2.0a.

Thursday, 23 July 2015

Unpaid work training Google's spam filters

DanielPocock.com - fsfe | 08:49, Thursday, 23 July 2015

This week, there has been increased discussion about the pain of spam filtering by large companies, especially Google.

It started with Google's announcement that they are offering a service for email senders to know if their messages are wrongly classified as spam. Two particular things caught my attention: the statement that less than 0.05% of genuine email goes to the spam folder by mistake and the statement that this new tool to understand misclassification is only available to "help qualified high-volume senders".

From there, discussion has proceeded with Linus Torvalds blogging about his own experience of Google misclassifying patches from Linux contributors as spam and that has been widely reported in places like Slashdot and The Register.

Personally, I've observed much the same thing from the other perspective. While Torvalds complains that he isn't receiving email, I've observed that my own emails are not always received when the recipient is a Gmail address.

It seems that Google expects their users to work a little bit every day going through every message in the spam folder and explicitly clicking the "Not Spam" button, so that Google can improve their proprietary algorithms for classifying mail. If you just read or reply to a message in the folder without clicking the button, or if you don't do this for every message, including mailing list posts and other trivial notifications that are not actually spam, more important messages from the same senders will also continue to be misclassified.

If you are not willing to volunteer your time to do this, or if you are simply one of those people who has better things to do, Google's Gmail service is going to have a corrosive effect on your relationships.

A few months ago, we visited Australia and I sent emails to many people who I wanted to catch up with, including invitations to a family event. Some people received the emails in their inboxes yet other people didn't see them because the systems at Google (and other companies, notably Hotmail) put them in a spam folder. The rate at which this appeared to happen was definitely higher than the 0.05% quoted in the Google article above. Maybe the Google spam filters noticed that I hadn't sent email to some members of the extended family for a long time and this triggered the spam algorithm? Yet it was precisely because we were visiting Australia that email needed to work reliably with that type of contact, as we don't fly out there every year.

A little bit earlier in the year, I was corresponding with a few students who were applying for Google Summer of Code. Some of them also observed the same thing: they sent me an email and didn't receive my response until they looked in their spam folder a few days later. Last year a GSoC mentor I know lost track of a student for over a week because Google silently discarded chat messages, so it appears Google has not just shot themselves in the foot, they managed to shoot their foot twice.

What is remarkable is that in both cases, the email problems and the XMPP problems, Google doesn't send any error back to the sender so that they know their message didn't get through. Instead, it is silently discarded or left in a spam folder. This is the most corrosive form of communication problem as more time can pass before anybody realizes that something went wrong. After it happens a few times, people lose a lot of confidence in the technology itself and try other means of communication which may be more expensive, more synchronous and time intensive or less private.

When I discussed these issues with friends, some people replied by telling me I should send them things through Facebook or WhatsApp, but each of those services has a higher privacy cost and there are also many other people who don't use either of those services. This tends to fragment communications even more as people who use Facebook end up communicating with other people who use Facebook and excluding all the people who don't have time for Facebook. On top of that, it creates more tedious effort going to three or four different places to check for messages.

Despite all of this, the suggestion that Google's only response is to build a service to "help qualified high-volume senders" get their messages through leaves me feeling that things will get worse before they start to get better. There is no mention in the Google announcement about what they will offer to help the average person eliminate these problems, other than to stop using Gmail or spend unpaid time meticulously training the Google spam filter and hoping everybody else does the same thing.

Some more observations on the issue

Many spam filtering programs used in corporate networks, such as SpamAssassin, add headers to each email to suggest why it was classified as spam. Google's systems don't appear to give any such feedback to their users or message senders though, just a very basic set of recommendations for running a mail server.
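
For comparison, here is an illustration of the kind of feedback such filters give. The header values below are invented for this example, but the header names and the general format are typical of what a default SpamAssassin setup adds to a message it considers spam:

X-Spam-Flag: YES
X-Spam-Level: ******
X-Spam-Status: Yes, score=6.2 required=5.0 tests=BAYES_99,HTML_MESSAGE,
        URIBL_BLOCKED autolearn=no version=3.4.0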

Many chat protocols work with an explicit opt-in. Before you can exchange messages with somebody, you must add each other to your buddy lists. Once you do this, virtually all messages get through without filtering. Could this concept be adapted to email, maybe giving users a summary of messages from people they don't have in their contact list and asking them to explicitly accept or reject each contact?

If a message spends more than a week in the spam folder and Google detects that the user isn't ever looking in the spam folder, should Google send a bounce message back to the sender to indicate that Google refused to deliver it to the inbox?

I've personally heard that misclassification occurs with mailing list posts as well as private messages.

Recording live events like a pro (part 1: audio)

DanielPocock.com - fsfe | 07:14, Thursday, 23 July 2015

Whether it is a technical talk at a conference, a political rally or a budget-conscious wedding, many people now have most of the technology they need to record it and post-process the recording themselves.

For most events, audio is an essential part of the recording. There are exceptions: if you take many short clips from a wedding and mix them together you could leave out the audio and just dub the couple's favourite song over it all. For a video of a conference presentation, though, the speaker's voice is essential.

These days, it is relatively easy to get extremely high quality audio using a lapel microphone attached to a smartphone. Let's have a closer look at the details.

Using a lavalier / lapel microphone

Full wireless microphone kits with microphone, transmitter and receiver are usually $US500 or more.

The lavalier / lapel microphone by itself, however, is relatively cheap, under $US100.

The lapel microphone is usually an omnidirectional microphone that will pick up the voices of everybody within a couple of meters of the person wearing it. It is useful for a speaker at an event, some types of interviews where the participants are at a table together and it may be suitable for a wedding, although you may want to remember to remove it from clothing during the photos.

There are two key features you need when using such a microphone with a smartphone:

  • TRRS connector (this is the type of socket most phones and many laptops have today)
  • Microphone impedance should be at least 1kΩ (that is, one kilohm) or the phone may not recognize that it is connected

Many leading microphone vendors have released lapel mics with these two features aimed specifically at smartphone users. I have personally been testing the Rode smartLav+.

Choice of phone

There are almost 10,000 varieties of smartphone just running Android, as well as iPhones, Blackberries and others. It is not practical for most people to test them all and compare audio recording quality.

It is probably best to test the phone you have and ask some friends if you can make test recordings with their phones too for comparison. You may not hear any difference but if one of the phones has a poor recording quality you will hopefully notice that and exclude it from further consideration.

A particularly important issue is being able to disable AGC (automatic gain control) in the phone. Android has a standard API for disabling AGC but not all phones or Android variants respect this instruction.

I have personally had positive experiences recording audio with a Samsung Galaxy Note III.

Choice of recording app

Most Android distributions have at least one pre-installed sound recording app. Look more closely and you will find that not all apps are the same. For example, some of the apps have aggressive compression settings that compromise recording quality. Others don't work when you turn off the screen of your phone and put it in your pocket. I've even tried a few that crashed intermittently.

The app I found most successful so far has been Diktofon, which is available on both F-Droid and Google Play. Diktofon has been designed not just for recording, but it also has some specific features for transcribing audio (currently only supporting Estonian) and organizing and indexing the text. I haven't used those features myself but they don't appear to cause any inconvenience for people who simply want to use it as a stable recording app.

As the app is completely free software, you can modify the source code if necessary. I recently contributed patches enabling 48kHz recording and disabling AGC. The version with these fixes has just been released and appears in F-Droid, but has not yet been uploaded to Google Play. The fixes are in version 0.9.83; you need to go into the settings to make sure AGC is disabled and to set the 48kHz sample rate.

Whatever app you choose, the following settings are recommended:

  • 16 bit or greater sample size
  • 48kHz sample rate
  • Disable AGC
  • WAV file format

Whatever app you choose, test it thoroughly with your phone and microphone. Make sure it works even when you turn off the screen and put it in your pocket while wearing the lapel mic for an hour. Observe the battery usage.
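
One way to verify that a test recording really uses these settings is to copy the WAV file to a computer and inspect it there. The commands below are only a sketch: they assume the SoX tools and the file utility are installed and that test-recording.wav is the name of your test file.

# print the sample rate, precision and channel count of the recording
soxi test-recording.wav

# alternatively, the file utility gives a quick one-line summary
file test-recording.wav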

Gotchas

Now let's say you are recording a wedding and the groom has that smartphone in his pocket and the mic on his collar somewhere. What is the probability that some telemarketer calls just as the couple are exchanging vows? What is the impact on the recording?

Maybe some apps will automatically put the phone in silent mode when recording. More likely, you need to remember this yourself. These are things that are well worth testing though.

Also keep in mind the need to have sufficient storage space and to check whether the app you use is writing to your SD card or internal memory. The battery is another consideration.
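
As a rough guide for storage, uncompressed mono WAV at the settings recommended above (48kHz, 16 bit) works out to about 96 kilobytes per second. A quick back-of-the-envelope calculation (an estimate only, assuming a single mono channel):

# bytes per hour for 48000 samples/s at 2 bytes per sample, mono
echo $((48000 * 2 * 60 * 60))   # prints 345600000, i.e. roughly 345 MB per hour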

At a large event where smartphones are being used instead of wireless microphones, possibly for many talks in parallel, install a monitoring app like Ganglia on the phones to detect and alert if any phone has a weak wifi signal, a low battery or a lack of memory.

Live broadcasts and streaming

Some time ago I tested RTP multicasting from Lumicall on Android. This type of app would enable a complete wireless microphone setup with live streaming to the internet at a fraction of the cost of a traditional wireless microphone kit. This type of live broadcast could also be done with WebRTC on the Firefox app.

Conclusion

If you research the topic thoroughly and spend some time practicing and testing your equipment, you can make great audio recordings with a smartphone and an inexpensive lapel microphone.

In subsequent blogs, I'll look at tips for recording video (see part two) and doing post-production with free software.

Monday, 20 July 2015

RTC status on Debian, Ubuntu and Fedora

DanielPocock.com - fsfe | 14:04, Monday, 20 July 2015

Zoltan (Zoltanh721) recently blogged about WebRTC for the Fedora community and Fedora desktop.

https://fedrtc.org has been running for a while now and this has given many people a chance to get a taste of regular SIP and WebRTC-based SIP. As suggested in Zoltan's blog, it has convenient integration with Fedora SSO and as the source code is available, people are welcome to see how it was built and use it for other projects.

Issues with Chrome/Chromium on Linux

If you tried any of FedRTC.org, rtc.debian.org or meet.jit.si using Chrome/Chromium on Linux, you may have found that the call appears to be connected but there is no media. This is a bug and the Chromium developers are on to it. You can work around this by trying an older version of Chromium (it still works with v37 from Debian wheezy) or Firefox/Iceweasel.

WebRTC is not everything

WebRTC offers many great possibilities for people to quickly build and deploy RTC services to a large user base, especially when using components like JSCommunicator or the DruCall WebRTC plugin for Drupal.

However, it is not a silver bullet. For example, there remain concerns about how to receive incoming calls. How do you know which browser tab is ringing when you have many tabs open at once? This may require greater browser/desktop integration and that has security implications for JavaScript. Whether users on battery-powered devices can really leave JavaScript running for extended periods of time waiting for incoming calls is another issue, especially when you consider that many web sites contain some JavaScript that is less than efficient.

Native applications and mobile apps like Lumicall continue to offer the most optimized solution for each platform although WebRTC currently offers the most convenient way for people to place a Call me link on their web site or portal.

Deploy it yourself

The RTC Quick Start Guide offers step-by-step instructions and a thorough discussion of the architecture for people to start deploying RTC and WebRTC on their own servers using standard packages on many of the most popular Linux distributions, including Debian, Ubuntu, RHEL, CentOS and Fedora.

My interview for the keynote at Akademy published

I LOVE IT HERE » English | 07:18, Monday, 20 July 2015

I have been invited to give a keynote at KDE’s Akademy on Saturday, 25 July. In preparation for the conference, Devaja Shah interviewed me, and his questions made me look up some things in my old mail archives from the early 2000s.

The interview covers questions about my first GNU/Linux distribution, why I studied politics and management, how I got involved in FSFE, how Free Software is linked to the progress of society, my involvement in wilderness first aid seminars, as well as my favourite music. (Thanks to Victorhck who translated the interview into Spanish and also added corresponding videos.)

I am looking forward to interesting discussions with KDE contributors and the local organisers from GPUL during the weekend.

Wednesday, 15 July 2015

We don’t use Free Software, we want something that just works!

Being Fellow #952 of FSFE » English | 09:50, Wednesday, 15 July 2015

Joinup reports: Using Free Software in school greatly reduces the time needed to troubleshoot PCs

After migrating to Free Software in the Augustinian College of León, Spain: “For teachers and staff, the amount of technical issues decreased by 63 per cent and in the school’s computer labs by 90 per cent.” (emphasis added)

Good to have something to refer to when I hear: “We don’t have the time to fiddle with this, we need a solution that ‘just works’.”

Further down in the article, they also describe a working solution for their whiteboards, and document incompatibilities are mentioned. Another proof that raising awareness about open standards beyond Document Freedom Day is really important. BTW, do you already have something planned for next year’s DFD? It should be March 30, 2016.


Galicia introducing over 50 000 students to free software

Being Fellow #952 of FSFE » English | 06:59, Wednesday, 15 July 2015

Galicia is introducing over 50 000 students to free software tools, making them part of the region’s 2014-2015 curriculum.

In May, Amtega, Galicia’s agency for technological modernisation, signed a contract with the three universities in the region, the Galician Association of Free Software (AGASOL) and six of the region’s free software user groups.

The last paragraph of the article also mentions some changes after the recent elections. Can anybody with more insight explain to me what this may mean for the future of Free Software in Galicia? Thanks!

Thursday, 02 July 2015

Continuous integration testing for WordPress plugins on Github using Travis CI

Seravo | 12:04, Thursday, 02 July 2015


Intro

We have open sourced and published some plugins on wordpress.org. We publish them on wordpress.org but do the development in Github. Our goal is to keep them simple but effective. Quite a few people are using them actively and some have contributed back by adding features or fixing bugs and documentation. It’s super nice to have contributions from someone else, but it’s hard to see whether those changes break your existing features. We all make mistakes from time to time, and it’s easier to recover if you have good test coverage. Automated integration tests can help you out in these situations.

Choosing Travis CI

We use Github.com for hosting our code and wanted a tool which integrates really well with Github. Travis works seamlessly with Github and it’s free to use for open source projects. Travis gives you the ability to run your tests in controlled environments which you can modify to your preferences.

Requirements

You need to have a Github account in order to set up Travis for your projects.

How to use

1. Sign up for a free Travis account

Just click the link on the page and enter your Github credentials.

2. Activate testing in Travis. Go to your accounts page from the top right corner.

(Screenshot: the Travis accounts page)

Then go to your Organisation page (or choose a project of your own) and activate the projects you want to be tested in Travis.

(Screenshot: repository activation switches in Travis)

3. Add a .travis.yml file to the root of your project repository. You can use the samples from the next section.

(Screenshot: the .travis.yml file in the Github repository)

After you have pushed to Github, just wait a couple of seconds and your tests should start automatically.

Configuring your tests

I think the hardest part of Travis testing is just getting started. That’s why I created a testing template for WordPress projects. You can find it in our Github repository. Next I’m going to show you a few different ways to use Travis. We are going to split tests into unit tests with PHPUnit and integration tests with RSpec, Poltergeist and PhantomJS.

#1 Example .travis.yml, use Rspec integration tests to make sure your plugin won’t break anything else

This is the easiest way to use Travis with your WordPress plugin. It installs the latest WordPress and activates your plugin. It checks that your front page is working and that you can log into the admin panel. Just drop this .travis.yml into your project and start testing :)


sudo: false
language: php

notifications:
  on_success: never
  on_failure: change

php:
  - nightly # PHP 7.0
  - 5.6
  - 5.5
  - 5.4

env:
  - WP_PROJECT_TYPE=plugin WP_VERSION=latest WP_MULTISITE=0 WP_TEST_URL=http://localhost:12000 WP_TEST_USER=test WP_TEST_USER_PASS=test

matrix:
  allow_failures:
    - php: nightly

before_script:
  - git clone https://github.com/Seravo/wordpress-test-template wp-tests
  - bash wp-tests/bin/install-wp-tests.sh test root '' localhost $WP_VERSION

script:
  - cd wp-tests/spec && bundle exec rspec test.rb

#2 Example .travis.yml, which uses both PHPUnit unit tests and RSpec integration tests

  1. Copy phpunit.xml and the tests folder from https://github.com/Seravo/wordpress-test-template into your project

  2. Edit the line in tests/bootstrap.php containing PLUGIN_NAME according to your plugin:

define('PLUGIN_NAME','your-plugin-name-here.php');

  3. Add a .travis.yml file:

sudo: false
language: php

notifications:
  on_success: never
  on_failure: change

php:
  - nightly # PHP 7.0
  - 5.6
  - 5.5
  - 5.4

env:
  - WP_PROJECT_TYPE=plugin WP_VERSION=latest WP_MULTISITE=0 WP_TEST_URL=http://localhost:12000 WP_TEST_USER=test WP_TEST_USER_PASS=test

matrix:
  allow_failures:
    - php: nightly

before_script:
  # Install composer packages before trying to activate themes or plugins
  # - composer install
  - git clone https://github.com/Seravo/wordpress-test-template wp-tests
  - bash wp-tests/bin/install-wp-tests.sh test root '' localhost $WP_VERSION

script:
  - phpunit
  - cd wp-tests/spec && bundle exec rspec test.rb

For this to be useful you need to add tests specific to your plugin.

To get you started see how I did it for our plugins HTTPS Domain Alias & WP-Dashboard-Log-Monitor.
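
Before pushing to Github, it can be handy to run roughly the same steps locally. The commands below are only a sketch lifted from the env, before_script and script sections above: they assume a local MySQL server with a root user and an empty password (matching the Travis environment) and that phpunit and bundler are already installed.

# environment variables copied from the env section of .travis.yml
export WP_PROJECT_TYPE=plugin WP_VERSION=latest WP_MULTISITE=0
export WP_TEST_URL=http://localhost:12000 WP_TEST_USER=test WP_TEST_USER_PASS=test

# fetch the test template and set up a throwaway WordPress test install
git clone https://github.com/Seravo/wordpress-test-template wp-tests
bash wp-tests/bin/install-wp-tests.sh test root '' localhost $WP_VERSION

# run the PHPUnit tests, then the RSpec integration tests
phpunit
(cd wp-tests/spec && bundle exec rspec test.rb)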

A few useful links:

If you want to contribute to better WordPress testing, open an issue or pull request in our WordPress testing template.

Seravo can help you use PHPUnit, RSpec and Travis in your projects. Please feel free to ask us about our WordPress testing via email at wordpress@seravo.fi or in the comment section below.

Applying the most important lesson for non-developers in Free Software through Roundcube Next

freedom bits | 08:01, Thursday, 02 July 2015

Software is a social endeavour. The most important advantage of Free Software is its community, because the best Open Source is built by a community of contributors. Contribution is the single most important currency and the differentiation between users and community. You want to be part of that community, at least by proxy, because like any community, its members spend time together, exchange ideas, and create cohesion that translates into innovation, features and best practices.

We create nothing less than a common vision of the future.

By the rules of our community, anyone can take our software and use it, extend it, distribute it. A lot of value can be created this way, and not everyone has the capabilities to contribute. Others choose not to contribute in order to maximise their personal profits. Short of actively harming others, egoism, even in its most extreme forms, is to be accepted. That is not to say it is necessarily a good idea for you to put the safeguarding of your own personal interests into the hands of an extreme egoist, or that you should trust them to go the extra mile for you in all the places that you cannot verify.

That is why the most important lesson for non-developers is this: Choose providers based on community participation. Not only are they more likely to know early about problems, putting them in a much better position to provide you with the security you require. They will also ensure you will have a future you like.

Developers know all this already, of course, and typically apply it at least subconsciously.

Growing that kind of community has been one of the key motives to launch Roundcube Next, which is now coming close to closing its phase of bringing together its key contributors. Naturally everyone had good reasons to get involved, as recently covered on Venturebeat.

Last night Sandstorm.io became the single greatest contributor to the campaign in order to build that better future together, for everyone. Over the past weeks, many other companies, some big, some small, have done the same.

Together, we will be that community that will build the future.

Monday, 29 June 2015

The FSFE.org buildscript

Told to blog - Entries tagged fsfe | 17:20, Monday, 29 June 2015

At the start of this month I deployed the new build script on the running test instance of fsfe.org.
I'd like to give an overview of its features and limitations. By the end you should be able to understand the build logs on our web server and to test web site changes on your own computer.

General Concept

The new build script (let's call it the 2015 revision) emulates the basic behaviour of the old build script (roughly the 2002 revision). The rough idea is that the web site starts as a collection of xhtml files, which get turned into html files.

The Main build

An xhtml file on the input side contains the text of a page, usually a news article or informational text. When it is turned into its corresponding html output, it will be enriched with menu headers, the site footer, tag-based cross-links, etc. In essence, however, it will still be the same article, and one xhtml input file normally corresponds to one html output file. The rules for the transition are described using the xslt language. The build script finds the transition rules for each xhtml file in an xsl file. Each xsl file will normally provide rules for a number of pages.

Some xhtml files contain special references which will cause the output to include data from other xhtml and xml files. For example, the news page contains headings from all news articles, and the front page has some quotes rolling through which are loaded from a different file.

The build script coordinates the tools which perform the build process. It selects xsl rules for each file, handles different language versions and the fallback for missing translations, collects external files for inclusion into a page, and calls the XSLT processor, RSS generator, etc.
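
To get a feel for the core transition step the build script automates, you can run an XSLT processor such as xsltproc by hand on a single page. This is only an illustration: the file names below are placeholders rather than actual paths from the repository, and the real build selects the xsl file and passes extra parameters for language handling and cross-links.

# transform one xhtml page into html using an xsl rule file (placeholder names)
xsltproc --output article.en.html default.xsl article.en.xhtml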

Files like PNG images and PDF documents simply get copied to the output tree.

The pre build

Aside from committing images, changing XML/XHTML code, and altering inclusion rules, authors have the option to have dynamic content generated at build time. This is mostly used for our PDF leaflets but occasionally comes in handy for other things as well. In different places the source directory contains files called Makefile. Those are instruction files for the GNU make program, a system used for running compilers and converters to generate output files from source code. A key feature of make is that it regenerates output files only if their source code has changed since the last generator run.
GNU make is called by the build script prior to its own build run. This goes for both the 2002 and the 2015 revision of the build script. Make itself runs some xslt-based conversions and PDF generators to set up news items for later processing and to build PDF leaflets. The output goes to the website's source tree for later processing by the build script. When building locally you must be careful not to commit generated files to the SVN repository.

Build times

My development machine "Vulcan" uses relatively lightweight hardware by contemporary standards: an Intel Celeron 2955U with two Haswell CPU cores at 1.4 GHz and an SSD for mass storage.
I measured the time for some build runs on this machine; our web server "Ekeberg", despite running older hardware, seems to perform slightly faster, while our future web server "Claus", which isn't yet productively deployed, seems to work a little slower. The script performs most tasks multi-threaded and can profit greatly from multiple CPU cores.

Pre build

The above mentioned pre build will take a long time when it is first run. However, once its output files are set up, they will hardly ever be altered.

Initial pre build on Vulcan: ~38 minutes
Subsequent pre build on Vulcan: < 1 minute

2002 Build script

When the build script is called it first runs the pre build. All timing tests listed here were performed after an initial pre build. This way, as in normal operation, the time required for the pre build has an almost negligible impact on the total build time.

Page rebuild on Vulcan: ~17 minutes
Page rebuild on Ekeberg: ~13 minutes

2015 Build script

The 2015 revision of the build script is written in shell script while the 2002 implementation was in Perl. The Perl script used to call the XSLT processor as a library and passed a pre-parsed XML tree into the converter. This way it was able to keep a pre-parsed version of all source files in cache, which was advantageous as it saved reparsing a file which would be included repeatedly. For example, news collections are included in different places on the site, and missing language versions of an article are usually all filled with the same English text version while retaining menu links and page footers in their respective translation.
The shell script does not ever parse an XML tree itself; instead it uses quicker shortcuts for the few XML modifications it has to perform. This means, however, that it has to pass raw XML input to the XSLT program, which then has to perform the parsing over and over again. On the plus side this makes operations more atomic from the script's point of view, and aids in implementing a dependency based build which can avoid rebuilding most of the files entirely.

For performing a build, the shell script first calculates a dependency vector in the form of a set of make rules. It then uses make to perform the build. This dependency based build is the basic mode of operation for the 2015 build script.
This can still be tweaked: when the build script updates the source tree from our version control system, it can use the list of changes to update the dependency rules generated in a previous build run. In this differential build even the dependency calculation is limited to a minimum, with the resulting build time being mostly dependent on the actual content changes.

Timings taken on the development machine Vulcan:
Dependency build, initial run: 60+ minutes
Dependency build, subsequent run: ~9 to ~40 minutes
Differential build: ~2 to ~40 minutes

Local builds

In the simplest case you check out the web page from subversion, choose a target directory for the output and build from the source directory directly into the target. Note that in the process the build script will create some additional files in the source directory. Ideally all of those files should be ignored by SVN, so they cannot be accidentally committed.

There are two options that will result in additional directories being set up beside the output.

  1. You can set up a status directory, where log files for the build will be placed. If you are running a full build this is recommended because it allows you to introspect the build process. The status directory is also required to run differential builds. If you do not provide a status directory, some temporary files will be created in your /tmp directory, and differential builds will then behave identically to regular dependency builds.
  2. You can set up a stage directory. You will not normally need this feature unless you are building a live web site. When you specify a stage directory, updates to the website will first be generated there, and only after the full build will the stage directory be synchronised into the target folder. This way you avoid having a website online that is half outdated and half updated. Note, though, that even the chosen synchronisation method (rsync) is not fully atomic.

Full build

The full build is best tested with a local web server. You can easily set one up using lighttpd. Set up a config file, e.g. ~/lighttpd.conf:

server.modules = ( "mod_access" )
$HTTP["remoteip"] !~ "127.0.0.1" {
  url.access-deny = ("")    # prevent hosting to the network
}

# change port and document-root accordingly
server.port                     = 5080
server.document-root            = "/home/fsfe/fsfe.org"
server.errorlog                 = "/dev/stdout"
server.dir-listing              = "enable"
dir-listing.encoding            = "utf-8"
index-file.names                = ("index.html", "index.en.html")

include_shell "/usr/share/lighttpd/create-mime.assign.pl"

Start the server (I like to run it in foreground mode, so I can watch the error output):

/usr/sbin/lighttpd -Df ~/lighttpd.conf

...and point your browser to http://localhost:5080

Of course you can configure the server to run on any other port. Unless you want to use a port number below 1024 (e.g. the standard for HTTP is port 80) you do not need to start the server as root user.

Finally build the site:

~/fsfe-trunk/build/build_main.sh -statusdir ~/status/ build_into /home/fsfe/fsfe.org/

Testing single pages

Unless you are interested in browsing the entire FSFE website locally, there is a much quicker way to test changes you make to one particular page, or even to .xsl files. You can build each page individually, exactly as it would be generated during the complete site update:

~/fsfe-trunk/build/build_main.sh process_file ~/fsfe-trunk/some-document.en.xhtml > ~/some-document.html

The resulting file can of course be opened directly. However, since it will contain references to images and style sheets, it may be useful to test it on a local web server providing the referenced files (mostly the look/ and graphics/ directories).

The status directory

There is no elaborate status page yet. Instead we log different parts of the build output to different files. This log output is visible on http://status.fsfe.org.

Make_copy: Part of the generated make rules; it contains all rules for files that are just copied as they are. The file may be reused in the differential build.
Make_globs: Part of the generated make rules. The file contains rules for preprocessing XML file inclusions. It may be reused during the differential build.
Make_sourcecopy: Part of the generated make rules. Responsible for copying xhtml files to the source/ directory of the website. May be reused in differential build runs.
Make_xhtml: Part of the generated make rules. Contains the main rules for XHTML to HTML transitions. May be reused in differential builds.
Make_xslt: Part of the generated make rules. Contains rules for tracking interdependencies between XSL files. May be reused in differential builds.
Makefile: All make rules. This file is a concatenation of all the rule files above. While the differential build regenerates the other Make_ files selectively, this one is always assembled from the input files, which may or may not have been reused in the process. The make program which builds the site uses this file. Note the time stamp: this is the last file to be written to before make takes over the build.
SVNchanges: List of changes pulled in with the latest SVN update. Unfortunately it gets overwritten with every unsuccessful update attempt (normally every minute).
SVNerrors: SVN error output. Should not contain anything ;-)
buildlog: Output of the make program performing the build. Possibly the most valuable source of information when investigating a build failure. The last file to be written to during the make run.
debug: Some debugging output of the build system; not very informative because it is used very sparsely.
lasterror: Part of the error output. Gets overwritten with every run attempt of the build script.
manifest: List of all files which should be contained in the output directory. Gets regenerated for every build along with the Makefile. The list is used for removing obsolete files from the output tree.
premake: Output of the make-based pre build. Useful for investigating issues that come up during this stage.
removed: List of files that were removed from the output after the last run, i.e. files that were part of a previous website revision but no longer appear in the manifest.
stagesync: Output of rsync when copying from the stage directory to the http root. Basically a list of all updated, added and removed files.

Roadmap

Roughly in this order:

  • move *.sources inclusions from xslt logic to build logic
  • split up translation files
    • both steps will shrink the dependency network and give build times a more favourable tendency
  • deploy on productive site
  • improve status output
  • auto detect build requirements on startup (to aid local use)
  • add support for markdown language in documents
  • add sensible support for other, more distinct, language codes (e.g. pt and pt-br)
  • deploy on DFD site
  • enable the script to remove obsolete directories (not only files)
