Planet Fellowship (en)

Monday, 25 September 2017

Nextcloud Conference 2017: Free Software licenses in a Nutshell

English on Björn Schießle - I came for the code but stayed for the freedom | 13:44, Monday, 25 September 2017

At this year's Nextcloud conference I gave a lightning talk about Free Software licenses. Free Software developers often like to ignore the legal aspects of their projects, but I still think it is important to know at least some basics. The license you choose and other legal decisions you make are an important cornerstone in defining the basic rules of the community around your code. Making good choices can enable a level playing field for a large, diverse and growing community.

Explaining this huge topic in just five minutes was a tough challenge. The goal was to explain why we are doing things the way we are doing them. For example, why we introduced the Developer Certificate of Origin, a tool to create legal certainty that is used these days by many large Free Software initiatives such as Linux, Docker or Eclipse. Further, the goal was to pass on some knowledge about license compatibility and give app developers some useful pointers on how to decide whether a third-party license is compatible or not. If the five-minute lightning talk was too fast (and yes, I talked quite fast to match the time limit) or if you couldn’t attend, here are the slides to reread it:

[Note: This post contains some presentation slides; go to the original page to see them.]

Sunday, 24 September 2017

Free Software Efforts (2017W38)

Iain R. Learmonth | 11:00, Sunday, 24 September 2017

Here’s my weekly report for week 38 of 2017. This week has not been a great week as I saw my primary development machine die in a spectacular reboot loop. Thanks to the wonderful community around Debian and free software (which, if you’re reading this, you’re probably part of), I should be back up to speed soon. A replacement workstation is currently moving towards me and I’ve received a number of smaller donations that will go towards video converters and upgrades to get me back to full productivity.

Debian

I’ve prepared and tested backports for 3 packages in the tasktools packaging team: tasksh, bugwarrior and powerline-taskwarrior. Unfortunately I am not currently in the backports ACLs and so I can’t upload these, but I’m hoping this will be resolved soon. Once these are uploaded, the latest upstream release for all packages in the tasktools team will be available either in the stable suite or in the stable backports suite.

In preparation for the shutdown of Alioth mailing lists, I’ve set up a new mailing list for the tasktools team and have already updated the maintainer fields for all the team’s packages in git. I’ve subscribed the old mailing list’s user to the new mailing list in DDPO so there will still be a comprehensive view there during the migration. I am currently in the process of reaching out to the admins of git.tasktools.org with a view to moving our git repositories there.

I’ve also continued to review the scapy package and have closed a couple more bugs that were already fixed in the latest upstream release but had been missed in the changelog.

Bugs closed (fixed/wontfix): #774962, #850570

Tor Project

I’ve deployed a small fix to an update from last week where the platform field on Atlas had been pulled across to the left column. It has now been returned to the right-hand column and is no longer pushed down the page by long family lists.

I’ve been thinking about the merge of Compass functionality into a future Atlas and this is being tracked in #23517.

Tor Project has approved expenses (flights and hotel) for me to attend an in-person meeting of the Metrics Team. This meeting will occur in Berlin on the 28th September and I will write up a report detailing outcomes relevant to my work after the meeting. I have spent some time this week preparing for this meeting.

Bugs closed (fixed/wontfix): #22146, #22297, #23511

Sustainability

I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

The loss of my primary development machine was a setback; however, I have been donated a new workstation which should hopefully arrive soon. The hard drives in my NAS can also now be replaced, as I have budget available for this. I do not see any hardware failures as imminent at this time; however, should they occur, I would not have the budget to replace that hardware. I only have funds to replace the hardware that has already failed.

Onion Services

Iain R. Learmonth | 09:45, Sunday, 24 September 2017

In the summer 2017 edition of 2600 magazine there is a brilliant article on running onion services as part of a series on censorship resistant services. Onion services provide privacy and security for readers above that which is possible through the use of HTTPS.

Since moving my website to Netlify, my onion service died as Netlify doesn’t provide automatic onion services (although they do offer automated Let’s Encrypt certificate provisioning). If anyone from Netlify is reading this, please consider adding a one-click onion service button next to the Let’s Encrypt button.

For now though, I have my onion service hosted elsewhere. I’ve got a regular onion service (version 2) and also now a next generation onion service (version 3). My setup works like this (a rough sketch of the build step follows the list):

  • A cronjob polls my website’s git repository that contains a Hugo static site
  • Two versions of the site are built with different base URLs set in the Hugo configuration, one for the regular onion service domain and one for the next generation onion service domain
  • Apache is configured for two virtual hosts, one for each domain name
  • tor from the Debian archives is configured for the regular onion service
  • tor from git (to have next generation onion service support) is configured for the next generation onion service
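
To make this concrete, here is a rough sketch of what the build step of that cronjob looks like. The paths and output directories here are made up for illustration; only the onion addresses and the general approach match my setup:

#!/bin/sh
set -e
# Update the Hugo sources from the website's git repository
cd /srv/website-src && git pull --quiet
# Build one copy of the site per onion service, each with its own base URL
hugo --baseURL "http://w6d6vblb6vhuqxt6.onion/" --destination /srv/www/onion-v2
hugo --baseURL "http://tvin5bvfwew3ldttg5t6ynlif4t53y3mbmb7sgbyud7h5q6gblrpsnyd.onion/" --destination /srv/www/onion-v3
# Apache serves the two output directories from its two virtual hosts, and each
# tor instance points its onion service at the matching one.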

The main piece of advice I have for anyone who would like to have an onion service version of their static website is to make sure that your static site generator is handling URLs for you and that your sources use relative URLs as far as possible. Hugo is great at this and most themes should be using the baseURL configuration parameter where appropriate.

There may be some room for improvement here in the polling process; perhaps this could be triggered by a webhook instead.

I’m not using HTTPS on these services as the HTTPS private key for the domain isn’t even controlled by me, it’s controlled by Netlify, so it wouldn’t really be a great method of authentication, and Tor already provides strong encryption and its own authentication through the URL of the onion service.

Of course, this means you need a secure way to get the URLs, so here’s a PGP-signed pair of URLs:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

As of 2017-09-23, the website at iain.learmonth.me is mirrored by me at
the following onion addresses:

w6d6vblb6vhuqxt6.onion
tvin5bvfwew3ldttg5t6ynlif4t53y3mbmb7sgbyud7h5q6gblrpsnyd.onion

This declaration was written and signed for publication in my blog.
-----BEGIN PGP SIGNATURE-----

iQEzBAEBCgAdFiEEfGEElJRPyB2mSFaW0hedW4oe0BEFAlnG1FMACgkQ0hedW4oe
0BGtTwgAp9PK6x1X9lnPLaeOOEALxn2BkDK5Q6PBt7OfnTh+f53oRrrxf0fmfNMH
Qz/IDY+tULX3TZYbjDsuu+aDpk6YIdOnOzFpIYW9Qhm6jAsX4RDfn1cZoHg1IeM7
bCvrYHA5u753U3Mm+CsLbGihpYZE/FBdc/nE5S6LxYH83QZWLIW19EPeiBpBp3Hu
VB6hUrDz3XU23dXn2U5/7faK7GKbC6TrBG/Z6dUtaXB62xgDIrPEMorwfsAZnWv4
3mAEsYJv9rnIyLbWamXDas8fJG04DOT+2C1NYmZ5CNJ4C7PKZuIYkaoVAp+pzLGJ
6BEBYaRvYIjd5g8xdVC3kmje6IM9cg==
=lUvh
-----END PGP SIGNATURE-----

Note: For the next generation onion service, I do currently have some logging enabled in the tor daemon as I’m running this service as an experiment to uncover any bugs that appear. There is no logging beyond the default for the version 2 hidden service’s tor daemon.

Another note: Current stable releases of Tor Browser do not support next generation onion services, you’ll have to grab an experimental build to try them out.

[Figure: Viewing my next generation onion service in Tor Browser]

Saturday, 23 September 2017

VM on bhyve not booting

Iain R. Learmonth | 09:45, Saturday, 23 September 2017

Last night I installed updates on my FreeNAS box and rebooted it. As expected my network died, but then it never came back, which I hadn’t expected.

My FreeNAS box provides backup storage space, a local Debian mirror and a mirror of talks from recent conferences. It also runs a couple of virtual machines and one of these provides my local DNS resolver.

I hooked up the VNC console to the virtual machine and the problem looked to be that it was booting from the Debian installer CD. I removed the CD from the VM and rebooted, thinking that would be the end of it, but nope:

[Figure: The EFI shell presented where GRUB should have been]

I put the installer CD back and booted in “Rescue Mode”. For some reason, the bootloader installation wasn’t working, so I planned to reinstall it. The autopartition layout for Debian with EFI seems to use /dev/sda2 for the root partition. When you choose this it will see that you have an EFI partition and offer to mount it for you too.

When I went to install the bootloader, I saw another option that I didn’t know about: “Force GRUB installation in removable media path”. In the work I did on live-wrapper I had only ever dealt with this method of booting; I didn’t realise that there were other methods. The reasoning behind this option can be found in detail in Debian bug #746662. I also found mjg59’s blog post from 2011 useful in understanding this.

Suffice it to say that this fixed the booting issue for me in this case. I haven’t investigated this much further so I can’t be certain of any reproducible steps to this problem, but I did also stumble across this forum post which essentially gives the manual steps that are taken by that Rescue Mode option in order to fix the problem. I think the only reason I hadn’t run into this before now is that the VMs hadn’t been rebooted since their installation.
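
For reference, the essence of the manual fix, as I understand it from that forum post, is just to copy the GRUB EFI binary into the removable media path on the EFI system partition. The device name and mount point below are assumptions based on the autopartition layout mentioned above, so check them against your own system before running anything:

# From the rescue shell, with the EFI system partition assumed to be /dev/sda1
mount /dev/sda1 /mnt
mkdir -p /mnt/EFI/BOOT
cp /mnt/EFI/debian/grubx64.efi /mnt/EFI/BOOT/bootx64.efi
umount /mnt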

Walkaway by Cory Doctorow

Evaggelos Balaskas - System Engineer | 09:36, Saturday, 23 September 2017

Walkaway by Cory Doctorow

Are you willing to walk away without anything in the world to build a better world?


Tag(s): books

Friday, 22 September 2017

It Died: An Update

Iain R. Learmonth | 07:30, Friday, 22 September 2017

Update: I’ve had an offer of a used workstation that I’m following up. I would still appreciate any donations to go towards costs for cables/converters/upgrades needed with the new system, but the hard part should hopefully be out of the way now. (:

Thanks for all the responses I’ve received about the death of my desktop PC. As I updated in my previous post, I find it unlikely that I will have to orphan any of my packages as I believe that I should be able to get a new workstation soon.

The responses I’ve had so far have been extremely uplifting for me. It’s very easy to feel that no one cares or appreciates your work when your hardware is dying and everything feels like it’s working against you.

I’ve already received two donations towards a new workstation. If you feel you can help then please contact me. I’m happy to accept donations by PayPal or you can contact me for BACS/SWIFT/IBAN information.

I’m currently looking at an HP Z240 Tower Workstation starting with 8GB RAM and then perhaps upgrading the RAM later. I’ll be transplanting my 3TB hybrid HDD into the new workstation as that cache is great for speeding up pbuilder builds. I’m hoping this will work for me for the next 10 years, just as the Sun did for the last 10 years.

Somebody buy this guy a computer. But take the Sun case in exchange. That sucker's cool: It Died @iainlearmonth http://ow.ly/oLEI30fk0yN
-- @BrideOfLinux - 11:00 PM - 21 Sep 2017

For the right donation, I would be willing to consider shipping the rebooty Sun if you like cool looking paperweights (send me an email if you like). It’s pretty heavy though; I just weighed it at 15kg. (:

Thursday, 21 September 2017

It Died

Iain R. Learmonth | 09:10, Thursday, 21 September 2017

On Sunday, in my weekly report on my free software activities, I wrote about how sustainable my current level of activity is. I had identified the risk that the computer that I use for almost all of my free software work was slowly dying. Last night it entered an endless reboot loop and subsequent efforts to save it have failed.

I cannot afford to replace this machine and my next best machine has half the cores, half the RAM and less than half of the screen real estate. As this is going to be a serious hit to my productivity, I need to seriously consider if I am able to continue to maintain the number of packages I currently do in Debian.

Update: Thank you for all the responses I’ve received on this post. While I have not yet resolved the situation, the level of response has me very confident that I will not have to orphan any packages and I should be back to work soon.

[Figure: The Sun Ultra 24]

Wednesday, 20 September 2017

Easy APT Repository

Iain R. Learmonth | 07:30, Wednesday, 20 September 2017

The PATHspider software I maintain as part of my work depends on some features in cURL and in PycURL that have only just been merged or are still awaiting merge. I need to build a docker container that includes these as Debian packages, so I need to quickly build an APT repository.

A Debian repository can essentially be seen as a static website and the contents are GPG signed so it doesn’t necessarily need to be hosted somewhere trusted (unless availability is critical for your application). I host my blog with Netlify, a static website host, and I figured they would be perfect for this use case. They also support open source projects.

There is a CLI tool for netlify which you can install with:

sudo apt install npm
sudo npm install -g netlify-cli

The basic steps for setting up a repository are:

# Collect the .deb files into a directory
mkdir repository
cp /path/to/*.deb repository/
cd repository
# Generate the package index and the release file for a flat repository
apt-ftparchive packages . > Packages
apt-ftparchive release . > Release
# Produce an inline-signed InRelease file with your GPG key
gpg --clearsign -o InRelease Release
# Publish the directory as a site on Netlify
netlify deploy

Once you’ve followed these steps, and created a new site on Netlify, you’ll be able to manage this site also through the web interface. A few things you might want to do are set up a custom domain name for your repository, or enable HTTPS with Let’s Encrypt. (Make sure you have apt-transport-https if you’re going to enable HTTPS though.)
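
On the client side, an https source needs the https transport installed first (at the time of writing it is still a separate package):

sudo apt install apt-transport-https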

To add this repository to your apt sources:

# Trust the key the repository is signed with
gpg --export -a YOURKEYID | sudo apt-key add -
# Add the repository as a flat repository (hence the bare "/" after the URL)
echo "deb https://SUBDOMAIN.netlify.com/ /" | sudo tee -a /etc/apt/sources.list
sudo apt update

You’ll now find that those packages are installable. Beware of APT pinning as you may find that the newer versions on your repository are not actually the preferred versions according to your policy.
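
A quick way to check is apt-cache policy, and if needed you can pin the repository higher than the default priority. This is only a sketch; the hostname and package name are placeholders for your own values:

# Show which version APT considers the candidate
apt-cache policy yourpackage
# Optionally raise the priority of packages coming from your repository
sudo tee /etc/apt/preferences.d/my-netlify-repo <<EOF
Package: *
Pin: origin SUBDOMAIN.netlify.com
Pin-Priority: 900
EOF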

Update: If you’re wanting a solution that would be more suitable for regular use, take a look at reprepro. If you’re wanting to have end-users add your apt repository as a third-party repository to their system, please take a look at this page on the Debian wiki which contains advice on how to instruct users to use your repository.

Update 2: Another commenter has pointed out aptly, which offers a greater feature set and removes some of the restrictions imposed by reprepro. I’ve never used aptly myself so can’t comment on specifics, but from the website it looks like it might be a nicely polished tool.

Tuesday, 19 September 2017

Small Glowing Thing

Iain R. Learmonth | 07:30, Tuesday, 19 September 2017

Quite a while ago I obtained an Adafruit NeoPixel Stick. It was cheap enough to be an impulse buy but it took me some time to get around to actually doing something with it.

I’ve been wanting to play a little more with the ATtiny range of microcontrollers so these things seemed to go together nicely. It turns out that getting an ATtiny programmed is actually rather simple using an Arduino as an ISP programmer. I’ve written up some notes on the procedure at the 57North Hacklab wiki.

The thing that caught me out is that while you don’t actually need to burn a bootloader to the ATtiny, you do still need to select that option from the menu, as this sets the soft fuses to determine the speed it will run at. For a good hour I was wondering why all my Neopixels were just white and not changing and it was because the ATtiny was still running at its default 1MHz.

[Figure: The Neopixels in action commanded by the ATtiny85]

Monday, 18 September 2017

New blog

Posts on Hannes Hauswedell's homepage | 14:00, Monday, 18 September 2017

I have moved my blog from https://blogs.fsfe.org/h2 to my own web-space. On the way I have switched from Wordpress to GoHugo. I hope to publish more often and will increasingly cover programming topics.

Sunday, 17 September 2017

Free Software Efforts (2017W37)

Iain R. Learmonth | 17:00, Sunday, 17 September 2017

I’d like to start making weekly reports again on my free software efforts. Part of the reason for these reports is for me to see how much time I’m putting into free software. Hopefully I can keep these reports up.

Debian

I have updated txtorcon (a Twisted-based asynchronous Tor control protocol implementation used by ooniprobe, magic-wormhole and tahoe-lafs) to its latest upstream version. I’ve also added two new binary packages that are built by the txtorcon source package: python3-txtorcon and python-txtorcon-doc for Python 3 support and generated HTML documentation respectively.

I have gone through the scapy (Python module for the forging and dissection of network packets) bugs and closed a couple that seem to have been silently fixed by new upstream releases and not been caught in the BTS. I’ve uploaded a minor revision to include a patch that fixes the version number reported by scapy.

I have prepared and uploaded a new package for measurement-kit (a portable C++11 network measurement library) from the Open Observatory of Network Interference, which at time of writing is still in the NEW queue. I have also updated ooniprobe (probe for the Open Observatory of Network Interference) to its latest upstream version.

I have updated the Swedish debconf strings in the xastir (X Amateur Station Tracking and Information Reporting) package, thanks to the translators.

I have updated the direwolf (soundcard terminal node controller for APRS) package to its latest upstream version and fixed the creation of the system user to run direwolf with systemd to happen at the time the package is installed. Unfortunately, it has been necessary to drop the PDF documentation from the package as I was unable to contact the upstream author and acquire the Microsoft Word sources for this release.

I have reviewed and sponsored the uploads of the new packages comptext (GUI based tool to compare two text streams), comptty (GUI based tool to compare two radio teletype streams) and flnet (amateur radio net control station software) in the hamradio team. Thanks to Ana Custura for preparing those packages, comptext and comptty are now available in unstable.

I have updated the Debian Hamradio Blend metapackages to include cubicsdr (a software defined radio receiver). This build also refreshes the list of packages that can now be included as they had not been packaged at the time of the last build.

I have produced and uploaded an initial package for python-azure-devtools (development tools for Azure SDK and CLI for Python) and have updated python-azure (the Azure SDK for Python) to a recent git snapshot. Due to some issues with python-vcr it is currently not possible to run the test suite as part of the build process and I’m watching the situation. I have also fixed the auto dependency generation for python3-azure, which had previously been broken.

Bugs closed (fixed/wontfix): #873036, #871940, #869566, #873083, #867420, #861753, #855385, #855497, #684727, #683711

Tor Project

I have been working through tickets for Atlas (a tool for looking up details about Tor relays and bridges) and have merged and deployed a number of fixes. Some highlights: bandwidth sorting in search results is now semantically correct (not just an alphanumeric sort ignoring units); the details page now shows when a relay was first seen, along with the host name if a reverse DNS record has been found for the relay’s IP address; and support was added for the NoEdConsensus flag (although happily no relays had this flag at the time this support was added).

The metrics team has been working on merging projects into the metrics team website to give a unified view of information about the Tor network. This week I have been working towards a prototype of a port of Atlas to the metrics website’s style and this work has been published in my personal Atlas git repository. If you’d like to have a click around, you can do so.

A relay operators meetup will be happening in Montreal on the 14th of October. I won’t be present, but I have taken this opportunity to ask operators if there’s anything that they would like from Atlas that they are not currently getting. Some feedback has already been received and turned into code and trac tickets.

I also attended the weekly metrics team meeting in #tor-dev.

Bugs closed (fixed/wontfix): #6787, #9814, #21958, #21636, #23296, #23160

Sustainability

I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

I continue to be happy to spend my time on this work, however I do find myself in a position where it may not be sustainable when it comes to hardware. My desktop, a Sun Ultra 24, is now 10 years old and I’m starting to see random reboots which so far have not been explained. It is incredibly annoying to have this happen during a long build. Further, the hard drives in my NAS which are used for the local backups and for my local Debian mirror are starting to show SMART errors. It is not currently within my budget to replace any of this hardware. Please contact me if you believe you can help.

[Figure: This week's energy was provided by Club Mate]

Friday, 15 September 2017

End of Support for Fairphone 1: Some Unanswered Questions

Paul Boddie's Free Software-related blog » English | 23:28, Friday, 15 September 2017

I previously followed the goings-on at Fairphone a lot more closely than I have done recently, so after having mentioned the obsolescence risks of the first model in an earlier article, it was interesting to discover a Fairphone blog post explaining why the company will no longer support the Fairphone 1. Some of the reasons given are understandable: they went to market with an existing design, focusing instead on minimising the use of conflict minerals; as a result various parts are no longer manufactured or available; the manufacturer they used even stopped producing phones altogether!

A mention of batteries is made in the article, and in community reaction to the announcement, a lot of concern has been expressed about how long the batteries will be good for, whether any kind of replacements might be found, and so on. With today’s bewildering proliferation of batteries of different shapes and sizes, often sealed into devices for guaranteed obsolescence, we are surely storing up a great deal of trouble for the future in this realm. But that is a topic for another time.

In the context of my previous articles about Fairphone, however, what is arguably more interesting is why this recent article fails to properly address the issues of software longevity and support. My first reaction to the Fairphone initiative was caution: it was not at all clear that the company had full control over the software stack, at least within the usual level of expectations. Subsequent information confirmed my suspicions: critical software components were being made available only as proprietary software by MediaTek.

To be fair to Fairphone, the company did acknowledge its shortcomings and promise to do better for Fairphone 2, although I decided to withhold judgement on that particular matter. And for the Fairphone 1, some arrangements were apparently made to secure access to certain software components that had been off-limits. But as I noted in an article on the topic, despite the rather emphatic assurances (“Fairphone has control over the Fairphone 1 source code”), the announcement perhaps raised more questions than it gave answers.

Now, it would seem, we do not get our questions answered as such, but we appear to learn a few things nevertheless. As some people noted in the discussion of the most recent announcement – that of discontinuing support for the device altogether – ceasing the sale of parts and accessories is one thing, but what does that have to do with the software? The only mention of software with any kind of detail in the entire discussion appears to be this:

This is a question of copyright. All the stuff that we would be allowed to publish is pretty boring because it is out there already. The juicy parts are proprietary to Mediatek. There are some Fairphone related changes to open source parts. But they are really really minor…

So what do we learn? That “control over the Fairphone 1 source code” is, in reality, the stuff that is Free Software already, plus various Android customisations done by their software vendor, plus some kind of licence for the real-time operating system deployed on the device. But the MediaTek elephant in the room kept on standing there and everyone agreed not to mention it again.

Naturally, I am far from alone in having noticed the apparent discrepancy between the assurances given and the capabilities Fairphone appeared to have. One can now revisit “the possibility of replacing the Android software by alternative operating systems” mentioned in the earlier, more optimistic announcement and wonder whether this was ever truly realistic, whether it might have ended up being dependent on reverse-engineering efforts or MediaTek suddenly having an episode of uncharacteristic generosity.

I guess that “cooperation from license holders and our own resources” said it all. Although the former thing sounds like the pipedream it always seemed to be, the latter is understandable given the stated need for the company to focus on newer products and keep them funded. We might conclude from this statement that the licensing arrangements for various essential components involved continuing payments that were a burdensome diversion of company resources towards an increasingly unsupportable old product.

If anything this demonstrates why Free Software licensing will always be superior to contractual arrangements around proprietary software that only function as long as everyone feels that the arrangement is lucrative enough. With Free Software, the community could take over the maintenance in as seamless a transition as possible, but in this case they are instead presumably left “high and dry” and in need of a persuasive and perpetually-generous “rich uncle” character to do the necessary deals. It is obvious which one of these options makes more sense. (I have experienced technology communities where people liked to hope for the latter, and it is entertaining only for a short while.)

It is possible that Fairphone 2 provides a platform that is more robust in the face of sourcing and manufacturing challenges, and that there may be a supported software variant that will ultimately be completely Free Software. But with “binary blobs” still apparently required by Fairphone 2, people are right to be concerned that as new products are considered, the company’s next move might not be the necessary step in the right direction that maintains the flexibility that modularity should be bringing whilst remedying the continuing difficulties that the software seems to be causing.

With other parties now making loud noises about phones that run Free Software, promising big things that will eventually need to be delivered to be believed, maybe it would not be out of place to suggest that instead of “big bang” funding campaigns for entirely new one-off products, these initiatives start to work together. Maybe someone could develop a “compute module” for the Fairphone 2 modular architecture, if it lends itself to that. If not, maybe people might consider working towards something that would allow everyone to deliver the things they do best. Otherwise, I fear that we will keep seeing the same mistakes occur over and over again.

Three-and-a-half years’ support is not very encouraging for a phone that should promote sustainability, and the software inputs to this unfortunate situation were clear to me as an outsider over four years ago. That cannot be changed now, and so I just hope Fairphone has learned enough from this and from all the other things that have happened since, so that they may make better decisions in the future and make phones that truly are, as they claim themselves, “built to last”.

Thursday, 14 September 2017

Public Money, Public Code, Public Control

Paul Boddie's Free Software-related blog » English | 15:48, Thursday, 14 September 2017

An interesting article published by the UK Government Digital Service was referenced in a response to the LWN.net coverage of the recently-launched “Public Money, Public Code” campaign. Arguably, the article focuses a little too much on “in the open” and perhaps not enough on the matter of control. Transparency is a good thing, collaboration is a good thing, no-one can really argue about spending less tax money and getting more out of it, but it is the matter of control that makes this campaign and similar initiatives so important.

In one of the comments on the referenced article you can already see the kind of resistance that this worthy and overdue initiative will meet. There is this idea that the public sector should just buy stuff from companies and not be in the business of writing software. Of course, this denies the reality of delivering solutions where you have to pay attention to customer needs and not just have some package thrown through the doorway of the customer as big bucks are exchanged for the privilege. And where the public sector ends up managing its vendors, you inevitably get an under-resourced customer paying consultants to manage those vendors, maybe even their own consultant colleagues. Guess how that works out!

There is a culture of proprietary software vendors touting their wares or skills to public sector departments, undoubtedly insisting that their products are a result of their own technological excellence and that they are doing their customers a favour by merely doing business with them. But at the same time, those vendors need a steady – perhaps generous – stream of revenue consisting largely of public money. Those vendors do not want their customers to have any real control: they want their customers to be obliged to come back year after year for updates, support, further sales, and so on; they want more revenue opportunities rather than their customers empowering themselves and collaborating with each other. So who really needs whom here?

Some of these vendors undoubtedly think that the public sector is some kind of vehicle to support and fund enterprises. (Small- and medium-sized enterprises are often mentioned, but the big winners are usually the corporate giants.) Some may even think that the public sector is a vehicle for “innovation” where publicly-funded work gets siphoned off for businesses to exploit. Neither of these things cultivate a sustainable public sector, nor do they even create wealth effectively in wider society: they lock organisations into awkward, even perilous technological dependencies, and they undermine competition while inhibiting the spread of high-quality solutions and the effective delivery of services.

Unfortunately, certain flavours of government hate the idea that the state might be in a role of actually doing anything itself, preferring that its role be limited to delegating everything to “the market” where private businesses will magically do everything better and cheaper. In practice, under such conditions, some people may benefit (usually the rich and well-represented) but many others often lose out. And it is not unknown for the taxpayer to have to pick up the bill to fix the resulting mess that gets produced, anyway.

We need sustainable public services and a sustainable software-producing economy. By insisting on Free Software – public code – we can build the foundations of sustainability by promoting interoperability and sharing, maximising the opportunities for those wishing to improve public services by upholding proper competition and establishing fair relationships between customers and vendors. But this also obliges us to be vigilant to ensure that where politicians claim to support this initiative, they do not try and limit its impact by directing money away from software development to the more easily subverted process of procurement, while claiming that procured systems not be subject to the same demands.

Indeed, we should seek to expand our campaigning to cover public procurement in general. When public money is used to deliver any kind of system or service, it should not matter whether the code existed in some form already or not: it should be Free Software. Otherwise, we indulge those who put their own profits before the interests of a well-run public sector and a functioning society.

Saturday, 09 September 2017

We need to talk about the social media silos

agger's Free Software blog | 19:17, Saturday, 09 September 2017

O Jaraguá é Guaraní

The photo displayed above is from a protest in São Paulo on August 30. The protest took place because a federal court in São Paulo has decided to strip the area called Jaraguá of its status as indigenous land, effectively expelling 700 men, women and children from the land they’re currently occupying, no doubt to satisfy hungry developers and real estate vendors in their dreams of seeing soy fields and shopping malls wherever clean forests and rivers can still be found.

The Guaraní will, as may be appreciated from these photos and from this video, not be taking this lying down; in fact, the Jaraguá’s designation as indigenous land in 2015 was the result of decades of struggle on their part. With the current Brazilian government, however, and the possibility of the arrival of an even more right-wing one with the presidential elections next year, the prospects might be very bleak indeed.

The aspect of this case which I want to discuss stems from the fact that the photo above, as well as the album it links to, was taken by my friend Rafael Frazão, an active member of the technoshamanism network who has been instrumental in establishing a collaboration with the Guaraní in Mbya, Pico de Jaraguá (see here, in Portuguese, for an example of this collaboration). As many members of the network use Facebook to communicate, everyone posted the photos and videos there – and something strange happened: the photos and videos from the protest got much less attention than everything else the same people were posting, as if Facebook had decided that some things are best left unsaid and consequently declined to show this very protest to anyone.

I’m guessing that the reason is that Facebook’s algorithms figured out the pictures are about a protest and that protests are given low priority because they don’t sit well with ad buyers, e.g., they fall afoul of the algorithms that maximize ad revenue. All in all, a non-political consequence of some people’s reaction to this kind of material.

The effect, however, of this algorithmic decision is highly political. I apologize for speaking in all caps here, but effectively, and especially for those millions of people who use Facebook as their main communication channel, this means that Facebook SPECIFICALLY silenced news about a protest against more than 700 people having their land STOLEN beneath them as just one small step of an ONGOING GENOCIDE against the indigenous population in Brazil. Censorship hardly gets any more serious than that.

But what would the company’s general attitude to that kind of controversy be? Well, in his highly readable review of three books about Facebook, John Lanchester notes that

An early experiment came in the form of Free Basics, a program offering internet connectivity to remote villages in India, with the proviso that the range of sites on offer should be controlled by Facebook. ‘Who could possibly be against this?’ Zuckerberg wrote in the Times of India. The answer: lots and lots of angry Indians. The government ruled that Facebook shouldn’t be able to ‘shape users’ internet experience’ by restricting access to the broader internet. A Facebook board member tweeted that ‘anti-colonialism has been economically catastrophic for the Indian people for decades. Why stop now?’ As Taplin points out, that remark ‘unwittingly revealed a previously unspoken truth: Facebook and Google are the new colonial powers.’

This kind of censorship, and Google’s and Facebook’s arrogance and colonial attitudes, would not be a problem if these companies were just players among players, but they’re not. For a large majority of Internet users, Google and Facebook are the Internet. With the site itself, Messenger, Instagram and Whatsapp, Facebook is sitting on a near-monopoly on communication between human beings. It’s come to the point where the site is seriously difficult to abandon, with sports clubs, schools and religious organizations using it as their only communication infrastructure.

During the twelve years I’ve followed the free software movement, I’ve seen the movement go back and forth, sometimes winning, sometimes losing, never gaining much ground, but never losing in a big way either.

I suppose with the rise of Google and especially Facebook, this has changed: Free software has lost the battle for nothing less than electronic communication between human beings to a proprietary behemoth, and it is already – exemplified in a very minor and random way by the Guaraní – doing serious damage to democracy, to freedom of speech and to civil society in general.

So, dear lovers of free software, how do we turn this around? Ideally, we could solve the problem for ourselves by creating interoperable platforms built on free software and open standards and convince everybody we want to communicate with to follow us there. So easy, and yet so difficult. How do we do it?

Monday, 04 September 2017

Spyware Dolls and Intel's vPro

DanielPocock.com - fsfe | 06:09, Monday, 04 September 2017

Back in February, it was reported that a "smart" doll with wireless capabilities could be used to remotely spy on children and was banned for breaching German laws on surveillance devices disguised as another object.

Would you trust this doll?

For a number of years now there has been growing concern that the management technologies in recent Intel CPUs (ME, AMT and vPro) also conceal capabilities for spying, either due to design flaws (no software is perfect) or backdoors deliberately installed for US spy agencies, as revealed by Edward Snowden. In a 2014 interview, Intel's CEO offered to answer any question, except this one.

The LibreBoot project provides a more comprehensive and technical analysis of the issue, summarized in the statement "the libreboot project recommends avoiding all modern Intel hardware. If you have an Intel based system affected by the problems described below, then you should get rid of it as soon as possible" - eerily similar to the official advice German authorities are giving to victims of Cayla the doll.

All those amateur psychiatrists suggesting LibreBoot developers suffer from symptoms of schizophrenia have had to shut their mouths since May when Intel confirmed a design flaw (or NSA backdoor) in every modern CPU had become known to hackers.

Bill Gates famously started out with the mission to put a computer on every desk and in every home. With more than 80% of new laptops based on an Intel CPU with these hidden capabilities, can you imagine the NSA would not have wanted to come along for the ride?

Four questions everybody should be asking

  • If existing laws can already be applied to Cayla the doll, why haven't they been used to alert owners of devices containing Intel's vPro?
  • Are exploits of these backdoors (either Cayla or vPro) only feasible on a targeted basis, or do the intelligence agencies harvest data from these backdoors on a wholesale level, keeping a mirror image of every laptop owner's hard disk in one of their data centers, just as they already do with phone and Internet records?
  • How long will it be before every fast food or coffee chain with a "free" wifi service starts dipping in to the data exposed by these vulnerabilities as part of their customer profiling initiatives?
  • Since Intel's admissions in May, has anybody seen any evidence that anything is changing though, either in what vendors are offering or in terms of how companies and governments outside the US buy technology?

Share your thoughts

This issue was recently raised on the LibrePlanet mailing list. Please feel free to join the list and click here to reply on the thread.

Thursday, 31 August 2017

Robotnik Utaite – A modern Singing Computer

tobias_platen's blog | 18:56, Thursday, 31 August 2017

Hatsune Miku is now 10 years old, but I do not use the Vocaloid software because it is non-free, and its note editor is not fully accessible. The other Singing Computer, from Milan Zamazal, is no longer maintained, only supports the English and Czech languages, and its singing-mode.scm is broken in modern distributions of GNU/Linux.

So I decided to replace the Festival Speech Synthesis System with a patched espeak-ng that has its own Singing Mode, and Sinsy as a MusicXML parser. The user can type in Lilypond source code in Emacs. Robotnik Utaite, the new Singing Computer that I am currently working on, uses python-ly to convert Lilypond source code into MusicXML.

I also plan to package this software for GNU Guix, a new package management tool that is much more advanced than pacman and apt. Therefore I won’t provide any binary packages for Trisquel or Parabola.

Wednesday, 30 August 2017

Paper - security record of Free Software

Matthias Kirschner's Web log - fsfe | 07:40, Wednesday, 30 August 2017

In April I participated in a workshop about security of Free Software; now the workshop paper is published.

Dog securing this blog

In April 2017, the Digital Society Institute hosted a workshop entitled "How Secure is free software? Security record of open source and free software." The workshop included contributions from Matthias Kirschner (Free Software Foundation Europe), Kathrin Noack (Karlsruhe Institute of Technology, Projekt secUnity), Michael Kranawetter (Microsoft) and Carl-Daniel Hailfinger (German Federal Office for Information Security).

The paper was written by Martin Schallbruch, former IT director for the German federal administration, and is available in English and German, including recommendations for the private and public sector.

Monday, 28 August 2017

Stupid Git

Paul Boddie's Free Software-related blog » English | 13:50, Monday, 28 August 2017

This kind of thing has happened to me a lot in recent times…

$ git pull
remote: Counting objects: 1367008, done.
remote: Compressing objects: 100% (242709/242709), done.
remote: Total 1367008 (delta 1118135), reused 1330194 (delta 1113455)
Receiving objects: 100% (1367008/1367008), 402.55 MiB | 3.17 MiB/s, done.
Resolving deltas: 100% (1118135/1118135), done.
fatal: missing blob object '715c19c45d9adbf565c28839d6f9d45cdb627b15'
error: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git did not send all necessary objects

OK, but now what? At least it hasn’t downloaded gigabytes of data and thrown it all away this time. No, just a few hundred megabytes instead.

Looking around on the Internet, I see various guides that are like having your car engine repeatedly stall at traffic lights and being told to crack open the bonnet/hood and start poking at random components, hoping that the thing will jump back to life. After all, isn’t that supposed to be one of the joys of motoring?

Or to translate the usual kind of response about Git when anyone dares to question its usability: “Do you not understand that you are not merely driving a car but are instead interacting with an extensible automotive platform?” And to think that the idea was just to get from A to B conveniently.

I cannot remember a single time when a Mercurial repository failed to update in such a fashion. But I suppose that we will all have to continue to endure the fashionable clamour for projects to move to Git and onto GitHub so that they can become magically popular and suddenly receive a bounty of development attention from the Internet pixies. Because they all love Git, apparently.

Friday, 25 August 2017

Final GSoC Blog Post – Results

vanitasvitae's blog » englisch | 11:21, Friday, 25 August 2017

This is my final GSoC update post. My name is Paul Schaub and I participated in the Google Summer of Code for the XMPP Standards Foundation. My project was about implementing encrypted Jingle File Transfer for the client library Smack.

Google Summer of Code was a great experience! This is a sentence you probably read in almost every single student’s GSoC resumé. Let’s take a look at how I experienced GSoC.

GSoC for me was…

  • tiring. Writing tests is something almost everybody hates. I do especially. In the beginning I imposed test-driven development on myself, but I quickly began to focus on real coding instead. Nevertheless, I often sat down for hours and wrote tests for classes and methods that were designed to allow easy testing, so I have an okayish test coverage, but it is far from perfect.
  • nerve-wracking. Dealing with bugs in sub-protocols I had never worked with before drove me crazy sometimes. Especially that SOCKS5 Transport bug has a special place on my hit list. All in all I think the Jingle protocol has some flaws which make it harder to implement than it should be, and those flaws (more precisely – the decisions I derived from the Jingle XEP design) often really bugged me.
  • highly demotivational. Can you imagine how devastating it can be when you spend days and days getting your implementation working with itself, only to see it miserably fail the first time you test it against another implementation? It didn’t help that I tested my code not only against one, but against two other applications.
  • desocializing. I had to dedicate a huge part of my day to coding. As a result I often had no time left for friends and family.
  • devastating. In the end I wrote far more code than I should have. Most of it was discarded along the way and got replaced. It really annoyed me having to start from zero over and over again. Also giving up on goals like using NIO for asynchronous event handling pricked my pride.
  • to the highest degree depressing. Especially near the end I sometimes sat down and stared at my screen for some time without really knowing what to do. On those days I ended up making small meaningless changes like fixing typos in my documentation or refactoring. Going to bed later on those days made me feel bad and kinda guilty.

But on the other hand, GSoC also…

  • taught me that coding is not only fun. Annoying parts like tests are (sometimes) important and in the end it is very satisfying to see that I managed to tame my code to a degree where all test cases run successfully.
  • gave me deep insights into areas and protocols that were completely new to me. Also, I think you can only fully gain an understanding of how things work if you tinker with them for more than just one day. Finding and fixing bugs is a good exercise for this purpose.
  • was super motivational. Contrary (or let’s say additional) to what I wrote above, it is highly motivational to see your code finally work hand in hand with other implementations. That’s the magic of decentralized protocols – knowing that on the other end is a completely different device, running a completely different operating system with a language you might have never seen before, but still in some magical way, both implementations harmonize with each other like two musicians from different countries do when playing together.
  • was very social by itself. Meeting online with members of the community was a real pleasure and I hope to be able to participate in conversations and meetings in the future too. Also there were one or two (or three…) evenings that I spent gaming with my friends, but *shht* don’t tell anyone ;P.
  • taught me important lessons. While I usually aim too high with my goals, I learned that sometimes you have to take a step back to set more reasonable goals. Nevertheless ambitious goals can’t hurt and I heard that you grow together with your challenges.
  • was satisfying as f***. Sure, sometimes I was feeling bad because I could have done more on that day, but in the end I was a little surprised seeing the whole picture and how much I really accomplished.

The Result

An overview about what I achieved during the Google Summer of Code can be found on my project page.

This week I did some more changes to my code and wrote more tests *gah*. I’m really proud that my JET proposal found its way into the XSF’s inbox. I’m really excited about what the future may bring for my specification :)

I want to take the chance to thank everyone in the XMPP community for welcoming me and allowing me to become a part of it. I also want to thank Florian Schmaus for mentoring me and giving me the idea to participate in GSoC in the first place.

As a last impulse - why do we need Google (thank you too btw :D ) to initiate programs like GSoC? Shouldn’t there be more public, government-financed programs? I certainly think so, even though Google did a very good job.

So in the end GSoC was really a great experience. Sometimes it was hard and challenging, but I’m sure it would have been boring without a certain degree of difficulty. I’d absolutely do it again (either as a student or maybe a mentor?) and I can only encourage students interested in free software to apply next year :)

Happy Hacking!

Thursday, 24 August 2017

FSFE at FrOSCon 2017

Matthias Kirschner's Web log - fsfe | 07:48, Thursday, 24 August 2017

At this year's FrOSCon from 19 to 20 August 2017 the FSFE participated again with a booth and I gave a talk about the Limux project.

The FSFE booth at FrOSCon 2017

As in previous years the FSFE was present with a booth, answering questions about Free Software and the FSFE's current activities. This year we especially enjoyed that many visitors did not yet know who the FSFE is and what we are doing. So besides handing out info material for people to use in their region and selling merchandise, we reached several new people there with our booth.

Besides meeting many known and new people, I gave a talk about the Limux project in German. You can either watch or download the recording on the CCC's media server or on YouTube. There is also an English recording of a similar talk which I gave as a keynote at the OpenSUSE conference 2017.

[Embedded video: recording of the Limux talk on YouTube]

If you have additional questions or comments about what we can learn from Limux, feel free to ask them on our discussion lists. The FSFE's German coordinator Björn Schießle -- who has published several articles about Limux in the past -- and our new deputy coordinator for Germany, Florian Snow, are also subscribed to the German and English lists. Looking forward to reading you there.

On Sunday Elisa Lindinger also gave a talk about the Prototype Fund. Afterwards she answered questions about it at our booth. If you are a German resident, interested in developing Free Software prototypes and being paid by the German Government for doing that, get in touch with the Prototype Fund team.

Finally, I would like to thank the FrOSCon organisation team as well as Gabriele, Lusy, Selva, and Volker for their help with the FSFE booth. In particular I would like to thank Michael Kesper (FSFE group Bonn) and Guido Arnold (FSFE group Frankfurt) for taking care of the main organisation of our booth, also before and after it was set up: thank you very much!

Technoshamanism at Aarhus University

agger's Free Software blog | 04:13, Thursday, 24 August 2017

by Fabiane M. Borges

(originally)


Conversation around technoshamanism in the center of Digital Living Research Commons in the Department of Information Studies & Digital Design at the University of Aarhus.

We talked about the traditions of festivals before the festivals of technoshamanism, such as Brazilian tactical media, Digitofagy, Submidialogy, MSST (Satellitless Movement), etc. We presented Baobáxia and the indigenous/quilombola struggles in the city and the countryside, as well as the aesthetic manifestations of technoshamanism encounters and ideas about free or postcolonial thought, ancestorfuturism and new subjective territories.

Organized by Martin Brynskov and Elyzabeth Joy Holford (directors of the department and hacklab) and collaborators such as Kata Börönte, Winnie Soon and Kristoffer Thyrrestrup, and students of the department. Connection and introduction by Carsten Agger, with the participation of Fabiane M. Borges, Raisa Inocêncio and Ariane Stolfi.

Facebook of DLRC: Digital Living Research Commons

Facebook event: Technoshamanism at the DLRC

PHOTOS

VIDEO

Wednesday, 23 August 2017

How to check if your touch screen is really sending touch events

TSDgeos' blog | 23:19, Wednesday, 23 August 2017

I've had this problem twice in the last year: I'm testing something related to touch on my laptop and I'm stuck trying to figure out whether it's my code that is wrong or whether my screen is misconfigured and only sending mouse events.

Thanks to Shawn of Qt fame for having helped me both times and explaining to me how to test whether my screen is sending touch events; I'm writing this blog post so I don't forget and ask him a third time :D

The first step is figuring out the xinput id of my laptop's touch screen:

tsdgeos@yoga:~:$ xinput list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ Wacom Co.,Ltd. Pen and multitouch sensor Finger id=9 [slave pointer (2)]
⎜ ↳ ETPS/2 Elantech TrackPoint id=13 [slave pointer (2)]
⎜ ↳ ETPS/2 Elantech Touchpad id=14 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=6 [slave keyboard (3)]
↳ Sleep Button id=8 [slave keyboard (3)]
↳ Wacom Co.,Ltd. Pen and multitouch sensor Pen id=10 [slave keyboard (3)]
↳ Integrated Camera id=11 [slave keyboard (3)]
↳ AT Translated Set 2 keyboard id=12 [slave keyboard (3)]
↳ ThinkPad Extra Buttons id=15 [slave keyboard (3)]
↳ Video Bus id=7 [slave keyboard (3)]

In this case it would be id=9

Then you can do

tsdgeos@yoga:~:$ xinput test-xi2 9

and if the output contains RawTouchBegin and RawTouchEnd events, it means that the screen is correctly sending touch events :)
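
If the output scrolls by too quickly, a simple filter helps; the device id 9 is the one found above, so adjust it for your own hardware:

xinput test-xi2 9 | grep -i rawtouch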

Next you probably want to check whether Qt is actually seeing those events; for that there are a few ready-to-use demos in the qtdeclarative source code, so I would do

tsdgeos@yoga:~:$ qml qt5/qtdeclarative_58/examples/quick/touchinteraction/multipointtouch/multiflame.qml

And after putting my five fingers on the screen I would see

So all is good, the bug is in my code and not in Qt or the configuration of my touch screen :D

Monday, 21 August 2017

Technoshamanism in the Dome of Visions, Aarhus – review

agger's Free Software blog | 07:35, Monday, 21 August 2017

by Fabiane M. Borges

(originally)


On 12 August 2017 we held the second meeting of technoshamanism in Aarhus, Denmark; here is the open call “Technoshamanism in Aarhus, Rethinking ancestrality and technology” (2017).

The first one was in November 2014 under the name “technomagic and technoshamanism meeting in Aarhus“. It was held at Open Space, a hacker space in Aarhus.

The second one was held at the Dome of Visions, an ecological geodesic dome located in the Port of Aarhus area and supported by a group of eco-activists. The meeting was organized by Carsten Agger with Ariane Stolfi, Fabiane M. Borges and Raisa Inocêncio, and with the participation of Amalia Fonfara, Rune Hjarno Rasmussen, Winnie Soon and Sebastian Tranekær. Here you can see the complete programme.

First we did a radio discussion, then performance/ritual presentations, and in the end a jam session of voice and noise with analogue and digital instruments, smoking and a clay bath.

AUDIO (by Finetanks): https://archive.org/details/II-tcnxmnsm-aarhus

PHOTOS (by Dome of Visions):

VIDEOS (by tcnxmsnm):

Part 1

Part 2

“The venue was really beautiful and well-equipped, its staff was helpful
and people in the audience were friendly and interested. Everything went
completely smoothly and according to plan, and the final ritual was
wonderful with its combination of Arab flute, drumming, noise and visual
performance. All in all a wonderful event.” (Carsten Agger)

“I think it was really instructive and incredibly cool to be with people who have so much knowledge and passion about the subjects they are dealing with. Communication seems to be the focal point, and there was a great willingness to let people express their minds.” (Sebastian Tranekær)

“The meeting was very diverse: the afternoon with speeches and discussion of some topics linked to the network of technoshamanism, such as self-organization and decolonization of thought; then we discussed technology and the cyborg future, and at the end we talked about noise and feminism. The ritual was open to the participation of other people and it was very curious to see the engagement – it was a rite of rock!!” (Raisa Inocêncio)

It was so nice to see Aarhus again; the Dome of Visions is a really special place, thank you to all of you!! We did just one day of meeting and could not listen to everybody, but I am sure it is just the beginning!!! I agree with Raisa, it was a rite of rock-noise. (Fabiane M. Borges)

Saturday, 19 August 2017

Technoethical T400s review

FSFE Fellowship Vienna » English | 16:54, Saturday, 19 August 2017

This is just to share my experience. (I am in no way affiliated with Technoethical.)

My background

I have been a satisfied Debian user since I moved away from Windows in 2008. Back then I thought I could trick the market by ordering one of the very few systems that didn’t come pre-installed with proprietary software. Therefore I went for a rather cheap Acer Extensa 5220 that came with Linplus Linux. Unfortunately it didn’t even have a GUI and I was totally new to GNU/Linux. So the first thing I did was to install Debian, because I value the concept of this community-driven project. I never regretted it. But the laptop had the worst possible wireless card built in. It never really worked with free software.

In the meantime I have learned a lot and have started to help others switch to free software. In my experience it is rather daunting to check new hardware for compatibility, and even if you manage to avoid all possible issues you end up with a system that you cannot fully trust because of the BIOS and the built-in hardware (Intel ME, for example).

The great laptop

Therefore I am very excited that nowadays you can actually order hardware that others have already checked for best compatibility. Since my old laptop got very unreliable recently, I wanted to do better this time and went for the Technoethical T400s, which comes pre-installed with Trisquel.

I am very pleased with the excellent customer care and the quality of the laptop itself. I was especially surprised by how lightweight and slim this not-so-recent device is.

When the ThinkPad T400s was first released in 2009 it was reviewed as an excellent, well-built but rather expensive system at about 2000 Euros. The weakest point was considered to be the mediocre screen. The Technoethical team put in a brand new screen which has perfectly neutral colours, very good contrast and good viewing angles. I’ve got 8 GB RAM (the maximum possible), a 128 GB SSD (instead of 64 GB) and the stronger dual-core SP9600 with 2.53 GHz (instead of the SP9400 with 2.40 GHz) CPU. In addition I’ve received a caddy adapter for replacing the CD/DVD drive with another hard disk. And all this for less than 900 Euros.

This is the most recent laptop among the very few devices worldwide that come with Libreboot and the FSF RYF label out of the box. The wireless works flawlessly right away with totally free software. This system fulfils everything I need from a PC as a graphic designer: image editing, desktop publishing, multimedia and even light 3D gaming. Needless to say, common office tasks such as emailing and web browsing work flawlessly too. Only a few people actually need more powerful machines to get their work done properly.

Even the webcam works out of the box without any issues, and the laptop comes back from its idle state reliably too. I didn’t test the fingerprint reader or Bluetooth.

The battery is a little weak

The only downside for power users on the go might be the limited battery life of about two hours with wireless enabled. It is possible to get a new battery, which might extend this to about three hours, but because the battery is positioned at the bottom front you can’t use a bigger one. (The only sensible option would be a docking station, but I was never fond of those bulky things that crowd my working space even when the laptop isn’t on the desk.)

Summary

Overall this is a great device that just works with entirely free software. I thank the Technoethical team for offering this fantastic service. I can only recommend buying one of these T400s laptops from Technoethical.

Wednesday, 16 August 2017

GSoC Week 11.5: Success!

vanitasvitae's blog » englisch | 14:49, Wednesday, 16 August 2017

Newsflash: My Jingle File Transfer is compatible with Gajim!

The Gajim developers recently fixed a bug in their Jingle Socks5 transport implementation and now I can send and receive files without any problems between Gajim and my test client. Funny side note: The bug they fixed was very familiar to me *cough* :P

Happy Hacking!

Where does our money go?

free software - Bits of Freedom | 08:51, Wednesday, 16 August 2017

Where does our money go?

Each year, the FSFE spends close to half a million Euro raising awareness of and working to support the ecosystem around free software. Most of our permanent funds come from our supporters -- the people who contribute financially to our work each month or year and continue to do so from month to month, and year to year. They are the ones who support our ability to plan for the long term and make long term commitments towards supporting free software.

Thank you!

You might be interested to know a bit more about how our finances are structured and where the money goes. Some of this, especially around the specific activities we have engaged in, you can also read about in our regular yearly reports, like this one. However, here are some further details about what this looks like on the inside, how this differs from what you see in our public reports, and why.

Our budget process and results

Each year, around October, I start putting together a draft budget for the following year, taking input from the people who are directly involved in deciding how we spend our funds: our office staff, our president, our executive council, and (from 2017) also our coordinators' team.

As more than half of our budget (about 55%) is employee costs, we are not thinking about how to divide half a million Euro, but about how to divide about 200k. And in reality, it's not even about dividing the budget: the budget is driven by our activities and the needs they have for the next fiscal year. No area should get more than it needs, but each area should have a reasonable budget to be able to carry out its work.

If you are interested in our employee costs, our budgeted costs for 2017 are 273k. You do not see this in our public reports, and this is one area where they differ. When we calculate the results at the end of each year, we collect the time reports from each staff member and divide the total cost for that person according to the focus areas on which they have worked.

Our office manager, for example, works almost exclusively on administration and so the cost for her time gets included under the heading for basic infrastructure costs. Our president takes part in this too, but he does much more on public awareness and policy, and so the cost for his time gets split over those areas, according to what he has reported time on.

Basic infrastructure costs

To stay with our basic infrastructure costs, this also includes costs for our office rent in Berlin, staff meetings, our General Assembly of members, lawyers and legal fees, bank and similar fees, fundraising and donor management, and technical infrastructure. The total budget for this in 2017 has been 64k, and the last couple of years' results have looked about the same (55k in 2014, 63k in 2015, 57k in 2016).

Community support

The next budget is community support, which was a new budget area in 2016, when it included the costs for our FSFE summit and an experiment to cover some costs for a local coordinator in one country. We subsequently decided not to continue with that experiment, but we kept the budget in 2017 for local activities and a potential volunteer meeting. We set the total at 11k, and I have recently delegated the part for local activities to the coordinators' team.

Public awareness

Our work on public awareness, which comes next, has a budget of 35k, most of which is event participation (conferences, booths and talks at a total of 18k). The budget includes costs for FOSDEM, the Chaos Communication Congress, and many other events. We also have costs for information material (flyers and similar material at a total of 8k), technical support for our web pages (at 6k), and a smaller budget for public awareness campaigns (like our I Love Free Software campaign and similar, at 3k).

Our budget for 2016 for public awareness was similar, but as you can see on our public pages, the spending for public awareness was 142k that year. The difference between the budget and the spending is the personnel costs, which get included in the published results.

Legal work

In our legal work, aside from a limited travel budget, the only expense we have is the annual legal and licensing workshop. The budget for this in 2016 was 40k. Compare that to the spending of 117k and you again see the personnel costs accounted for in the public report.

Policy work

And so we come to our policy work, where I feel we need to elaborate a bit on what has changed in 2017. Our budget for 2016 was 4k for policy work (most of the work on policy is staff time). You will see that when we publish the results for 2017, the costs for policy have shot up remarkably. We have increased the budget to 29k for 2017 to be able to invest in our Public Money - Public Code campaign, which we hope will be a major driver for our work in 2018.

Merchandise

The last and least interesting budget item, in some ways, is our costs for promotional merchandise (t-shirts and so on), as well as related shipping and packaging costs. We have a budget of 23k in total for this in 2017.

Incoming saldo, and what it means for us

By November, we have the results for the current fiscal year up until Q3, so we use that to project the spending for the entire fiscal year to know what we have in "incoming saldo" for the next year.

The incoming saldo is important to us because it is one of the metrics we use internally to get a feeling for the relative health of our finances. We use this in two ways. First, we calculate the income which we know with some certainty we will have in the next fiscal year (like our supporter contributions, and donations which have already been agreed to). This known income, together with our incoming saldo, is what we have to work with for the next fiscal year.

If our budget is larger than what we know we will have, we get a "funding gap", and it becomes the job of primarily the president and executive director to find donors or other ways to close this gap over the year.

The other way in which we use this incoming saldo as a metric is that we calculate how much of our budget for the year is covered by our incoming saldo. A value of 100% or more here means we would be able to survive the full budget year even without getting any other contribution. We have gradually increased this over the years and are now at 54% for the fiscal year of 2017.

Once all the numbers are in place, we send this first as a draft to our members to look at, and then based on their feedback, we finalise the budget and send a final version to the members.

Expenses for events & outreach

Some of the expenses incurred over the year, like bank fees, rent, and so on, get booked directly on the appropriate account. For some budgets, notably travel costs and event costs, we have an expense request system. The person who would like to make an expense files a request, which gets automatically sent to the budget owner (typically me, the president or our office manager), who then decides on it.

I will not go through all the possible expenses, but since I am responsible for the event budget (2513 in our accounts), I thought I would give an overview of the requests I've approved on this budget for 2017. These are the approved amounts only though, not the actual expenses; in many cases the expense has been less.

  • 2017-01-31: 1900 EUR for booth at ShaCamp 2017
  • 2017-02-21: 309 EUR for booth at Chemnitzer Linuxtag
  • 2017-02-22: 600 EUR for our president to give keynote at OSCAL17
  • 2017-05-04: 750 EUR for our legal coordinator to give a talk at OSCAL17
  • 2017-05-29: 200 EUR for our president to give keynote at OpenSUSE Conference
  • 2017-05-29: 660 EUR for booth at the Open Technology Fair in Madrid
  • 2017-06-14: 250 EUR for booth and travel at OpenAg Zurich
  • 2017-06-22: 350 EUR for booth at RMLL
  • 2017-07-31: 60 EUR to design a new booth rollup
  • 2017-08-09: 600 EUR for our president, legal coordinator and one intern to participate at CopyCamp 2017
  • 2017-08-16: 1800 EUR for a booth/assembly at Chaos Communication Congress 2017

There was an experiment earlier to leave part of the decision making for this budget to a group of coordinators, but the way it was done didn't work out in practice. If the delegation of the local activity budget works out well, I believe it would also be time to delegate the events budget to a team and I'll be thinking about this as we get into the budget for 2018.

Tuesday, 15 August 2017

Nagios with PNP4Nagios on CentOS 6.x

Evaggelos Balaskas - System Engineer | 18:18, Tuesday, 15 August 2017

nagios_logo.png

In many companies, nagios is the de facto monitoring tool. Even with modern alternative solutions available, this open source project still has a large number of deployments in place. This guide is based on a “clean/fresh” CentOS 6.9 virtual machine.

Epel

An official nagios repository exists at this address: https://repo.nagios.com/
I prefer to install nagios via the EPEL repository:

# yum -y install http://fedora-mirror01.rbc.ru/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

# yum info nagios | grep Version
Version     : 4.3.2

# yum -y install nagios

Selinux

Every online manual suggests disabling selinux when running nagios. There is a reason for that! But I will try my best to provide info on how to keep selinux enforcing. To write our own nagios selinux policies the easy way, we need one more package:

# yum -y install policycoreutils-python
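Before going any further, it does not hurt to confirm that selinux really is in enforcing mode (otherwise the denials below will only be logged, not enforced):

# getenforce
Enforcing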

Starting nagios:

# /etc/init.d/nagios restart

will show us some initial errors in the selinux audit log file /var/log/audit/audit.log

Filtering the results:

# egrep denied /var/log/audit/audit.log | audit2allow

will display something like this:

#============= nagios_t ==============
allow nagios_t initrc_tmp_t:file write;
allow nagios_t self:capability chown;

To create a policy file based on your errors:

# egrep denied /var/log/audit/audit.log | audit2allow -a -M nagios_t

and to enable it:

# semodule -i nagios_t.pp

BE AWARE this is not the only problem with selinux, but I will provide more details in a few moments.

Nagios

Now we are ready to start the nagios daemon:

# /etc/init.d/nagios restart

filtering the processes of our system:

# ps -e fuwww | egrep na[g]ios

nagios    2149  0.0  0.1  18528  1720 ?        Ss   19:37   0:00 /usr/sbin/nagios -d /etc/nagios/nagios.cfg
nagios    2151  0.0  0.0      0     0 ?        Z    19:37   0:00  _ [nagios] <defunct>
nagios    2152  0.0  0.0      0     0 ?        Z    19:37   0:00  _ [nagios] <defunct>
nagios    2153  0.0  0.0      0     0 ?        Z    19:37   0:00  _ [nagios] <defunct>
nagios    2154  0.0  0.0      0     0 ?        Z    19:37   0:00  _ [nagios] <defunct>
nagios    2155  0.0  0.0  18076   712 ?        S    19:37   0:00  _ /usr/sbin/nagios -d /etc/nagios/nagios.cfg

super!

Apache

Now it is time to start our web server apache:

# /etc/init.d/httpd restart

Starting httpd: httpd: apr_sockaddr_info_get() failed
httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName

This is a common error, and means that we need to define a ServerName in our apache configuration.

First, we give a name to our host in the hosts file:

# vim /etc/hosts

for this guide I’ll go with centos69, but you can edit that according to your needs:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 centos69

then we need to edit the default apache configuration file:

# vim /etc/httpd/conf/httpd.conf

#ServerName www.example.com:80
ServerName centos69

and restart the process:

# /etc/init.d/httpd restart

Stopping httpd:      [  OK  ]
Starting httpd:      [  OK  ]

We can see from the netstat command that it is running:

# netstat -ntlp | grep 80

tcp        0      0 :::80                       :::*                        LISTEN      2729/httpd      

Firewall

It is time to fix our firewall and open the default http port, so that we can view nagios from our browser.
That means we need to fix our iptables!

# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT

this is what we need. For a more permanent solution, we need to edit the default iptables configuration file:

# vim /etc/sysconfig/iptables

and add the below entry on INPUT chain section:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
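To load the rules from that file right away, instead of waiting for the next reboot, restart the iptables service:

# /etc/init.d/iptables restart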

Web Browser

We are ready to fire up our web browser and type the address of our nagios server.
Mine is on a local machine with the IP 192.168.122.96, so

http://192.168.122.96/nagios/

User Authentication

The default user authentication credentials are:

nagiosadmin // nagiosadmin

but we can change them!

From our command line, we type something similar:

# htpasswd -sb /etc/nagios/passwd nagiosadmin e4j9gDkk6LXncCdDg

so that htpasswd will update the default nagios password entry in /etc/nagios/passwd with something else, preferably a random and difficult password.

ATTENTION: e4j9gDkk6LXncCdDg is just that, a random password that I created for this document only. Create your own and don't tell anyone!
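A quick way to come up with such a random password (assuming the openssl tool is installed, which it normally is on CentOS):

# openssl rand -base64 16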

Selinux, Part Two

At this moment, if you are tailing the selinux audit file, you will see some more error messages.

Below you will see my nagios_t selinux policy file with all the things that are needed for nagios to run properly - at least at the moment!

module nagios_t 1.0;

require {
        type nagios_t;
        type initrc_tmp_t;
        type nagios_spool_t;
        type nagios_system_plugin_t;
        type nagios_exec_t;
        type httpd_nagios_script_t;
        class capability chown;
        class file { write read open execute_no_trans getattr };
}

#============= httpd_nagios_script_t ==============
allow httpd_nagios_script_t nagios_spool_t:file { open read getattr };

#============= nagios_t ==============
allow nagios_t initrc_tmp_t:file write;
allow nagios_t nagios_exec_t:file execute_no_trans;
allow nagios_t self:capability chown;

#============= nagios_system_plugin_t ==============
allow nagios_system_plugin_t nagios_exec_t:file getattr;

Edit your nagios_t.te file accordingly and then build your selinux policy:

# make -f /usr/share/selinux/devel/Makefile
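If make cannot find that Makefile, it is usually provided by the selinux-policy-devel package (an assumption based on a typical CentOS 6 install, so adjust as needed):

# yum -y install selinux-policy-devel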

You are ready to update the previous nagios selinux policy:

# semodule -i nagios_t.pp

Selinux - Nagios package

So … there is an rpm package with the name nagios-selinux, version 4.3.2. You can install it, but it does not resolve all the selinux errors in the audit file, so I think my way is better: you can understand some things better and you have more flexibility in defining your selinux policy.

Nagios Plugins

Nagios is the core process, the daemon. We also need the nagios plugins - the checks!
You can do something like this:

# yum install nagios-plugins-all.x86_64

but I don’t recommend it.

These are the defaults :

nagios-plugins-load-2.2.1-4git.el6.x86_64
nagios-plugins-ping-2.2.1-4git.el6.x86_64
nagios-plugins-disk-2.2.1-4git.el6.x86_64
nagios-plugins-procs-2.2.1-4git.el6.x86_64
nagios-plugins-users-2.2.1-4git.el6.x86_64
nagios-plugins-http-2.2.1-4git.el6.x86_64
nagios-plugins-swap-2.2.1-4git.el6.x86_64
nagios-plugins-ssh-2.2.1-4git.el6.x86_64

# yum -y install nagios-plugins-load nagios-plugins-ping nagios-plugins-disk nagios-plugins-procs nagios-plugins-users nagios-plugins-http nagios-plugins-swap nagios-plugins-ssh

and if everything is going as planned, you will see something like this:

nagios_checks.jpg

PNP4Nagios

It is time to add pnp4nagios, a simple graphing tool that reads the nagios performance data and represents it as graphs.

# yum info pnp4nagios | grep Version
Version     : 0.6.22

# yum -y install pnp4nagios

We must not forget to restart our web server:

# /etc/init.d/httpd restart

Bulk Mode with NPCD

I’ve spent far too much time trying to understand why the default synchronous mode does not work properly with nagios 4.x and pnp4nagios 0.6.x.
In the end, this is what works - so try not to re-invent the wheel, as I tried to do and lost so many hours.

Performance Data

We need to tell nagios to gather performance data from its checks:

# vim +/process_performance_data /etc/nagios/nagios.cfg

process_performance_data=1

We also need to tell nagios what to do with this data:

nagios.cfg

# *** the template definition differs from the one in the original nagios.cfg
#
service_perfdata_file=/var/log/pnp4nagios/service-perfdata
service_perfdata_file_template=DATATYPE::SERVICEPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tSERVICEDESC::$SERVICEDESC$\tSERVICEPERFDATA::$SERVICEPERFDATA$\tSERVICECHECKCOMMAND::$SERVICECHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$\tSERVICESTATE::$SERVICESTATE$\tSERVICESTATETYPE::$SERVICESTATETYPE$
service_perfdata_file_mode=a
service_perfdata_file_processing_interval=15
service_perfdata_file_processing_command=process-service-perfdata-file

# *** the template definition differs from the one in the original nagios.cfg
#
host_perfdata_file=/var/log/pnp4nagios/host-perfdata
host_perfdata_file_template=DATATYPE::HOSTPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tHOSTPERFDATA::$HOSTPERFDATA$\tHOSTCHECKCOMMAND::$HOSTCHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$
host_perfdata_file_mode=a
host_perfdata_file_processing_interval=15
host_perfdata_file_processing_command=process-host-perfdata-file

Commands

In the above configuration, we introduced two new commands

service_perfdata_file_processing_command  &
host_perfdata_file_processing_command

We need to define them in the /etc/nagios/objects/commands.cfg file :

#
# Bulk with NPCD mode
#
define command {
       command_name    process-service-perfdata-file
       command_line    /bin/mv /var/log/pnp4nagios/service-perfdata /var/spool/pnp4nagios/service-perfdata.$TIMET$
}

define command {
       command_name    process-host-perfdata-file
       command_line    /bin/mv /var/log/pnp4nagios/host-perfdata /var/spool/pnp4nagios/host-perfdata.$TIMET$
}
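Before restarting nagios to pick up all of the above changes, it is a good idea to let it verify the configuration first (the standard nagios preflight check):

# nagios -v /etc/nagios/nagios.cfg
# /etc/init.d/nagios restart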

If everything has gone right, then you will be able to see on a nagios check something like this:

nagios_perf.png

Verify

Verify your pnp4nagios setup:

# wget -c http://verify.pnp4nagios.org/verify_pnp_config

# perl verify_pnp_config -m bulk+npcd -c /etc/nagios/nagios.cfg -p /etc/pnp4nagios/ 

NPCD

The NPCD daemon (Nagios Performance C Daemon) is the daemon/process that will translate the gathered performance data into graphs, so let’s start it:

# /etc/init.d/npcd restart
Stopping npcd:                                             [FAILED]
Starting npcd:                                             [  OK  ]

You should see some warnings but not any critical errors.

Templates

Two new template definitions should be created, one for the host and one for the service:

/etc/nagios/objects/templates.cfg

define host {
   name       host-pnp
   action_url /pnp4nagios/index.php/graph?host=$HOSTNAME$&srv=_HOST_' class='tips' rel='/pnp4nagios/index.php/popup?host=$HOSTNAME$&srv=_HOST_
   register   0
}

define service {
   name       srv-pnp
   action_url /pnp4nagios/graph?host=$HOSTNAME$&srv=$SERVICEDESC$' class='tips' rel='/pnp4nagios/popup?host=$HOSTNAME$&srv=$SERVICEDESC$
   register   0
}

Host Definition

Now we need to apply the host-pnp template to our system:

so this configuration: /etc/nagios/objects/localhost.cfg

define host{
        use                     linux-server            ; Name of host template to use
                                                        ; This host definition will inherit all variables that are defined
                                                        ; in (or inherited by) the linux-server host template definition.
        host_name               localhost
        alias                   localhost
        address                 127.0.0.1
        }

becomes:

define host{
        use                     linux-server,host-pnp            ; Name of host template to use
                                                        ; This host definition will inherit all variables that are defined
                                                        ; in (or inherited by) the linux-server host template definition.
        host_name               localhost
        alias                   localhost
        address                 127.0.0.1
        }

Service Definition

And we finally must append the pnp4nagios service template to our services:

srv-pnp

define service{
        use                             local-service,srv-pnp         ; Name of service template to use
        host_name                       localhost

Graphs

We should be able to see graphs like these:

nagios_ping.png

Happy Measurements!

Appendix

These are some extra notes on the above article that you need to keep in mind:

Services

# chkconfig httpd on
# chkconfig iptables on
# chkconfig nagios on
# chkconfig npcd on 
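To double-check that these services are indeed enabled for the default runlevels:

# chkconfig --list | egrep 'httpd|iptables|nagios|npcd'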

PHP

If you are not running the default PHP version on your system, it is possible to get this error message:

Non-static method nagios_Core::SummaryLink()

There is a simple solution for that: you need to modify the index file to exclude the deprecated PHP error messages:

# vim +/^error_reporting /usr/share/nagios/html/pnp4nagios/index.php   

// error_reporting(E_ALL & ~E_STRICT);
error_reporting(E_ALL & ~E_STRICT & ~E_DEPRECATED);

Monday, 14 August 2017

GSoC Week 11: Practical Use

vanitasvitae's blog » englisch | 14:43, Monday, 14 August 2017

You know what makes code of a software library even better? Client code that makes it do stuff! I present to you xmpp_sync!

Xmpp_sync is a small command line tool which allows you to sync files from one device to one or more other devices via XMPP. It works a little bit like you might know it from e.g. ownCloud or Nextcloud: just drop the files into one folder and they automagically appear on your other devices. At the moment it works only unidirectionally, so files get synchronized in one direction, but not in the other.

The program has two modes: master mode and slave mode. In general, a client started in master mode will send files to all clients started in slave mode. So let's say we want to mirror contents from one directory to another. We start the client on our master machine and give it a path to the directory we want to monitor. On the other machines we start the client in slave mode and then add them to the master client. Whenever we now drop a file into the directory, it will automatically be sent to all registered slaves via Jingle File Transfer. Files also get sent when they get modified by the user. I registered a FileWatcher in order to get notified of such events; for this purpose I got in touch with Java NIO again.

Currently the transmission is made unencrypted (as described in XEP-0234), but I plan to also utilize my Jingle Encrypted Transports (JET) code/spec to send the files OMEMO encrypted in the future. My plan for the long run is to further improve JET, so that it might get implemented by other clients.

Besides that, I found the error in my ejabberd configuration which prevented my SOCKS5 proxy from working. The server was listening on 127.0.0.1 by default, so the port was not reachable from the outside world. Now I can finally test on my own server :D

I also tested my code against Gajim's implementation and found some more mistakes I had made, which are now fixed. The Jingle InBandBytestream transport is sort of working, but there are some smaller things I still need to change.

That's all for the week.

Happy Hacking :)

Saturday, 12 August 2017

IPv6 Unique Local Address

Ramblings of a sysadmin (Posts about planet-fsfe) | 09:00, Saturday, 12 August 2017

Since IPv6 is happening, we should be prepared. During the deployment of a new access point I was in need of a Unique Local Address. IPv6 Unique Local Addresses are basically comparable to the IPv4 private address ranges.

Some sites refer to Unique-Local-IPv6.com, but that is offline nowadays. Others refer to the kame.net generator, which is open source and still available, yay.

But wait, there must be an offline method for this, right? There is! subnetcalc (nowadays apparently available here) to the rescue:

subnetcalc fd00:: 64 -uniquelocal | grep ^Network

Profit :-)
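And if subnetcalc is not at hand either, here is a rough sketch that needs nothing but coreutils; note that it simply takes 40 random bits from /dev/urandom instead of following the full RFC 4193 derivation (hashing a timestamp and an EUI-64), so treat it as an approximation:

printf "fd%s:%s%s:%s%s::/48\n" $(od -An -N5 -tx1 /dev/urandom)

This prints something like fd3a:91c2:7be4::/48, a randomly picked ULA /48 prefix.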

Thursday, 10 August 2017

One Step Forward, Two Steps Back

Paul Boddie's Free Software-related blog » English | 15:00, Thursday, 10 August 2017

I have written about the state of “The Free Software Desktop” before, and how change apparently for change’s sake has made it difficult for those of us with a technology background to provide a stable and reliable computing experience to others who are less technically inclined. It surprises me slightly that I have not written about this topic more often, but the pattern of activity usually goes something like this:

  1. I am asked to upgrade or troubleshoot a computer running a software distribution I have indicated a willingness to support.
  2. I investigate confusing behaviour, offer advice on how to perform certain tasks using the available programs, perhaps install or upgrade things.
  3. I get exasperated by how baroque or unintuitive the experience must be to anyone not “living the dream” of developing software for one of the Free Software desktop environments.
  4. Despite getting very annoyed by the lack of apparent usability of the software, promising myself that I should at least mention how frustrating and unintuitive it is, I then return home and leave things for a few days.
  5. I end up calming down sufficiently about the matter not to be so bothered about saying something about it after all.

But it would appear that this doesn’t really serve my interests that well because the situation apparently gets no better as time progresses. Back at the end of 2013, it took some opining from a community management “expert” to provoke my own commentary. Now, with recent experience of upgrading a system between Kubuntu long-term support releases, I feel I should commit some remarks to writing just to communicate my frustration while the experience is still fresh in my memory.

Back in 2013, when I last wrote something on the topic, I suppose I was having to manage the transition of Kubuntu from KDE 3 to KDE 4 on another person’s computer, perhaps not having to encounter this yet on my own Debian system. This transition required me to confront the arguably dubious user interface design decisions made for KDE 4. I had to deal with things like the way the desktop background no longer behaved as it had done on most systems for many years, requiring things like the “folder view” widget to show desktop icons. Disappointingly, my most recent experience involved revisiting and replaying some of these annoyances.

Actual Users

It is worth stepping back for a moment and considering how people not “living the dream” actually use computers. Although a desktop cluttered with icons might be regarded as the product of an untidy or disorganised user, like the corporate user who doesn’t understand filesystems and folders and who saves everything to the desktop just to get through the working day, the ability to put arbitrary icons on the desktop background serves as a convenient tool to present a range of important tasks and operations to less confident and less technically-focused users.

Let us consider the perspective of such users for a moment. They may not be using the computer to fill their free time, hang out online, or whatever the kids do these days. Instead, they may have a set of specific activities that require the use of the computer: communicate via e-mail, manage their photographs, read and prepare documents, interact with businesses and organisations.

This may seem quaint to members of the “digital native” generation for whom the interaction experience is presumably a blur of cloud service interactions and social media posts. But unlike the “digital natives” who, if you read the inevitably laughable articles about how today’s children are just wizards at technology and that kind of thing, probably just want to show their peers how great they are, there are also people who actually need to get productive work done.

So, might it not be nice to show a few essential programs and actions as desktop icons to direct the average user? I even had this set up having figured out the “folder view” widget which, as if embarrassed at not having shown up for work in the first place, actually shows the contents of the “Desktop” directory when coaxed into appearing. Problem solved? Well, not forever. (Although there is a slim chance that the problem will solve itself in future.)

The Upgrade

So, Kubuntu had been moaning for a while about a new upgrade being available. And being the modern way with KDE and/or Ubuntu, the user is confronted with parade after parade of notifications about all things important, trivial, and everything in between. Admittedly, it was getting to the point where support might be ending for the distribution version concerned, and so the decision was taken to upgrade. In principle this should improve the situation: software should be better supported, more secure, and so on. Sadly, with Ubuntu being a distribution that particularly likes to rearrange the furniture on a continuous basis, it just created more “busy work” for no good reason.

To be fair, the upgrade script did actually succeed. I remember trying something similar in the distant past and it just failing without any obvious remedy. This time, there were some messages nagging about package configuration changes about which I wasn’t likely to have any opinion or useful input. And some lengthy advice about reconfiguring the PostgreSQL server, popping up in some kind of packaging notification, seemed redundant given what the script did to the packages afterwards. I accept that it can be pretty complicated to orchestrate this kind of thing, though.

It was only afterwards that the problems began to surface, beginning with the login manager. Since we are describing an Ubuntu derivative, the default login manager was the Unity-styled one which plays some drum beats when it starts up. But after the upgrade, the login manager was obsessed with connecting to the wireless network and wouldn’t be denied the chance to do so. But it also wouldn’t connect, either, even if given the correct password. So, some work needed to be done to install a different login manager and to remove the now-malfunctioning software.

An Empty Desk

Although changing the login manager also changes the appearance of the software and thus the experience of using it, providing an unnecessary distraction from the normal use of the machine and requiring unnecessary familiarisation with the result of the upgrade, at least it solved a problem of functionality that had “gone rogue”. Meanwhile, the matter of configuring the desktop experience has perhaps not yet been completely and satisfactorily resolved.

When the machine in question was purchased, it was running stock Ubuntu. At some point, perhaps sooner rather than later, the Unity desktop became the favoured environment getting the attention of the Ubuntu developers, and finding that it was rather ill-suited for users familiar with more traditional desktop paradigms, a switch was made to the KDE environment instead. This is where a degree of peace ended up being made with the annoyingly disruptive changes introduced by KDE 4 and its Plasma environment.

One problem that KDE always seems to have had is that of respecting user preferences and customisations across upgrades. On this occasion, with KDE Plasma 5 now being offered, no exception was made: logging in yielded no “folder view” widgets with all those desktop icons; panels were bare; the desktop background was some stock unfathomable geometric form with blurry edges. I seem to remember someone associated with KDE – maybe even the aforementioned “expert” – saying how great it had been to blow away his preferences and experience the awesomeness of the raw experience, or something. Well, it really isn’t so awesome if you are a real user.

As noted above, persuading the “folder view” widgets to return was easy enough once I had managed to open the fancy-but-sluggish widget browser. This gave me a widget showing the old icons that was too small to show them all at once. So, how do you resize it? Since I don’t use such features myself, I had forgotten that it was previously done by pointing at the widget somehow. But because there wasn’t much in the way of interactive help, I had to search the Web for clues.

This yielded the details of how to resize and move a folder view widget. That’s right: why not emulate the impoverished tablet/phone interaction paradigm and introduce a dubious “long click” gesture instead? Clearly because a “mouseover” gesture is non-existent in the tablet/phone universe, it must be abolished elsewhere. What next? Support only one mouse button because that is how the Mac has always done it? And, given that context menus seem to be available on plenty of other things, it is baffling that one isn’t offered here.

Restoring the desktop icons was easy enough, but showing them all was more awkward because the techniques involved are like stepping back to an earlier era where only a single control is available for resizing, where combinations of moves and resizes are required to get the widget in the right place and to be the right size. And then we assume that the icons still do what they had done before which, despite the same programs being available, was not the case: programs didn’t start but also didn’t give any indication why they didn’t start, this being familiar to just about anyone who has used a desktop environment in the last twenty years. Maybe there is a log somewhere with all the errors in it. Who knows? Why is there never any way of troubleshooting this?

One behaviour that I had set up earlier was single-click activation of icons, where programs could be launched with a single click with the mouse. That no longer works, nor is it obvious how to change it. Clearly the usability police have declared the unergonomic double-click action the “winner”. However, some Qt widgets are still happy with single-click navigation. Try explaining such inconsistencies to anyone already having to remember how to distinguish between multiple programs, what each of them does and doesn’t do, and so on.

The Developers Know Best

All of this was frustrating enough, but when trying to find out whether I could launch programs from the desktop or whether such actions had been forbidden by the usability police, I found that when programs were actually launching they took a long time to do so. Firing up a terminal showed the reason for this sluggishness: Tracker and Baloo were wanting to index everything.

Despite having previously switched off KDE’s indexing and searching features and having disabled, maybe even uninstalled, Tracker, the developers and maintainers clearly think that coercion is better than persuasion, that “everyone” wants all their content indexed for “desktop search” or “semantic search” (or whatever they call it now), the modern equivalent of saving everything to the desktop and then rifling through it all afterwards. Since we’re only the stupid users, what would we really have to say about it? So along came Tracker again, primed to waste computing time and storage space, together with another separate solution for doing the same thing, “just in case”, because the different desktop developers cannot work together.

Amongst other frustrations, the process of booting to the login prompt is slower, and so perhaps switching from Upstart to systemd wasn’t such an entirely positive idea after all. Meanwhile, with reduced scrollbar and control affordances, it would seem that the tendency to mimic Microsoft’s usability disasters continues. I also observed spontaneous desktop crashes and consequently turned off all the fancy visual effects in order to diminish the chances of such crashes recurring in future. (Honestly, most people don’t want Project Looking Glass and similar “demoware” guff: they just want to use their computers.)

Of Entitlement and Sustainable Development

Some people might argue that I am just another “entitled” user who has never contributed anything to these projects and is just whining incorrectly about how bad things are. Well, I do not agree. I enthusiastically gave constructive feedback and filed bugs while I still believed that the developers genuinely wanted to know how they might improve the software. (Admittedly, my enthusiasm had largely faded by the time I had to migrate to KDE 4.) I even wrote software using some of the technologies discussed in this article. I always wanted things to be better and stuck with the software concerned.

And even if I had never done such things, I would, along with other users, still have invested a not inconsiderable amount of effort into familiarising people with the software, encouraging others to use it, and trying to establish it as a sustainable option. As opposed to proprietary software that we neither want to use, nor wish to support, nor are necessarily able to support. Being asked to support some Microsoft product is not only ethically dubious but also frustrating when we end up having to guess our way around the typically confusing and poorly-designed interfaces concerned. And we should definitely resent having to do free technical support for a multi-billion-dollar corporation even if it is to help out other people we know.

I rather feel that the “entitlement” argument comes up when both the results of the development process and the way the development is done are scrutinised. There is this continually perpetuated myth that “open source” can only be done by people when those people have “enough time and interest to work on it”, as if there can be no other motivations or models to sustain the work. This cultivates the idea of the “talented artist” developer lifestyle: that the developers do their amazing thing and that its proliferation serves as some form of recognition of its greatness; that, like art, one should take it or leave it, and that the polite response is to applaud it or to remain silent and not betray a supposed ignorance of what has been achieved.

I do think that the production of Free Software is worthy of respect: after all, I am a developer of Free Software myself and know what has to go into making potentially useful systems. But those producing it should understand that people depend on it, too, and that the respect its users have for the software’s development is just as easily lost as it is earned, indeed perhaps more easily lost. Developers have power over their users, and like anyone in any other position of power, we expect them to behave responsibly. They should also recognise that any legitimate authority they have over their users can only exist when they acknowledge the role of those users in legitimising and validating the software.

In a recent argument about the behaviour of systemd, its principal developer apparently noted that as Free Software, it may be forked and developed as anyone might wish. Although true, this neglects the matter of sustainable software development. If I disagree with the behaviour of some software or of the direction of a software project, and if there is no reasonable way to accommodate this disagreement within the framework of the project, then I must maintain my own fork of that software indefinitely if I am to continue using it.

If others cannot be convinced to participate in this fork, and if other software must be changed to work with the forked code, then I must also maintain forks of other software packages. Suddenly, I might be looking at having to maintain an entire parallel software distribution, all because the developers of one piece of software are too precious to accept other perspectives as being valid and are unwilling to work with others in case it “compromises their vision”, or whatever.

Keeping Up on the Treadmill

Most people feel that they have no choice but to accept the upgrade treadmill, the constant churn of functionality, the shiny new stuff that the developers “living the dream” have convinced their employers or their peers is the best and most efficient way forward. It just isn’t a practical way of living for most users to try and deal with the consequences of this in a technical fashion by trying to do all those other people’s jobs again so that they may be done “properly”. So that “most efficient way” ends up incurring inefficiencies and costs amongst everybody else as they struggle to find new ways of doing the things that just worked before.

How frustrating it is that perhaps the only way to cope might be to stop using the software concerned altogether! And how unfortunate it is that for those who do not value Free Software in its own right or who feel that the protections of Free Software are unaffordable luxuries, it probably means that they go and use proprietary software instead and just find a way of rationalising the decision and its inconvenient consequences as part of being a modern consumer engaging in yet another compromised transaction.

So, unhindered by rants like these and by more constructive feedback, the Free Software desktop seems to continue on its way, seemingly taking two steps backward for every one step forward, testing the tolerance even of its most patient users to disruptive change. I dread having to deal with such things again in a few years’ time or even sooner. Maybe CDE will once again seem like an attractive option and bring us full circle for “Unix on the desktop”, saving many people a lot of unnecessary bother. And then the tortoise really will have beaten the hare.
