Free Software, Free Society!
Thoughts of the FSFE Community (English)

Tuesday, 31 March 2020

System Hackers meeting - Lyon edition

For the 4th time, and less than 5 months after the last meeting, the FSFE System Hackers met in person to coordinate their activities, work on complex issues, and exchange know-how. As venue, we again chose a town familiar to one of our team members: Lyon, France. What follows is a report of this gathering, which happened shortly before #stayhome became the order of the day.

For those who do not know this less visible but important team: The System Hackers are responsible for the maintenance and development of a large number of services. From the fsfe.org website’s deployment to the mail servers and blogs, from Git to internal services like DNS and monitoring, all these services, virtual machines and physical servers are handled by this friendly group that is always looking forward to welcoming new members.

Interestingly, we gathered in the same constellation as at the previous hackathon, so Albert, Florian, Francesco, Thomas, Vincent and I tackled large and small challenges in the FSFE’s systems. We also used the time to exchange knowledge about complex tasks and some interconnected systems. The official part was conducted in the fascinating Astech Fablab, but word has it that Ninkasi, an excellent pub in Lyon, was the actual epicentre of this year’s meeting.

Sharing is caring

On Saturday morning, after reviewing open tasks and setting our priorities, we started to share more knowledge about our services to reduce bottlenecks. To that end, I drew a few diagrams explaining how we deploy our Docker containers, how our community database interacts with the mail and list servers, and how DNS works at the FSFE.

To also help the non-present system hackers and “future generations”, I’ve added this information to a public wiki page. This could also be the starting point to transfer more internal knowledge to public pages to make maintenance and onboarding easier.

Todo? Done!

Afterwards, we focused on closing tasks that had been open for a long time:

  • The DNS has been a big issue for a long time. Over the past months we had migrated the source for our nameserver entries from SVN to Git, rewritten our deployment scripts, and eventually upgraded the two very sensitive systems to Debian 10. During the meeting, we came closer to perfection: the BIND configuration is now cleaned of old entries, uniformly formatted, and features SPF, DMARC and CAA records.
  • For better security monitoring of the 100+ mailing lists the FSFE hosts, we finalised the weekly automatic checks for sane and safe settings, as well as a tool that makes it easy to update the internal documentation.
  • Speaking of monitoring: we lacked proper monitoring of our 20+ hosts for availability, disk usage, TLS certificates, service status and more. After trying for a long time to get Prometheus and Grafana to do what we need, we made a 180° turn: there is now an Icinga2 installation, deployed with Ansible, that already monitors a few hosts and their services. In the following weeks we will add more hosts and services to the watched targets.
  • We plan to migrate our user-unfriendly way of sharing files between groups to Nextcloud, and to use some more of the software’s capabilities while we are at it. During the weekend, we tested the instance thoroughly and created some more LDAP groups that are automatically transposed to groups in Nextcloud. In the same run, Albert shared some more knowledge about LDAP with Vincent and me, so that we get rid of more bottlenecks.
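For illustration, the three DNS record types mentioned in the first item look roughly like this in a BIND zone file (example.org and all values here are placeholders, not the FSFE’s actual records):

```
; SPF: which hosts may send mail for the domain
example.org.         IN TXT "v=spf1 mx -all"
; DMARC: policy for mail that fails SPF/DKIM checks
_dmarc.example.org.  IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"
; CAA: which certificate authorities may issue certificates for the domain
example.org.         IN CAA 0 issue "letsencrypt.org"
```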

Then, it was time to deal with other urgent issues:

  • Some of us worked on making our systems more resilient against DDoS attacks, after we became the target of one over the Christmas season. The idea is to come up with solutions that are easy to deploy on all our web services while keeping complexity low. We have tested some approaches and will continue working on them.
  • Regarding webservers, we’ve updated the TLS configurations on various services to the recommended settings, and also improved some other settings while touching the configuration files.
  • We intend to make it easier for people to encrypt their emails with GnuPG. That is why we experimented with WKD/WKS and will work on setting up this service. As it requires some interconnection with other services, this will unfortunately take some more time.
  • On the maintenance side of things, we have upgraded all servers except one to the latest Debian version, and also updated many of our Docker images and containers to make use of the latest security and stability improvements.
  • The FSFE hosts a few third party services, and unfortunately they have been running on unmaintained systems. That is why we set up a brand new host for our sister organisation in Latin America so they can eventually migrate, and moved the fossmarks.org website to our automatic CI/CD setup via Drone/Docker.

The next steps and developments

As you can see, we completed and started to tackle a lot of issues again, so it won’t become boring in our team any time soon. However, although we should know better, we intend to “change a running system”!

While the in-person meetings have been highly important and also fun, we have reached a state in which knowledge and mutual trust are more widely distributed among the members, the tasks are more clearly separated, and the systems are mostly well documented. So part of our feedback session was the question whether these meetings, in their 6–12 month rhythm, are still necessary.

Yes, they are, but not more often than once a year. Instead, we would like to try virtual meetings and sprints. Before a sprint session, we would discuss all tasks (basically going through our internal Kanban board), plan the challenges, ask for input if necessary, and resolve blockers as early as possible. Then we would be prepared for a sprint day or afternoon during which everyone can work on their tasks while being able to directly contact other members. All of that should happen over a video conference to create a more personal atmosphere.

For the analogue meetings, it was requested that we also plan tasks and priorities together beforehand, and focus on tasks that require more people from the group. We also want to have more training sessions and system introductions like the ones we just had, to reduce dependencies on single persons.

All in all, this gathering was another successful meeting and sets a cornerstone for exciting improvements to both the systems and the team. Thanks to everyone who participated, and a big round of applause for Vincent, who organised the venue and the social activities!

Sunday, 29 March 2020

RSIBreak 0.12.12 released!

All of you who use a computer for long stretches at a time should use it!

https://userbase.kde.org/RSIBreak

Changes from 0.12.11:
* Don't reset the pause counter on very short inputs that may just be accidental.
* Improve high DPI support
* Translation improvements
* Compile with Qt 5.15 beta
* Minor code cleanup

http://download.kde.org/stable/rsibreak/0.12/rsibreak-0.12.12.tar.xz

Friday, 27 March 2020

Cruising sailboat electronics setup with Signal K

I haven’t mentioned this on the blog earlier, but at the end of 2018 we bought a small cruising sailboat. After some looking around, we went with a Van de Stadt designed Oceaan 25, a Dutch pocket cruiser from the early 1980s. S/Y Curiosity is an affordable and comfortable boat for cruising with 2–4 people, but she also needed major maintenance work.

Curiosity sailing on Havel with Royal Louise

The refit has so far included osmosis repair, some fixes to the standing rigging, engine maintenance, and many structural improvements. But this post will focus on the electronics and navigation aspects of the project.

12V power

When we got her, the boat’s electrical setup was quite barebones. There was a small lead-acid battery, charged only when running the outboard. Light control was pretty much all-or-nothing: either the interior and navigation lights were all on, or all off. Everything was wired with 80s-spec components, using energy-inefficient lightbulbs.

Looking at the state of the setup, it was also unclear when the electrics had last been used for anything other than starting the engine.

Before going further with the electronics setup, all of this would have to be rebuilt. We made a plan, and scheduled two weekends in summer 2019 for rewiring and upgrading the electricity setup of the boat.

The first step was to test all existing wiring with a multimeter, and to label and document all of it. Surprisingly, there were only a couple of bad connections from the main distribution panel to the consumers, so for the most part we decided to reuse that wiring, just with a modern terminal block setup.

All wires labeled and being reconnected

For the most part we used a Dymo label printer, with the labels covered by transparent heat shrink.

We replaced the old main control panel with a modern one with the capability to power different parts of the boat separately, and added some 12V and USB sockets next to it.

New battery charger and voltmeter

All internal lighting was replaced with energy-efficient LEDs, and we added the option of using red lights all through the cabin for preserving night vision. A car charger was added to the system for easier battery charging while in harbour.

Next steps for power

With this, we had a workable lighting and power setup for overnight sailing. The next obvious step will be to increase the range of our boat.

For that, we’re adding a solar panel. We already have most parts for the setup, but are still waiting for the customized NOA mounting hardware to arrive. And of course the current COVID-19 curfews need to lift before we can install it.

Until we have actual data from our Victron MPPT charge controller, I’ve run some simulations using NASA’s insolation data for Berlin on how much the panel ought to increase our cruising range.
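A back-of-the-envelope version of such an estimate can be sketched in a few lines. All numbers below are illustrative assumptions (a 100 W panel, roughly 4 peak sun hours for Berlin in summer, 30% system losses, a 60 Ah battery and a 6 W average load), not our actual figures:

```python
# Back-of-the-envelope solar range estimate.
# All inputs are illustrative assumptions, not measured values.

PANEL_WATTS = 100          # rated panel output
PEAK_SUN_HOURS = 4.0       # rough Berlin summer insolation
SYSTEM_LOSSES = 0.30       # wiring, MPPT, temperature derating
BATTERY_WH = 60 * 12       # 60 Ah lead-acid at 12 V
USABLE_FRACTION = 0.5      # avoid deep-discharging lead-acid
LOAD_WATTS = 6.0           # average draw of the nav electronics

def daily_harvest_wh() -> float:
    """Energy the panel delivers per day after losses."""
    return PANEL_WATTS * PEAK_SUN_HOURS * (1 - SYSTEM_LOSSES)

def autonomy_hours(harvest_wh: float) -> float:
    """Hours the load can run per day from battery plus solar."""
    usable = BATTERY_WH * USABLE_FRACTION
    # Solar tops the battery up once per day; the battery bridges the rest.
    return (usable + harvest_wh) / LOAD_WATTS

harvest = daily_harvest_wh()
print(f"Daily harvest: {harvest:.0f} Wh")
print(f"Load autonomy: {autonomy_hours(harvest):.0f} h")
```

Real data from the charge controller will of course replace these guesses once the panel is installed.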

Range estimates for Curiosity solar setup

The basis for boat navigation is still the combination of a clock, a compass, and a paper chart (as well as a sextant on the open ocean). However, most modern cruising boats use some electronic tools to aid the process of running the boat. These typically come in the form of a chartplotter and a set of sensors measuring things like GPS position, speed, and water depth.

Commercial marine navigation equipment is a bit like computer networking in the 90s - everything is expensive, and you pretty much have to buy the whole kit from a single vendor to make it work. Standards like NMEA 0183 exist, but “embrace and extend” is typical vendor behaviour.

Signal K

Being open source hackerspace people, that was obviously not the way we wanted to do things. Instead of getting locked into an expensive proprietary single-vendor marine instrumentation setup, we decided to roll our own using off-the-shelf IoT components. To serve as the heart of the system, we picked Signal K.

Signal K is first of all a specification on how marine instruments can exchange data. It also has an open source implementation in Node.js. This allows piping in data from all of the relevant marine data buses, as well as setting up custom data providers. Signal K then harmonizes the data, and makes it available both via modern web APIs, and in traditional NMEA formats. This enables instruments like chartplotters also to utilize the Signal K enriched data.
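To give an idea of the data format: a Signal K server pushes “delta” messages, JSON documents whose updates carry path/value pairs. A minimal sketch of pulling a position out of one (the sample message below is hand-written for illustration, not real traffic):

```python
import json

# A hand-written Signal K delta message, following the spec's
# updates -> values -> (path, value) structure.
sample_delta = json.dumps({
    "context": "vessels.self",
    "updates": [{
        "values": [
            {"path": "navigation.position",
             "value": {"latitude": 52.52, "longitude": 13.40}},
            {"path": "navigation.speedOverGround", "value": 2.7},
        ]
    }]
})

def extract(delta_json: str, wanted_path: str):
    """Return the first value for wanted_path in a delta message."""
    delta = json.loads(delta_json)
    for update in delta.get("updates", []):
        for item in update.get("values", []):
            if item["path"] == wanted_path:
                return item["value"]
    return None

pos = extract(sample_delta, "navigation.position")
print(pos)  # {'latitude': 52.52, 'longitude': 13.4}
```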

We’re running Signal K on a Raspberry Pi 3B+ powered by the boat battery. With a GPS dongle, this was already enough to provide some basic navigation capabilities like charts and anchor watch. We also added a WiFi hotspot with an LTE uplink to the boat.

Tracking some basic sailing exercises via Signal K

To make the system robust, installation is automated via Ansible, and easy to reproduce. Our boat GitHub repo also has the needed functionality to run a clone of our boat’s setup on our laptops via Docker, which is great when developing new features.

Signal K has a very active developer community, which has been great for figuring out how to extend the capabilities of our system.

Chartplotter

We’re using regular tablets for navigation. The main chartplotter is a cheap old waterproof Samsung Galaxy Tab Active 8.0 tablet that can show both the Freeboard web-based chartplotter with OpenSeaMap charts, and run the Navionics Boating app to display commercial charts. Navionics is also able to receive some Signal K data over the boat WiFi to show things like AIS targets, and to utilize the boat GPS.

Samsung T360 with Curiosity logo

As a backup we have our personal smartphones and tablets.

Anchor watch with Freeboard and a tablet

Inside the cabin we also have an e-ink screen showing the primary statistics relevant to the current boat state.

e-ink dashboard showing boat statistics

Environmental sensing

Monitoring air pressure changes is important for dealing with the weather. For this, we added a cheap barometer-temperature-humidity sensor module wired to the Raspberry Pi, driven with the Signal K BME280 plugin. With this we were able to get all of this information from our cabin into Signal K.
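Once the pressure readings are in Signal K, even a simple trend check can warn of approaching bad weather. A common rule of thumb is that a fall of more than roughly 1 hPa per hour signals deteriorating conditions; the sketch below flags such a drop (the threshold and sample readings are illustrative assumptions):

```python
def pressure_trend_hpa_per_hour(readings):
    """readings: chronological list of (hours, hPa) samples.
    Returns the average rate of change over the series."""
    (t0, p0), (t1, p1) = readings[0], readings[-1]
    return (p1 - p0) / (t1 - t0)

def falling_fast(readings, threshold=-1.0):
    """True if pressure is dropping faster than ~1 hPa/hour."""
    return pressure_trend_hpa_per_hour(readings) < threshold

# Three hours of (illustrative) barometer samples: 1013 -> 1009 hPa.
samples = [(0, 1013.0), (1, 1012.0), (2, 1010.5), (3, 1009.0)]
print(falling_fast(samples))  # True: dropping ~1.3 hPa/hour
```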

However, there was more environmental information we wanted to get. For instance, the outdoor temperature, the humidity in our foul weather gear locker, and the temperature of our icebox. For these we found the Ruuvi tags produced by a Finnish startup. These are small weatherproofed Bluetooth environmental sensors that can run for years with a coin cell battery.

Ruuvi tags for Curiosity with handy pouches

With Ruuvi tags and the Signal K Ruuvi tag plugin we were able to bring a rich set of environmental data from all around the boat into our dashboards.

Anchor watch

Like on every cruising boat, we spend quite a lot of nights at anchor. One important safety measure with a shorthanded crew is to run an automated anchor watch. This monitors the boat’s distance to the anchor, and raises an alarm if we start dragging.

For this, we’re using the Signal K anchor alarm plugin, and we added a Bluetooth speaker to make the alarms audible.

To make starting and stopping the anchor watch easier, I utilized a simple Bluetooth remote camera shutter button together with some scripts. This way the person dropping the anchor can also start the anchor watch immediately from the bow.
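The core of such an anchor watch is just a great-circle distance check between the anchor drop point and each GPS fix. A self-contained sketch follows; the 30 m alarm radius is an illustrative assumption, and the real plugin also accounts for rode length and swing:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6371000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points in metres."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def dragging(anchor, fix, alarm_radius_m=30.0):
    """Raise the alarm if the boat has drifted outside the alarm circle."""
    return haversine_m(*anchor, *fix) > alarm_radius_m

anchor = (52.4500, 13.1000)   # where the anchor went down
fix = (52.4502, 13.1000)      # current GPS fix, ~22 m north
print(dragging(anchor, fix))  # False: still inside the 30 m circle
```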

Camera shutter button for starting anchor watch

AIS

Automatic Identification System (AIS) is a radio protocol used by most bigger vessels to broadcast their course and position to others. It can be used for collision avoidance. Having an active transponder on a small boat like Curiosity is a bit expensive, but we decided we’d at least want to see commercial traffic in our chartplotter in order to navigate safely.

For this we bought an RTL-SDR USB stick that can tune into the AIS frequency, and with the rtl_ais software, receive and forward all AIS data into Signal K.
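Data from rtl_ais arrives as NMEA 0183 sentences (!AIVDM …), each ending in a checksum that is the XOR of every character between the leading ! and the *. A small sketch of computing and verifying that checksum (the payload shown is a made-up placeholder, not a real decoded vessel):

```python
from functools import reduce

def nmea_checksum(body: str) -> str:
    """XOR of all characters between '!' (or '$') and '*', as two hex digits."""
    return format(reduce(lambda acc, ch: acc ^ ord(ch), body, 0), "02X")

def valid_sentence(sentence: str) -> bool:
    """Check an NMEA sentence like '!AIVDM,...*HH' against its checksum."""
    if not sentence.startswith(("!", "$")) or "*" not in sentence:
        return False
    body, _, claimed = sentence[1:].partition("*")
    return nmea_checksum(body) == claimed.strip().upper()

# Build a syntactically valid sentence around a placeholder payload.
body = "AIVDM,1,1,,A,PLACEHOLDER0,0"
sentence = f"!{body}*{nmea_checksum(body)}"
print(sentence, valid_sentence(sentence))
```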

Tracking AIS targets in Freeboard

This setup is still quite new, so we haven’t been able to test it live yet. But it should allow us to see all nearby bigger ships in our chartplotter in realtime, assuming that we have a good-enough antenna.

Putting it all together

Altogether this is quite a lot of hardware. To house all of it, we built a custom backing plate with 3D-printed brackets to hold the various components. We call the whole setup the Voronoi-1 onboard computer. It should be easy to duplicate on any small sailing vessel.

The Voronoi-1 onboard computer

The total cost so far for the full navigation setup has been around 600€, which is less than a commercial chartplotter alone would cost. The system we have is easy to extend and to fix, even on the go, and gives us a set of capabilities that would normally require a whole suite of proprietary parts.

Next steps for navigation setup

We of course have plenty of ideas on what to do next to improve the navigation setup. Here are some projects we’ll likely tackle over the coming year:

  • Adding a timeseries database and some data visualization
  • 9 degrees of freedom sensor to track the compass course, as well as boat heel
  • Instrumenting our outboard motor to get RPMs into Signal K and track the engine running time
  • Wind sensor, either open source or commercial

If you have ideas for suitable components or projects, please get in touch!

Source code

Huge thanks to both the Signal K and Hackerfleet communities and the Curiosity crew for making all this happen.

Now we just wait for the curfews to lift so that we can get back to sailing!

Curiosity Crew Badge

Thursday, 26 March 2020

Jitsi and the power of shortcuts

During the last weeks I have been in more video calls than ever before, and often the software used was Jitsi meet. That also meant it made sense for me to look into how I and others can use this software more efficiently – which for me means looking into the available shortcuts.

If you are using Jitsi meet, e.g. on one of its public instances, you can press ? to see the list of shortcuts:

F - Show or hide video thumbnails
M - Mute or unmute your microphone
V - Start or stop your camera
A - Manage call quality
C - Open or close the chat
D - Switch between camera and screen sharing
R - Raise or lower your hand
S - View or exit full screen
W - Toggle tile view
? - Show or hide keyboard shortcuts
SPACE - Push to talk
T - Show speaker stats
0 - Focus on your video
1-9 - Focus on another person's video

What I use most of the time is M to quickly switch between being muted and unmuted; sometimes in combination with muting first and then pressing SPACE while quickly saying something in a larger group – as soon as I release it, I am muted again.

Another one I often use is V to turn my webcam off and on, in combination with A to quickly reduce the video quality (unfortunately I have not found a way to make lower video quality the default).

And finally, especially when I am moderating meetings, I encourage people to use R to indicate that they want to say something. This way I do not have to ask several times during a meeting whether someone wants to add a point or has another question. (This is also a feature for which I miss quick access in the Jitsi meet mobile application.)

In general, I encourage you to check which shortcuts are available in any software you use often; in my experience, you will benefit greatly from that knowledge over time.

Monday, 23 March 2020

How and why to properly write copyright statements in your code

This blog post was not easy to write: it started as a very simple thing intended for developers, but when I dug around, it turned out that there is no good single resource online on copyright statements. So I decided to take a stab at writing one.

I tried to strike a good balance between 1) keeping it short and to the point for developers who just want to know what to do, and 2) serving FOSS compliance officers and legal geeks who want to understand not just the best practices, but also the reasons behind them.

If you are extremely short on time, the TL;DR gives you the bare minimal instructions, but if you have just 2 minutes I would advise you to read the actual HowTo a bit further below.

Of course, if you have about 18 minutes of time, the best way is always to start reading at the beginning and finish at the end.

Where else to find this article

A copy of this blog post is also available on the Liferay Blog.
Haksung Jang (장학성) was awesome enough to publish a Korean translation.

TL;DR

Use the following format:

SPDX-FileCopyrightText: © {$year_of_file_creation} {$name_of_copyright_holder} <{$contact}>

SPDX-License-Identifier: {$SPDX_license_name}

… put that in every source code file and go check out (and follow) REUSE.software best practices.

E.g. for a file that I created today and released under the BSD-3-Clause license, I would put the following as a comment at the top of the source code file:

SPDX-FileCopyrightText: © 2020 Matija Šuklje <matija@suklje.name>

SPDX-License-Identifier: BSD-3-Clause
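A quick way to keep yourself honest is to scan the first lines of your source files for these two tags. This is only a toy sketch; for real projects, use the reuse lint tool, which implements the full specification:

```python
import re

COPYRIGHT_TAG = re.compile(r"SPDX-FileCopyrightText:")
LICENSE_TAG = re.compile(r"SPDX-License-Identifier:\s*(\S+)")

def check_header(source: str, max_lines: int = 10):
    """Return (has_copyright, license_id) found in a file's first lines."""
    head = source.splitlines()[:max_lines]
    has_copyright = any(COPYRIGHT_TAG.search(line) for line in head)
    license_id = None
    for line in head:
        m = LICENSE_TAG.search(line)
        if m:
            license_id = m.group(1)
            break
    return has_copyright, license_id

example = """\
# SPDX-FileCopyrightText: © 2020 Matija Šuklje <matija@suklje.name>
# SPDX-License-Identifier: BSD-3-Clause
print("hello")
"""
print(check_header(example))  # (True, 'BSD-3-Clause')
```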

Introduction and copyright basics

Copyright is automatic (since the Berne convention) and any work of authorship is automatically protected by it – essentially giving the copyright holder1 exclusive power over its work. In order for your downstream to have the rights to use any of your work – be that code, text, images or other media – you need to give them a license to it.

So in order for you to copy, implement, modify etc. the code of others, you need to be given the needed rights – i.e. a license2 – or make use of a statutory limitation or exception3. And if that license has obligations attached, you need to meet them as well.

In any case, you have to meet the basic requirements of copyright law as well. At the very least you need to have the following two in place:

  • attribution – list the copyright holders and/or authors (especially in jurisdictions which recognise moral rights);
  • license(s) – since a license is the only thing that gives anybody other than the copyright holder themself the right to use the code, you are well advised to have a notice of the license and its full text present – this goes both for your outbound licenses and for the inbound licenses you received from others by using 3rd-party works, such as copied code or libraries.

Inbound vs. outbound licenses

The license you give to your downstream is called an outbound license, because it handles the rights in the code that flow out of you. In turn that same license in the same work would then be perceived by your downstream as their inbound license, as it handles the rights in the code that flows into them.

In short, licenses describing rights flowing in are called inbound licenses, and the licenses describing rights flowing out are called outbound licenses.

The good news is that attribution is the author’s right, not obligation. And you are obliged to keep the attribution notices only insofar as the author(s) made use of that right. Which means that if the author has not listed themselves, you do not have to hunt them down yourself.

Why have the copyright statement?

Which brings us to the question of whether you need to write your own copyright statement4.

First, some very brief history …

The urge to absolutely have to write copyright statements stems from inertia in the USA, which only joined the Berne convention in 1989, well after computer programs were a thing. Until then, US copyright law still required an explicit copyright statement in order for a work to be protected.

Copyright statements are useful

A copyright statement is not required by law, but in practice it is very useful as proof, at best, or an indicator, more likely, of the copyright situation of a work. This can be very useful for compliance reasons, traceability of the code, etc.

Attribution is practically unavoidable, because a) most licenses explicitly call for it, and if that fails b) copyright laws of most jurisdictions require it anyway.

And if that is not enough, then there is also c) sometimes you will want to reach the original author(s) of the code for legal or technical reasons.

So storing both the name and contact information makes sense for when things go wrong. Finding the original upstream of a runaway file you found in your codebase – if there are no names or links in it – is a huge pain and often requires (currently still) expensive specialised software. I suspect the onus on a FOSS project is much lower than on a corporation in this case, but it is still better to put in a little effort upfront than to have to do some serious archæology later.

How to write a good copyright statement and license notice

Finally we come to the main part of this article!

A good copyright statement should consist of the following information:

  • start with the © sign;
  • the year of the first publication – a good choice is the year in which you created the file, which you then do not touch anymore;
  • the name of the copyright holder – typically the author, but it can also be your employer or, if a CLA is in place, another legal entity or person;
  • a valid contact for the copyright holder.

As an example, this is what I would put on something I wrote today:

© 2020 Matija Šuklje <matija@suklje.name>

While you are at it, it makes a lot of sense to also tell everyone which license you are releasing your code under. Using an SPDX ID is a great way to unambiguously state the license of your code. (See the note below for an example of how things can go wrong otherwise.)

And if you have already come this far, it is just a small step towards following the best practices described by REUSE.software: use SPDX tags to mark your copyright statement (with SPDX-FileCopyrightText) and your license notice (with SPDX-License-Identifier followed by an SPDX ID).

Here is an example of a copyright statement and license notice that checks all the above boxes and also complies with both the SPDX and the REUSE.software specifications:

SPDX-FileCopyrightText: © 2020 Matija Šuklje <matija@suklje.name>

SPDX-License-Identifier: BSD-3-Clause

Now make sure you have these in comments of all your source code files.

Q&A

Over the years, I have heard many questions on this topic – both from developers and lawyers.

I will try to address them below in no particular order.

If you have a question that is not addressed here, do let me know and I will try to include it in an update.

Why keep the year?

Some might argue that, for the sake of simplicity, it would be much easier to maintain copyright statements if we just skipped the years. In fact, that is the policy at Microsoft/GitHub at the time of this writing.

While I agree that not updating the year simplifies things enormously, I do think that keeping a date helps preserve at least a vague timeline in the codebase. As the question is when the work was first expressed in a medium, the earliest provable date is the time when that file was first created.

In addition, having an easy way to find the earliest date of a piece of code might also prove useful in figuring out when an invention was first expressed to the general public – something that might become useful for patent defense.

This is also why e.g. in Liferay our new policy is to write the year of the file creation, and then not change the year any more.

Innocent infringement excursion for legal geeks

17 U.S. Code § 401(d) states that if a work carries a copyright notice in the form that the law prescribes, the defendant in a copyright infringement case cannot rely on the innocent infringement defense, except if they had reason to believe their use was covered by fair use. And even then, the innocent infringer would have to be e.g. a non-profit broadcaster or archive to still be eligible for such a defence.

So, if you are concerned with copyright violations (at least in the USA), you may actually want to make sure your copyright statements include both the copyright sign and the year of publication.

See also the note in Why the © sign for what a copyright notice following the US copyright act looks like.

Why not bump the year on change?

I am sure you have seen something like this before:
Copyright (C) 1992, 1995, 2000, 2001, 2003 CompanyX Inc.

The presumption behind this is that whenever you add a new year to the copyright statement, the copyright term starts anew, thereby prolonging the time the file is protected by copyright.

Adding a new year on every change – or, even worse, simply every 1st of January – is a practice still too widespread even today. Unfortunately, doing this is useless at best, and misleading at worst. For the origin of this myth, see the short history above.

A big problem with this approach is that not every contribution is original or substantial enough to be copyrightable – even the popular 5 (or 10, or X) SLOC rule of thumb5 is, legally speaking, very debatable.

So, in order to keep your copyright statement true, you would need to make a judgement call every time you change a file: was the change substantial and original enough to be granted copyright protection by the law, and should the year therefore be bumped? That is a substantial test to apply on every change.

On the other hand, copyright lasts at least 50 (and usually 70) years6 after the death of the author, or, if the copyright holder is a legal entity (e.g. CompanyX Inc.), after publication. So the risk of your own copyright expiring under your feet is very, very low.

Worst case thought experiment

Let us imagine the worst possible scenario now:

1) You never bump the year in a copyright statement in a file, and 2) 50+ years after its initial release, someone copies your code as if it were in the public domain. Now, suppose you take issue with that and go to court, and 3) the court (very unlikely) takes only the copyright statements in that file into account as the only proof, and based on that 4) rules that the code in that file has fallen into the public domain and that the FOSS license therefore no longer applies to it.

The end result would simply be that (in one jurisdiction) that file would fall into the public domain and be up for grabs by anyone for anything – no copyright, no copyleft – 50+ years after the file’s creation (instead of e.g. 5, or maybe 20, years later).

But, honestly, how likely is it that 50 years from now the same (unaltered) code would still be (commercially) interesting?

… and if it turns out you do need to bump the year eventually, you still have, at worst, 50 years to sort it out – ample opportunity to mitigate the risk.

In addition, as a single source code file is typically just one of many cogs in a bigger piece of software, what you are more concerned with is the software product/project as a whole. As the software grows, you will keep adding new files, and those will obviously carry newer years. So the codebase as a whole will already include copyright statements with newer years in it anyway.

Keep the Git/VCS history clean

Also, bumping the year in all files every year messes with the usefulness of the Git/VCS history, makes the log unnecessarily long(er), and makes the repository consume more space.

It also makes all files seem equally old (in years), which makes it hard to identify stale code when you are looking for it.

Another issue is that your year-bumping script can be too trigger-happy and bump the years even in files that do not belong to you, furthering misinformation both in your VCS and in the files’ copyright notices.

Why not use a year range?

Similar to the previous question, a year span (e.g. 1990-2013) is basically just a lazy version of bumping the year, so all of the above applies.

A special case is when people use a range like {$year}-present. This has almost all of the above-mentioned issues7, plus it adds another dimension of confusion, because what constitutes the “present” is an open – and potentially philosophical – question. Does it mean:

  • the time when the file was last modified?
  • the time it was released as a package?
  • the time you downloaded it (maybe for the first time)?
  • the time you ran it the last time?
  • or perhaps even the ever eluding “right now”?

As you can see, this does not help much at all. Quite the opposite!

But doesn’t Git/Mercurial keep better track?

Not reliably.

Git (and other VCSs) are good at storing metadata, but you should be careful with it.

Git does have an Author field, which is separate from the Committer field. But even if we assume – and that is a big assumption8 – that Git’s Author is the actual author of the committed code, they may not be the copyright holder.

Furthermore, git blame and git diff currently work line-by-line and treat the last change as the final authorship, making Git suboptimal for finding out who actually wrote what.

Token-based blame information

For a more fine-grained tool to see who to blame for which piece of code, check out cregit.

And ultimately – and most importantly – as soon as the file(s) leave the repository, the metadata is lost. Whether it is released as a tarball, the repository is forked and/or rebased, or a single file is simply copied into a new codebase, the trace is lost.

All of these issues are addressed by simply including the copyright statement and license information in every file. REUSE.software best practices handle this very well.

Why the © sign?

Some might argue that the English word “Copyright” is so common nowadays that everyone understands it, but if you actually read the copyright laws out there, you will find that the © sign is the only way of writing a copyright statement that is common to copyright laws around the world9.

Using the © sign makes sense, as it is the common global denominator.

Comparison between US and Slovenian copyright statements

As an EU example, the Slovenian ZASP §175.(1) simply states that holders of exclusive author’s rights may mark their works with a (c)/© sign in front of their name or firm and year of first publication, which can be simply put as:

© {$year_of_first_publication} {$name_of_author_or_other_copyright_holder}

On the other side of the pond, in the USA, 17 U.S. Code § 401(b) uses more words to allow a more varied approach; relevant for this question, § 401(b)(1) prescribes the use of

the symbol © (the letter C in a circle), or the word “Copyright”, or the abbreviation “Copr.”;

The rest you can go read yourself, but can be summarised as:

(©|Copyright|Copr.) {$year_of_first_publication} {$name_or_abbreviation_of_copyright_holder}

See also the note in Why keep the year for why this can matter in front of USA courts.

While the © sign is a pet peeve of mine, from a practical point of view this is the least important point here. As we established in the introduction, copyright is automatic, so the actual risk of not following the law to the letter is pretty low if you write e.g. “Copyright” instead.

Why leave a contact? Even when there is more than one author?

A contact is in no way required by copyright law, but for practical reasons it can be extremely useful.

It can happen that you need to reach the author and/or copyright holder of the code with a legal or technical question. Perhaps you need to ask how the code works, or have a fix you want to send their way. Perhaps you found a licensing issue and want to help them fix it (or ask for a separate license). In all of these cases, having a contact helps a lot.

As pretty much all of the internet still hinges on e-mail10, the copyright holder’s e-mail address should be the first option. But anything goes, really, as long as that contact is easily accessible and actually in use long-term.

Avoiding orphan works

For the legal geeks out there, a contact to the copyright holder mitigates the issue of orphan works.

There will be cases where the authorship is very dispersed or lies with a legal entity instead. In those cases, it might make more sense to provide a URL to either the project’s or the legal entity’s homepage and provide useful information there. If a project lists copyright holders in a file such as AUTHORS or CONTRIBUTORS.markdown, a permalink to that file (in the master branch) of the publicly available repository could also be a good URL option.

How to handle multitudes of authors?

Here are two examples of what you can write in case the project (e.g. Project X) has many authors and does not have a CAA or exclusive CLA in place to aggregate the copyright in a single entity:

© 2010 The Project X Authors <{$url}>

© 1998 Contributors to the Project X <{$url}>

What about public domain?

Public domain is tricky.

In general, the public domain consists of works whose copyright term has expired11.

While in some jurisdictions (e.g. USA, UK) you can actually waive your copyright and dedicate your work to the public domain, in most jurisdictions (e.g. most EU member countries) that is not possible.

This means that, depending on the applicable jurisdiction, even though an author wrote that they dedicate their work to the public domain, this may not meet the legal standard for it to actually happen – they retain the copyright in their own work.

Unsurprisingly, FOSS compliance officers and other people/projects who take copyright and licensing seriously are typically very wary of statements like “this is public domain”.

This can be mitigated in two ways:

  • instead of some generic wording, when you want to dedicate something to the public domain, use a tried and tested public copyright waiver / public domain dedication with a very permissive license, such as CC0-1.0; and
  • include your name and contact, if you are the author, in the SPDX-FileCopyrightText: field – 1) because in case of doubt that will associate you with your dedication to the public domain, and 2) in case anything is unclear, people have a way to contact you.

This makes sense to do even for files that you deem are not copyrightable, such as config files – if you mark them as above, everyone will know that you will not exercise your author’s rights (if they existed) in those files.
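As an illustration, a REUSE-style header for an otherwise trivial config file could look like this (the name, year and contact are placeholders, of course):

```ini
# SPDX-FileCopyrightText: 2020 Jane Doe <jane@example.org>
# SPDX-License-Identifier: CC0-1.0
some_setting = true
```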

It may seem a bit of a hassle for something you just released for the public to use however they see fit, without people having to ask you for permission. I get that, I truly do! But do consider that if you already put so much effort into making this wonderful stuff and donating it to humanity, it would be a huge pity if, over (silly) legal details, in the end people would not (be able to) use it at all.

What about minified JS?

Modern code minifiers/uglifiers tend to have an optional flag to preserve copyright and licensing info, even when they rip out all the other comments.

The copyright does not simply go away when you minify/uglify the code, so do make sure you use a minifier that preserves both the copyright statement and the license (at least its SPDX identifier) – or better yet, the whole REUSE-compliant header.
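For instance, UglifyJS and terser by default (their “some” comments setting) keep comments that contain @license or @preserve, or that start with /*! – so a banner along these lines (a sketch, not necessarily a complete REUSE header) typically survives minification:

```javascript
/*!
 * SPDX-FileCopyrightText: 2020 Jane Doe <jane@example.org>
 * SPDX-License-Identifier: MIT
 */
```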

Transformations of code

Translations between different languages, compilation and other transformations are all exclusive rights of the copyright holder. So you need a valid license even to compile or minify.

What is wrong with “All rights reserved”?

Often you will see “all rights reserved” in copyright statements, even in FOSS projects.

The cause of this, I suspect, lies again in copycat behaviour, where people simply copy what they have so often seen on a (music) CD or in a book. Again, copyright law does not ask for this, even if you want to follow the fullest formal copyright statement rules.

But what it does bring is confusion.

The statement “all rights reserved” obviously contradicts the FOSS license the same file is released under. The latter gives everyone the rights to use, study, share and improve the code, while the former states that the author reserves all of these rights to themself.

So, as those three words create a contradiction and do not bring anything useful to the table in the first place, you should simply not write them.

Practical example

Imagine12 a FOSS project that has a copy of the MIT license stored in its LICENSE file and (only) the following comment at the top of all its source code files:

# This file is Copyright (C) 1997 Master Hacker, all rights reserved.

Now imagine that someone simply copies one file from that repository/archive into their own work, which is under the AGPL-3.0-only license, and this is also what it says in the LICENSE file in the root of its own repository. And you, in turn, are using this second person’s codebase.

According to the information you have at hand:

  • the copyright in the copied file is held by Master Hacker;
  • apparently, Mr Hacker reserves all the rights they have under copyright law;
  • if you felt like taking a risk, you could assume that the copied file is under the AGPL-3.0-only license – which is false, and could lead to copyright violation13;
  • if you wanted to play it safe, you could assume that you have no valid license to this file, so you decide to remove it and work around it – again false and much more work, but safe;
  • you could wait until 2067 and hope this actually falls under public domain by then – but who has time for that.

This example highlights how problematic the wording “all rights reserved” can be, even if there is a license text somewhere in the codebase.

This can be avoided by using a sane copyright statement (as described in this blog post) and including an unambiguous license ID. REUSE.software ties both of these together in an easy-to-follow specification.

hook out → hat tip to the TODO Group for giving me the push to finally finish this article and Carmen Bianca Bakker for some very welcome feedback


  1. This is presumed to be the author, at least initially. But depending on circumstances it can also be some other person, a legal entity, a group of people etc. 

  2. A license is by definition “[t]he permission granted by competent authority to exercise a certain privilege that, without such authorization, would constitute an illegal act, a trespass or a tort.” 

  3. Limitations and exceptions (or fair use/dealings) in copyright are extremely limited when it comes to software compared to more traditional media. Do not rely on them. 

  4. In the USA, the copyright statement is often called a copyright notice. The two terms are used interchangeably. 

  5. E.g. the 5 SLOC rule of thumb means that any contribution that is 5 lines or shorter is (likely) too short to be deemed copyrightable, and therefore can be treated as un-copyrightable or as in the public domain; on the flip side, anything longer than 5 lines of code needs to be treated as copyrightable. This rule can pop up when a project has a relatively strict contribution agreement (a CLA or even CAA), but wants some leeway to accept short fix patches from drive-by contributors. The obvious problem with this is that on one hand someone can be very original even in 5 lines (think haiku), while on the other one can have pages and pages of absolute fluff or just plain raw factual numbers. 

  6. This depends from jurisdiction to jurisdiction. The Berne convention stipulates at least 50 years after death of the author as the baseline. There are very few non-signatory states that have shorter terms, but the majority of countries have life + 70 years. The current longest copyright term is life + 100 years, in Mexico. 

  7. The only improvement is that it avoids messing up the Git/VCS history. 

  8. In practice what the Author field in a Git repository actually includes varies quite a bit and depends on how the committer set up and used Git. 

  9. Of course, I did not go through all of the copyright laws out there, but I checked a handful of them in different languages I understand, and this is the pattern I identified. If anyone has a more thorough analysis at hand, please reach out and I will happily include it. 

  10. Just think about it, pretty much every time you create a new account somewhere online, you are asked for your e-mail address, and in general people rarely change their e-mail address. 

  11. As stated before, in most jurisdictions that is 70 years after the death of the author. 

  12. I suspect many of the readers not only can imagine one, but have seen many such projects before ;)

  13. Granted, MIT code embedded into AGPL-3.0-or-later code is less risky than vice versa. But simply imagine it being the other way around … or with an even odder combination of licenses. 

Saturday, 21 March 2020

Using LibreDNS with dnscrypt-proxy

Using DNS over HTTPS, aka DoH, is fairly easy with the latest version of Firefox. Using LibreDNS takes just a few settings in your browser, see here. On LibreDNS’ site there are also instructions for DNS over TLS, aka DoT.

In this blog post, I am going to show how to use dnscrypt-proxy as a local DNS proxy resolver using DoH with the LibreDNS noAds (no-tracking) endpoint. With this setup, your entire operating system can use this endpoint for everything.

Disclaimer: This blog post is about dnscrypt-proxy version 2.

dnscrypt.png

dnscrypt-proxy

dnscrypt-proxy 2 - A flexible DNS proxy, with support for modern encrypted DNS protocols such as DNSCrypt v2, DNS-over-HTTPS and Anonymized DNSCrypt.

Installation

sudo pacman -S dnscrypt-proxy

Verify Package

$ pacman -Qi dnscrypt-proxy

Name            : dnscrypt-proxy
Version         : 2.0.39-3
Description     : DNS proxy, supporting encrypted DNS protocols such as DNSCrypt v2 and DNS-over-HTTPS
Architecture    : x86_64
URL             : https://dnscrypt.info
Licenses        : custom:ISC
Groups          : None
Provides        : None
Depends On      : glibc
Optional Deps   : python-urllib3: for generate-domains-blacklist [installed]
Required By     : None
Optional For    : None
Conflicts With  : None
Replaces        : None
Installed Size  : 12.13 MiB
Packager        : David Runge <dvzrv@archlinux.org>
Build Date      : Sat 07 Mar 2020 08:10:14 PM EET
Install Date    : Fri 20 Mar 2020 10:46:56 PM EET
Install Reason  : Explicitly installed
Install Script  : Yes
Validated By    : Signature

Disable systemd-resolved

if necessary

$ ps -e fuwww | grep re[s]olv
systemd+     525  0.0  0.1  30944 21804 ?        Ss   10:00   0:01 /usr/lib/systemd/systemd-resolved

$ sudo systemctl stop systemd-resolved.service

$ sudo systemctl disable systemd-resolved.service
Removed /etc/systemd/system/multi-user.target.wants/systemd-resolved.service.
Removed /etc/systemd/system/dbus-org.freedesktop.resolve1.service.

Configuration

It is time to configure dnscrypt-proxy to use libredns

sudo vim /etc/dnscrypt-proxy/dnscrypt-proxy.toml

At the top of the file, there is a server_names section:

  server_names = ['libredns-noads']
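The bundled example config ships sane defaults, but a few other settings in the same file are worth double-checking. This is only a minimal sketch – the option names come from dnscrypt-proxy 2’s example config, so verify the values against your own file:

```toml
server_names = ['libredns-noads']
listen_addresses = ['127.0.0.1:53']   # local resolver only
doh_servers = true                    # allow DNS-over-HTTPS resolvers
require_dnssec = false
```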

Resolv Conf

We can now change our resolv.conf to use our local IP address.

echo -e "nameserver 127.0.0.1noptions edns0 single-request-reopen" | sudo tee /etc/resolv.conf
$ cat /etc/resolv.conf

nameserver 127.0.0.1
options edns0 single-request-reopen

Systemd

start & enable dnscrypt service

sudo systemctl start dnscrypt-proxy.service

sudo systemctl enable dnscrypt-proxy.service
$ sudo ss -lntup '( sport = :domain )'

Netid  State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port  Process
udp    UNCONN  0       0       127.0.0.1:53       0.0.0.0:*          users:(("dnscrypt-proxy",pid=55795,fd=6))
tcp    LISTEN  0       4096    127.0.0.1:53       0.0.0.0:*          users:(("dnscrypt-proxy",pid=55795,fd=7))

Verify

$ dnscrypt-proxy -config /etc/dnscrypt-proxy/dnscrypt-proxy.toml -list
libredns-noads
$ dnscrypt-proxy -config /etc/dnscrypt-proxy/dnscrypt-proxy.toml -resolve balaskas.gr
Resolving [balaskas.gr]

Domain exists:  yes, 2 name servers found
Canonical name: balaskas.gr.
IP addresses:   158.255.214.14, 2a03:f80:49:158:255:214:14:80
TXT records:    v=spf1 ip4:158.255.214.14/31 ip6:2a03:f80:49:158:255:214:14:0/112 -all
Resolver IP:    116.203.115.192 (libredns.gr.)

Dig

Asking our local DNS proxy:

dig @localhost balaskas.gr
; <<>> DiG 9.16.1 <<>> @localhost balaskas.gr
; (2 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2449
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;balaskas.gr.                   IN      A

;; ANSWER SECTION:
balaskas.gr.            7167    IN      A       158.255.214.14

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sat Mar 21 19:48:53 EET 2020
;; MSG SIZE  rcvd: 56

That’s it!

Your system is now using the LibreDNS DoH noAds endpoint.

Manual Steps

If your operating system does not yet support dnscrypt-proxy-2 then:

Latest version

You can always download the latest version from github:

To view the files

curl -sLo - $(curl -sL https://api.github.com/repos/DNSCrypt/dnscrypt-proxy/releases/latest | jq -r '.assets[].browser_download_url | select( contains("linux_x86_64"))') | tar tzf -

linux-x86_64/
linux-x86_64/dnscrypt-proxy
linux-x86_64/LICENSE
linux-x86_64/example-cloaking-rules.txt
linux-x86_64/example-dnscrypt-proxy.toml
linux-x86_64/example-blacklist.txt
linux-x86_64/example-whitelist.txt
linux-x86_64/localhost.pem
linux-x86_64/example-ip-blacklist.txt
linux-x86_64/example-forwarding-rules.txt

To extract the files

$ curl -sLo - $(curl -sL https://api.github.com/repos/DNSCrypt/dnscrypt-proxy/releases/latest | jq -r '.assets[].browser_download_url | select( contains("linux_x86_64"))') | tar xzf -

$ ls -l linux-x86_64/
total 9932
-rwxr-xr-x 1 ebal ebal 10117120 Μαρ  21 13:56 dnscrypt-proxy
-rw-r--r-- 1 ebal ebal      897 Μαρ  21 13:50 example-blacklist.txt
-rw-r--r-- 1 ebal ebal     1277 Μαρ  21 13:50 example-cloaking-rules.txt
-rw-r--r-- 1 ebal ebal    20965 Μαρ  21 13:50 example-dnscrypt-proxy.toml
-rw-r--r-- 1 ebal ebal      970 Μαρ  21 13:50 example-forwarding-rules.txt
-rw-r--r-- 1 ebal ebal      439 Μαρ  21 13:50 example-ip-blacklist.txt
-rw-r--r-- 1 ebal ebal      743 Μαρ  21 13:50 example-whitelist.txt
-rw-r--r-- 1 ebal ebal      823 Μαρ  21 13:50 LICENSE
-rw-r--r-- 1 ebal ebal     2807 Μαρ  21 13:50 localhost.pem

$ cd linux-x86_64/

Prepare the configuration

$ cp example-dnscrypt-proxy.toml dnscrypt-proxy.toml
$
$ vim dnscrypt-proxy.toml

At the top of the file, there is a server_names section:

  server_names = ['libredns-noads']
$ ./dnscrypt-proxy -config dnscrypt-proxy.toml --list
[2020-03-21 19:27:20] [NOTICE] dnscrypt-proxy 2.0.40
[2020-03-21 19:27:20] [NOTICE] Network connectivity detected
[2020-03-21 19:27:22] [NOTICE] Source [public-resolvers] loaded
[2020-03-21 19:27:23] [NOTICE] Source [relays] loaded
libredns-noads

Run as root

$ sudo ./dnscrypt-proxy -config ./dnscrypt-proxy.toml
[sudo] password for ebal: *******

[2020-03-21 20:11:04] [NOTICE] dnscrypt-proxy 2.0.40
[2020-03-21 20:11:04] [NOTICE] Network connectivity detected
[2020-03-21 20:11:04] [NOTICE] Source [public-resolvers] loaded
[2020-03-21 20:11:04] [NOTICE] Source [relays] loaded
[2020-03-21 20:11:04] [NOTICE] Firefox workaround initialized
[2020-03-21 20:11:04] [NOTICE] Now listening to 127.0.0.1:53 [UDP]
[2020-03-21 20:11:04] [NOTICE] Now listening to 127.0.0.1:53 [TCP]
[2020-03-21 20:11:04] [NOTICE] [libredns-noads] OK (DoH) - rtt: 65ms
[2020-03-21 20:11:04] [NOTICE] Server with the lowest initial latency: libredns-noads (rtt: 65ms)
[2020-03-21 20:11:04] [NOTICE] dnscrypt-proxy is ready - live servers: 1

Check DNS

Interestingly enough, the first query takes 295 ms, the second one zero!

$ dig libredns.gr

; <<>> DiG 9.11.3-1ubuntu1.11-Ubuntu <<>> libredns.gr
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 53609
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;libredns.gr.   IN  A

;; ANSWER SECTION:
libredns.gr.    2399  IN  A 116.203.115.192
libredns.gr.    2399  IN  A 116.202.176.26

;; Query time: 295 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sat Mar 21 20:12:52 EET 2020
;; MSG SIZE  rcvd: 72

$ dig libredns.gr

; <<>> DiG 9.11.3-1ubuntu1.11-Ubuntu <<>> libredns.gr
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31159
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;libredns.gr. IN  A

;; ANSWER SECTION:
libredns.gr.  2395  IN  A 116.203.115.192
libredns.gr.  2395  IN  A 116.202.176.26

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sat Mar 21 20:12:56 EET 2020
;; MSG SIZE  rcvd: 72

That’s it

Thursday, 19 March 2020

Tools I use daily the Win10 edition

Almost three (3) years ago I wrote an article about the Tools I use daily. But for the last 18 months (or so) I have been partially using Windows 10 due to my new job role, so I would like to write an updated version of that article.

 

I’ll try to use the same structure as the previous article for comparison; keep in mind this is a nine-to-five (work-related) setup. So here it goes.

windesktop.jpg

 

NOTICE beer is just for decor ;)

Operating System

I use Win10 as the primary operating system on my work laptop. I have a couple of impediments that prevent me from working on a Linux distribution, but I am not going to bother you with them (it’s Webex and some internal Internet-Explorer-only sites).

We used to use Webex as our primary communication tool, sharing our screens with our video cameras on, so that everybody could see each other. Working with remote teams, it’s kind of nice to see the faces of your coworkers. A lot of meetings are integrated with the company’s Outlook. I use OWA (webmail) as an alternative, but in fact it is still difficult to use both of them with a Linux desktop.

We successfully switched to Slack for text communication, video calls and screen sharing. This choice gave us a boost in productivity, as we now use Slack calls daily to align with each other. Webex is still in the mix, though. The company is now using a newer Webex version with better browser support, so that is a plus. It’s not always easy to get everybody a Webex license, but as long as we are using Slack it is okay. The only problem with Slack on Linux is that, when working with multiple monitors, you cannot choose which monitor to share.

I have considered using a VM (virtual machine), but a Win10 VM needs more than 4 GB of RAM and a couple of CPUs just to boot up. That would mean giving up half of my work laptop’s resources for half the day, every day. So for the time being I am staying with Win10 as the primary operating system. I do have to use the Win VM for some other internal work, but only for a limited time.

 

Desktop

Default Win10 desktop

I daily use these OpenSource Tools:

  • AutoHotkey for keyboard shortcut (I like switching languages by pressing capslock)
  • Ditto as clipboard manager
  • Greenshot for screenshot tool

and from time to time, I also use:

Except plumb, everything else is open source!

So I am trying to have the same desktop experience as on my Linux desktop – e.g. my language switch is capslock (AutoHotkey), so I don’t even think about it.

 

Disk / Filesystem

Default Win10 filesystem with BitLocker. Every hardware change will lock the entire system. In the past this happened twice with a Windows firmware device upgrade. Twice!

Dropbox as cloud sync software, with an EncFSMP partition, and Syncthing for securely syncing personal files.

(same setup as Linux, except that BitLocker takes the place of LUKS)

 

Mail

OWA for calendar purposes and … still Thunderbird for primary reading mails.

Thunderbird 68.6.0 AddOns:

(same setup as linux)

 

Shell

Windows Subsystem for Linux, aka WSL … waiting for the official WSLv2! This is a huge, HUGE upgrade for Windows. I have set up an Arch Linux WSL environment to continue working in a Linux environment – I mean bash. I use my WSL Arch Linux as a jumphost to my VMs.

 

Terminal Emulator

  • Mintty The best terminal emulator for WSL. Small, not too fancy, just works, beautiful, love it.

 

Editor

Using Visual Studio Code for scripting, vim within WSL, and Notepad for temporary text notes. I have switched to Boostnote for markdown and as my primary note editor.

(same setup as linux)

 

Browser

Multiple instances of Firefox, Chromium, Tor Browser and Brave

Primary Browser: Firefox
Primary Private Browsing: Brave

(same setup as linux)

 

Communication

I mostly use Slack and Signal Desktop. We use Webex, but I prefer Zoom. Riot/Matrix for decentralized groups and the IRC bridge. To be honest, I also use Viber & Messenger (only through the web browser).

(same setup as linux - minus the Viber client)

 

Media

VLC for Windows, what else? Also GIMP for image editing. I have switched to Spotify for music and draw.io for diagrams. Lastly, I use CPod for podcasts. Netflix (sometimes).

(same setup as linux)

 

In conclusion

I have switched to a majority of Electron applications, and I use the same applications on my Linux boxes: encrypted notes in Boostnote, synced over Syncthing; the same browsers, the same bash/shell. The only things I don’t have on my Linux boxes are Webex and Outlook. Considering everything else, I think it is a decent setup across every distro.

 

Thanks for reading my post.

Tag(s): win10

Tuesday, 17 March 2020

How to write your Pelican-powered blog using ownCloud and WebDAV

Originally this HowTo was part of my last post – a lengthy piece about how I migrated my blog to Pelican. As this specific modification might be more interesting than reading the whole thing, I decided to fork and extend it.

What and why?

What I was trying to do is to be able to add, edit and delete content in Pelican from anywhere, so that whenever inspiration strikes I can simply take out my phone or open up a web browser and create a rough draft – basically a makeshift mobile and desktop blogging app.

I decided that the easiest way to do this was to access my content over WebDAV via the ownCloud instance that runs on the same server.

Works also on Nextcloud

As an update a few years after I wrote this blog post, I have since migrated from ownCloud to Nextcloud and it all still works the same way.

Why not Git and hooks?

The answer is quite simple: because I do not need it and it adds another layer of complication.

I know many use Git and its hooks to keep track of changes as well as for backups and for pushing from remote machines onto the server. And that is a very fine way of running it, especially if there are several users committing to it.

But for the following reasons, I do not need it:

  • I already include this page with its MarkDown sources, settings and the HTML output in my standard RSnapshot backup scheme of this server, so no need for that;
  • I want to sometimes draft my posts on my mobile and Git and Vim on a touch-screen are just annoying to use;
  • this is a personal blog, so the distributed VCS side of Git is just an overhead really;
  • there is no added benefit to sharing the MarkDown sources on-line, if all the HTML sources are public anyway.

Setting up the server

Pairing up Pelican and ownCloud

In ownCloud it is very easy to mount external storage, and a folder local to the server is still considered “external”, as it is outside of ownCloud. Needless to say, there is a nice GUI for that.

Once you open the Admin page in ownCloud, you will see the External Storage settings. For security reasons only admins can mount a local folder, so if you are not one, you will not see Local as an option and you will have to ask your friendly ownCloud sysadmin to add the folder from their Admin page for you.

If that is not an option, on a GNU/Linux server there is an easy, yet hackish solution as well: just link Pelican’s content folder into your ownCloud user’s file system – e.g:

ln -s /var/www/matija.suklje.name/content/ /var/www/owncloud/htdocs/data/hook/files/Blog

In order to have the files writeable over WebDAV, they need to be writeable by the user that PHP and the web server run under – e.g.:

chown -R nginx:nginx /var/www/owncloud/htdocs/data/hook/files/Blog/

Automating page generation and ownership

To have pages constantly automatically generated, there is the option to call pelican --autoreload, and I did consider turning it into an init script, but decided against it for two reasons:

  • it consumes too much CPU power just to check for changes;
  • as on my poor ARM server a full (re-)generation of this blog takes about 6 minutes2, I did not want to hammer my system for every time I save a minor change.

What I did instead was create an fcronjob to (re-)generate the website every night at 3 in the morning (and send a mail to root’s default address), under the condition that blog posts have been changed or added since yesterday:

%nightly,mail * 3 cd /var/www/matija.suklje.name && posts=(content/**/*.markdown(Nm-1)); if (( $#posts )) LC_ALL="en_GB.utf8" make html

Update: the above command is changed to use Zsh; for the old sh version, use:

%nightly,mail * 3 cd /var/www/matija.suklje.name && [[ `find content -iname "*.markdown" -mtime -1` != "" ]] && LC_ALL="en_GB.utf8" make html

In order to have the file permissions on the content directory always correct for ownCloud (see above), I changed the Makefile a bit. The relevant changes can be seen below:

html:
    chown -R nginx:nginx $(INPUTDIR)
    $(PELICAN) $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS)

clean:
    [ ! -d $(OUTPUTDIR) ] || rm -rf $(OUTPUTDIR)

regenerate:
    chown -R nginx:nginx $(INPUTDIR)
    $(PELICAN) -r $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS)

E-mail draft reminder

Not directly relevant, but still useful.

In order not to forget any drafts unattended, I have also set up an FCron job to send me an e-mail with a list of all unfinished drafts to my private address.

It is a very easy hack really, but I find it quite useful to keep track of things – find the said fcronjob below:

%midweekly,mailto(matija@suklje.name) * * cd /var/www/matija.suklje.name/content/ && ack "Status: draft"

Client software

ownNotes

As a mobile client I plan to use ownNotes, because it runs on my Nokia N91 and supports MarkDown highlighting out-of-the-box.

All I needed to do in ownNotes is to provide it with my ownCloud log-in credentials and state Blog as the "Remote Folder Name" in the preferences.

But before I can really make use of ownNotes, I have to wait for it to start properly managing file-name extensions.

ownCloud web interface

Since ownCloud includes a webGUI text editor with MarkDown highlighting out of the box, I sometimes use that as well.

An added bonus is that the Activity feed of ownCloud keeps a log of when which file changed or was added.

It does not seem possible yet to collaboratively edit files other than ODT in ownCloud’s webGUI, but I imagine that might be the case in the future.

Kate via WebDAV

In many other desktop environments it is child’s play to add a WebDAV remote folder — just adding a link to the file manager should be enough, e.g.: webdavs://thatfunkyplace.wheremymonkeyis.at:443/remote.php/webdav/Blog.

KDE’s Dolphin makes it even easier for you: all you have to do is select Remote → Add remote folder, and if you already have a connection to your ownCloud with some other service (e.g. Zanshin and KOrganizer for WebCal), it will suggest all the details to you if you choose Recent connection.

Once you have the remote folder added, you can use it transparently all over KDE. So when you open up Kate, you can simply navigate the remote WebDAV folders, open up the files, edit and save them as if they were local files. It really is as easy as that! ☺

Tip

I probably could have also used the more efficient KIO FISH, but I have not bothered with setting up a more complex permission set-up for such a small task. For security reasons it is not possible to log in via SSH using the same user the web server runs under.

SSH and Vim

Of course, it is also possible to ssh into the web server, su to the correct user, edit the files with Vim, and let FCron and the Makefile make sure the ownership is set appropriately.

hook out → back to studying Arbitration law


  1. Yes, I am well aware you can run Vim and Git on MeeGo Harmattan and I do use it. But Vim on a touch-screen keyboard is not very fun to use for brainstorming. 

  2. At the time of writing this blog includes 343 articles and 2 pages, which took Pelican 440 seconds to generate on my poor little ARM server (on a normal load). 

Monday, 16 March 2020

Install Jitsi-Meet alongside ejabberd

Since the coronavirus is forcing many of us into home office, there is a high demand for video conference solutions. A popular free and open source tool for creating video conferences, similar to Google’s Hangouts, is Jitsi Meet. It enables you to create a conference room from within your browser, for which you can then share a link with your coworkers. No client software is needed at all (except on mobile devices).

The installation of Jitsi Meet is straightforward – if you have a dedicated server sitting around. Simply add the Jitsi repository to your package manager and (in case of Debian-based systems) type

sudo apt-get install jitsi-meet

The installer will guide you through most of the process (setting up nginx/Apache, installing dependencies, even doing the Let’s Encrypt setup) and in the end you can start video calling! The quick start guide does a better job explaining this than I do.
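For reference, adding the repository beforehand looked roughly like this at the time of writing (taken from the quick start guide; details may have changed since):

```
echo 'deb https://download.jitsi.org stable/' | sudo tee /etc/apt/sources.list.d/jitsi-stable.list
wget -qO - https://download.jitsi.org/jitsi-key.gpg.key | sudo apt-key add -
sudo apt-get update
```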

Jitsi Meet is a suite of different components that all play together (see the Jitsi Meet manual). Part of the mix is a Prosody XMPP server that is used for signalling. That means that if you want the simple, easy setup experience, your server must not already run another XMPP server. Otherwise you’ll have some manual configuration ahead of you.

I did that.

Since I already run a personal ejabberd XMPP server and don’t have any virtualization tools at hand, I wanted to make jitsi-meet use ejabberd instead of Prosody. In the end, both should be equally suited for the job.

Looking at the prosody configuration file that comes with Jitsi’s bundled prosody we can see that Jitsi Meet requires the XMPP server to serve two different virtual hosts.
The file is located under /etc/prosody/conf.d/meet.example.org.cfg.lua

VirtualHost "meet.example.org"
        authentication = "anonymous"
        ssl = {
                ...
        }
        modules_enabled = {
            "bosh";
            "pubsub";
            "ping";
        }
        c2s_require_encryption = false

Component "conference.meet.example.org" "muc"
    storage = "memory"
    admins = { "focus@auth.meet.example.org" }

Component "jitsi-videobridge.meet.example.org"
    component_secret = "SECRET1"

VirtualHost "auth.meet.example.org"
    ssl = {
        ...
    }
    authentication = "internal_plain"

Component "focus.meet.example.org"
    component_secret = "SECRET2"

Remember to replace SECRET1 and SECRET2 with secure secrets! There are also some external components that need to be configured. This is where Jitsi Meet plugs into the XMPP server.
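If you need inspiration for such secrets, one quick way to generate them (assuming openssl is available on the machine):

```shell
# generate two independent 48-character random hex secrets
SECRET1=$(openssl rand -hex 24)
SECRET2=$(openssl rand -hex 24)
echo "$SECRET1"
```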

In my case I don’t want to serve 3 virtual hosts with my ejabberd, so I decided to replace auth.meet.jabberhead.tk with my already existing main domain jabberhead.tk, which already uses internal authentication. So all I had to do was add the virtual host meet.jabberhead.tk to my ejabberd.yml and configure it to use anonymous authentication.
The ejabberd config file is located under /etc/ejabberd/ejabberd.yml or /opt/ejabberd/conf/ejabberd.yml depending on your ejabberd distribution.

hosts:
    ## serves as main host, as well as auth.meet.jabberhead.tk for focus user
  - "jabberhead.tk"
    ## serves as anonymous authentication host for meet.jabberhead.tk
  - "meet.jabberhead.tk"
...
host_config:
  meet.jabberhead.tk:
    auth_method: anonymous
    allow_multiple_connections: true
    anonymous_protocol: both

The syntax for external components is quite different for ejabberd than it is for prosody, so it took me some time to get it working.

listen:
 -
    port: 5280
    ip: "::"
    module: ejabberd_http
    request_handlers:
      ## Not sure if this is needed, but by default jitsi-meet uses http-bind for bosh
      "/http-bind": mod_bosh
      "/bosh": mod_bosh
    tls: true
    protocol_options: 'TLS_OPTIONS'
  -
    port: 5275
    ip: "::"
    module: ejabberd_service
    access: all
    shaper: fast
    hosts:
      "jitsi-videobridge.jabberhead.tk":
        password: "SECRET1"
  -
    port: 5347
    module: ejabberd_service
    hosts:
      "focus.jabberhead.tk":
        password: "SECRET2"

By re-reading the config files now, I wonder why I ended up placing the focus component under the host focus.jabberhead.tk and not focus.meet.jabberhead.tk, but hey – it works and I’m too scared to touch it again 😛

The configuration of the modules was a bit trickier on ejabberd, as the ejabberd config syntax seems to disallow duplicate entries. In this case I had to configure mod_muc differently for my main domain than for the meet.jabberhead.tk domain, so I had to move the original mod_muc and mod_muc_admin configuration out of the modules: block and into an append_host_config: block with different settings per domain.

append_host_config:
  jabberhead.tk:
    modules:
        ## This is the original muc configuration I used before
        mod_muc:
          access:
            - allow
          access_admin:
            - allow: admin
          access_create: muc_create
          access_persistent: muc_create
          access_mam:
            - allow
          default_room_options:
            allow_private_messages: true
            mam: true
            persistent: true
        mod_muc_admin: {}
  meet.jabberhead.tk:
    modules:
      ## This is the config only for meet.jabberhead.tk
      mod_muc:
        host: conference.meet.jabberhead.tk
      mod_muc_admin: {}

mod_pubsub, mod_ping and mod_bosh all have to be enabled, but can stay in the global modules: block.

Last but not least we have to add the focus user as an admin and also generate (not discussed here) and add certificates for the meet.jabberhead.tk subdomain.

certfiles:
  - ...
  - "/etc/ssl/meet.jabberhead.tk/cert.pem"
  - "/etc/ssl/meet.jabberhead.tk/fullchain.pem"
  - "/etc/ssl/meet.jabberhead.tk/privkey.pem"
...
acl:
  admin:
    user:
      - "focus@jabberhead.tk"
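Generating the certificates themselves is out of scope here, but with certbot it might look roughly like this (a sketch; the resulting paths then have to match the certfiles entries):

```
sudo certbot certonly --nginx -d meet.jabberhead.tk
```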

That’s it for the ejabberd configuration. Now we have to configure the other Jitsi Meet components. Let’s start with Jicofo, the Jitsi Conference Focus component.

My /etc/jitsi/jicofo/config file looks as follows.

JICOFO_HOST=jabberhead.tk
JICOFO_HOSTNAME=jabberhead.tk
JICOFO_SECRET=SECRET2
JICOFO_PORT=5347
JICOFO_AUTH_DOMAIN=jabberhead.tk
JICOFO_AUTH_USER=focus
JICOFO_AUTH_PASSWORD=SECRET3
JICOFO_OPTS=""
# Below can be left as is.
JAVA_SYS_PROPS=...

Correspondingly, the videobridge configuration (/etc/jitsi/videobridge/config) looks like this:

JVB_HOSTNAME=jabberhead.tk
JVB_HOST=localhost
JVB_PORT=5275
JVB_SECRET=SECRET1
## Leave below as originally was
JAVA_SYS_PROPS=...

Some changes had to be made to /etc/jitsi/videobridge/sip-communicator.properties:

org.jitsi.videobridge.AUTHORIZED_SOURCE_REGEXP=focus@jabberhead.tk/.*
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=<IP-OF-YOUR-SERVER>
org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=<IP-OF-YOUR-SERVER>
org.jitsi.videobridge.TCP_HARVESTER_PORT=4443

Now we can wire it all together by modifying the Jitsi Meet config file found under /etc/jitsi/meet/meet.example.org-config.js:

var config = {
    hosts: {
        domain: 'jabberhead.tk',
        anonymousdomain: 'meet.jabberhead.tk',
        authdomain: 'jabberhead.tk',
        bridge: 'jitsi-videobridge.meet.jabberhead.tk',
        focus: 'focus.jabberhead.tk',
        muc: 'conference.meet.jabberhead.tk'
    },
    bosh: '//meet.jabberhead.tk/http-bind',
    clientNode: 'http://jitsi.org/jitsimeet',
    focusUserJid: 'focus@jabberhead.tk',

    testing: {
    ...
    }
...
}

Last but not least, my nginx host configuration (/etc/nginx/sites-available/meet.example.org.conf), where all I changed was the BOSH proxy (from http to https – although I have heard this is not strictly necessary).

server {
    listen 80;
    server_name meet.jabberhead.tk;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    server_name meet.jabberhead.tk;
    ...
    # BOSH
    location = /http-bind {
        proxy_pass      https://localhost:5280/http-bind;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
    }
    ...
}

Finally of course, I also had to register the focus user as an XMPP account:

ejabberdctl register focus jabberhead.tk SECRET3

Remember to use a safe password instead of SECRET3 and also stop and disable the bundled prosody! That’s it!
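Stopping and disabling the bundled Prosody, and restarting the affected services, looks like this (service names may differ between Jitsi versions):

```
sudo systemctl stop prosody
sudo systemctl disable prosody
sudo systemctl restart ejabberd jicofo jitsi-videobridge2 nginx
```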

I hope this lowers the bar for some to deploy Jitsi Meet next to their already existing ejabberd. Lastly, please do not ask me for support, as I barely managed to get this working for myself 😛

Sunday, 15 March 2020

restic with minio

restic is a fast, secure & efficient backup program.

I had wanted to test restic for some time. It is a backup solution written in Go – I would say similar to rclone, but with a unique design. I prefer having an isolated, clean environment when testing software, so I usually go with a VM. For this case, I installed elementary OS v5.1, an Ubuntu-LTS-based distro focused on user experience. As the backup storage solution, I used MinIO, an S3-compatible object storage, on the same VM. So here are my notes on restic, and at the end of this article you will find how I set up MinIO.

Be aware this is a technical post!

restic

Most probably your distro’s package manager already has restic in its repositories.

pacman -S restic

or

apt -y install restic

download latest version

But just in case you want to install the latest binary version, you can use this command

curl -sLo - $(curl -sL https://api.github.com/repos/restic/restic/releases/latest | jq -r '.assets[].browser_download_url | select( contains("linux_amd64"))') \
  | bunzip2 - | sudo tee /usr/local/bin/restic > /dev/null

sudo chmod +x /usr/local/bin/restic

or if you are already root

curl -sLo - $(curl -sL https://api.github.com/repos/restic/restic/releases/latest | jq -r '.assets[].browser_download_url | select( contains("linux_amd64"))') \
  | bunzip2 - > /usr/local/bin/restic

chmod +x /usr/local/bin/restic

we can see the latest version

$ restic version
restic 0.9.6 compiled with go1.13.4 on linux/amd64

autocompletion

Enable autocompletion

sudo restic generate --bash-completion /etc/bash_completion.d/restic

restart your shell.

Prepare your repo

We need to prepare our destination repository. This is our backup endpoint. restic can save multiple snapshots for multiple hosts on the same endpoint (repo).

Apart from the files stored within the keys directory, all files are encrypted with AES-256 in counter mode (CTR). The integrity of the encrypted data is secured by a Poly1305-AES message authentication code (sometimes also referred to as a “signature”).

To access a restic repo, we need a key. We will use this key as a password (or passphrase), and it is really important NOT to lose it.

For automated backups (or scripts) we can use our shell’s environment variables to export the password. It is best to export the password through a script, or even better through a password file.

export -p RESTIC_PASSWORD=<our key>
or
export -p RESTIC_PASSWORD_FILE=<full path of 0400 file>

eg.

export -p RESTIC_PASSWORD=55C9225pXNK3s3f7624un
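Creating such a password file with restrictive permissions could look like this (the path and password here are just examples):

```shell
# create a password file readable only by its owner (0400),
# usable via RESTIC_PASSWORD_FILE
PASSFILE="$HOME/.restic-password"
umask 0277                      # newly created files get mode 0400
rm -f "$PASSFILE"
echo '55C9225pXNK3s3f7624un' > "$PASSFILE"
export RESTIC_PASSWORD_FILE="$PASSFILE"
```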

We can also declare the restic repository through an environment variable:

export -p RESTIC_REPOSITORY=<our repo>

Local Repo

An example of a local backup repo configuration would be something like this:

$ cat restic.local.conf
export -p RESTIC_PASSWORD=55C9225pXNK3s3f7624un
export -p RESTIC_REPOSITORY="/mnt/backup/"

minio S3

We are going to use MinIO as an S3 object storage, so we need to export the Access & Secret Key in a similar way as for Amazon S3.

AccessKey <~> AWS_ACCESS_KEY_ID
SecretKey <~> AWS_SECRET_ACCESS_KEY
export -p AWS_ACCESS_KEY_ID=minioadmin
export -p AWS_SECRET_ACCESS_KEY=minioadmin

The S3 endpoint is http://localhost:9000/demo so a full example should be:

$ cat restic.S3.conf

export -p AWS_ACCESS_KEY_ID=minioadmin
export -p AWS_SECRET_ACCESS_KEY=minioadmin

export -p RESTIC_PASSWORD=55C9225pXNK3s3f7624un
export -p RESTIC_REPOSITORY="s3:http://localhost:9000/demo"

source the config file into your shell:

source restic.S3.conf

Initialize Repo

We are ready to initialise the remote repo

$ restic init
created restic repository f968b51633 at s3:http://localhost:9000/demo

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.

Be careful: if you are asked to type a password, it means you did not use a shell environment variable to export one. That is fine, but only if that was your intention. Then you will see something like this:

$ restic init

enter password for new repository: <type your password here>
enter password again: <type your password here, again>

created restic repository ea97171d56 at s3:http://localhost:9000/demo

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.

backup

We are ready to take our first snapshot.

$ restic -v backup /home/ebal/

open repository
repository c8d9898b opened successfully, password is correct
created new cache in /home/ebal/.cache/restic
lock repository
load index files
start scan on [/home/ebal/]
start backup on [/home/ebal/]
scan finished in 0.567s: 2295 files, 307.823 MiB

Files:        2295 new,     0 changed,     0 unmodified
Dirs:            1 new,     0 changed,     0 unmodified
Data Blobs:   2383 new
Tree Blobs:      2 new
Added to the repo: 263.685 MiB

processed 2295 files, 307.823 MiB in 0:28
snapshot 33e8ae0d saved

You can exclude or include files with restic, but I will not get into this right now.
For more info, read Restic Documentation
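As a quick sketch (the patterns here are made up), excludes can be passed as flags or collected in a file:

```
restic backup /home/ebal/ --exclude '.cache' --exclude '*.tmp'
restic backup /home/ebal/ --exclude-file ~/.restic-excludes
```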

standard input

restic can also take a backup from standard input:

mysqldump --all-databases -uroot -ppassword | xz - | restic backup --stdin --stdin-filename mysqldump.sql.xz
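To get such a dump back out later, restic’s dump command can stream the stored file to stdout – a sketch, assuming the dump was stored under the name mysqldump.sql.xz:

```
restic dump latest mysqldump.sql.xz | xz -d | mysql -uroot -p
```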

Check

$ restic -v check

using temporary cache in /tmp/restic-check-cache-528400534
repository c8d9898b opened successfully, password is correct
created new cache in /tmp/restic-check-cache-528400534
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
no errors were found

Take another snapshot

$ restic -v backup /home/ebal/ --one-file-system  --cleanup-cache

open repository
repository c8d9898b opened successfully, password is correct
lock repository
load index files
using parent snapshot 33e8ae0d
start scan on [/home/ebal/]
start backup on [/home/ebal/]
scan finished in 0.389s: 2295 files, 307.824 MiB

Files:           0 new,     4 changed,  2291 unmodified
Dirs:            0 new,     1 changed,     0 unmodified
Data Blobs:      4 new
Tree Blobs:      2 new
Added to the repo: 154.549 KiB

processed 2295 files, 307.824 MiB in 0:01
snapshot 280468f6 saved

List snapshots

$ restic -v snapshots

repository c8d9898b opened successfully, password is correct
ID        Time                 Host        Tags        Paths
-----------------------------------------------------------------
6988dda7  2020-03-14 23:32:55  elementary              /etc
33e8ae0d  2020-03-15 21:05:55  elementary              /home/ebal
280468f6  2020-03-15 21:08:38  elementary              /home/ebal
-----------------------------------------------------------------
3 snapshots

Remove snapshot

As you can see, I had one more snapshot (an older one of /etc) besides my home dir, and I want to remove it.

$ restic -v forget 6988dda7

repository c8d9898b opened successfully, password is correct
removed snapshot 6988dda7

list again

$ restic -v snapshots

repository c8d9898b opened successfully, password is correct
ID        Time                 Host        Tags        Paths
-----------------------------------------------------------------
33e8ae0d  2020-03-15 21:05:55  elementary              /home/ebal
280468f6  2020-03-15 21:08:38  elementary              /home/ebal
-----------------------------------------------------------------
2 snapshots

Compare snapshots

$ restic -v diff 33e8ae0d 280468f6

repository c8d9898b opened successfully, password is correct
comparing snapshot 33e8ae0d to 280468f6:

M    /home/ebal/.config/dconf/user
M    /home/ebal/.mozilla/firefox/pw9z9f9z.default-release/SiteSecurityServiceState.txt
M    /home/ebal/.mozilla/firefox/pw9z9f9z.default-release/datareporting/aborted-session-ping
M    /home/ebal/.mozilla/firefox/pw9z9f9z.default-release/storage/default/moz-extension+++62b23386-279d-4791-8ae7-66ab3d69d07d^userContextId=4294967295/idb/3647222921wleabcEoxlt-eengsairo.sqlite

Files:           0 new,     0 removed,     4 changed
Dirs:            0 new,     0 removed
Others:          0 new,     0 removed
Data Blobs:      4 new,     4 removed
Tree Blobs:     14 new,    14 removed
  Added:   199.385 KiB
  Removed: 197.990 KiB

Mount a snapshot

$ mkdir -p backup

$ restic -v mount backup/

repository c8d9898b opened successfully, password is correct
Now serving the repository at backup/
When finished, quit with Ctrl-c or umount the mountpoint.

open another terminal

$ cd backup/

$ ls -l
total 0
dr-xr-xr-x 1 ebal ebal 0 Μαρ  15 21:12 hosts
dr-xr-xr-x 1 ebal ebal 0 Μαρ  15 21:12 ids
dr-xr-xr-x 1 ebal ebal 0 Μαρ  15 21:12 snapshots
dr-xr-xr-x 1 ebal ebal 0 Μαρ  15 21:12 tags

$ ls -l hosts/
total 0
dr-xr-xr-x 1 ebal ebal 0 Μαρ  15 21:12 elementary

$ ls -l snapshots/
total 0
dr-xr-xr-x 3 ebal ebal 0 Μαρ  15 21:05 2020-03-15T21:05:55+02:00
dr-xr-xr-x 3 ebal ebal 0 Μαρ  15 21:08 2020-03-15T21:08:38+02:00
lrwxrwxrwx 1 ebal ebal 0 Μαρ  15 21:08 latest -> 2020-03-15T21:08:38+02:00

$ ls -l tags
total 0

So as we can see, snapshots are based on time.

$ du -sh snapshots/*

309M  snapshots/2020-03-15T21:05:55+02:00
309M  snapshots/2020-03-15T21:08:38+02:00
0     snapshots/latest

Be aware that as long as the restic backup is mounted, there is a lock on the repo.
Do NOT forget to close the mount point when finished.

When finished, quit with Ctrl-c or umount the mountpoint.
  signal interrupt received, cleaning up

Check again

You may need to re-check to verify that no lock was left on the repo:

$ restic check

using temporary cache in /tmp/restic-check-cache-524606775
repository c8d9898b opened successfully, password is correct
created new cache in /tmp/restic-check-cache-524606775
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
no errors were found

Restore a snapshot

Identify which snapshot you want to restore

$ restic snapshots

repository c8d9898b opened successfully, password is correct
ID        Time                 Host        Tags        Paths
-----------------------------------------------------------------
33e8ae0d  2020-03-15 21:05:55  elementary              /home/ebal
280468f6  2020-03-15 21:08:38  elementary              /home/ebal
-----------------------------------------------------------------
2 snapshots

create a folder and restore the snapshot

$ mkdir -p restore
$ restic -v restore 280468f6 --target restore/

repository c8d9898b opened successfully, password is correct
restoring <Snapshot 280468f6 of [/home/ebal] at 2020-03-15 21:08:38.10445053 +0200 EET by ebal@elementary> to restore/
$ ls -l restore/
total 4
drwxr-xr-x 3 ebal ebal 4096 Μαρ  14 13:56 home

$ ls -l restore/home/
total 4
drwxr-xr-x 17 ebal ebal 4096 Μαρ  15 20:13 ebal

$ du -sh restore/home/ebal/
287M  restore/home/ebal/

List files from snapshot

$ restic -v ls 280468f6 | head
snapshot 280468f6 of [/home/ebal] filtered by [] at 2020-03-15 21:08:38.10445053 +0200 EET):

/home
/home/ebal
/home/ebal/.ICEauthority
/home/ebal/.Xauthority
/home/ebal/.bash_history
/home/ebal/.bash_logout
/home/ebal/.bashrc
/home/ebal/.cache
/home/ebal/.cache/.notifications.session

keys

$ restic key list

repository ea97171d opened successfully, password is correct
 ID        User  Host        Created
------------------------------------------------
*8c112442  ebal  elementary  2020-03-14 23:22:49
------------------------------------------------

restic rotate snapshot policy

a few more words about forget

The forget command has keep-last policies, where the policy can be based on:

  • number of snapshots
  • hourly
  • daily
  • weekly
  • monthly
  • yearly

which, with a local repository, makes restic an ideal replacement for rsnapshot!

$ restic help forget

The "forget" command removes snapshots according to a policy. Please note that
this command really only deletes the snapshot object in the repository, which
is a reference to data stored there. In order to remove this (now unreferenced)
data after 'forget' was run successfully, see the 'prune' command.

Flags:
  -l, --keep-last n            keep the last n snapshots
  -H, --keep-hourly n          keep the last n hourly snapshots
  -d, --keep-daily n           keep the last n daily snapshots
  -w, --keep-weekly n          keep the last n weekly snapshots
  -m, --keep-monthly n         keep the last n monthly snapshots
  -y, --keep-yearly n          keep the last n yearly snapshots
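Combining these flags gives a rotation policy. For example (the numbers are arbitrary), keep a week of dailies, five weeklies and a year of monthlies, pruning the now-unreferenced data in the same run:

```
restic forget --keep-daily 7 --keep-weekly 5 --keep-monthly 12 --prune
```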

Appendix - minio

MinIO is an S3-compatible object storage.

install server

sudo curl -sLo /usr/local/bin/minio \
  https://dl.min.io/server/minio/release/linux-amd64/minio

sudo chmod +x /usr/local/bin/minio

minio --version
minio version RELEASE.2020-03-14T02-21-58Z

run server

minio server ./data
Endpoint:  http://192.168.122.31:9000  http://127.0.0.1:9000
AccessKey: minioadmin
SecretKey: minioadmin

Browser Access:
   http://192.168.122.31:9000  http://127.0.0.1:9000

Command-line Access: https://docs.min.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.122.31:9000 minioadmin minioadmin

Object API (Amazon S3 compatible):
   Go:         https://docs.min.io/docs/golang-client-quickstart-guide
   Java:       https://docs.min.io/docs/java-client-quickstart-guide
   Python:     https://docs.min.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide
Detected default credentials 'minioadmin:minioadmin',
please change the credentials immediately using 'MINIO_ACCESS_KEY' and 'MINIO_SECRET_KEY'
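Heeding that warning, you can override the default credentials via the environment variables the server names, before starting it (the values below are placeholders):

```
export MINIO_ACCESS_KEY=myaccesskey
export MINIO_SECRET_KEY=a-much-longer-secret
minio server ./data
```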

browser

create demo bucket

(Screenshots: creating the demo bucket via the MinIO web interface.)

install client

sudo curl -sLo /usr/local/bin/mc \
  https://dl.min.io/client/mc/release/linux-amd64/mc

sudo chmod +x /usr/local/bin/mc

mc -v
mc version RELEASE.2020-03-14T01-23-37Z

configure client

mc config host add myminio http://192.168.122.31:9000 minioadmin minioadmin

run mc client

$ mc ls myminio
[2020-03-14 19:01:25 EET]      0B demo/

$ mc tree myminio/demo
$
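The bucket is empty so far; to sanity-check it, you could copy a test file in (the file name is arbitrary):

```
$ echo hello > /tmp/hello.txt
$ mc cp /tmp/hello.txt myminio/demo/
$ mc ls myminio/demo/
```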

mc autocompletion

mc --autocompletion

you need to restart your shell.

$ mc ls myminio/demo/

[2020-03-15 21:03:15 EET]    155B config
[2020-03-15 21:34:13 EET]      0B data/
[2020-03-15 21:34:13 EET]      0B index/
[2020-03-15 21:34:13 EET]      0B keys/
[2020-03-15 21:34:13 EET]      0B snapshots/

That’s It!

Tag(s): restic, minio

20.04 releases branches created

Make sure you commit anything you want to end up in the 20.04 releases to them

We're already past the dependency freeze.

The Feature Freeze and Beta is this Thursday, 19 March.

More interesting dates
April 2: 20.04 RC (20.03.90) Tagging and Release
April 16: 20.04 Tagging
April 23: 20.04 Release

https://community.kde.org/Schedules/Applications/20.04_Release_Schedule

Tuesday, 10 March 2020

OMEMO Specification Sprint

The past weekend some members of the XMPP community gathered in DĂźsseldorf to work on the next iteration of the OMEMO Encryption Specification. All of us agree that the result – version 0.4 of XEP-0384 – is a huge step forward and better than ever!

On Saturday morning we met up at the Chaosdorf, a local hacker space whose members kindly hosted the sprint. Huge thanks to them for having us!

Prior to the sprint we had collected a list of topics we wanted to discuss. Among the more urgent ones were a proper specification of OMEMO for group chats, support for encrypting extension elements other than the body, as well as clarification on how to implement OMEMO without having to use libsignal. While the latter was technically already possible, having clear written documentation on how to do it is very important.

We spent most of the first day discussing several changes, features and problems and later started writing down the solutions we found. In between – in true DĂźsseldorf fashion – we snacked on some Onigiri and later went for some nice Ramen together. Saturday afternoon we started working in smaller groups on different parts of the specification. I’m amazed by the know-how and technical understanding that people brought to the table!

On the second day I had to leave relatively early after lunchtime due to family commitments, so I could only follow the remaining development of the XEP via git commits on the train.

Apart from further clarification, the updated spec now contains some additional features and tweaks. It is now possible to encrypt near arbitrary contents of messages with the help of Stanza Content Encryption. OMEMO now defines its own SCE profile. This enables workflows like fully end-to-end encrypted read markers and reactions. Thanks to Marvin and Klaus, the specification now also contains a section about how to opt-out of OMEMO encryption, both completely as well as on a per-conversation basis. Now you no longer have to manually disable OMEMO for that one contact on EVERY device you own.

The biggest part of the discussions went into properly specifying the cryptographic primitives for use with the Double Ratchet Algorithm. Tim and Andy did a great job of describing how to use hash functions and cipher algorithms so that OMEMO can be re-implemented without having to rely on libsignal. Klaus and Marvin figured out some sane rules that help to decide when a device becomes active / inactive / stale. This should preserve the cryptographic guarantees of the encryption even if you don’t use one of your devices for a longer time.

Daniel properly described the workflow of recovering from broken sessions, which should improve OMEMO session stability. He also defined the exact form of OMEMO-related XML elements. One notable feature from a user’s perspective is human-readable labels for identity keys. This should make it easier for you to distinguish keys from one another.

I’m really excited about the changes and can’t wait to see the first implementations in the real world!

One thing that’s left to do for now is to determine a smooth upgrade path. Clients will probably have to use both the new and old OMEMO in parallel for some time, as the changes are not backwards compatible. This would mean that we cannot immediately benefit from stanza content encryption and are bound to body-only encryption for some more time.

Monday, 09 March 2020

20.04 releases dependency freeze this Thursday

Next interesting dates:

March 12: 20.04 Dependency Freeze

March 19: 20.04 Freeze and Beta (20.03.80) tag & release

Saturday, 07 March 2020

Real-time communication and collaboration in a file sync & share environment - An introduction to Nextcloud Talk


At the beginning of this year I gave a presentation about Nextcloud Talk twice: first at the annual CS3 conference in Copenhagen, and just one week later at FOSDEM in Brussels. Nextcloud Talk provides a full-featured real-time communication platform. Completely Free Software, self-hosted and nicely integrated with all the other aspects of Nextcloud.


(This blog post contains some presentation slides; you can see them here.)

Saturday, 29 February 2020

Diving into the world of Ring Fit Adventure

Continuing with my Ring Fit Adventure adventure1, these are the first full two weeks of working out with Ring Fit Adventure.

My plan is to work out every work day, but this weekend I could not stop myself from trying out a few more mini-games. On the other hand, I skipped two days, but felt bad and missed the routine.

Anyway, onwards with the adventure²! :D

Adventure mode

These weeks, according to the heart sensor, I am getting a light to moderate workout, which given that I seem to have caught a bit of a cold, is just about right. I also continue to stick to the game’s suggestion when to stop training for the day.

The difficulty level is still at 16. But I may ramp it up a bit next week, when I feel better.

World 3

On Monday I started with World 3 and so far it brought several new elements, keeping everything still fresh:

  • (semi-)optional side-quest
  • more mini-games – it seems through the Adventure mode the game plans to gradually introduce the player to all the mini-games, which is a neat trick
  • NPCs
  • shops for equipment, consumable items and ingredients
  • equipment – changes stats
  • consumable items – regenerates health, hints at other effects in the future
  • ingredients and recipes to create consumables
  • early signs of a plot twist
  • new battle skills

World 4

This world provided a new hurdle that I needed to overcome with a new movement skill – which I had to obtain by training (surprise there!).

In addition to a nice story loop (and plenty of puns again), again there were new things introduced:

  • more ingredients and recipes – now they have additional effects like different attack buffs, better drops, or easier travel
  • another shop with new equipment and consumables – it seems like from now on there will be a new shop with new items in every world
  • optional side-quests of different kinds – one including a teleport
  • even more new battle skills
  • enemies with healing skills
  • healing skill
  • new movement skill

I even revisited World 3 and did some side-quests there. It seems like side-quests will pop up in previous worlds as one progresses to new ones, which means revisiting previous worlds will be needed for a 100% run; they are also rewarded with new skills and items, which is always neat.

General thoughts

So far, every world brought a new surprise, new gaming techniques and skills to master, at a very good pace. It seems to me it should not be too fast for newbies, but also does not seem to go too slow for veteran gamers.

What really needs praise is the level design. It might not be apparent at first, but once you start paying attention, you notice that it introduces not just great timing between slower and more intensive workouts, but also changes the types of workout either explicitly within the level or implied through the enemy choices and unlocking of new attack skills. If you look closely, you will even see that several levels have alternative paths.

Finally, I am honestly very positively surprised at the quality of the RPG elements in this game! Of course it is nowhere near D&D complexity, but it is much deeper and well made than it looks on face value. Again, easy to grasp, but still just meaty enough to keep veterans also engaged.

There are only two critiques I have at this stage:

  • I would like the alarm to be done in a better way (but I do not have a great suggestion either); and
  • Some of the skills/workouts are two-part – e.g. Bow Pull, where for the first part you pull with one arm, and in the second part with the other. With these, if you defeat an enemy before finishing the full workout, you basically train just one side of your body asymmetrically to the other. A simple fix would be to switch which side comes first every now and again.

Tips

If you want to target specific muscle-groups or do certain types of workouts, you can select the Sets tab in the Set Skills menu and there you will find pre-sets that target e.g. legs, or are good for posture, or concentrate on core muscles. I find it great that apart from just min-maxing, the game gives you a really easy way to customise your adventure play-through also to fit your needed/wanted workout best.

Do not skip the pre-workout and the post-workout stretching – these are vital for a healthy workout. And a really cool thing Ring Fit Adventure does is that the post-workout/cooldown stretching varies depending on which muscle groups you trained during that session.

Quick Play and Custom mode

I also noticed that in Quick Play mode and Custom mode, all the workout skills are already present from the start, so if the Adventure mode2 does not appeal to you, the game does not force you to unlock them for your custom training sets.

After trying out a tiny bit the workout options outside the main Adventure mode, here is what I think of them.

Simple workouts seem to be basically in the gist of how many repetitions can you do in a given amount of time, and do not appeal to me much. Perhaps they are fun if you want to compete with friends in an (off the) couch co-op mode.

Minigames I find quite fun and are as entertaining as they are challenging. It may be that the novelty will wear off, but for now I am enjoying them quite a bit.

Workout Sets that target specific muscles or muscle groups are actually good and still remain quite fun, so it seems like a good choice for when you want to concentrate a bit on just one part of the body, core muscles, posture, or endurance.

Custom mode lets you assemble your own sets of workouts from the whole range of workout skills. For now I can just say they are super easy to set up and select, also from other users on the same system. I imagine these become useful later on, when you want to have more control of what you want to train that day.

Next time: first month or so.

hook out → still unsure whether I like Ring or Tipp better … although Dracaux is also growing on me slowly


  1. Ring Fit Adventure² for short ;) 

  2. I have to say, so far I am having a blast with Adventure mode though! 

Thursday, 27 February 2020

Diving into the world of Ring Fit Adventure

Continuing with my Ring Fit Adventure adventure1, these are the first full two weeks of working out with Ring Fit Adventure.

My plan is to work out every work day, but this weekend I could not stop myself from trying out a few more mini-games. On the other hand, I skipped two days, but felt bad and missed the routine.

Anyway, onwards with the adventure²! :D

Adventure mode

These weeks, according to the heart sensor, I am getting a light to moderate workout, which, given that I seem to have caught a bit of a cold, is just about right. I also continue to stick to the game’s suggestion when to stop training for the day.

The difficulty level is still at 16. But I may ramp it up a bit next week, when I feel better.

World 3

On Monday I started with World 3 and so far it brought several new elements, keeping everything still fresh:

  • (semi-)optional side-quest
  • more mini-games – it seems through the Adventure mode the game plans to gradually introduce the player to all the mini-games, which is a neat trick
  • NPCs
  • shops for equipment, consumable items and ingredients
  • equipment – changes stats
  • consumable items – regenerate health, with hints at other effects in the future
  • ingredients and recipes to create consumables
  • early signs of a plot twist
  • new battle skills

World 4

This world provided a new hurdle that I needed to overcome with a new movement skill, which I had to obtain by training (surprise there!).

In addition to a nice story loop (and plenty of puns again), again there were new things introduced:

  • more ingredients and recipes – now they have additional effects like different attack buffs, better drops, or easier travel
  • another shop with new equipment and consumables – it seems like from now on there will be a new shop with new items in every world
  • optional side-quests of different kinds – one including a teleport
  • even more new battle skills
  • enemies with healing skills
  • healing skill
  • new movement skill

I even revisited World 3 and did some side-quests there. It seems like side-quests will pop up in previous worlds as one progresses to new ones, which means revisiting previous worlds will be needed for a 100% run. These revisits are also rewarded with new skills and items, which is always neat.

General thoughts

So far, every world brought a new surprise, new gaming techniques and skills to master, at a very good pace. It seems to me it should not be too fast for newbies, but also does not seem to go too slow for veteran gamers.

What really needs praise is the level design. It might not be apparent at first, but once you start paying attention, you notice that it introduces not just great timing between slower and more intensive workouts, but also changes the types of workout either explicitly within the level or implied through the enemy choices and unlocking of new attack skills. If you look closely, you will even see that several levels have alternative paths.

Finally, I am honestly very positively surprised at the quality of the RPG elements in this game! Of course it is nowhere near D&D complexity, but it is much deeper and well made than it looks on face value. Again, easy to grasp, but still just meaty enough to keep veterans also engaged.

There are only two critiques I have at this stage:

  • I would like the alarm to be done in a better way (but I do not have a great suggestion either); and
  • Some of the skills/workouts are two-part – e.g. Bow Pull, where for the first part you pull with one arm, and in the second part with the other. With these, if you defeat an enemy before you did your full workout, you basically just trained just one side of your body asymmetrically to the other. A simple fix would be to switch which is the first side every now and again.

Tips

Target specific muscle groups

If you want to target specific muscle groups or do certain types of workouts, you can select the Sets tab in the Set Skills menu and there you will find pre-sets that target e.g. legs, or are good for posture, or concentrate on core muscles. I find it great that apart from just min-maxing, the game gives you a really easy way to customise your adventure play-through also to fit your needed/wanted workout best.

Stretching is important

Do not skip the pre-workout and the post-workout stretching – these are vital for a healthy workout. And a really cool thing Ring Fit Adventure does is that the post-workout/cooldown stretching varies depending on which muscle groups you trained during that session.

Quick Play and Custom mode

I also noticed that in Quick Play mode and Custom mode, all the workout skills are already present from the start, so if the Adventure mode2 does not appeal to you, the game does not force you to unlock them for your custom training sets.

After trying out a tiny bit the workout options outside the main Adventure mode, here is what I think of them.

Simple workouts seem to boil down to how many repetitions you can do in a given amount of time, and do not appeal to me much. Perhaps they are fun if you want to compete with friends in an (off the) couch co-op mode.

Minigames I find quite fun and are as entertaining as they are challenging. It may be that the novelty will wear off, but for now I am enjoying them quite a bit.

Workout Sets that target specific muscles or muscle groups are actually good and still remain quite fun, so it seems like a good choice for when you want to concentrate a bit on just one part of the body, core muscles, posture, or endurance.

Custom mode lets you assemble your own sets of workouts from the whole range of workout skills. For now I can just say they are super easy to set up and select, also from other users on the same system. I imagine these become useful later on, when you want to have more control of what you want to train that day.

Next time: first month or so.

hook out → still unsure whether I like Ring or Tipp better … although Dracaux is also growing on me slowly


  1. Ring Fit Adventure² for short ;) 

  2. I have to say, so far I am having a blast with Adventure mode though! 

Tuesday, 25 February 2020

How to Implement a XEP for Smack.

Smack is a FLOSS XMPP client library for Java and Android app development. It takes away much of the burden a developer of a chat application would normally have to carry, so the developer can spend more time working on nice stuff like features instead of having to deal with the protocol stack.

Many (80+ and counting) XMPP Extension Protocols (XEPs) are already implemented in Smack. Today I want to bring you along with me and add support for one more.

What Smack does very well is to follow the open-closed principle of software architecture. That means that while Smack’s classes are closed for modification by the developer, it is pretty easy to extend Smack to add support for custom features. If Smack doesn’t fit your needs, don’t change it, extend it!

The most important class in Smack is probably the XMPPConnection, as this is where messages come from and go to. However, even more important for the developer is what is being sent.

XMPP’s strength comes from the fact that arbitrary XML elements can be exchanged by clients and servers. Heck, the server doesn’t even have to understand what two clients are sending each other. That means that if you need to send some form of data from one device to another, you can simply use XMPP as the transport protocol, serialize your data as XML elements with a namespace that you control and send it off! It doesn’t matter which XMPP server software you choose, as the server more or less just forwards the data from the sender to the receiver. Awesome!

So let’s see how we can extend Smack to add support for a new feature without changing (and therefore potentially breaking) any existing code!

For this article, I chose XEP-0428: Fallback Indication as an example protocol extension. The goal of Fallback Indication is to explicitly mark <body/> elements in messages as fallback. For example some end-to-end encryption mechanisms might still add a body with an explanation that the message is encrypted, so that older clients that cannot decrypt the message due to lack of support still display the explanation text instead. This enables the user to switch to a better client 😛 Another example would be an emoji in the body as fallback for a reaction.

XEP-0428 does this by adding a fallback element to the message:

<message from="alice@example.org" to="bob@example.net" type="chat">
  <fallback xmlns="urn:xmpp:fallback:0"/>  <-- THIS HERE
  <encrypted xmlns="urn:example:crypto">Rgreavgl vf abg n irel ybat
gvzr nccneragyl.</encrypted>
  <body>This message is encrypted.</body>
</message>

If a client or server encounters such an element, it can be certain that the body of the message is intended to be a fallback for legacy clients and act accordingly. So how to get this feature into Smack?

After the XMPPConnection, the most important types of classes in Smack are the ExtensionElement interface and the ExtensionElementProvider class. The latter is responsible for deserializing, i.e. parsing, incoming XML into an object of the former type.

The ExtensionElement is itself an empty interface in that it does not provide anything new, but it is composed of a hierarchy of other interfaces from which it inherits some methods. One notable ancestor is NamedElement; more on that in just a second. If we start our XEP-0428 implementation by creating a class that implements ExtensionElement, our IDE would create this class body for us:

package tk.jabberhead.blog.wow.nice;

import org.jivesoftware.smack.packet.ExtensionElement;
import org.jivesoftware.smack.packet.XmlEnvironment;

public class FallbackIndicationElement implements ExtensionElement {
    
    @Override
    public String getNamespace() {
        return null;
    }

    @Override
    public String getElementName() {
        return null;
    }

    @Override
    public CharSequence toXML(XmlEnvironment xmlEnvironment) {
        return null;
    }
}

The first thing we should do is to change the return type of the toXML() method to XmlStringBuilder, as that is more performant and gains us a nice API to work with. We could also leave it as is, but it is generally recommended to return an XmlStringBuilder instead of a boring old CharSequence.

Secondly we should take a look at the XEP to identify what to return in getNamespace() and getElementName().

<fallback xmlns="urn:xmpp:fallback:0"/>
[   ^    ]      [        ^          ]
element name          namespace

In XML, the part right after the opening bracket is the element name. The namespace follows as the value of the xmlns attribute. An element that has both an element name and a namespace is called fully qualified. That’s why ExtensionElement inherits from FullyQualifiedElement. In contrast, a NamedElement only has an element name, but no explicit namespace. In good object-oriented manner, Smack’s ExtensionElement inherits from FullyQualifiedElement, which in turn inherits from NamedElement but also introduces the getNamespace() method.

So let’s turn our new knowledge into code!

package tk.jabberhead.blog.wow.nice;

import org.jivesoftware.smack.packet.ExtensionElement;
import org.jivesoftware.smack.packet.XmlEnvironment;

public class FallbackIndicationElement implements ExtensionElement {
    
    @Override
    public String getNamespace() {
        return "urn:xmpp:fallback:0";
    }

    @Override
    public String getElementName() {
        return "fallback";
    }

    @Override
    public XmlStringBuilder toXML(XmlEnvironment xmlEnvironment) {
        return null;
    }
}

Hm, now what about this toXML() method? At this point it makes sense to follow good old test-driven development practices and create a JUnit test case that verifies the correct serialization of our element.

package tk.jabberhead.blog.wow.nice;

import static org.jivesoftware.smack.test.util.XmlUnitUtils.assertXmlSimilar;
import org.jivesoftware.smackx.pubsub.FallbackIndicationElement;
import org.junit.jupiter.api.Test;

public class FallbackIndicationElementTest {

    @Test
    public void serializationTest() {
        FallbackIndicationElement element = new FallbackIndicationElement();

        assertXmlSimilar("<fallback xmlns=\"urn:xmpp:fallback:0\"/>",
element.toXML());
    }
}

Now we can tweak our code until the output of toXML() is just right, and we can be sure that if at some point someone starts messing with the code, the test will inform us of any breakage. So what now?

Well, we said it is better to use XmlStringBuilder instead of CharSequence, so let’s create an instance. Oh! XmlStringBuilder can take an ExtensionElement as a constructor argument! Let’s do it! What happens if we return new XmlStringBuilder(this); and run the test case?

<fallback xmlns="urn:xmpp:fallback:0"

Almost! The test fails, but the builder already constructed most of the element for us. It prints an opening bracket, followed by the element name, and adds an xmlns attribute with our namespace as value. This is typically the “head” of any XML element. What it forgot is to close the element. Let’s see… Oh, there’s a closeElement() method that again takes our element as its argument. Let’s try it out!

<fallback xmlns="urn:xmpp:fallback:0"</fallback>

Hm, this doesn’t look right either. It’s not even valid XML! (ノಠ益ಠ)ノ彡┻━┻ Normally you’d use such a sequence to close an element which contained some child elements, but this one is an empty element. Oh, there it is! closeEmptyElement(). Perfect!

<fallback xmlns="urn:xmpp:fallback:0"/>

With that, our element implementation is complete:
package tk.jabberhead.blog.wow.nice;

import org.jivesoftware.smack.packet.ExtensionElement;
import org.jivesoftware.smack.packet.XmlEnvironment;

public class FallbackIndicationElement implements ExtensionElement {
    
    @Override
    public String getNamespace() {
        return "urn:xmpp:fallback:0";
    }

    @Override
    public String getElementName() {
        return "fallback";
    }

    @Override
    public XmlStringBuilder toXML(XmlEnvironment xmlEnvironment) {
        return new XmlStringBuilder(this).closeEmptyElement();
    }
}

We can now serialize our ExtensionElement into valid XML! At this point we could start sending around FallbackIndications to all our friends and family by adding it to a message object and sending that off using the XMPPConnection. But what is sending without receiving? For this we need to create a custom ExtensionElementProvider for our FallbackIndicationElement. So let’s start.

package tk.jabberhead.blog.wow.nice;

import org.jivesoftware.smack.packet.XmlEnvironment;
import org.jivesoftware.smack.provider.ExtensionElementProvider;
import org.jivesoftware.smack.xml.XmlPullParser;

public class FallbackIndicationElementProvider
extends ExtensionElementProvider<FallbackIndicationElement> {
    
    @Override
    public FallbackIndicationElement parse(XmlPullParser parser,
int initialDepth, XmlEnvironment xmlEnvironment) {
        return null;
    }
}

Normally, implementing the deserialization part in the form of an ExtensionElementProvider is tiring enough for me to always do it last, but luckily this is not the case with Fallback Indications. Every FallbackIndicationElement looks exactly the same. There are no special attributes or – shudder – nested named child elements that need special treatment.

Our implementation of the FallbackIndicationElementProvider looks simply like this:

package tk.jabberhead.blog.wow.nice;

import org.jivesoftware.smack.packet.XmlEnvironment;
import org.jivesoftware.smack.provider.ExtensionElementProvider;
import org.jivesoftware.smack.xml.XmlPullParser;

public class FallbackIndicationElementProvider
extends ExtensionElementProvider<FallbackIndicationElement> {
    
    @Override
    public FallbackIndicationElement parse(XmlPullParser parser,
int initialDepth, XmlEnvironment xmlEnvironment) {
        return new FallbackIndicationElement();
    }
}

Very nice! Let’s finish the element part by writing another JUnit test that makes sure our provider does as it should. Obviously we have done that before writing any code, right? We can simply put this test method into the same test class as the serialization test.

    @Test
    public void deserializationTest()
throws XmlPullParserException, IOException, SmackParsingException {
        String xml = "<fallback xmlns=\"urn:xmpp:fallback:0\"/>";
        FallbackIndicationElementProvider provider =
new FallbackIndicationElementProvider();
        XmlPullParser parser = TestUtils.getParser(xml);

        FallbackIndicationElement element = provider.parse(parser);

        assertEquals(new FallbackIndicationElement(), element);
    }

Boom! Working, tested code!

But how does Smack learn about our shiny new FallbackIndicationElementProvider? Internally Smack uses a Manager class to keep track of registered ExtensionElementProviders to choose from when processing incoming XML. Spoiler alert: Smack uses Manager classes for everything!

If we have no way of modifying Smack’s code base, we have to manually register our provider by calling

ProviderManager.addExtensionProvider("fallback", "urn:xmpp:fallback:0",
new FallbackIndicationElementProvider());

Element providers that are part of Smack’s codebase, however, are registered using a providers.xml file instead, but the concept stays the same.
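For illustration only – the exact file layout is Smack’s convention and should be treated as an assumption here, while the package and class names are the ones we chose above – such a providers file entry might look roughly like this:

```xml
<smackProviders>
    <extensionProvider>
        <elementName>fallback</elementName>
        <namespace>urn:xmpp:fallback:0</namespace>
        <className>tk.jabberhead.blog.wow.nice.FallbackIndicationElementProvider</className>
    </extensionProvider>
</smackProviders>
```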

Now when receiving a stanza containing a fallback indication, Smack will parse said element into an object that we can acquire from the message object by calling

FallbackIndicationElement element = message.getExtension("fallback",
"urn:xmpp:fallback:0");

You should have noticed by now that the element name and namespace are referred to in a number of places, so it makes sense to replace all the occurrences with references to a constant. We will put these into FallbackIndicationElement where they are easy to find. Additionally we should provide a handy method to extract fallback indication elements from messages.

...

public class FallbackIndicationElement implements ExtensionElement {
    
    public static final String NAMESPACE = "urn:xmpp:fallback:0";
    public static final String ELEMENT = "fallback";

    @Override
    public String getNamespace() {
        return NAMESPACE;
    }

    @Override
    public String getElementName() {
        return ELEMENT;
    }

    ...

    public static FallbackIndicationElement fromMessage(Message message) {
        return message.getExtension(ELEMENT, NAMESPACE);
    }
}

Did I say Smack uses Managers for everything? Where is the FallbackIndicationManager then? Well, let’s create it!

package tk.jabberhead.blog.wow.nice;

import java.util.Map;
import java.util.WeakHashMap;

import org.jivesoftware.smack.Manager;
import org.jivesoftware.smack.XMPPConnection;

public class FallbackIndicationManager extends Manager {

    private static final Map<XMPPConnection, FallbackIndicationManager>
INSTANCES = new WeakHashMap<>();

    public static synchronized FallbackIndicationManager
getInstanceFor(XMPPConnection connection) {
        FallbackIndicationManager manager = INSTANCES.get(connection);
        if (manager == null) {
            manager = new FallbackIndicationManager(connection);
            INSTANCES.put(connection, manager);
        }
        return manager;
    }

    private FallbackIndicationManager(XMPPConnection connection) {
        super(connection);
    }
}

Woah, what happened here? Let me explain.

Smack uses Managers to provide the user (the developer of an application) with easy access to functionality that the user expects. In order to use some feature, the first thing the user does is to acquire an instance of the respective Manager class for their XMPPConnection. The returned instance is unique for the provided connection, meaning a different connection would get a different instance of the manager class, but the same connection will get the same instance anytime getInstanceFor(connection) is called.
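This per-connection singleton pattern can be sketched with plain Java. Connection and DemoManager below are hypothetical toy classes, not Smack’s API; they just show how a WeakHashMap keeps exactly one manager per connection without leaking memory:

```java
import java.util.Map;
import java.util.WeakHashMap;

// Toy stand-in for an XMPP connection (not a Smack class).
class Connection { }

class DemoManager {
    // Weak keys: once a Connection is garbage collected,
    // its manager entry disappears with it.
    private static final Map<Connection, DemoManager> INSTANCES = new WeakHashMap<>();

    public static synchronized DemoManager getInstanceFor(Connection connection) {
        DemoManager manager = INSTANCES.get(connection);
        if (manager == null) {
            manager = new DemoManager(connection);
            INSTANCES.put(connection, manager);
        }
        return manager;
    }

    private DemoManager(Connection connection) { }
}

public class ManagerPatternDemo {
    public static void main(String[] args) {
        Connection a = new Connection();
        Connection b = new Connection();
        // Same connection, same manager instance:
        System.out.println(DemoManager.getInstanceFor(a) == DemoManager.getInstanceFor(a));
        // Different connection, different manager instance:
        System.out.println(DemoManager.getInstanceFor(a) == DemoManager.getInstanceFor(b));
    }
}
```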

Now what does the user expect from the API we are designing? Probably being able to send fallback indications and being notified whenever we receive one. Let’s do sending first!

    ...

    private FallbackIndicationManager(XMPPConnection connection) {
        super(connection);
    }

    public MessageBuilder addFallbackIndicationToMessage(
MessageBuilder message, String fallbackBody) {
        return message.setBody(fallbackBody)
                .addExtension(new FallbackIndicationElement());
    }

Easy!

Now, in order to listen for incoming fallback indications, we have to somehow tell Smack to notify us whenever a FallbackIndicationElement comes in. Luckily there is a rather nice way of doing this.

    ...

    private FallbackIndicationManager(XMPPConnection connection) {
        super(connection);
        registerStanzaListener();
    }

    private void registerStanzaListener() {
        StanzaFilter filter = new AndFilter(StanzaTypeFilter.MESSAGE, 
                new StanzaExtensionFilter(FallbackIndicationElement.ELEMENT, 
                        FallbackIndicationElement.NAMESPACE));
        connection().addAsyncStanzaListener(stanzaListener, filter);
    }

    private final StanzaListener stanzaListener = new StanzaListener() {
        @Override
        public void processStanza(Stanza packet) 
throws SmackException.NotConnectedException, InterruptedException,
SmackException.NotLoggedInException {
            Message message = (Message) packet;
            FallbackIndicationElement fallbackIndicator =
FallbackIndicationElement.fromMessage(message);
            String fallbackBody = message.getBody();
            onFallbackIndicationReceived(message, fallbackIndicator,
fallbackBody);
        }
    };

    private void onFallbackIndicationReceived(Message message,
FallbackIndicationElement fallbackIndicator, String fallbackBody) {
        // do something, eg. notify registered listeners etc.
    }

Now that’s nearly it. One last, very important thing is left to do. XMPP is known for its extensibility (for better or worse). If your client supports some feature, it is a good idea to announce this somehow, so that the other end knows about it. That way features can be negotiated, so that the sender doesn’t try to use some feature that the other client doesn’t support.

Features are announced by using XEP-0115: Entity Capabilities, which is based on XEP-0030: Service Discovery. Smack supports this using the ServiceDiscoveryManager. We can announce support for Fallback Indications by letting our manager call

ServiceDiscoveryManager.getInstanceFor(connection)
        .addFeature(FallbackIndicationElement.NAMESPACE);

somewhere, for example in its constructor. Now the world knows that we know what Fallback Indications are. We should, however, also provide our users with the possibility to check if their contacts support that feature as well! So let’s add a method for that to our manager!

    public boolean userSupportsFallbackIndications(EntityBareJid jid) 
            throws XMPPException.XMPPErrorException,
SmackException.NotConnectedException, InterruptedException, 
            SmackException.NoResponseException {
        return ServiceDiscoveryManager.getInstanceFor(connection())
                .supportsFeature(jid, FallbackIndicationElement.NAMESPACE);
    }

Done!

I hope this little article brought you some insights into the XMPP protocol and especially into the development process of protocol libraries such as Smack, even though the demonstrated feature was not very spectacular.

Quick reminder that the next Google Summer of Code is coming soon and the XMPP Standards Foundation got accepted 😉
Check out the project ideas page!

Happy Hacking!

Monday, 17 February 2020

Smack: Some more busy nights and 12 bytes of IV

In the last months I stayed up late some nights, so I decided to add some additional features to Smack.

Among the additions is support for some new XEPs, namely:

  • XEP-0249: Direct MUC Invitations
  • XEP-0422: Message Fastening
  • XEP-0424: Message Retraction
  • XEP-0420: Stanza Content Encryption

I also started working on an implementation of XEP-0425: Message Moderation, but that one is not yet finished and needs more work.

Direct MUC invitations are a method to invite users to a group chat. Smack already had support for another similar mechanism, but this one is recommended by the XMPP Compliance Suites 2020.

Message Fastening is a generalized mechanism to add information to messages. That might be a reaction, e.g. a thumbs-up added to a previous message.

Message Retraction is used to retract previously sent messages. Internally it is based on Message Fastening.

The Stanza Content Encryption pull request only teaches Smack what SCE elements are, but it doesn’t yet teach it how to use them. That is partly due to no E2EE specification actually using them yet. That will hopefully change soon 😉

Anu brought up the fact that the OMEMO XEP is not totally clear on the length of initialization vectors used for message encryption. Historically most clients use a 16-byte IV, while normally you would want to use 12 bytes. Apparently some AES-GCM libraries on iOS only support a 12-byte length, so using 12 bytes is definitely desirable. Most OMEMO implementations already support receiving 12-byte as well as 16-byte IVs.

That’s why Smack will soon also start sending OMEMO messages with a 12-byte IV.
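Not Smack’s actual OMEMO code, but a minimal, self-contained sketch of what AES-GCM encryption with a 12-byte IV looks like in plain JDK crypto (the class name and message are made up for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class GcmIvDemo {
    public static void main(String[] args) throws Exception {
        // Fresh 128-bit AES key
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();

        // 12-byte (96-bit) IV: the size GCM uses directly as counter base;
        // other IV lengths are first hashed internally.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        // Encrypt with a 128-bit authentication tag
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal("hello".getBytes(StandardCharsets.UTF_8));

        // Decrypt with the same key and IV
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] plaintext = cipher.doFinal(ciphertext);
        System.out.println(new String(plaintext, StandardCharsets.UTF_8));
    }
}
```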

Friday, 14 February 2020

Why I am not using Grindr

Grindr is proprietary software that only runs on Android and iOS. It also depends on a centralized server infrastructure that stores data in unencrypted form. The company that hosts Grindr, Amazon, is known for violating users’ privacy. Grindr also sends data to third-party websites and is known for sharing users’ HIV status without their consent. The terms of use and privacy policy are much too long (about 50 pages), so most users don’t read them. Even a user who has read only parts of those terms should become suspicious that Grindr violates their privacy and not use the service. I think that sensitive information should be visible only to the intended recipients, not to the administrators of any servers or routers, which is why I never use Grindr.

To share such sensitive information I could only use copylefted free software such as GNUnet, which has strong privacy guarantees. In GNUnet every communication is end-to-end encrypted and metadata leakage is minimized. This is important today, when secret services such as the NSA kill based on metadata. GNUnet provides social scalability while protecting metadata, and it allows users to have multiple unlinkable egos. It also uses public-key cryptography, which is inherently more secure than using passwords. Systems such as Alovoa still use passwords and depend on email, which is unencrypted by default. Even when used with GPG, email leaks metadata. Since GNUnet is a peer-to-peer network, no centralized servers storing the data of millions of users are needed. It also provides a replacement for centralized identity providers such as Facebook that act as a kind of password store. When you send personal data to Facebook, the NSA gets the data anyway and can abuse it for killing people. Please do not do that.

I Love Free Software on the go: the Replicant operating system in practice

On I Love Free Software Day 2020 I’d like to pay attention to and thank the Replicant operating system, which is in active development and empowers users to use Free Software on the go. As a user with a non-technical background it was an honor and a privilege to attend the Replicant Birds of a […]

I love the hidden champions

A few days ago I sent an announcement email for today’s I Love Free Software Day to a large bunch of people. Most of the remarkably many replies have been positive and a pure joy to read, but some were a bit sceptical and critical. These came from Free Software contributors who are maintaining and helping projects that they think nobody knows and sees – not because these software projects are unused, but because they are small, a building block for other, more popular applications.

When we ask people to participate in #ilovefs (this year for the 10th time in a row!) by expressing their gratitude to contributors of their favourite Free Software projects, many think about the applications they often use and come up with obvious ones like Mozilla’s Firefox and Thunderbird, LibreOffice, their Linux-based distribution, or CMSs like WordPress and Drupal. Not that I think this is not deserved, but what about the projects that actually form the foundations for these popular suites?

I researched a bit on my own system (based on Arch Linux) and checked how many packages some of the aforementioned applications depend on (including dependencies of their dependencies)1:

  • Firefox: 221
  • Thunderbird: 179
  • LibreOffice: 185
  • GIMP: 166
  • Inkscape: 164

Phew! Looking through the list of dependencies, there are dozens of programmes and libraries whose purpose I couldn’t even guess. But they make a big application, be it Firefox, Thunderbird or GIMP, actually possible. Isn’t it a bit unfair that we often don’t see these small (or sometimes huge) projects and the people who take care of them?2

I decided to change that, at least for one day! I analysed which packages are most used as dependencies of other packages (see the PS for a Debian/Ubuntu variant):

for p in $(pacman -Q | cut -d" " -f1); do
  echo "$(pactree -r -l $p | tail -n+2 | sort | uniq | wc -l)–$p–$(pacman -Qi $p | grep "^Description" | grep -oP '(?<=: ).*')"
done | column -t -s'–' | sort -nr

Output:

1621  iana-etc                   /etc/protocols and /etc/services provided by IANA
1620  tzdata                     Sources for time zone and daylight saving time data
1620  linux-api-headers          Kernel headers sanitized for use in userspace
1620  filesystem                 Base Arch Linux files
1619  glibc                      GNU C Library
1349  gcc-libs                   Runtime libraries shipped by GCC
1287  ncurses                    System V Release 4.0 curses emulation library
1267  readline                   GNU readline library
1261  bash                       The GNU Bourne Again shell
...

As you might expect, at the very top I found a lot of GNU and Linux sub-projects, some widely known (like bash), some of which I, as a more-user-than-developer, had never heard of before (like libffi). This alone has been an interesting journey during which I learnt a lot about projects and their maintainers which play a crucial role on my laptop.

In the end, I decided to express my thanks today to the following projects and people:

  • The development team behind acl/attr, which control file access permissions
  • The four initial creators of argon2, Jean-Philippe, Samuel, Dmitry and Daniel, for their password hashing function
  • Jan Dittberner (who is also an FSFE supporter!) and Nathan Neulinger, developers of CrackLib which checks and enforces strong passwords
  • Reuben Thomas and Dom Lachowicz for their enchant project, a wrapper for various spell checking engines
  • The maintainers of glibc and gcc, the GNU C library and compiler
  • The HarfBuzz team which can shape glyphs from Unicode texts
  • The libmnl/netfilter people, who provide tools for network-related operations
  • The contributors of libxml2 for their library and tools that are crucial for the FSFE website
  • Martin Mitáš who more or less alone maintains md4c, a Markdown parser
  • Thomas Dickey who maintains ncurses which provides a text-based interface for the command line
  • Chet Ramey as representative of readline, a programme for interactive user input
  • And last but not least Lasse Collin who maintains xz, a compression tool

But of course, that's only a small fraction of the many interesting Free Software components that enable my daily work. However, if we all do the same and think about the hidden champions – not only on #ILoveFS day but beyond – we can give the humans behind them a bit more of the recognition their invaluable contributions deserve.

Happy I Love Free Software Day everyone! ❤

PS: If you want to try the same with apt (with another separator):

for p in $(dpkg --get-selections | cut -f1 | cut -d":" -f1); do
  echo "$(apt-cache rdepends $p | tr -d '|' | tail -n+3 | sort | uniq | wc -l)*$p*$(apt-cache show $p | grep -m 1 "^Description:" | grep -oP '(?<=: ).*')"
done | column -t -s'*' | sort -nr

  1. During the writing of this blog post I remembered Matthias hugging Peter Stuge, who also contributes to widely used Free Software projects, for #ilovefs 2013.

  2. pactree -l firefox | sort | uniq

Thursday, 13 February 2020

The beginning of my Ring Fit Adventure²

This week I finally got my Ring Fit Adventure for the Nintendo Switch.

For those who do not know it yet, the Nintendo Switch is a hybrid gaming console that can be used either hand-held or docked and connected to a TV. Ring Fit Adventure is a fitness game for it that uses the Joy-Cons' motion controls, together with a custom pilates ring and a leg strap, to track your movement.

The waiting

I actually liked the sound of it from the very start. It sounded like exactly what I needed to trick myself into starting a regular training routine and getting back into shape. I already keep a workout log, and am ashamed to admit it is depressingly empty.

Ideally I would like to properly pick up rowing, but I found that my core muscles are not yet strong enough to keep up with the others in the Ljubljana Rowing Club. At least that is my excuse …

Why did it take me so long? Well, I seriously misjudged how popular Ring Fit Adventure would be and it sold out before I could get my hands on one.

But this week, I finally got mine!

So how did it work out?

Day 1 – Getting to know the beast

The first evening I fired it up just to check it out. Did not even bother changing clothes.

After calibration (during which I got 100% push and pull strength on the ring) and a few questions, the game set my difficulty level to 14. Playing the first level of the first world proved to be pretty easy, but I was definitely moving.

I really liked that the game wants you to stretch before and after, and even rewards you for doing so. On the first level, I got to meet the main protagonist and antagonist, but there was no fighting yet.

When you finish for the day, the game also assesses how hard the workout was for you. As it turned out, it was merely a light workout for me, so it suggested raising the difficulty a bit; the next day I would start on difficulty level 16.

Then I tried the paragliding and robot-smashing mini-games, which were surprisingly fun, but also physically engaging.

First impressions:

  • very well made, both hardware and software
  • this could be fun, yay!
  • mini-games are pretty fun as well

Day 2 – Fighting through the first world

Right, so the first proper day: level 16 difficulty, and I set out to go through the whole first world, including replaying the first level. This time I changed into gym clothes – and, boy, was it a good idea!

This time, I not only needed to run, squeeze and pull the ring, but also had to fight off monsters.

The turn-based combat, where you attack and block by performing exercises like squats, overhead squeezes of the ring, knees-to-chest, and the chair yoga position, was even more engaging than I thought, both workout- and gaming-wise!

In the end, I had fun, felt engaged and challenged, and actually worked up quite a sweat. The boss battle at the end of the first world was pretty intense.

So far Ring Fit Adventure has exceeded my expectations. Let us see if it keeps me engaged.

Day 3 – Things ramp up

The next day I kept the difficulty at 16, which proved to be a good idea. If the day before I got a moderate workout, today I got a substantial one.

What was not a good idea was setting the alarm to an early hour. The vibration is pretty strong and made a lot of noise on the table. I fixed that by changing it to a later hour.

I managed to play through three adventure levels of the second world, one of which was the robot-smashing mini-game, before the game asked me if I'd had enough. I am somewhat ashamed to admit that I had.

What I also noticed is that the cooldown stretching at the end differs depending on which muscles you worked most during your playthrough. That makes a lot of sense, but I honestly did not expect it.

I cannot say my muscles hurt, but I definitely feel them. So far the game seems to set its difficulty really well.

So far the story is not really super gripping, but it is good and funny enough to keep me engaged and moderately entertained.

During the weekend I will probably rest, but next week, I will start it up again.

hook out → feeling fitter by the meter

Tuesday, 11 February 2020

Working with different remotes in git

One of the typical things when working with GitLab/GitHub is working with different git remotes.

This is often because you don't have commit access to the original repository, so you fork it into your own repository and work over there, but you still want to have the original repository around so you can rebase your changes on top of it.

In this blog post we will see how to do that with the Okular repository.

First off, we start by cloning the original repository.

Since we don't know the URL by heart, we go to https://invent.kde.org/kde/okular/ and press the Clone button to get a hint. If we have commit access we can use either URL; otherwise we have to use the https one. For the sake of this post, let's assume we do not have commit access.


$ git clone https://invent.kde.org/kde/okular.git
$ cd okular


OK, at this point we have cloned the upstream Okular repository, and we can see we only have one remote, called origin:


$ git remote -v
origin https://invent.kde.org/kde/okular.git (fetch)
origin https://invent.kde.org/kde/okular.git (push)


Now we want to make some fixes. Since we can't commit to the main repository, we need to fork; for that we press the Fork button on https://invent.kde.org/kde/okular/. Once done, we end up in a fork of Okular under our name, e.g. https://invent.kde.org/aacid/okular.

Now we want to add our fork as a second remote, so we press the blue button (here we use the git@ URL, since we can always commit to our own fork):


$ git remote add aacid_fork git@invent.kde.org:aacid/okular.git
$ git remote -v
aacid_fork git@invent.kde.org:aacid/okular.git (fetch)
aacid_fork git@invent.kde.org:aacid/okular.git (push)
origin https://invent.kde.org/kde/okular.git (fetch)
origin https://invent.kde.org/kde/okular.git (push)


So now we have a remote called aacid_fork that points to our fork's URL. aacid_fork is the name I chose because it's easy to remember, but we could have used any name we wanted.

Now there are several things one may want to do.

Do changes in master and push them to your fork

This is really not the recommended way, but since it's what I do, and it shows how to push from one branch name to another, I'll explain it.

After making the changes and doing the typical git commit, we now have to push them to our aacid_fork, so we do:

git push aacid_fork master:adding_since_to_function

What this does is push the local branch master to the branch named adding_since_to_function of the aacid_fork remote.

Create a branch and then push that to your fork

This is what you should be doing. First create the branch:

git branch adding_since_to_function

and then switch to that branch:

git switch adding_since_to_function

After making the changes and doing the typical git commit, we now have to push them to our aacid_fork, so we do:

git push aacid_fork adding_since_to_function

What this does is push the local branch adding_since_to_function to a branch with the same name on the aacid_fork remote.


Get a branch from someone else's remote and push to it


Sometimes people say "hey, let's work on my branch together", so you need to push not to origin, not to your fork, but to someone else's fork.

Let's say you want to work on joliveira's gsoc2019_numberFormat branch, so you would need to add his remote


$ git remote add joliveira_fork git@invent.kde.org:joliveira/okular.git
$ git remote -v
aacid_fork git@invent.kde.org:aacid/okular.git (fetch)
aacid_fork git@invent.kde.org:aacid/okular.git (push)
joliveira_fork git@invent.kde.org:joliveira/okular.git (fetch)
joliveira_fork git@invent.kde.org:joliveira/okular.git (push)
origin https://invent.kde.org/kde/okular.git (fetch)
origin https://invent.kde.org/kde/okular.git (push)


Then we need to tell git: hey, listen, please go and read the branches that the remote I just added has:

git fetch joliveira_fork

Next we have to tell git to actually give us the gsoc2019_numberFormat branch. There are lots of ways to do that; one that works is:

git checkout --track joliveira_fork/gsoc2019_numberFormat

This will create a local gsoc2019_numberFormat branch from the contents of the remote branch joliveira_fork/gsoc2019_numberFormat, and also "track" it, meaning that if someone else pushes changes to it and you do git pull --rebase while on your local gsoc2019_numberFormat, you'll get them.

After making the changes and doing the typical git commit, we now have to push them to joliveira_fork, so we do:

git push joliveira_fork gsoc2019_numberFormat


What you don't want to do

Don't push to the master branch of your fork. It's weird; some people do it, but it's not really recommended.

Things to remember

A git remote is just another repository. It happens to contain "similar" code because it's a fork, so you can push to it, check out branches from it, etc.

Every time you want to get changes from a remote, remember to git fetch remote_name; otherwise you're still on the "old" snapshot from your last fetch.

When pushing, the syntax is git push remote_name local_branch_name:remote_branch_name.
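These pieces fit together in a self-contained demo. The sketch below uses two throwaway local repositories to stand in for upstream and a fork (all paths and branch names are made up for illustration) and exercises the local_branch_name:remote_branch_name push syntax:

```shell
#!/bin/sh
# Demo of remotes and the local:remote push syntax, using throwaway
# local repositories instead of real GitLab URLs.
set -e
tmp=$(mktemp -d)

# A bare repository plays the role of the upstream project
git init -q --bare "$tmp/upstream"

# Cloning it gives us the usual "origin" remote
git clone -q "$tmp/upstream" "$tmp/work"
cd "$tmp/work"
git config user.email demo@example.org
git config user.name Demo
git checkout -q -b trunk
git commit -q --allow-empty -m "initial commit"
git push -q origin trunk

# A second bare repository plays the role of our fork
git init -q --bare "$tmp/fork"
git remote add my_fork "$tmp/fork"

# Push the local branch "trunk" to a differently named branch on the fork
git push -q my_fork trunk:adding_since_to_function

# List the fork's branches to confirm the push arrived
git ls-remote --heads my_fork
```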

Bonus track: Using git mr

As shown in my previous blog post, you can use git mr to easily download the code of a MR. Let's use Okular's MR #20 as an example: https://invent.kde.org/kde/okular/merge_requests/20.

You can simply do git mr 20 and it will create a local branch named mr/20 with the contents of that MR. Unfortunately, if you want to push changes to it, you still need to use the original remote and branch name, so after the git commit you should do:

git push joliveira_fork mr/20:gsoc2019_percentFormat

Sunday, 09 February 2020

20.04 releases schedule finalized

It is available at the usual place https://community.kde.org/Schedules/release_service/20.04_Release_Schedule

Dependency freeze is in about five weeks (March 12) and feature freeze a week after that, so make sure you start finishing your stuff!

Saturday, 08 February 2020

From socket(2) to .onion with pf(4)

I’ve been rebuilding my IRC bouncer setup and as part of this process I’ve decided to connect to IRC via onion services where possible. This setup isn’t intended to provide anonymity, as once I’m connected I’m going to identify to NickServ anyway. I guess it provides a little protection in that my IP address shouldn’t be visible in the gap between connecting and a cloak activating, but there are so many other ways that my identity could leak.

You might wonder why I even bothered if not for anonymity. There are two reasons:

  1. to learn more about tor(1) and pf(4), and
  2. to figure out how to get non-proxy aware software to talk to onion services.

I often would find examples of socat, torsocks, etc. but none of them seemed to fit with my goal of wanting to use an onion service as if it were just another host on the Internet. By this I mean, with a socket(AF_INET, SOCK_STREAM) that didn’t also affect my ability to connect to other Internet hosts.

Onion services don’t have IP addresses. They have names that look like DNS names but that are not actually in DNS. So the first problem here is that we’re not going to be able to give an onion address to the kernel; it wants an IP address. In my setup I chose 10.10.10.0/24 as a subnet whose IP addresses, when connected to, will actually connect to onion services.

In the torrc file you can use MapAddress to encode these mappings, for example:

MapAddress 10.10.10.10 ajnvpgl6prmkb7yktvue6im5wiedlz2w32uhcwaamdiecdrfpwwgnlqd.onion # Freenode
MapAddress 10.10.10.11 dtlbunzs5b7s5sl775quwezleyeplxzicdoh3cnhm7feolxmkfd42nqd.onion # Hackint
MapAddress 10.10.10.12 awwqg2ishrohngue.onion # 2600net - broken(?)
MapAddress 10.10.10.13 darksci3bfoka7tw.onion # darkscience
MapAddress 10.10.10.14 akeyxc6hie26nlfylwiuyuf3a4tdwt4os7wiz3fsafijpvbgrkrzx2qd.onion # Indymedia

Now when tor(1) is asked to connect to 10.10.10.10 it will map this to the address of Freenode’s onion service, and connect to that instead. The next part of the problem is allowing tor(1) to receive these requests from a non-proxy aware application, in my case ZNC. This setup will also need a network interface to act as the interface to tor(1). A loopback interface will suffice and it’s not necessary to add an IP address to it:

# ifconfig lo1 up

pf is a firewall for OpenBSD that can also perform some other related functions. One such function is called divert-to. Unfortunately, there is also divert-packet, which is completely unrelated. tor(1) supports receiving packets that have been processed by a divert-to rule, and this is often used for routing all traffic from a network through the Tor network. This arrangement is known as a “transparent proxy” because the application is unaware that anything is going on.

In my setup, I’m only routing traffic for specific onion services via the Tor network, but the same concepts are used.

In the torrc:

TransPort 127.0.0.1:1338
TransProxyType pf-divert

In pf.conf(5):

pass in quick on lo1 inet proto tcp all divert-to 127.0.0.1 port 1338
pass out inet proto tcp to 10.10.10.0/24 route-to lo1

and that’s it! I’m now able to connect to 10.10.10.10 from ZNC and pf will divert the traffic to tor.

On names and TLS certificates: I’m using TLS to connect to the onion services, but I’m not validating the certificates. I’ve already verified the server identities because they hold the key for the onion service; the reason I’m using TLS is that I’m presenting a client certificate to the servers (CertFP) to log in to NickServ. The TLS is there for the server’s benefit, while the onion service authentication is for my benefit. You could add entries to your /etc/hosts file with mappings from irc.freenode.org to 10.10.10.10, but it seemed like a bit of a fragile arrangement. If pf or tor stop functioning correctly, then no connection is made; but if the /etc/hosts file were to be rewritten, you’d connect over the Internet with TLS verification effectively disabled, because you’re relying on the onion service to provide that and you’d no longer be using it.
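For reference, the fragile /etc/hosts arrangement described above would look something like this (hostname shown purely for illustration):

```
# /etc/hosts fragment -- the alternative discussed above, NOT used in
# this setup
10.10.10.10 irc.freenode.org
```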

On types of transparent proxy: There are a few different types of transparent proxy supported by tor; pf-divert seemed like the most appropriate one in my case. It’s possible that the natd(8) “protocol” referred to in the NATDPort torrc option is actually talking about divert(4) sockets, which are supported in OpenBSD, and that’s another option, but it’s not clear which would be the preferred way to do it. If I had more time I’d dig into which methods are useful and which are redundant, as removing code is often a good thing to do.

Friday, 07 February 2020

Sway and the Dock station

I just moved permanently from awesome to Sway because I can barely see any difference. Really.

The whole Wayland ecosystem has improved a LOT since the last time I used it. That was last year, as I have given Wayland a try once a year since 2016.

However, I had to ditch a useful daemon, dockd. It automatically disables my laptop screen when I put the laptop in the docking station, but it relies on xrandr.

What to use then?

ACPI events.

The acpid daemon can be configured to listen to ACPI events and trigger your custom scripts. You just have to define which events you are interested in (wildcards are accepted too) and which script acpid should trigger when such an event occurs.

I used acpi_listen to catch the events that get triggered by the physical dock/undock actions:

# acpi_listen
ibm/hotkey LEN0068:00 00000080 00004010
[...]
ibm/hotkey LEN0068:00 00000080 00004011
[...]

Then, I setup an acpid listener by creating the file /etc/acpi/events/dock with the following content:

event=ibm/hotkey
action=/etc/acpi/actions/dock.sh %e

This listener calls my script only when an event of type ibm/hotkey occurs; the script then tells sway to disable or enable the laptop screen based on the action code. Here’s my dock.sh script:

#!/bin/sh

pid=$(pgrep '^sway$')

if [ -z "$pid" ]; then
    logger "sway isn't running. Nothing to do"
    exit
fi

user=$(ps -o uname= -p "$pid")

case "$4" in
  00004010)
    runuser -l $user -c 'SWAYSOCK=/run/user/$(id -u)/sway-ipc.$(id -u).$(pidof sway).sock swaymsg "output LVDS-1 disable"'
    logger "Disabled LVDS-1"
    ;;
  00004011)
    runuser -l $user -c 'SWAYSOCK=/run/user/$(id -u)/sway-ipc.$(id -u).$(pidof sway).sock swaymsg "output LVDS-1 enable"'
    logger "Enabled LVDS-1"
    ;;
esac

Don’t forget to make it executable!

chmod +x /etc/acpi/actions/dock.sh

And then start the acpid daemon:

systemctl enable --now acpid
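To sanity-check the dispatch logic without physically docking and undocking, the case statement can be exercised on its own. Here is a dry-run sketch where swaymsg is replaced by echo (the action codes are the ones captured with acpi_listen above):

```shell
#!/bin/sh
# Dry-run of dock.sh's dispatch: prints the swaymsg command that would
# be run for a given ACPI action code instead of executing it.
dock_action() {
    case "$1" in
        00004010) echo 'swaymsg "output LVDS-1 disable"' ;;
        00004011) echo 'swaymsg "output LVDS-1 enable"' ;;
        *)        echo "ignored: $1" ;;
    esac
}

dock_action 00004010   # dock event
dock_action 00004011   # undock event
```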

Happy docking!

Thursday, 06 February 2020

Need help porting to Go >= 1.12

At KDE we have a GitHub mirror. One of the problems of having a mirror is that people routinely try to propose Pull Requests over there, but no one is watching, so they would go stale, which is not good for anyone.

Wait, no one? Actually, we do have kdeclose, a bot that goes over all Pull Requests and gracefully closes them, suggesting people move the patch over to KDE infrastructure, where we are watching.

The problem is that I'm running that code on Google AppEngine and they are cutting support for the old Go version it's using, so I need someone to help me port the code to a newer Go version.

Can anyone help me?

P.S.: No, I'm not the original author of the code; it's a fork of something else, but that has not been updated either.

Update: This is now done, thanks to Daniele (see first comment). Mega fast, thanks community!

Sunday, 02 February 2020

QCA cleanup spree

The last few weeks I've done quite a bit of QCA cleanup.

Bit of a summary:
* Moved to KDE's GitLab and enabled clazy and clang-tidy continuous integration checks
* Fixed lots of crashes when copying some of the classes; it's not a normal use case, but it's supported by the API, so make it work :)
* Fixed lots of crashes caused by assuming the backend libraries had more features than they actually do (e.g. we thought botan would always support a given crypto algorithm, but some versions don't; now we check whether an algorithm is supported before saying it is)
* Made all the tests succeed :)
* Dropped Qt4 support
* Used override, nullptr (thanks Laurent), several of the "sanity" QT_* defines, etc.
* The botan backend now requires botan2
* Fixed most of the compile warnings

What I probably also did is break the macOS and Windows builds, so if you're using QCA there, you should start testing it and proposing Merge Requests.

Note: My original idea was actually to kill QCA, because when I started looking at it a lot of the code looked a bit fishy, and no one wants fishy crypto code. But then I realized we use it in too many places in KDE, and I'd rather have "fishy crypto code" in one place than in lots of different places; at least this way it's easier to eventually fix it.

Monday, 27 January 2020

The Qt Company is stopping Qt LTS releases. We (KDE) are going to be fine :)

Obvious disclaimer, this is my opinion, not KDE's, not my employer's, not my parents', only mine ;)

Big news today is that Qt Long-term-supported (LTS) releases and the offline installer will become available to commercial licensees only.

Ignoring the upcoming switch to Qt 6 for now, how bad is that for us?

Let's look at some numbers from our friends at repology.

At this point we have two Qt LTS branches going on: Qt 5.9 (5.9.9 since December) and Qt 5.12 (5.12.6 since November).

How many distros ship Qt 5.9.9? Zero. (There are MacPorts and SlackBuilds, but neither seems to provide Plasma packages, so I'm ignoring them.)

How many distros ship Qt 5.12.6? Five: Adélie Linux, Fedora 30, Mageia 7, openSUSE Leap 15.2 and PCLinuxOS (ALT Linux and GNU Guix also do, but they don't seem to ship Plasma). Those are some bigger names (I'd say especially Fedora and openSUSE).

On the other hand, Fedora 28 and 29 ship some 5.12.x version but have not updated to 5.12.6. openSUSE Leap 15.1 has a similar issue: it's stuck on 5.9.7 and did not update to 5.9.9, and so is Mageia 6, which is stuck on Qt 5.9.4.

Ubuntu 19.04, 19.10 and 20.04 all ship some version of Qt 5.12 (LTS), but not the latest one.

On the other hand, a few other "big" distros don't ship a Qt LTS: Arch and Gentoo ship 5.14, our not-distro-distro Neon is on 5.13, and so is Flatpak.

As I see it, the numbers say that while some distros ship the latest LTS release, it's far from all of them, and it looks more like opportunistic use: the LTS branch is followed for a while in the latest release of a distro, but previous releases get abandoned at some point, so the LTS doesn't really seem to be used to its full potential.

What would happen if there was no Qt LTS?

Hard to say, but I think some of the "newer" distros would actually be shipping Qt 5.13 or 5.14, and in my book that's a good thing; moving users forward is always good.

The "already released" distros is different story, since they would obviously not be updating from Qt 5.9 to 5.14, but as we've seen it seems that most of the times they don't really follow the Qt LTS releases to its full extent either.

So all in all, I'm going to say not having Qt LTS releases is not that bad for KDE; we've lived with that for a long time (remember there have only been four Qt LTS releases: 4.8, 5.6, 5.9 and 5.12), so we'll do mostly fine.

But what about Qt 5.15 and Qt 6, you ask!


Yes, this may actually be a problem. If all goes to plan, Qt 5.15 will be released in May and Qt 6.0 in November, which means we will likely get up to Qt 5.15.2 or 5.15.3 and then that's it; we're moving to Qt 6.0.

Obviously KDE will have to move to Qt 6 at some point, but that's going to take a while (as an example, Plasma 5 was released when Qt was at 5.3), so let's say that for a year or two we will still be using Qt 5.15 without any bugfix releases.

That can be OK if Qt 5.15 ends up being a good release, or a problem if it's a bit buggy. If it's buggy, well, then we'll have to figure out what to do, and it'll probably involve some kind of fork somewhere, be it by KDE (Qt already had that for a while in ancient history with qt-copy) or by some other trusted source. But let's hope it doesn't get to that, since it would mean two sets of people fixing bugs in Qt 5.15, The Qt Company's engineers and the rest of the world, and doing the same work twice is not smart.

Sunday, 26 January 2020

git mr: easily downloading gitlab merge requests

With KDE [slowly] moving to GitLab, you will probably find yourself reviewing more GitLab-based patches.

In my opinion the web UI in GitLab is miles better; the fact that it has a "merge this thing" button makes it a game changer.

Now, since we are coming from Phabricator, you have probably used the arc patch DXXX command to download and locally test a patch.

The GitLab web UI has a link named "You can merge this merge request manually using the command line" that, if pressed, tells you to run


git fetch "git@invent.kde.org:sander/okular.git" "patch-from-kde-bug-415012"
git checkout -b "sander/okular-patch-from-kde-bug-415012" FETCH_HEAD


if you want to locally test https://invent.kde.org/kde/okular/merge_requests/80

That is *horrible*

Enter git mr, a very simple script that makes it so you only have to type


git mr 80
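For the curious: GitLab exposes each merge request under refs/merge-requests/&lt;id&gt;/head, so a minimal helper along these lines would do the job (a sketch only; the real git-mr script may work differently):

```shell
#!/bin/sh
# git-mr (sketch): fetch merge request <id> from the remote "origin"
# into a local branch mr/<id> and switch to it.
git_mr() {
    id="$1"
    git fetch origin "merge-requests/$id/head:mr/$id" \
        && git checkout "mr/$id"
}
```

The same two git commands, put into an executable git-mr script that reads the id from "$1", are essentially what makes git mr 80 work.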




P.S.: If you're an Arch Linux user you can get it from the AUR: https://aur.archlinux.org/packages/git-mr

P.P.S.: Unfortunately it does not support pushing, so if you want to push to that MR you'll have to do some work.
