Free Software, Free Society!
Thoughts of the FSFE Community (English)

Friday, 15 October 2021

How does the Integrated Gradients method work?

For artificial intelligence (AI) transparency, and to better shape upcoming policies, we need to better understand the AI’s output. In particular, one may want to understand the role attributed to each input. This is hard, because in neural networks input variables don’t have a single weight that could serve as a proxy for determining their importance with regard to the output. Instead, one has to consider all of the neural network’s weights, which may all be interconnected. Here is how Integrated Gradients does this.

Approaches such as LIME, which was covered in a previous post, try to simplify the problem by locally approximating the neural network model, but the quality of the attributions (the importance of each feature relative to the model output) is hard to assess, because one can’t tell whether incorrect attributions come from problems in the model or from flaws or approximations in the attribution method. Integrated Gradients (IG) seeks to satisfy two desirable axioms for an attribution mechanism:

  1. Sensitivity. If changing one feature changes the classification output, then that feature should have a non-zero attribution. That makes sense, because if a feature change makes the output change, then it must have played a role. For example, if changing only the feature “Age” changes the predicted decision, then “Age” must have played a role in it and therefore its attribution should be non-zero.
  2. Implementation Invariance. The attribution method’s result should not depend on the specifics of the neural network. If two neural networks are equivalent (i.e. they give the same results for the same inputs), the attributions should be the same.

Because computing the gradients of the output with regard to the input is implementation invariant (as $\frac{\partial f}{\partial g} = \frac{\partial f}{ \partial h} \times \frac{\partial h}{ \partial g}$) but does not satisfy Sensitivity (a feature change does not necessarily yield a non-zero gradient for that feature), gradients can’t be used directly for attributions. To provide explanations, IG makes use of a baseline, a reference input for which the predictions are neutral (e.g. the probabilities are close to $1/k$ for classification with $k$ classes), and then accumulates gradients along the path from the baseline to the input. IG needs a neutral baseline so that it is easy to compare against the input and so that the model output at the baseline is as close to zero as possible, which is necessary to consider the attributions as depending only on the inputs.
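A one-dimensional toy function makes the Sensitivity failure concrete (this sketch is my own; a saturating function of this shape is also discussed in the IG paper):

```python
def f(x):
    # Saturating toy model: f(0) = 0, but f is flat (gradient 0) for x >= 1
    return 1 - max(1 - x, 0)

# Baseline x' = 0 yields f(0) = 0; input x = 2 yields f(2) = 1,
# so the feature clearly changed the output ...
h = 1e-6
grad_at_input = (f(2 + h) - f(2)) / h  # numerical gradient at the input

# ... yet the gradient at x = 2 is 0, so plain gradients would assign
# this feature zero attribution, violating Sensitivity.
```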

IG is defined as:

$$IntegratedGrads_i(x) \mathrel{\coloncolonequals} (x_i - x^\prime_i ) \times \int_{\alpha=0}^1 \frac{\partial F(x^\prime + \alpha \times (x - x^\prime))}{\partial x_i}d\alpha$$

where:

  • $x$ is the input for which we want attributions
  • $i$ is a dimension in $x$
  • $x^\prime$ is the baseline
  • $\alpha$ is a coefficient that creates small interpolation steps from $x^\prime$ to $x$
  • $F$ is the neural network

To compute the integral, the Riemann approximation is used in practice, which sums up rectangular portions of the integral:

$$IntegratedGrads_i(x) \mathrel{\coloncolonequals} (x_i - x^\prime_i ) \times \sum_{k=1}^m \frac{\partial F(x^\prime + \frac{k}{m} \times (x - x^\prime))}{\partial x_i} \times \frac{1}{m}$$

where $m$ is the number of steps in the Riemann approximation. The greater $m$, the more accurate the approximation is. The paper states that 20 to 300 steps are usually enough, and that the number should be proportional to the complexity of the network.
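As a sanity check of the formula, here is a minimal pure-Python sketch (the function name is my own) applying the Riemann approximation to $F(x) = x^2$ in one dimension, where the gradient $2x$ is known analytically. For this $F$ with baseline $0$, the attribution should approach $F(x) - F(x^\prime)$ (the completeness property of IG):

```python
def integrated_gradients_1d(grad_f, x, baseline=0.0, m=300):
    # Riemann approximation: average the gradient of F at m points
    # interpolated between the baseline and the input
    total = 0.0
    for k in range(1, m + 1):
        point = baseline + (k / m) * (x - baseline)
        total += grad_f(point)
    return (x - baseline) * total / m

# F(x) = x**2 has gradient 2*x; baseline 0 gives F(baseline) = 0
attr = integrated_gradients_1d(lambda x: 2 * x, x=3.0)
# attr is close to F(3) - F(0) = 9
```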

Integrated Gradients in practice

In PyTorch, this is equivalent to:

import torch


# Example deep learning model
class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.lin1 = torch.nn.Linear(20, 10)
        self.relu = torch.nn.ReLU()
        self.lin2 = torch.nn.Linear(10, 3)

    def forward(self, input):
        return torch.nn.functional.log_softmax(
            self.lin2(self.relu(self.lin1(input))), dim=1
        )


model = Model()

# Generate 50 inputs and baselines with 20 dimensions each
inputs = torch.rand(50, 20)
baseline = torch.zeros_like(inputs)

# Number of steps
m = 20

# Hold the gradients for each step
grads = []
for k in range(1, m + 1):
    # Interpolation from the baseline to the input, made a leaf tensor so
    # that .grad holds the gradient of the output at the interpolated point
    # itself (not scaled by k/m through the chain rule) and so that
    # gradients cannot accumulate across steps
    baseline_input = (baseline + (k / m) * (inputs - baseline)).requires_grad_()
    # Put the interpolated input through the model
    out = model(baseline_input)

    # Get the predicted classes and use them as indexes for which we want
    # attributions
    idx = out.argmax(dim=1).unsqueeze(1)
    # Select the output for each predicted class
    out = out.gather(dim=1, index=idx)

    # Perform backpropagation to generate gradients for the interpolated input
    out.backward(torch.ones_like(out))

    # Append the gradient for each step
    grads.append(baseline_input.grad.detach())


# Stack the list of gradients, compute the mean over the m steps
grads = torch.stack(grads, 0).mean(dim=0)
# Compute attributions
attr = (inputs - baseline) * grads

The captum library (released under the BSD 3-Clause license) provides an easy-to-use implementation of Integrated Gradients.

Tuesday, 12 October 2021

Help gathering resources for how to learn programming

Little spoiler: if all goes well, before the end of the year there will be an illustrated read-aloud book "Ada & Zangemann - a fairy tale about software, skateboards, and ice cream" about Free Software, for children from ~5-6 years old. The book will first be available in German, but I am already working on an English version and may assist with other translations in the future.

In the book there will be a link to a website with further information about Free Software and an important component of this will be the wiki page "How can children learn programming".

The wiki page is at the moment just a first skeleton. Please help to expand it, so that parents whose children are interested in programming can get some help with resources that don't teach it with non-free software. Thank you!

Sunday, 10 October 2021

A Simple OpenPGP API

In this post I want to share how easy it is to use OpenPGP using the Stateless OpenPGP Protocol (SOP).

I talked about the SOP specification and its purpose and benefits already in past blog posts. This time I want to give some in-depth examples of how the API can be used in your application.

There are SOP API implementations available in different languages like Java and Rust. They have in common that they are based around the Stateless OpenPGP Command Line Specification, so they are very similar in form and function.

For Java-based systems, the SOP API was defined in the sop-java library. This module merely contains interface definitions. It is up to the user to choose a library that provides an implementation for those interfaces. Currently the only known implementation is pgpainless-sop based on PGPainless.

The single entry point to the SOP API is the SOP interface (obviously). It provides methods for OpenPGP actions. All we need to get started is an instantiation of this interface:

// This is an ideal candidate for a dependency injection framework!
SOP sop = new SOPImpl(); // provided by pgpainless-sop

Let’s start by generating a secret key for the user Alice:

byte[] key = sop.generateKey()
        .userId("Alice <alice@example.org>")
        .generate()
        .getBytes();

The resulting byte array now contains our OpenPGP secret key. Next, let’s extract the public key certificate, so that we can share it with our contacts.

// public key
byte[] cert = sop.extractCert()
        .key(key) // secret key
        .getBytes();

There we go! Both byte arrays contain the key material in ASCII armored form (which we could disable by calling .noArmor()), so we can simply share the certificate with our contacts.

Let’s actually create an encrypted, signed message. We obviously need our secret key from above, as well as the certificate of our contact Bob.

// get Bob's certificate
byte[] bobsCert = ...

byte[] message = "Hello, World!\n".getBytes(StandardCharsets.UTF_8);

byte[] encryptedAndSigned = sop.encrypt()
        .signWith(key) // sign with our key
        .withCert(cert) // encrypt for us, so that we too can decrypt
        .withCert(bobsCert) // encrypt for Bob
        .plaintext(message)
        .getBytes();

Again, by default this message is ASCII armored, so we can simply share it as a String with Bob.

We can decrypt and verify Bob’s reply like this:

// Bob's answer
byte[] bobsEncryptedSignedReply = ...

ByteArrayAndResult<DecryptionResult> decrypted = sop.decrypt()
        .verifyWithCert(bobsCert) // verify Bob's signature
        .withKey(key) // decrypt with our key
        .ciphertext(bobsEncryptedSignedReply)
        .toByteArrayAndResult();

// Bob's plaintext reply
byte[] plaintext = decrypted.getBytes();
// List of signature verifications
List<Verification> verifications = decrypted.getResult().getVerifications();

Easy! Signing messages and verifying signed-only messages work basically the same way, so I’ll omit examples for them in this post.

As you can see, performing basic OpenPGP operations using the Stateless OpenPGP Protocol is as simple as it gets. And the best part is that the API is defined as an interface, so swapping the backend can simply be done by replacing the SOP object with an implementation from another library. All API usages stay the same.

I want to use this opportunity to encourage YOU the reader: If there is no SOP API available for your language of choice, consider creating one! Take a look at the specification and an API definition like sop-java or the sop Rust crate to get an idea of how to design the API. If you keep your SOP API independent from any backend library it will be easy to swap backends out for another library later in the process.

Let’s make OpenPGP great again! Happy Hacking!

Thursday, 07 October 2021

KDE Gear 21.12 releases schedule finalized

 It is available at the usual place https://community.kde.org/Schedules/KDE_Gear_21.12_Schedule

 
Dependency freeze is in four weeks (November 4) and Feature Freeze a week after that, make sure you start finishing your stuff!

Wednesday, 06 October 2021

why I do not buy the Oculus Quest

Oculus is part of Facebook, a company that does many evil things, including surveillance, censorship and tax avoidance. The Quest cannot be used without a Facebook account, and it runs Android, a nonfree OS. Installing a free OS such as PureOS or the GUIX system seems to be impossible, since the bootloader is most likely locked down. Of course I don’t want to play nonfree games such as VRChat, which most likely spy on the player. By contrast, VSekai is free software built on top of the Godot engine. The Godot engine runs on my Talos II, and most likely it will also run on the Librem 5 and on future hardware based on the Libre-SOC, which I have been contributing to. Cardboard is great if ungoogled.

Thursday, 23 September 2021

Software freedom podcast with Till Jaeger

For this month's software freedom podcast I talked with Till Jaeger. He has been involved with Free Software legal and licensing topics for over two decades. In this episode Till talks about the history and the beginning of enforcing Free Software licences in Germany. Till has worked alongside Harald Welte enforcing the GNU GPL in the first court cases in Germany.

We talk about how Till got involved in Free Software, highlight the short and long term impacts of the first court decisions, about some of the most common misunderstandings of Free Software licensing, as well as the role of the FSFE's legal network in fostering the discussion and knowledge for Free Software legal and licensing topics. I highly appreciate Till for being able to explain complex legal topics, so non-lawyers can understand them.

You can listen to the podcast on the episode website, or best subscribe to the podcast either through the OPUS feed or MP3 feed so you do not miss new episodes or listen to previous episodes.

If you are new to podcasts: I have heard from many people that they enjoy listening to podcasts on their mobile with Antennapod, which you can install through F-Droid.

Saturday, 18 September 2021

PSA: Don't insult or physically threaten people to try to convince them you are right

Sadly I have had to disable comments on my previous blog post because there is a being [that probably passes as human if you look at them] that started with insults, continued with more insults and then graduated to physically threatening me.

 

I always thought it was obvious, but if you insult people or try to cause them physical harm, they are usually less prone to think "oh you're right, you've convinced me" and more prone to think "this human needs to stop harassing me and sort out their problems".

American English speakers: Kiev or Kyiv ?

A while ago there was a merge request created for KGeography asking to change Kiev to Kyiv saying "this is the official transliteration of the name for the city".

But that's not what the default text in KGeography shows: the default text in KGeography is not the official names of places, it is the American English translation of KGeography. That's why it says Poland and not Polska.

So question for you American English speakers, if you wanted to write the name of the capital of Ukraine, would you write Kiev or Kyiv?

Edit: comments blocked because there's a body without a brain that can't behave on the internet.

Thursday, 16 September 2021

How to reach craftsmanship?

I taught myself to play the violin. Some years ago I borrowed my grandmother’s violin and just started fiddling around until I could play some very simple songs like “Happy Birthday” and such. Sincere apologies to my close family at that time; they really had a lot of patience, as it must have sounded horrible. Some time later I played at some concerts of the local YMCA youth group. On a stage!

Still, I can’t read a single note and have no idea of music theory. Tell me to play in “E major” and I would have no clue what you mean. I can merely play by ear.

Me standing in a grain field, playing the violin. Yeah, we totally staged this image 😀

I also taught myself coding. Well, I learned the basics of Java programming in school, but I kept on learning beyond that. My first projects were the typical mess that you’d expect from a beginner who has no idea what they are doing. Later I studied computer science and now I’m just a few credit points away from getting my master’s degree. Yet, university is not the place where you learn to code. They do teach you the basics of how a computer works, what a compiler is and even the theory behind creating your own compilers, but they hardly teach you how to write *good* code.

Sometimes I feel like I code just as I play the violin. By instinct. I’m not following any rules or well defined procedures. I roughly know where I want to go and more or less how to get there. Then I just start and see where it leads me. I wouldn’t say that I write bad code, but I also don’t feel like I got the ultimate understanding of what good code is. Most certainly I wouldn’t describe my coding process as “methodical” or “planned”.

So, my question is, how to learn to write *good* code? How do I acquire the skills to write “professional” code? During my Google Summer of Code projects I found that having a mentor was massively helping me to write cleaner, more concise code. So should I join a company as junior software developer in order to be able to learn from the senior developers? Can I just hike into the mountains and find an old man in a cave who teaches me the ancient art of the JVM?

I tried reading some books. I soaked up the Uncle Bob trilogy (especially “Clean Architecture”), not least because I recognized many of the patterns Martin described in that book from my coding adventures. But books are just books and they cannot answer all the questions one might have.

I thought about attending some software engineering related conferences to listen to talks by the greybeards. But then there is this Covid thing (although I hope that vaccines and such will improve the situation soon-ish).

Now that I’m earning some money by writing code, I feel like I have a duty to elevate my skills from “I know some things about what I’m doing” into proper craftsmanship.

But how do I start? Should I do some courses? Are there any courses that might be fitting?

Let me know if you have any advice or experiences that you think could help me.

Happy Hacking.

Tuesday, 31 August 2021

FSFE information stall on Veganmania 2021

FSFE information stall on day 1
FSFE information stall on day 2
FSFE information stall on day 3

Due to the Corona lockdown we couldn’t man the traditional information stall at any Veganmania summer festival in Vienna in 2020. So we were pleased that from 27 to 29 August 2021 we were able to be present at one again. Officially, about 12,500 people visited the event each day this time, and we had many encounters with people eager to hear our arguments for free software. Many hadn’t even heard about free software before. Others knew about open source or Linux. And of course we also met many people who already use free software at home or at work. In fact, maybe even more than ever before at those information stalls – except of course for those at our local Linux Week events.

Weather

On the first day we needed to pull out our plastic cover for our information material twice, because short bursts of rain challenged our intent to inform people about independence on their own electronic devices. Unfortunately, we do not have a tent for this purpose yet. (It would be rather expensive and might prevent us from being able to transport everything we need at once on a bicycle.) But fortunately, the weather remained stable for the rest of the weekend. Overall we had mostly ideal weather conditions, since in previous years the summer heat was almost unbearable at times. This weekend, staying outside for hours wasn’t an issue at all.

People and safety

Maybe due to the fact that people hadn’t been to such events for quite some time, it was very well attended. Despite the huge interest, the organisers kept everyone safe by closing off the area and only letting people in who could show a recent negative Corona test or who could prove they were immunised, either by having recovered from an infection or by being fully vaccinated.

We often had to answer the question why the FSFE of all organisations was present at an event focusing on veganism. We gladly explained our reasoning: most people choose a vegan lifestyle in order to protect the well-being and rights of those not having the power to protect themselves. If you transfer the same reasoning to information technology, you end up with free software, because there as well, the main concern is protecting the rights of all users and ensuring fair conditions for everyone.

Information material

Once more our information materials proved to be useful for this not usually very technical audience. Especially our introductory leaflet on the idea of freedom in technology and our locally produced practical overview of well-known distributions came in very handy. In addition, the guide to email encryption and the stickers and postcards with motifs like “I love free software, but I love you more …” and “There is no cloud, just other people’s computers.” and some other funny freedom-related stickers found many happy new owners.

A short time ago I found the domain distrotest.net and was very pleased by how easy this web page makes it to explore different free software distributions. It is simply fun to quickly test many desktops by starting virtual machines directly in your browser. The people I told about it obviously liked this prospect too. I will certainly include a link to it in future versions of our distribution overview leaflet.

Another well-received leaflet we hadn’t had on our desk in previous years was a short practical guide on computer security for activists. In it we didn’t go into complicated advanced stuff, but rather covered very practical things everybody can do to improve the trustworthiness of the systems they use. It elaborates on 12 very basic measures like creating backups, using a password manager, or using software as a service only if there is no other possible way of doing things. It also explains why relying on well-known centralised social media platforms can be especially dangerous if you want to challenge powerful institutions as an activist.

In addition we made good use of our little local online list of free software experts on freie.it, who are ready to help out in case people lack the time or patience to dig through the extensive amount of online documentation and guides if they get stuck at any point in their adventure into the joyful free software world.

Thanks

I want to thank the very knowledgeable volunteers who spontaneously dropped by and helped me man the FSFE stand this time. Even if there wasn’t much opportunity to talk to each other, they did a fabulous job at taking care of those who wanted to learn more about our common cause: free software.

Friday, 27 August 2021

Poppler 21.09 will have a massive speed increase for PDF files that use lots of save/restore PDF commands

Take the file from poppler issue 1126. It's a file that doesn't look super complicated, a map of some caves.


With Poppler 21.08 it took 46 seconds to render on my relatively powerful i9-8950HK.

 

Thanks to a patch from Thomas Freitag, that time got reduced to 28 seconds by not recalculating something we had already calculated and just copying it instead. Huge improvement! [This patch was developed prior to the filing of issue 1126; I guess Thomas had found similar issues on his own.]


Then issue 1126 was created yesterday, and it was clear that we were still super slow: mupdf/gs/firefox/chromium can render the file almost instantly, and we were at 28 seconds :/




So I had a look at the code and thought: why are we even copying the data if that data never changes? So I made a patch that brought rendering time down to 2.8 seconds :)

 

And then I had another look at the code and realized we were copying some more data that was never even used.

 

 

After patching that we're standing at 1.5 seconds

The 3 combined patches have reduced the rendering time by more than 95%.


P.S.: This only applies to the Splash backend (used by Okular); the Cairo backend (used by Evince) is at 120 seconds, so someone should have a look, because similar improvements can probably be made.


Thursday, 05 August 2021

MotionPhoto / MicroVideo File Formats on Pixel Phones

  • Losca
  • 09:16, Thursday, 05 August 2021

Google Pixel phones support what they call ”Motion Photo” which is essentially a photo with a short video clip attached to it. They are quite nice since they bring the moment alive, especially as the capturing of the video starts a small moment before the shutter button is pressed. For most viewing programs they simply show as static JPEG photos, but there is more to the files.

I’d really love proper Shotwell support for these file formats, so I posted a longish explanation with many of the details in this blog post to a ticket there too. Examples of the newer format are linked there too.

Info posted to Shotwell ticket

There are actually two different formats: an old one that is already obsolete, and a newer, current format. The older ones are those that your Pixel phone recorded as ”MVIMG_[datetime].jpg”, and they have the following metadata:

Xmp.GCamera.MicroVideo                       XmpText     1  1
Xmp.GCamera.MicroVideoVersion XmpText 1 1
Xmp.GCamera.MicroVideoOffset XmpText 7 4022143
Xmp.GCamera.MicroVideoPresentationTimestampUs XmpText 7 1331607

The offset is actually from the end of the file, so one needs to calculate accordingly. But it is exact otherwise, so one can simply extract the video with that metadata information:

#!/bin/bash
#
# Extracts the microvideo from a MVIMG_*.jpg file

# The offset is from the ending of the file, so calculate accordingly
offset=$(exiv2 -p X "$1" | grep MicroVideoOffset | sed 's/.*\"\(.*\)"/\1/')
filesize=$(du --apparent-size --block=1 "$1" | sed 's/^\([0-9]*\).*/\1/')
extractposition=$(expr $filesize - $offset)
echo offset: $offset
echo filesize: $filesize
echo extractposition=$extractposition
dd if="$1" skip=1 bs=$extractposition of="$(basename -s .jpg $1).mp4"
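For the record, the same offset-from-the-end extraction that the shell script performs can be sketched in Python (the function name is my own invention):

```python
import os


def extract_trailing_video(jpg_path, offset_from_end, out_path):
    # The MP4 clip occupies the last `offset_from_end` bytes of the JPEG,
    # as given by the Xmp.GCamera.MicroVideoOffset metadata
    size = os.path.getsize(jpg_path)
    with open(jpg_path, "rb") as f:
        f.seek(size - offset_from_end)
        data = f.read()
    with open(out_path, "wb") as out:
        out.write(data)
    return len(data)
```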

The newer format is recorded in filenames called ”PXL_[datetime].MP.jpg”, and they have a _lot_ of additional metadata:

Xmp.GCamera.MotionPhoto                      XmpText     1  1
Xmp.GCamera.MotionPhotoVersion XmpText 1 1
Xmp.GCamera.MotionPhotoPresentationTimestampUs XmpText 6 233320
Xmp.xmpNote.HasExtendedXMP XmpText 32 E1F7505D2DD64EA6948D2047449F0FFA
Xmp.Container.Directory XmpText 0 type="Seq"
Xmp.Container.Directory[1] XmpText 0 type="Struct"
Xmp.Container.Directory[1]/Container:Item XmpText 0 type="Struct"
Xmp.Container.Directory[1]/Container:Item/Item:Mime XmpText 10 image/jpeg
Xmp.Container.Directory[1]/Container:Item/Item:Semantic XmpText 7 Primary
Xmp.Container.Directory[1]/Container:Item/Item:Length XmpText 1 0
Xmp.Container.Directory[1]/Container:Item/Item:Padding XmpText 1 0
Xmp.Container.Directory[2] XmpText 0 type="Struct"
Xmp.Container.Directory[2]/Container:Item XmpText 0 type="Struct"
Xmp.Container.Directory[2]/Container:Item/Item:Mime XmpText 9 video/mp4
Xmp.Container.Directory[2]/Container:Item/Item:Semantic XmpText 11 MotionPhoto
Xmp.Container.Directory[2]/Container:Item/Item:Length XmpText 7 1679555
Xmp.Container.Directory[2]/Container:Item/Item:Padding XmpText 1 0

Sounds like fun and lots of information. However, I didn’t see why the “Length” in the first item is 0, and I didn’t see how to use the latter Length info. But I can use the mp4 headers to extract it:

#!/bin/bash
#
# Extracts the motion part of a MotionPhoto file PXL_*.MP.jpg

extractposition=$(grep --binary --byte-offset --only-matching --text \
-P "\x00\x00\x00\x18\x66\x74\x79\x70\x6d\x70\x34\x32" $1 | sed 's/^\([0-9]*\).*/\1/')

dd if="$1" skip=1 bs=$extractposition of="$(basename -s .jpg $1).mp4"

UPDATE: I wrote most of this blog post earlier. Now that I’m actually getting to publishing it a week later, I see the obvious, i.e. the ”Length” is again simply the offset from the end of the file, so one could use the same, less brute-force approach as for MVIMG. I’ll leave the above as is, however, for the ❤️ of binary grepping.

UPDATE 08/2021: Here's the script to extract also MP without brute force:

#!/bin/bash
#
# Extracts the motion part of a MotionPhoto file PXL_*.MP.jpg

set -e
# Brute force
#extractposition=$(grep --binary --byte-offset --only-matching --text -P "\x00\x00\x00\x18\x66\x74\x79\x70\x6d\x70\x34\x32" $1 | sed 's/^\([0-9]*\).*/\1/')

# Metadata
offset=$(exiv2 -p x "$1" | grep Length | tail -n 1 |  rev | cut -d ' ' -f 1 | rev)
echo offset: ${offset}
re='^[0-9]+$'
if ! [[ $offset =~ $re ]] ; then
   echo "offset not found"
   exit 1
fi
filesize=$(du --apparent-size --block=1 "$1" | sed 's/^\([0-9]*\).*/\1/')

echo filesize: $filesize
extractposition=$(expr $filesize - $offset)
echo extractposition=$extractposition

dd if="$1" skip=1 bs=$extractposition of="$(basename -s .jpg $1).mp4"

(cross-posted to my other blog)

Wednesday, 21 July 2021

wireguard

WireGuard: fast, modern, secure VPN tunnel. WireGuard securely encapsulates IP packets over UDP.

Goal

What I would like to achieve in this article is to provide a comprehensive guide for a redirect-gateway VPN using WireGuard, with a twist. The client machine should reach the internet through the WireGuard VPN server. No other communication should be allowed from the client, which means that if we drop the VPN connection, the client cannot reach the internet.

wireguard.png

Intro - Lab Details

Here are my lab details. This blog post will help you understand all the necessary steps and provide you with a guide to replicate the setup. You should be able to create a WireGuard VPN server-client pair between two points. I will be using Ubuntu 20.04 as the base image for both virtual machines. I am also using libvirt and QEMU/KVM running on my Arch Linux host.

wireguard_vms.png

Wireguard Generate Keys

and the importance of them!

Before we begin, give me a moment to try to explain, at a high level, how the encryption between these two machines works.

Each Linux machine creates a pair of keys:

  • Private Key
  • Public Key

These keys have a unique relationship. You can use your public key to encrypt something, but only your private key can decrypt it. That means you can share your public key via a cleartext channel (internet, email, slack). The other parties can use your public key to encrypt traffic towards you, but no one else can decrypt it. If a malicious actor replaces the public keys, you cannot decrypt any incoming traffic, thus making it impossible to connect to the VPN server!

wireguard_keys.png

Public - Private Keys

Now each party has the other’s public key and they can encrypt traffic for each other, BUT only you can decrypt your traffic with your private key. Your private key never leaves your computer.

wireguard_traffic.png

Hopefully you get the idea.
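To make the idea tangible, here is a toy Diffie-Hellman exchange in Python with deliberately tiny numbers. This only illustrates how two parties derive a shared secret from exchanged public values; WireGuard actually uses Curve25519 key agreement, not this scheme:

```python
# Public parameters (safe to share)
p, g = 23, 5

# Private keys: these never leave each machine
alice_private = 6
bob_private = 15

# Public keys: derived from the private keys, shared over a cleartext channel
alice_public = pow(g, alice_private, p)
bob_public = pow(g, bob_private, p)

# Each side combines its own private key with the peer's public key
alice_shared = pow(bob_public, alice_private, p)
bob_shared = pow(alice_public, bob_private, p)

# Both sides arrive at the same secret, although no private key ever
# crossed the wire
assert alice_shared == bob_shared
```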

Wireguard Server - Ubuntu 20.04 LTS

In order to make the guide easy to follow, I will start with the server part.

Server Key Generation

Generate a random private and public key for the server as-root

wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey

Make the keys read-only

chmod 0400 /etc/wireguard/*key

List keys

ls -l /etc/wireguard/
total 8
-r-------- 1 root root 45 Jul 12 20:29 privatekey
-r-------- 1 root root 45 Jul 12 20:29 publickey

UDP Port & firewall

In order to run a VPN service, you need to choose a random port that the WireGuard VPN server can listen on.

eg.

61194

open firewall

ufw allow 61194/udp
Rule added
Rule added (v6)

view more info

ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To             Action      From
--             ------      ----
22/tcp         ALLOW IN    Anywhere
61194          ALLOW IN    Anywhere
22/tcp (v6)    ALLOW IN    Anywhere (v6)
61194 (v6)     ALLOW IN    Anywhere (v6)

So you see, only SSH and our VPN ports are open.
Default Incoming Policy is: DENY.

Wireguard Server Configuration File

Now it is time to create a basic configuration wireguard-server file.

Naming the wireguard interface

To clarify: the name of our interface does not matter, you can choose any name. It gets its name from the configuration filename! But in order to follow the majority of guides, try to use something matching wg+. For me it is easier to name the wireguard interface:

  • wg0

but you may also see it as

  • wg

without a number. All good!

Making a shell script for wireguard server configuration file

Running the below script will create a new configuration file /etc/wireguard/wg0.conf

PRIVATE_KEY=$(cat /etc/wireguard/privatekey)
NETWORK=10.0.8

cat > /etc/wireguard/wg0.conf <<EOF
[Interface]
Address = ${NETWORK}.1/24
ListenPort = 61194
PrivateKey = ${PRIVATE_KEY}

EOF

ls -l /etc/wireguard/wg0.conf
cat   /etc/wireguard/wg0.conf

Note: I have chosen the network 10.0.8.0/24 for my VPN setup; you can choose any private network

10.0.0.0/8     - Class A
172.16.0.0/12  - Class B
192.168.0.0/16 - Class C

I chose a Class C-sized block (256 IPs) from within a /8 (Class A) private network; do not be confused about this. It’s just a /24 private network.
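A quick check of this with Python’s standard ipaddress module:

```python
import ipaddress

vpn_net = ipaddress.ip_network("10.0.8.0/24")
class_a = ipaddress.ip_network("10.0.0.0/8")

print(vpn_net.num_addresses)       # 256 addresses: a Class C-sized block
print(vpn_net.subnet_of(class_a))  # True: it lives inside the 10/8 private range
```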

Let’s make our first test with the server

wg-quick up wg0
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.0.8.1/24 dev wg0
[#] ip link set mtu 1420 up dev wg0

It’s Alive!

Kernel Module

Verify that wireguard is loaded as a kernel module on your system.

lsmod | grep wireguard
wireguard             212992  0
ip6_udp_tunnel         16384  1 wireguard
udp_tunnel             16384  1 wireguard

WireGuard has been in the Linux kernel since March 2020 (Linux 5.6).

Show IP address

ip address show dev wg0
3: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420
qdisc noqueue state UNKNOWN group default qlen 1000
    link/none
    inet 10.0.8.1/24 scope global wg0
       valid_lft forever preferred_lft forever

Listening Connections

Verify that the WireGuard server listens on our specific UDP port:

ss -nulp '( sport = :61194 )' | column -t
State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port  Process
UNCONN  0       0           0.0.0.0:61194       0.0.0.0:*
UNCONN  0       0              [::]:61194          [::]:*

Show wg0

wg show wg0
interface: wg0
  public key: <public key>
  private key: (hidden)
  listening port: 61194

Show Config

wg showconf wg0
[Interface]
ListenPort = 61194
PrivateKey = <private key>

Close wireguard

wg-quick down wg0

What is IP forwarding

In order for our VPN server to forward the clients' network traffic from the internal interface wg0 to its external interface and, on top of that, to separate its own traffic from the clients' traffic, the VPN server needs to masquerade all client traffic so that it knows where to forward traffic and how to send replies back to each client. In a nutshell, this is IP forwarding (plus masquerading), and perhaps the diagram below explains it a little better.

wireguard_forwarding.png

To do that, we need to enable the IP forwarding feature in our Linux kernel.

sysctl -w net.ipv4.ip_forward=1

The above command does not persist the change across reboots. To persist this setting, we would need to add it to the sysctl configuration. And although many set this option globally, we do not have to do that!
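For reference only, persisting the setting globally would look like the sketch below; we will NOT do this here, since the WireGuard stages described next can toggle it instead. The SYSCTL_DIR variable is added purely for illustration, so the sketch can be tried without touching the real /etc/sysctl.d:

```shell
# Persist IP forwarding globally via a sysctl drop-in file (for reference only).
# After writing the file, `sysctl --system` (as root) would reload it.
SYSCTL_DIR="${SYSCTL_DIR:-/etc/sysctl.d}"
echo 'net.ipv4.ip_forward = 1' > "${SYSCTL_DIR}/99-ipforward.conf"
```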

WireGuard provides four (4) stages in which to run our own scripts.

Wireguard stages

  • PreUp
  • PostUp
  • PreDown
  • PostDown

So we can choose to enable IP forwarding at wireguard up and disable it on wireguard down.

IP forwarding in wireguard configuration file

PRIVATE_KEY=$(cat /etc/wireguard/privatekey)
NETWORK=10.0.8

cat > /etc/wireguard/wg0.conf <<EOF

[Interface]
Address    = ${NETWORK}.1/24
ListenPort = 61194
PrivateKey = ${PRIVATE_KEY}

PreUp    = sysctl -w net.ipv4.ip_forward=1
PostDown = sysctl -w net.ipv4.ip_forward=0
EOF

verify the above configuration

wg-quick up wg0
[#] sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.0.8.1/24 dev wg0
[#] ip link set mtu 1420 up dev wg0

Verify system control setting for IP forward:

sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

Close wireguard

wg-quick down wg0
[#] wg showconf wg0
[#] ip link delete dev wg0
[#] sysctl -w net.ipv4.ip_forward=0
net.ipv4.ip_forward = 0

Verify that IP forwarding is now disabled.

sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0

Routing via firewall rules

Next on our agenda is to add some additional rules to our firewall. We already enabled the IP forwarding feature above, and now it is time to update our firewall so it can masquerade our clients' network traffic.

Default Route

We need to identify the default external network interface on the Ubuntu server.

DEF_ROUTE=$(ip -4 -json route list default | jq  -r .[0].dev)

It is usually eth0 or ens3 or something similar.
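To see what that jq call extracts, here is the same extraction done with awk on a sample route line (the addresses and metric are illustrative values):

```shell
# Sample output of `ip -4 route list default`; the device name follows "dev"
SAMPLE='default via 192.168.122.1 dev ens3 proto dhcp metric 100'
echo "${SAMPLE}" | awk '{for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1)}'
# prints: ens3
```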

firewall rules

  1. We need to forward traffic from the internal interface to the external interface
  2. We need to masquerade the traffic of the client.

iptables -A FORWARD -i %i -o ${DEF_ROUTE} -s ${NETWORK}.0/24 -j ACCEPT

iptables -t nat -A POSTROUTING -s ${NETWORK}.0/24 -o ${DEF_ROUTE} -j MASQUERADE

We also need to drop these rules when we stop the WireGuard service:

iptables -D FORWARD -i %i -o ${DEF_ROUTE} -s ${NETWORK}.0/24 -j ACCEPT

iptables -t nat -D POSTROUTING -s ${NETWORK}.0/24 -o ${DEF_ROUTE} -j MASQUERADE

Note the -D (delete) after iptables, instead of -A (append). The %i is a wg-quick placeholder that is expanded to the interface name when these commands run from the configuration file.

iptables is a CLI (command line interface) for netfilter, so think of the above rules as filters on our network traffic.

WG Server Conf

To put everything together, we also use the WireGuard PostUp and PreDown stages:

wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey

chmod 0400 /etc/wireguard/*key

PRIVATE_KEY=$(cat /etc/wireguard/privatekey)
NETWORK=10.0.8
DEF_ROUTE=$(ip -4 -json route list default | jq  -r .[0].dev)
WG_PORT=61194

cat > /etc/wireguard/wg0.conf <<EOF
[Interface]
Address    = ${NETWORK}.1/24
ListenPort = ${WG_PORT}
PrivateKey = ${PRIVATE_KEY}

PreUp    = sysctl -w net.ipv4.ip_forward=1
PostDown = sysctl -w net.ipv4.ip_forward=0

PostUp  = iptables -A FORWARD -i %i -o ${DEF_ROUTE} -s ${NETWORK}.0/24 -j ACCEPT ; iptables -t nat -A POSTROUTING -s ${NETWORK}.0/24 -o ${DEF_ROUTE} -j MASQUERADE
PreDown = iptables -D FORWARD -i %i -o ${DEF_ROUTE} -s ${NETWORK}.0/24 -j ACCEPT ; iptables -t nat -D POSTROUTING -s ${NETWORK}.0/24 -o ${DEF_ROUTE} -j MASQUERADE

EOF

testing up

wg-quick up wg0
[#] sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.0.8.1/24 dev wg0
[#] ip link set mtu 1420 up dev wg0
[#] iptables -A FORWARD -i wg0 -o ens3 -s 10.0.8.0/24 -j ACCEPT ; iptables -t nat -A POSTROUTING -s 10.0.8.0/24 -o ens3 -j MASQUERADE

testing down

wg-quick down wg0
[#] iptables -D FORWARD -i wg0 -o ens3 -s 10.0.8.0/24 -j ACCEPT ; iptables -t nat -D POSTROUTING -s 10.0.8.0/24 -o ens3 -j MASQUERADE
[#] ip link delete dev wg0
[#] sysctl -w net.ipv4.ip_forward=0
net.ipv4.ip_forward = 0

Start at boot time

systemctl enable wg-quick@wg0

and status

systemctl status wg-quick@wg0

Get the wireguard server public key

We have NOT finished with the server yet!

We also need the output of:

cat /etc/wireguard/publickey

output should be something like this:

wr3jUAs2qdQ1Oaxbs1aA0qrNjogY6uqDpLop54WtQI4=
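A WireGuard key is 32 random bytes, base64-encoded into 44 characters ending in '='; a quick sanity check on the key above:

```shell
KEY='wr3jUAs2qdQ1Oaxbs1aA0qrNjogY6uqDpLop54WtQI4='
printf '%s' "${KEY}" | wc -c               # 44 characters of base64
printf '%s' "${KEY}" | base64 -d | wc -c   # 32 raw key bytes
```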

Wireguard Client - Ubuntu 20.04 LTS

Now we need to move to our client virtual machine.
It is a lot easier, but similar to the commands above.

Client Key Generation

Generate a random private key and derive its public key for the client:

wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey

Make the keys read-only

chmod 0400 /etc/wireguard/*key

Wireguard Client Configuration File

We need to replace <wireguard server public key> with the server public key we retrieved above.

The configuration is similar to, but simpler than, the server's:

PRIVATE_KEY=$(cat /etc/wireguard/privatekey)
NETWORK=10.0.8
WG_PORT=61194
WGS_IP=$(ip -4 -json route list default | jq -r .[0].prefsrc)
WGS_PUBLIC_KEY="<wireguard server public key>"

cat > /etc/wireguard/wg0.conf <<EOF
[Interface]
Address    = ${NETWORK}.2/24
PrivateKey = ${PRIVATE_KEY}

[Peer]
PublicKey  = ${WGS_PUBLIC_KEY}
Endpoint   = ${WGS_IP}:${WG_PORT}
AllowedIPs = 0.0.0.0/0

EOF

verify

wg-quick up wg0
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.0.8.2/24 dev wg0
[#] ip link set mtu 1420 up dev wg0
[#] wg set wg0 fwmark 51820
[#] ip -4 route add 0.0.0.0/0 dev wg0 table 51820
[#] ip -4 rule add not fwmark 51820 table 51820
[#] ip -4 rule add table main suppress_prefixlength 0
[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
[#] iptables-restore -n

start client at boot

systemctl enable wg-quick@wg0

Get the wireguard client public key

We have finished with the client, but we need the output of:

cat /etc/wireguard/publickey

Wireguard Server - Peers

As we mentioned above, we need to exchange the public keys of the server & client between the two machines in order to encrypt the network traffic.

So we take the client public key and run the below commands on the server:

WSC_PUBLIC_KEY="<wireguard client public key>"
NETWORK=10.0.8

wg set wg0 peer ${WSC_PUBLIC_KEY} allowed-ips ${NETWORK}.2

After that we can verify that our WireGuard server can connect to the WireGuard client.

wireguard server ping to client

$ ping -c 5 10.0.8.2

PING 10.0.8.2 (10.0.8.2) 56(84) bytes of data.
64 bytes from 10.0.8.2: icmp_seq=1 ttl=64 time=0.714 ms
64 bytes from 10.0.8.2: icmp_seq=2 ttl=64 time=0.456 ms
64 bytes from 10.0.8.2: icmp_seq=3 ttl=64 time=0.557 ms
64 bytes from 10.0.8.2: icmp_seq=4 ttl=64 time=0.620 ms
64 bytes from 10.0.8.2: icmp_seq=5 ttl=64 time=0.563 ms

--- 10.0.8.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4103ms
rtt min/avg/max/mdev = 0.456/0.582/0.714/0.084 ms

wireguard client ping to server

$ ping -c 5 10.0.8.1

PING 10.0.8.1 (10.0.8.1) 56(84) bytes of data.
64 bytes from 10.0.8.1: icmp_seq=1 ttl=64 time=0.752 ms
64 bytes from 10.0.8.1: icmp_seq=2 ttl=64 time=0.639 ms
64 bytes from 10.0.8.1: icmp_seq=3 ttl=64 time=0.622 ms
64 bytes from 10.0.8.1: icmp_seq=4 ttl=64 time=0.625 ms
64 bytes from 10.0.8.1: icmp_seq=5 ttl=64 time=0.597 ms

--- 10.0.8.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4093ms
rtt min/avg/max/mdev = 0.597/0.647/0.752/0.054 ms

wireguard server - Peers at configuration file

Now the final thing is to update our WireGuard server configuration file with the client peer (or peers).

wg showconf wg0

The output should be something like this:

[Interface]
ListenPort = 61194
PrivateKey = <server: private key>

[Peer]
PublicKey = <client: public key>
AllowedIPs = 10.0.8.2/32
Endpoint = 192.168.122.68:52587

We need to append the peer section to our configuration file.

so /etc/wireguard/wg0.conf should look like this:

[Interface]
Address    = 10.0.8.1/24
ListenPort = 61194
PrivateKey = <server: private key>

PreUp    = sysctl -w net.ipv4.ip_forward=1
PostDown = sysctl -w net.ipv4.ip_forward=0

PostUp  = iptables -A FORWARD -i %i -o ens3 -s 10.0.8.0/24 -j ACCEPT ; iptables -t nat -A POSTROUTING -s 10.0.8.0/24 -o ens3 -j MASQUERADE
PreDown = iptables -D FORWARD -i %i -o ens3 -s 10.0.8.0/24 -j ACCEPT ; iptables -t nat -D POSTROUTING -s 10.0.8.0/24 -o ens3 -j MASQUERADE

[Peer]
PublicKey = <client: public key>
AllowedIPs = 10.0.8.2/32
Endpoint = 192.168.122.68:52587

SaveConfig

We did not declare this option in either the server or the client WireGuard configuration file. If set, the configuration file is updated with the current state of the wg0 interface when the interface goes down. Very useful!

SaveConfig = true

client firewall

It is time to introduce our twist!

If you run mtr or traceroute from our client, you will notice that indeed our network traffic goes through the WireGuard VPN server!

               My traceroute  [v0.93]
wgc (10.0.8.2)                        2021-07-21T22:41:44+0300

                          Packets               Pings
 Host                Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. 10.0.8.1         0.0%    88    0.6   0.6   0.4   0.8   0.1
 2. myhomepc         0.0%    88    0.8   0.8   0.5   1.0   0.1
 3. _gateway         0.0%    88    3.8   4.0   3.3   9.6   0.8
 ...

The first entry is:

1. 10.0.8.1  

If we drop our VPN connection:

wg-quick down wg0

We still go to the internet.

                     My traceroute  [v0.93]
wgc (192.168.122.68)                     2021-07-21T22:45:04+0300

                            Packets               Pings
 Host                   Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. myhomepc             0.0%     1    0.3   0.3   0.3   0.3   0.0
 2. _gateway             0.0%     1    3.3   3.3   3.3   3.3   0.0

This is important because in some use cases we do not want our client to talk to the internet directly or unsupervised.

UFW alternative for the client

So to avoid this issue, we will rewrite our firewall rules.

A simple script to do that is shown below. It sets the default policy to DENY for everything, and only accepts incoming SSH traffic and outgoing traffic through the VPN.

DEF_ROUTE=$(ip -4 -json route list default | jq -r .[0].dev)
WGS_IP=192.168.122.69
WG_PORT=61194

## reset
ufw --force reset

## deny everything!

ufw default deny incoming
ufw default deny outgoing
ufw default deny forward

## allow ssh
ufw allow 22/tcp

## enable
ufw --force enable

## allow traffic out to the vpn server
ufw allow out on ${DEF_ROUTE} to ${WGS_IP} port ${WG_PORT}

## allow tunnel traffic out
ufw allow out on wg+
# ufw status verbose

Status: active
Logging: on (low)
Default: deny (incoming), deny (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    Anywhere
22/tcp (v6)                ALLOW IN    Anywhere (v6)             

192.168.122.69 61194       ALLOW OUT   Anywhere on ens3
Anywhere                   ALLOW OUT   Anywhere on wg+
Anywhere (v6)              ALLOW OUT   Anywhere (v6) on wg+

DNS

There is a caveat. Please bear with me.

Usually, and especially in virtual machines, clients get their DNS settings through the local LAN. We can either allow DNS traffic on the local VLAN or update our local DNS settings so that name resolution goes only through our VPN:

cat > /etc/systemd/resolved.conf <<EOF
[Resolve]
DNS=88.198.92.222

EOF

systemctl restart systemd-resolved

let’s try this

# ping google.com
ping: google.com: Temporary failure in name resolution

Correct. Fire up the WireGuard VPN client:

wg-quick up wg0

ping -c4 google.com
PING google.com (142.250.185.238) 56(84) bytes of data.
64 bytes from fra16s53-in-f14.1e100.net (142.250.185.238): icmp_seq=1 ttl=115 time=43.5 ms
64 bytes from fra16s53-in-f14.1e100.net (142.250.185.238): icmp_seq=2 ttl=115 time=44.9 ms
64 bytes from fra16s53-in-f14.1e100.net (142.250.185.238): icmp_seq=3 ttl=115 time=43.8 ms
64 bytes from fra16s53-in-f14.1e100.net (142.250.185.238): icmp_seq=4 ttl=115 time=43.0 ms

--- google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 42.990/43.795/44.923/0.707 ms

mtr shows:

mtr google.com
                My traceroute  [v0.93]
wgc (10.0.8.2)                          2021-07-21T23:01:16+0300

                                        Packets               Pings
 Host                                 Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. 10.0.8.1                           0.0%     2    0.6   0.7   0.6   0.7   0.1
 2. _gateway                           0.0%     1    0.8   0.8   0.8   0.8   0.0
 3. 192.168.1.1                        0.0%     1    3.8   3.8   3.8   3.8   0.0

drop vpn connection

# wg-quick down wg0 

[#] ip -4 rule delete table 51820
[#] ip -4 rule delete table main suppress_prefixlength 0
[#] ip link delete dev wg0
[#] iptables-restore -n
# ping -c4 google.com
ping: google.com: Temporary failure in name resolution

and that is perfect !

No internet access for our client. Only when our vpn connection is up!

WireGuard: fast, modern, secure VPN tunnel

That’s it!

Tag(s): wireguard, vpn

Tuesday, 20 July 2021

Progress on PGPainless Development

Not much time has passed since I last wrote about my progress on the PGPainless library. However, I feel like it's time for an update.

Since the big 0.2.0 release, 4 further releases, 0.2.1 through 0.2.4, have been published. Taken together, the changes are quite substantial, so let me summarize.

Image of capsule tower in Japan
Photo by Raphael Koh on Unsplash

Modular SOP Implementation

The (in my opinion) most exciting change is that there now is an experimental module of Java interfaces that model the Stateless OpenPGP Protocol (SOP). This module, named sop-java, is completely independent from PGPainless and has no external dependencies whatsoever. It's basically a port of Sequoia-PGP's sop crate (which in turn is based around the Stateless OpenPGP Command Line Interface specification) to Java.

Applications that want to execute basic OpenPGP operations can depend on this interface and decide on the concrete implementation later without locking themselves in with one fixed implementation. Remember:

The Database Is a Detail. […] The Web Is a Detail. […] Frameworks Are Details.

Uncle Bob – Clean Architecture

The module sop-java-picocli contains a CLI frontend for sop-java. It uses the picocli library to closely model the Stateless OpenPGP Command Line Interface (SOP-CLI) specification (version 1 for now).

The exciting part is that this module too is independent from PGPainless, but it can be used by any library that implements sop-java.

Next up, the contents of pgpainless-sop drastically changed. While up until recently it contained a fully fledged SOP-CLI application which used pgpainless-core directly, it now no longer contains command line application code, but instead an implementation of sop-java using pgpainless-core. Therefore pgpainless-sop can be used as a drop-in for sop-java, making it the first java-based SOP implementation (to my knowledge).

Lastly, pgpainless-cli brings sop-java-picocli and pgpainless-sop together. The code does little more than to plug pgpainless-sop as SOP backend into the command line application, resulting in a fully functional OpenPGP command line application (basically what pgpainless-sop was up until release 0.2.3, just better :P).

$ ./pgpainless-cli help
Usage: pgpainless-cli [COMMAND]
Commands:
  help          Displays help information about the specified command
  armor         Add ASCII Armor to standard input
  dearmor       Remove ASCII Armor from standard input
  decrypt       Decrypt a message from standard input
  encrypt       Encrypt a message from standard input
  extract-cert  Extract a public key certificate from a secret key from
                  standard input
  generate-key  Generate a secret key
  sign          Create a detached signature on the data from standard input
  verify        Verify a detached signature over the data from standard input
  version       Display version information about the tool

The exciting part about this modular design is that if YOU are working on an OpenPGP library for Java, you don’t need to re-implement a CLI frontend on your own. Instead, you can implement the sop-java interface and benefit from the CLI provided by sop-java-picocli for free.

If you are a library consumer, depending on sop-java instead of pgpainless-core would allow you to swap out PGPainless for another library, should any emerge in the future. It also means that porting your application to other platforms and languages might become easier, thanks to the more or less fixed API provided by the SOP protocol.

Further Changes

There are some more exciting changes worth mentioning.

The whole PGPainless suite can now be built reproducibly!

$ gradle --quiet clean build &> /dev/null && md5sum {.,pgpainless-core,pgpainless-sop,pgpainless-cli,sop-java,sop-java-picocli}/build/libs/*.jar

e7e9f45eb9d74540092920528bb0abf0  ./build/libs/PGPainless-0.2.4.jar
8ab68285202c8a303692c7332d15c2b2  pgpainless-core/build/libs/pgpainless-core-0.2.4.jar
a9c1d7b4a47d5ec66fc65131c14f4848  pgpainless-sop/build/libs/pgpainless-sop-0.2.4.jar
08cfb620a69015190e45d66548b8ea0f  pgpainless-cli/build/libs/pgpainless-cli-0.2.4.jar
e309d5a8d3a9439c6fae1c56150d9d07  sop-java/build/libs/sop-java-0.2.4.jar
9901849535f57f04b615afb06216ae5c  sop-java-picocli/build/libs/sop-java-picocli-0.2.4.jar

It actually was not hard at all to achieve reproducibility. The command line application has a version command, which extracts the current application version from a version.properties file written during the Gradle build.

Unfortunately, Java's implementation of the Properties class includes a timestamp when writing the object out to a PrintStream, so the result was not reproducible. The fix was to write the file manually, without using a Properties object.

Furthermore, the APIs for decryption/verification were further simplified, following the example of the encryption API. Instead of chained builder subclasses, there now is a single builder class which is used to receive decryption keys and public key certificates etc.

If you need more details about what changed in PGPainless, there now is a changelog file.

Friday, 16 July 2021

LibreDNS DnsOverTLS no ads with systemd-resolved

Below are my personal settings (as of today) for LibreDNS, using the systemd-resolved service for DNS resolution.

sudo vim /etc/systemd/resolved.conf

basic settings

[Resolve]
DNS=116.202.176.26:854#dot.libredns.gr
DNSOverTLS=yes
FallbackDNS=88.198.92.222
Cache=yes

apply

sudo systemctl restart systemd-resolved.service

verify

resolvectl query analytics.google.com

analytics.google.com: 0.0.0.0                  -- link: eth0

-- Information acquired via protocol DNS in 144.7ms.
-- Data is authenticated: no; Data was acquired via local or encrypted transport: yes
-- Data from: network

Explain Settings

DNS setting

DNS=116.202.176.26:854#dot.libredns.gr

We declare the IP of our DoT service. Using : as a separator, we add the no-ads DoT TCP port, 854. We also need to add our domain at the end to tell systemd-resolved that this IP should answer for dot.libredns.gr.

Dns Over TLS

DNSOverTLS=yes

The usual setting is yes. In older systemd versions you can also select opportunistic.
As we are using Let's Encrypt, systemd-resolved cannot (by default) verify the IP against the certificate. This type of certificate can verify the domain dot.libredns.gr, but we are querying the IP 116.202.176.26, and a certificate covering an IP address is another type that is not free. In order to “fix” this, we added the #dot.libredns.gr suffix in the above setting.

dotlibrednsgr.png

FallBack

Not everything has five nines, so you may need a fallback DNS to... fall back on. Be aware this is cleartext traffic, not encrypted!

FallbackDNS=88.198.92.222

Cache

Last but not least, caching your queries can provide you with additional speed when browsing the internet! You already asked this a few seconds ago; why not cache it on your local system?

Cache=yes

to give you an example

resolvectl query analytics.google.com

analytics.google.com: 0.0.0.0                  -- link: eth0

-- Information acquired via protocol DNS in 144.7ms.
-- Data is authenticated: no; Data was acquired via local or encrypted transport: yes
-- Data from: network

second time:

resolvectl query analytics.google.com
analytics.google.com: 0.0.0.0                  -- link: eth0

-- Information acquired via protocol DNS in 2.3ms.
-- Data is authenticated: no; Data was acquired via local or encrypted transport: yes
-- Data from: cache
Tag(s): LibreDNS, systemd, DoT

Saturday, 10 July 2021

KDE Gear 21.08 releases branches created

Make sure you commit anything you want to end up in the 21.08 releases to them

We're already past the dependency freeze.

The Feature Freeze and Beta is this Thursday 15 of July.

More interesting dates:
  • July 29: 21.08 RC (21.07.90) Tagging and Release
  • August  5: 21.08 Tagging
  • August 12: 21.08 Release

https://community.kde.org/Schedules/release_service/21.08_Release_Schedule

offlineimap - unicode decode errors

My main system is currently running Ubuntu 21.04. For e-mail I'm relying on neomutt together with offlineimap, which both are amazing tools. Recently offlineimap was updated/moved to offlineimap3. Looking on my system, offlineimap reports itself as OfflineIMAP 7.3.0 and dpkg tells me it is version 0.0~git20210218.76c7a72+dfsg-1.

Unicode Decode Error problem

Today I noticed several errors in my offlineimap sync log. Basically the errors looked like this:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfc in position 1299: invalid start byte
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xeb in position 1405: invalid continuation byte

Solution

If you encounter it as well (and you use mutt or neomutt), please have a look at this great comment on Github from Joseph Ishac (jishac) since his tip solved the issue for me.

To "fix" this issue for future emails, I modified my .neomuttrc and commented out the default send encoding charset and omitted the iso-8859-1 part:

#set send_charset = "us-ascii:iso-8859-1:utf-8"
set send_charset = "us-ascii:utf-8"

Then I looked through the email files on the filesystem and identified the ISO-8859 encoded emails in the Sent folder which are causing the current issues:

$ file * | grep "ISO-8859"
1520672060_0.1326046.desktop,U=65,FMD5=7f8c0215f16ad5caed8e632086b81b9c:2,S: ISO-8859 text, with very long lines
1521626089_0.43762.desktop,U=74,FMD5=7f8c02831a692adaed8e632086b81b9c:2,S:   ISO-8859 text
1525607314.R13283589178011616624.desktop:2,S:                                ISO-8859 text

That left me with opening the files with vim and saving them with the correct encoding:

:set fileencoding=utf8
:wq

Voila, mission accomplished:

$ file * | grep "UTF-8"
1520672060_0.1326046.desktop,U=65,FMD5=7f8c0215f16ad5caed8e632086b81b9c:2,S: UTF-8 Unicode text, with very long lines
1521626089_0.43762.desktop,U=74,FMD5=7f8c02831a692adaed8e632086b81b9c:2,S:   UTF-8 Unicode text
1525607314.R13283589178011616624.desktop:2,S:                                UTF-8 Unicode text
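An alternative to the manual vim round-trip is a small iconv loop over the same folder (a sketch; run it inside the affected Maildir folder):

```shell
# Re-encode every ISO-8859 mail file in the current directory to UTF-8
for f in *; do
    if file "$f" | grep -q 'ISO-8859'; then
        iconv -f ISO-8859-1 -t UTF-8 "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    fi
done
```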

Sunday, 27 June 2021

Everything we release in KDE Gear is maintained

During Akademy some members of the community said stuff like "app/library XXX and YYY are not maintained even if we ship them in KDE Gear".


Which is not true, everything released in KDE Gear is maintained, there may not be an explicit maintainer, but there is shared community maintenance.


When confronting a particular person about it, they said "but look, there has not been any new commit that isn't either translation improvements or adapting to code deprecations/etc.", to which I said, "yes, that is exactly what maintenance means".


Road maintenance doesn't mean adding more lanes to the road every month, just means making sure the road does not break and fixing it a bit when/if it happens.


The same applies to software maintenance, the fact that there are no new features does not mean the software is not maintained.


Now, it is true that with community maintenance it's possible that the greater "we" has missed that something has stopped working. In that case, please raise it up, and we'll evaluate whether it's fixable or if indeed we want to declare that particular bit of software as unmaintained and stop shipping it in KDE Gear.

Wednesday, 23 June 2021

How the Netherlands group grew in Covid times

Before André and I became coordinators for the Netherlands, we used to have yearly community meetings at both the Dutch T-Dose and the Belgian FOSDEM conferences. In order to build the FSFE NL community, as new coordinators we started having a regular booth at the national Linux user group (NLLGG) that met bi-monthly in Utrecht, which is quite central in the Netherlands. The NLLGG has a large space for booths, so we could do our outreach alongside the various sessions and other booths that were going on. Via that regular booth, we had more frequent interactions with like-minded people, including the friends we had made over the years. Of course there were still people that were interested in staying in touch with the FSFE, but didn’t want to travel across half the country. At least we had email and chat as an option for those more distant relationships.

 

And then Covid came to the Netherlands, and we were forced to change our ways. We could no longer meet at the physical NLLGG location. There was an online NLLGG session we joined, but as expected the main focus was on the Linux users there and not on our overlapping FSFE group. Eventually in the autumn we just tried our luck with an online session of our own. Luckily the FSFE had just launched their conference server based on BigBlueButton, so the required freedom-respecting infrastructure was already in place. We held our first meeting on the 28th of October 2020, which we announced on the FSFE website, on our XMPP group and on the Netherlands mailinglist (contact details on our BNL wiki page).

The first meeting was a bit rough. As can be expected with the hotchpotch of computer setups, there were various issues with audio and webcams. Still we had a nice meeting of about 1.5 hours to discuss various topics that were keeping our minds occupied. With everybody locked up at home, it was a welcome change to chat with the people with similar interests you would normally only meet by going to a conference or other community event. The format of the meeting was very much the same as at the booth, just to have a relaxed group conversation on free software related issues.

We kept on doing the online meetings by just scheduling another one at the end of each meeting. We recently had our 9th online get-together already. The attendance varies somewhere between 5 and 9 persons. In the meantime we have settled on the 3rd Wednesday of the month, so it has become a regular thing with a regular attendance. Every meeting is somewhat of a surprise, because you just don’t know exactly what it will bring. Some new people might join, there might be some new and interesting subjects being tabled, and there could be a strong collaboration on an opportunity. For the last meeting we started compiling a list of topics beforehand on an etherpad, so we can make an explicit decision which topics to spend time discussing.

We had one special occasion on the 25th of November 2020 when we had a sneak preview of the Dutch audio translation of the PMPC video. There was a larger attendance than usual, and Matthias was kind enough to join and speak some kind words about our local efforts.

In our meetings there is a lively discussion on current free software related topics, which helps to form opinions and perhaps even a plan of action. I like it when these discussions result in an idea of an action we could take from the FSFE perspective like reaching out to politicians or governmental organizations. The Dutch supporters have quite a bit of activism experience among them. Besides the general discussion plenty of day-to-day advice is shared how to maintain or increase your practical freedom.

The meetings are open to a wider audience, so if you are interested in attending, just sign up by emailing me and I’ll send the meeting room details. The meetings are announced on the FSFE events page. We have successfully switched to English on multiple occasions to be inclusive of other nationalities, so language shouldn’t be a problem. So maybe I’ll see you there?

Friday, 11 June 2021

I'm back in the boat

In mid-2014 I first heard about Jolla and Sailfish OS and immediately bought a Jolla 1; wrote apps; participated in the IGG campaign for Jolla Tablet; bought the TOHKBD2; applied for (and got) Jolla C.

Sounds like the beginning of a good story doesn’t it?

Well, by the beginning of 2017 I had sold everything (except the tablet, we all know what happened to that one).

So what happened?? I was a happy Sailfish user, but Jolla’s false promises disappointed me.

Yet, despite all that, I still think about Sailfish OS to this day. I think it’s because, despite some proprietary components, the ecosystem around Sailfish OS is ultimately open source. And that’s what interests me. It also got a fresh update which solves some of the problems that were there 5 years ago.

Nowadays, thanks to the community, Sailfish OS can be installed on many devices, even if with some components missing, but I’m looking for that complete experience, and so I asked on the forum if there was someone willing to sell their Xperia device with or without the license… and I got one for free. Better still, in exchange for some apps!

To decide which applications to create, I therefore took a look at that ecosystem. I started with the apps I use daily on Android and looked for the Sailfish OS alternative (spoiler: I’m impressed, good job guys!).

I am writing them all here because I am sure it will be useful to someone else:

  • AntennaPod (podcast app) -> PodQast
  • Ariane (gemini protocol browser)
  • AsteroidOS (AsteroidOS sync) -> Starfish
  • Connectbot (ssh client) -> built-in (Terminal)
  • Conversation (xmpp client) -> built-in (Messaging)
  • Davx5 (caldav/cardav) -> built-in (Account)
  • DroidShows (TV series) -> SeriesFinale
  • Element (Matrix client) -> Determinant
  • Endoscope (camera stream)
  • Fedilab (Mastodon client) -> Tooter
  • ForkHub (GitHub client) -> SailHub
  • FOSS Browser -> built-in (Browser)
  • FreeOTP -> SailOTP
  • Glider (hacker news reader) -> SailHN
  • K-9 Mail -> built-in (Mail)
  • KDE Connect (KDE sync) -> SailfishConnect
  • Keepassx (password manager) -> ownKeepass
  • Labcoat (GitLab client)
  • Lemmur (Lemmy client)
  • MasterPassword (password manager) -> MPW
  • MuPDF (PDF reader) -> built-in (Documents)
  • Newpipe (YouTube client) -> YTPlayer
  • Nextcloud (Nextcloud files) -> GhostCloud
  • Notes (Nextcloud notes) -> Nextcloud Notes
  • OCReader (Nextcloud RSS) -> Fuoten
  • OsmAnd~ (Maps) -> PureMaps
  • Printing (built-in) -> SeaPrint
  • QuickDic (dictionary) -> Sidudict
  • RedMoon (screen color temperature) -> Tint Overlay
  • RedReader (Reddit client) -> Quickddit
  • Signal -> Whisperfish
  • Syncthing (files sync) -> there’s the binary, no UI
  • Transdroid (Transmission client) -> Tremotes
  • Vinyl (music player) -> built-in (Mediaplayer)
  • VLC (NFS streaming) -> videoPlayer
  • WireGuard (VPN) -> there’s the binary, no UI
  • YetAnotherCallBlocker (call blocker) -> Phonehook

So, to me it looks like almost everything is there, except:

I’ve already started to write a UI for Syncthing, then maybe I could write the browser for the gemini protocol or rather the GitLab client?

Please consider a donation if you would like to support me (mention your favourite project!).

Liberapay

Many many thanks to Jörg who sent me his Sony Xperia 10 Plus! I hope I don’t disappoint him!

Wednesday, 09 June 2021

KDE Gear 21.08 releases schedule finalized

It is available at the usual place https://community.kde.org/Schedules/KDE_Gear_21.08_Schedule

 
Dependency freeze is in four weeks (July 8) and Feature Freeze a week after that, make sure you start finishing your stuff!

Saturday, 05 June 2021

Deployed my blog on Kubernetes

One of the most well-known k8s memes is the image below, which represents the effort and complexity of building a kubernetes cluster just to run a simple blog. So in this article, I will take the opportunity to install a simple blog engine on kubernetes using k3s!

k8s_blog.jpg

terraform - libvirt/qemu - ubuntu

For this demo, I will be working on my local test lab: a libvirt/qemu Ubuntu 20.04 virtual machine via terraform. You can find my terraform notes on my github repo tf/0.15/libvirt/0.6.3/ubuntu/20.04.

k3s

k3s is a lightweight, fully compliant kubernetes distribution that can run on a single-node virtual machine.

log in to your machine and become root

$ ssh 192.168.122.42 -l ubuntu
$ sudo -i
#

install k3s with one command

curl -sfL https://get.k3s.io | sh -

output should be something like this

[INFO]  Finding release for channel stable

[INFO]  Using v1.21.1+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.21.1+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.21.1+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

Firewall Ports

I would propose opening the network ports below so that k3s can run smoothly.

Inbound Rules for K3s Server Nodes

PROTOCOL   PORT        SOURCE                       DESCRIPTION
TCP        6443        K3s agent nodes              Kubernetes API Server
UDP        8472        K3s server and agent nodes   Required only for Flannel VXLAN
TCP        10250       K3s server and agent nodes   Kubelet metrics
TCP        2379-2380   K3s server nodes             Required only for HA with embedded etcd

Typically all outbound traffic is allowed.

ufw allow

ufw allow 6443/tcp
ufw allow 8472/udp
ufw allow 10250/tcp
ufw allow 2379/tcp
ufw allow 2380/tcp

full output

# ufw allow 6443/tcp
Rule added
Rule added (v6)

# ufw allow 8472/udp
Rule added
Rule added (v6)

# ufw allow 10250/tcp
Rule added
Rule added (v6)

# ufw allow 2379/tcp
Rule added
Rule added (v6)

# ufw allow 2380/tcp
Rule added
Rule added (v6)

k3s Nodes / Pods / Deployments

verify nodes, roles, pods and deployments

# kubectl get nodes -A
NAME         STATUS   ROLES                  AGE   VERSION
ubuntu2004   Ready    control-plane,master   11m   v1.21.1+k3s1

# kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   helm-install-traefik-crd-8rjcf            0/1     Completed   2          13m
kube-system   helm-install-traefik-lwgcj                0/1     Completed   3          13m
kube-system   svclb-traefik-xtrcw                       2/2     Running     0          5m13s
kube-system   coredns-7448499f4d-6vrb7                  1/1     Running     5          13m
kube-system   traefik-97b44b794-q294l                   1/1     Running     0          5m14s
kube-system   local-path-provisioner-5ff76fc89d-pq5wb   1/1     Running     6          13m
kube-system   metrics-server-86cbb8457f-n4gsf           1/1     Running     6          13m

# kubectl get deployments -A
NAMESPACE     NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   coredns                  1/1     1            1           17m
kube-system   traefik                  1/1     1            1           8m50s
kube-system   local-path-provisioner   1/1     1            1           17m
kube-system   metrics-server           1/1     1            1           17m

Helm

The next thing is to install helm. Helm is a package manager for kubernetes; it makes it easy to install applications.

curl -sL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

output

Downloading https://get.helm.sh/helm-v3.6.0-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
helm version

version.BuildInfo{Version:"v3.6.0", GitCommit:"7f2df6467771a75f5646b7f12afb408590ed1755", GitTreeState:"clean", GoVersion:"go1.16.3"}

repo added

As a package manager, helm lets you install k8s packages, called charts, and you can find a lot of helm charts at https://artifacthub.io/. You can also add/install a single repo; I will explain this later.

# helm repo add nicholaswilde https://nicholaswilde.github.io/helm-charts/

"nicholaswilde" has been added to your repositories

# helm repo update
Hang tight while we grab the latest from your chart repositories...

Successfully got an update from the "nicholaswilde" chart repository
Update Complete. ⎈Happy Helming!⎈

hub Vs repo

basic difference between hub and repo is that hub is the official artifacthub. You can search charts there

helm search hub blog
URL                                                 CHART VERSION   APP VERSION DESCRIPTION
https://artifacthub.io/packages/helm/nicholaswi...  0.1.2           v1.3        Lightweight self-hosted facebook-styled PHP blog.
https://artifacthub.io/packages/helm/nicholaswi...  0.1.2           v2021.02    An ultra-lightweight blogging engine, written i...
https://artifacthub.io/packages/helm/bitnami/dr...  10.2.23         9.1.10      One of the most versatile open source content m...
https://artifacthub.io/packages/helm/bitnami/ghost  13.0.13         4.6.4       A simple, powerful publishing platform that all...
https://artifacthub.io/packages/helm/bitnami/jo...  10.1.10         3.9.27      PHP content management system (CMS) for publish...
https://artifacthub.io/packages/helm/nicholaswi...  0.1.1           0.1.1       A Self-Hosted, Twitter™-like Decentralised micr...
https://artifacthub.io/packages/helm/nicholaswi...  0.1.1           900b76a     A self-hosted well uh wiki engine or content ma...
https://artifacthub.io/packages/helm/bitnami/wo...  11.0.13         5.7.2       Web publishing platform for building blogs and ...

using a repo means that you specify chart sources from a single repo (or multiple repos), usually outside of the hub.

helm search repo blog
NAME                        CHART VERSION   APP VERSION DESCRIPTION
nicholaswilde/blog          0.1.2           v1.3        Lightweight self-hosted facebook-styled PHP blog.
nicholaswilde/chyrp-lite    0.1.2           v2021.02    An ultra-lightweight blogging engine, written i...
nicholaswilde/twtxt         0.1.1           0.1.1       A Self-Hosted, Twitter™-like Decentralised micr...
nicholaswilde/wiki          0.1.1           900b76a     A self-hosted well uh wiki engine or content ma...

Install a blog engine via helm

before we continue with the installation of our blog engine, we need to set the kube config via a shell variable

kube configuration yaml file

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

kubectl already knows where to find this yaml configuration file, since kubectl is a link to k3s in our setup

# whereis kubectl
kubectl: /usr/local/bin/kubectl

# ls -l /usr/local/bin/kubectl
lrwxrwxrwx 1 root root 3 Jun  4 23:20 /usr/local/bin/kubectl -> k3s

but the helm binary that we just installed does not, hence the KUBECONFIG variable.

After that we can install our blog engine.

helm install chyrp-lite              \
  --set env.TZ="Europe/Athens"  \
  nicholaswilde/chyrp-lite

output

NAME: chyrp-lite
LAST DEPLOYED: Fri Jun  4 23:46:04 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Get the application URL by running these commands:
  http://chyrp-lite.192.168.1.203.nip.io/

for the time being, ignore nip.io and verify the deployment

# kubectl get deployments
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
chyrp-lite   1/1     1            1           2m15s

# kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
chyrp-lite-5c544b455f-d2pzm   1/1     Running   0          2m18s

Port Forwarding

as this is a pod running through k3s inside a virtual machine on our host operating system, in order to visit the blog and finish the installation we need to expose the port.

Let’s find out if there is a service running

kubectl get service chyrp-lite

output

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
chyrp-lite   ClusterIP   10.43.143.250   <none>        80/TCP    11h

okay we have a cluster ip.

you can also verify that our blog engine is running

curl -s 10.43.143.250/install.php | head

<!DOCTYPE html>
<html>
    <head>
        <meta charset="UTF-8">
        <title>Chyrp Lite Installer</title>
        <meta name="viewport" content="width = 800">
        <style type="text/css">
            @font-face {
                font-family: 'Open Sans webfont';
                src: url('./fonts/OpenSans-Regular.woff') format('woff');

and then port forward the pod tcp port to our virtual machine

kubectl port-forward service/chyrp-lite 80

output

Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80

k3s issue with TCP Port 80

Port 80 is used by the built-in load balancer by default.

That means service port 80 will become 10080 on the host, but 8080 will become 8080 without any offset.

So the above command will not work; it will give you a 404 error.
We could disable the LoadBalancer (we do not need it for this demo), but it is easier to just forward the service port to 10080
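The mapping just described boils down to a simple rule. Here is a tiny illustrative function (my own sketch of the behaviour, not k3s code):

```python
def k3s_host_port(service_port: int, offset: int = 10000) -> int:
    """Purely illustrative: privileged service ports (< 1024) get
    shifted by an offset on the host, while unprivileged ones pass
    through unchanged -- matching the 80 -> 10080 and 8080 -> 8080
    behaviour described above."""
    if service_port < 1024:
        return service_port + offset
    return service_port
```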

kubectl port-forward service/chyrp-lite 10080:80
Forwarding from 127.0.0.1:10080 -> 80
Forwarding from [::1]:10080 -> 80
Handling connection for 10080
Handling connection for 10080

from our virtual machine we can verify

curl -s http://127.0.0.1:10080/install.php  | head

it will produce

<!DOCTYPE html>
<html>
    <head>
        <meta charset="UTF-8">
        <title>Chyrp Lite Installer</title>
        <meta name="viewport" content="width = 800">
        <style type="text/css">
            @font-face {
                font-family: 'Open Sans webfont';
                src: url('./fonts/OpenSans-Regular.woff') format('woff');

ssh port forward

So now, we need to forward this TCP port from the virtual machine to our local machine. Using ssh, you should be able to do it like this from another terminal

ssh 192.168.122.42 -l ubuntu -L8080:127.0.0.1:10080

verify it

$ sudo ss -n -t -a 'sport = :10080'

State           Recv-Q          Send-Q                   Local Address:Port                    Peer Address:Port         Process
LISTEN          0               128                          127.0.0.1:10080                        0.0.0.0:*
LISTEN          0               128                              [::1]:10080                           [::]:*

$ curl -s http://localhost:10080/install.php | head

<!DOCTYPE html>
<html>
    <head>
        <meta charset="UTF-8">
        <title>Chyrp Lite Installer</title>
        <meta name="viewport" content="width = 800">
        <style type="text/css">
            @font-face {
                font-family: 'Open Sans webfont';
                src: url('./fonts/OpenSans-Regular.woff') format('woff');

I am forwarding to a high tcp port (> 1024) so that my user can open the tcp port; otherwise I would need to be root.

finishing the installation

To finish the installation of our blog engine, we need to visit the below url from our browser

http://localhost:10080/install.php

Database Setup

chyrplite01.jpg

Admin Setup

chyrplite02.jpg

Installation Completed

chyrplite03.jpg

First blog post

chyrplite04.jpg

that’s it !

Wednesday, 02 June 2021

PGPainless 0.2 Released!

I’m very proud and excited to announce the release of PGPainless version 0.2! Since the last stable release of my OpenPGP library for Java and Android, 9 months ago, a lot has changed and improved! Most importantly, development on PGPainless is being financially sponsored by FlowCrypt, so I was able to focus a lot more energy on working on the library. I’m very grateful for this opportunity 🙂

A red letter, sealed with a wax seal
Photo by Natasya Chen on Unsplash

PGPainless is using Bouncycastle, but aims to save developers from the pain of writing lots of boilerplate code, while at the same time using the BC API properly. The new release is available on maven central, the source code can be found on Github and Codeberg.

PGPainless now depends on Bouncycastle 1.68, and the minimum Android API level has been raised to 10 (Android 2.3.3). Let me bring you up to speed on some of its features and the recent changes!

Inspect Keys!

Back in the last stable release, PGPainless could already be used to generate keys. It offered some shortcut methods to quickly generate archetypal keys, such as simple RSA keys, or key rings based on elliptic curves. In version 0.2, support for additional key algorithms was added, such as EdDSA or XDH.

The new release introduces PGPainless.inspectKeyRing(keyRing) which allows you to quickly access information about a key, such as its user-ids, which subkeys are encryption capable and which can be used to sign data, their expiration dates, algorithms etc.

Furthermore, this feature can be used to evaluate a key at a certain point in time. That way you can quickly check which key flags or algorithm preferences applied to the key 3 weeks ago, when that key was used to create that signature you care about. Or you can check which user-ids your key had 5 years ago.
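The idea of evaluating a key at a point in time can be sketched in a few lines. This is a hypothetical helper in Python, not the PGPainless API; real OpenPGP keys carry dated self-signatures, and the one governing at a given moment is the newest one created at or before it:

```python
from datetime import datetime

def self_signature_at(self_sigs, when):
    """Of a key's dated self-signatures (here plain dicts, purely for
    illustration), pick the newest one created at or before `when`.
    Returns None if no signature existed yet at that time."""
    candidates = [s for s in self_sigs if s["created"] <= when]
    return max(candidates, key=lambda s: s["created"], default=None)
```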

Edit Keys!

Do you already have a key, but want to extend its expiration date? Do you have a new Email address and need to add it as a user-id to your key? PGPainless.modifyKeyRing(keyRing) allows basic modification of a key. You can add additional user-ids, adopt subkeys into your key, or expire/revoke existing subkeys.

secretKeys = PGPainless.modifyKeyRing(secretKeys)
                       .setExpirationDate(expirationDate, keyRingProtector)
                       .addSubKey(subkey, subkeyProtector, keyRingProtector)
                       .addUserId(UserId.onlyEmail("alice@pgpainless.org"), keyRingProtector)
                       .deleteUserId("alice@pgpainful.org", keyRingProtector)
                       .revokeSubkey(subKeyId, keyRingProtector)
                       .revokeUserId("alice@pgpaintrain.org", keyRingProtector)
                       .changePassphraseFromOldPassphrase(oldPass)
                           .withSecureDefaultSettings().toNewPassphrase(newPass)
                       .done();

Encrypt and Sign Effortlessly!

PGPainless 0.2 comes with an improved, simplified encryption/signing API. While the old API was already quite intuitive, I was focusing too much on the code being “autocomplete-friendly”. My vision was that the user could encrypt a message without ever having to read a bit of documentation, simply by typing PGPainless and then following the autocomplete suggestions of the IDE. While the result achieved that, the code was not very friendly to bind to real-world applications, as there was not one builder class but several (one for each “step”). As a result, if a user wanted to configure the encryption dynamically, they would have to keep track of different builder objects and cope with casting madness.

// Old API
EncryptionStream encryptionStream = PGPainless.createEncryptor()
        .onOutputStream(targetOuputStream)
        .toRecipient(aliceKey)
        .and()
        .toRecipient(bobsKey)
        .and()
        .toPassphrase(Passphrase.fromPassword("password123"))
        .usingAlgorithms(SymmetricKeyAlgorithm.AES_192, HashAlgorithm.SHA256, CompressionAlgorithm.UNCOMPRESSED)
        .signWith(secretKeyDecryptor, aliceSecKey)
        .asciiArmor();

Streams.pipeAll(plaintextInputStream, encryptionStream);
encryptionStream.close();

The new API is still intuitive, but at the same time it is flexible enough to be modified with future features. Furthermore, since the builder has been divided it is now easier to integrate PGPainless dynamically.

// New shiny 0.2 API
EncryptionStream encryptionStream = PGPainless.encryptAndOrSign()
        .onOutputStream(outputStream)
        .withOptions(
                ProducerOptions.signAndEncrypt(
                        new EncryptionOptions()
                                .addRecipient(aliceKey)
                                .addRecipient(bobsKey)
                                // optionally encrypt to a passphrase
                                .addPassphrase(Passphrase.fromPassword("password123"))
                                // optionally override symmetric encryption algorithm
                                .overrideEncryptionAlgorithm(SymmetricKeyAlgorithm.AES_192),
                        new SigningOptions()
                                // Sign in-line (using one-pass-signature packet)
                                .addInlineSignature(secretKeyDecryptor, aliceSecKey, signatureType)
                                // Sign using a detached signature
                                .addDetachedSignature(secretKeyDecryptor, aliceSecKey, signatureType)
                                // optionally override hash algorithm
                                .overrideHashAlgorithm(HashAlgorithm.SHA256)
                ).setAsciiArmor(true) // Ascii armor
        );

Streams.pipeAll(plaintextInputStream, encryptionStream);
encryptionStream.close();

Verify Signatures Properly!

The biggest improvement to PGPainless 0.2 is improved, proper signature verification. Prior to this release, PGPainless was doing what probably every other Bouncycastle-based OpenPGP library was doing:

PGPSignature signature = [...];
// Initialize the signature with the public signing key
signature.init(pgpContentVerifierBuilderProvider, signingKey);

// Update the signature with the signed data
int read;
while ((read = signedDataInputStream.read()) != -1) {
    signature.update((byte) read);
}

// Verify signature correctness
boolean signatureIsValid = signature.verify();

The point is that the code above only verifies signature correctness (that the signing key really made the signature and that the signed data is intact). It does however not check if the signature is valid.

Signature validation goes far beyond plain signature correctness and entails a whole suite of checks that need to be performed. Is the signing key expired? Was it revoked? If it is a subkey, is it bound to its primary key correctly? Has the primary key expired or been revoked? Does the signature contain unknown critical subpackets? Is it using acceptable algorithms? Does the signing key carry the SIGN_DATA flag? You can read more about why signature verification is hard in my previous blog post.
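Very roughly, the difference between correctness and validity could be sketched like this. Every field name below is invented for illustration; the real checks in PGPainless are far more involved:

```python
from dataclasses import dataclass

@dataclass
class SigningKeyInfo:
    # Hypothetical stand-in for the key metadata a real library exposes.
    expired: bool = False
    revoked: bool = False
    subkey_bound_correctly: bool = True
    has_sign_data_flag: bool = True

@dataclass
class SignatureInfo:
    correct: bool = True                      # cryptographic correctness only
    unknown_critical_subpackets: bool = False
    acceptable_algorithms: bool = True

def signature_is_valid(sig: SignatureInfo, key: SigningKeyInfo) -> bool:
    """Correctness is just the first of many checks a verifier must do."""
    return (sig.correct
            and not key.expired
            and not key.revoked
            and key.subkey_bound_correctly
            and not sig.unknown_critical_subpackets
            and sig.acceptable_algorithms
            and key.has_sign_data_flag)
```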

After implementing all those checks in PGPainless, the library now scores second place on Sequoia’s OpenPGP Interoperability Test Suite!

Lastly, support for verification of cleartext-signed messages such as emails was added.

New SOP module!

Also included in the new release is a shiny new module: pgpainless-sop

This module is an implementation of the Stateless OpenPGP Command Line Interface specification. It basically allows you to use PGPainless as a command line application to generate keys, encrypt/decrypt, sign and verify messages etc.

$ # Generate secret key
$ java -jar pgpainless-sop-0.2.0.jar generate-key "Alice <alice@pgpainless.org>" > alice.sec
$ # Extract public key
$ java -jar pgpainless-sop-0.2.0.jar extract-cert < alice.sec > alice.pub
$ # Sign some data
$ java -jar pgpainless-sop-0.2.0.jar sign --armor alice.sec < message.txt > message.txt.asc
$ # Verify signature
$ java -jar pgpainless-sop-0.2.0.jar verify message.txt.asc alice.pub < message.txt 
$ # Encrypt some data
$ java -jar pgpainless-sop-0.2.0.jar encrypt --sign-with alice.sec alice.pub < message.txt > message.txt.asc
$ # Decrypt ciphertext
$ java -jar pgpainless-sop-0.2.0.jar decrypt --verify-with alice.pub --verify-out=verif.txt alice.sec < message.txt.asc > message.txt

The primary reason for creating this module though was that it enables PGPainless to be plugged into the interoperability test suite mentioned above. This test suite uncovered a ton of bugs and shortcomings and helped me massively to understand and interpret the OpenPGP specification. I can only urge other developers who work on OpenPGP libraries to implement the SOP specification!

Upstreamed changes

Even if you are not yet convinced to switch to PGPainless and want to keep using vanilla Bouncycastle, you might still benefit from some bugfixes that were upstreamed to Bouncycastle.

Every now and then, for example, BC would fail to do some internal conversions of elliptic curve encryption keys. The source of this issue was that BC was converting keys from BigIntegers to byte arrays, which could be of invalid length when the encoding had leading zeros, thus omitting one byte. Fixing this was easy, but finding the bug took quite some time.
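The shape of that bug is easy to reproduce. Here is a Python sketch of the conversion (not the actual BC code): a minimal-length encoding silently drops a leading zero byte, while a fixed-width encoding preserves it.

```python
def to_bytes_minimal(x: int) -> bytes:
    """Minimal-length big-endian encoding -- the buggy approach: a value
    whose fixed-width encoding starts with a zero byte comes out one
    byte short."""
    return x.to_bytes((x.bit_length() + 7) // 8 or 1, "big")

def to_bytes_fixed(x: int, field_len: int) -> bytes:
    """The fix, in spirit: pad to the expected field length so leading
    zero bytes survive the round trip."""
    return x.to_bytes(field_len, "big")
```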

Another bug caused decryption of messages which were encrypted for more than one key/passphrase to fail, when BC tried to decrypt a Symmetrically Encrypted Session Key Packet with the wrong key/passphrase first. The cause of this issue was that BC was not properly rewinding the decryption stream after reading a checksum, thus corrupting decryption for subsequent attempts with the correct passphrase or key. The fix was to mark and rewind the stream properly before the next decryption attempt.
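The essence of that fix can be sketched with a toy stream. The "checksum" logic below is entirely invented; the point is only the mark-and-rewind pattern:

```python
import io
from typing import Optional

def try_decrypt(stream: io.BytesIO, key: bytes) -> Optional[bytes]:
    """Toy 'decryption': the first byte acts as the checksum/key.
    Remember the position before an attempt and seek back when the key
    is wrong, so a retry with the correct key sees an uncorrupted
    stream -- without the seek, every retry would fail."""
    start = stream.tell()
    data = stream.read()        # consumes the stream, checksum included
    if data[:1] != key:
        stream.seek(start)      # rewind before the next attempt
        return None
    return data[1:]
```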

Lastly some methods in BC have been modernized by adding generics to Iterators. Chances are if you are using BC 1.68, you might recognize some changes once you bump the dependency to BC 1.69 (once it is released of course).

Thank you!

I would like to thank anyone who contributed to the new release in any way or form for their support. Special thanks goes to my sponsor FlowCrypt for giving me the opportunity to spend so much time on the library! Furthermore I’d like to thank all the amazing folks over at Sequoia-PGP for their efforts to improve the OpenPGP ecosystem and for patiently helping me understand the (at times a bit muddy) OpenPGP specification.

Monday, 31 May 2021

How I ended up fixing a "not a bug" in Qt Quick that made apostrophes not render, while reviewing an Okular patch

But in Okular we don't use Qt Quick you'll say!


Well, we actually use Qt Quick in the mobile interface of Okular, but you're right, this was not a patch for the mobile version, so "But in Okular we don't use Qt Quick!"

 

The story goes like this.

I was reviewing https://invent.kde.org/graphics/okular/-/merge_requests/431 and was not super convinced of the look and feel of it, so Nate gave some examples and one example was "the sidebar in Elisa".

 

So I started Elisa and saw this

You probably don't speak Catalan, so you don't see that "lesquerra" is a mistake; it should be "l'esquerra"

 

Oh, the translators made a mistake, I thought, so I went to the .po file https://websvn.kde.org/trunk/l10n-kf5/ca/messages/elisa/elisa.po?view=markup#l638 and oh, surprise! The translators had not made a mistake (well, not really a surprise, since they're really good).


Ok, so what's wrong then?


I had a look at the .qml code https://invent.kde.org/multimedia/elisa/-/blob/release/21.04/src/qml/MediaPlayListView.qml#L333 and it really looked simple enough.

 

I had never seen a Kirigami.PlaceholderMessage before but looking at it, well the text ended up in a Kirigami.Heading and then in a QtQuick Label. Nothing really special.


Weeeeeeird.


Ok, I said, let's replace the whole xi18nc call in there with the translated text and run Elisa. And then I could see the ' properly.


Hmmmm, ok then the problem had to be xi18nc right? Well it was and it was not.


The problem is that xi18nc translates ' to &apos; to be more "html compliant" https://invent.kde.org/frameworks/ki18n/-/blob/master/src/kuitmarkup.cpp#L40

 

And unfortunately, the Qt Declarative 5.15 Text item's default "html support" is something Qt calls "Styled Text", which only supports five HTML entities, and apos is not one of them https://code.qt.io/cgit/qt/qtdeclarative.git/tree/src/quick/util/qquickstyledtext.cpp?h=5.15.2#n558
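To see why that matters, here is a toy sketch in Python (nothing to do with the actual Qt C++ code) of what a parser with a limited entity table does to xi18nc's output. The exact set of five entities in qquickstyledtext.cpp may differ from this one, so treat it as illustrative:

```python
import html

# An illustrative limited entity table, in the spirit of StyledText.
LIMITED_ENTITIES = {"lt": "<", "gt": ">", "amp": "&", "quot": '"', "nbsp": "\u00a0"}

def limited_unescape(s: str) -> str:
    """Replace only the entities the restricted parser knows about;
    anything else, like &apos;, is left verbatim -- which is exactly
    what users saw on screen."""
    for name, char in LIMITED_ENTITIES.items():
        s = s.replace(f"&{name};", char)
    return s
```

A full HTML parser (like Python's `html.unescape`, or Qt's RichText mode) would turn `l&apos;esquerra` back into `l'esquerra`; the limited one leaves the entity untouched.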

 

So where is the bug?

 

The Text documentation clearly states that &apos; is in the list of supported entities.

One could say the bug is that every time we use xi18nc in a Text we should remember to change the "html support" to something Qt calls "RichText" that actually supports &apos;.


But we all know that's never going to happen :D, and in this case it would actually be hard since Kirigami.PlaceholderMessage doesn't even expose the format property of the inner text

So I convinced our friends at Qt that only supporting five entities is not great and now we support six ^_^ in Qt 6.2, 6.1 and 5.15 (if you use the KDE Qt patch collection) https://invent.kde.org/qt/qt/qtdeclarative/-/merge_requests/3/diffs


P.S: I have a patch in the works to support all HTML entities in StyledText but somehow it's breaking the tests, so well, need to work more on it :)

Tuesday, 25 May 2021

When it takes a pandemic ...

to understand the speed of innovation.

2020 was a special year for all of us - with "us" here meaning the entire world: Faced with a truly urgent global problem, that year was a learning opportunity for everyone.

For me personally the year started like any other year - except that the news coming out of China was troubling. Little did I know how fast that news would reach the rest of the world - little did I know the impact it would have.

I started the year with FOSDEM in Brussels in February - like every other year, except it felt decidedly different going to this event with thousands of attendees, crammed into overfull university rooms.

Not a month later, travel budgets in many corporations had been frozen. The last in person event that I went to was FOSS Backstage - incapable of imagining just for how long this would be the last in person event I would go to. To this date I'm grateful for Bertrand for teaching the organising team just how much can be transported with video calls - and I'm still grateful for the technicians onsite that made speaker-attendee interaction seamless - across several hundred miles.

One talk from FOSS Backstage that I went back to over and over during 2020 and 2021 was the one given by Emmy Tsang on Open Science.

Referencing the then new pandemic she made a very impressive case for open science, for open collaboration - and for the speed of innovation that comes from that open collaboration. More than anything else I had heard or read before it made it clear to me what everyone means by explaining how Open Source (and by extension internally InnerSource) increases the speed of innovation substantially:

Instead of having everyone start from scratch, instead of wasting time again and again building the basic foundation, we can all collaborate on that technological foundation and focus on key business differentiators. Or as Danese Cooper would put it: "All the boats must rise." The one thing that I found most amazing during this pandemic were the moments during which we saw scientists from all sorts of disciplines work together - and do so in a very open and accessible way. Among all the chaos and misinformation, voices for reliable and dependable information emerged. We saw practitioners add value to the discussion.

With information shared in pre-print format, groups could move much faster than the usual one-year innovation cycle. Yes, it meant more trash would make it through as well. And still, we wouldn't be where we are today if humans across the globe, no matter their nationality or background, hadn't had the chance to collaborate and move faster as a result.

Somehow that's at a very large scale the same effect seen in other projects:

  • RoboCup only moved as fast as it did by opening up the solution of winning teams each year. As a result new teams would get a head start with designs and programs readily available. Instead of starting from scratch, they can stand on the shoulders of giants.
  • Open source helps achieve the same on a daily basis. There's a very visible sign for that: Perseverance on Mars is running on open source software. Every GitHub user, who in their life has contributed to software running on Perseverance today has a badge on their GitHub profile - there are countless badges serving as proof just how many hands it took, how much collaboration was necessary to make this project work.


For me one important learning during this pandemic was just how much we can achieve by working together, by collaborating and building bridges. In that sense, what we have seen is how it is possible to gain so much more by sharing - as Niels Basjes put it so nicely when explaining the Apache Way in one sentence: Gaining by Sharing.

In a sense this is what brought me to the InnerSource Commons Foundation - it's a way for all of us to experience the strength of collaboration. It's a first step towards bringing more people and more businesses to the open source world, joining forces to solve the issues ahead of us.

Monday, 24 May 2021

Sharing your loan details to anyone

A week ago, I blogged about a vulnerability in a platform that would allow anyone to download users’ amortisation schedules. This was a critical issue, but it wasn’t really exploitable in the wild as it included a part where you had to guess the name of the document to download.

I no longer trust that platform, so I went to their website to remove my loan data from it, but apparently this isn’t possible via the UI.

I also opened a ticket on their support platform to request removal and they replied that it isn’t possible.

So I went to their website with the intention of replacing the data with a fake one… but there was no longer an edit button!

Loans

I’m sure it was there before and in fact the code also confirms that it was there:

Loans code

However, the platform is based on Magento and so, starting from the current URL, we can easily guess the edit URL, e.g. https://<host>/anagraficamutui/mutuo/edit/id/<n>.

Let’s try 1… bingo!

But wait a minute… this isn’t my loan! Luckily it’s just a demo entry put in by some developer:

Someone else loan

Even though it’s a dummy page, we can already see the details of the loan, such as the (hopefully) fake IBAN, the loan total, the loan number and even the bank contact person’s name and email address.

And now take a look at this: if I try to access that page in private mode, then I get the login page. All (almost) well, right?

Nope. Let’s try the same request via curl:

$ curl -s https://<host>/anagraficamutui/edit/id/1 | grep banca

<input type="text" name="istituto_credito" id="istituto_credito" value="banca acme" title="Nome istituto" class="input-text istituto_credito required-entry" />

$ curl -s https://<host>/anagraficamutui/edit/id/1 | grep NL75

<input type="text" name="iban" id="iban" value="NL75xxxxxxxxx" title="Iban" class="input-text iban required-entry validate-iban validate-length maximum-length-27 validate-alphanum" />

Wait a minute, what’s going on?

Well, it turns out that when there’s no cookie, the page sets the Location header to redirect you to the login page, but it prints the HTML page anyway! A browser follows the redirect, while curl simply shows you the body.

$ curl -s https://<host>/anagraficamutui/edit/id/1 -I | grep location

location: https://<host>/customer/account/login/

Oh-no!
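The broken behaviour can be reproduced with a short, self-contained sketch. The handler, host and form field below are hypothetical, modelled on the responses shown above: the server sends the 302 redirect headers when no session cookie is present, but writes the page body anyway, so a raw client (like `curl -s`) still receives the leaked data.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical leaked form field, modelled on the real response above.
SENSITIVE = '<input type="text" name="iban" value="NL75xxxxxxxxx" />'

class LeakyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = SENSITIVE.encode()
        if "Cookie" not in self.headers:
            # The bug: redirect headers are sent, but the body follows anyway.
            self.send_response(302)
            self.send_header("Location", "/customer/account/login/")
        else:
            self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), LeakyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A browser follows the Location header and shows the login page,
# but a raw HTTP client does not follow redirects by itself.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/anagraficamutui/mutuo/edit/id/1")
resp = conn.getresponse()
status, location = resp.status, resp.getheader("Location")
body = resp.read().decode()
print(status, location)  # redirect headers are there...
print(body)              # ...and so is the leaked page body
server.shutdown()
```

This is exactly why the page "looks" protected in a browser while `curl -s` happily prints the form fields.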

Conclusion

Data from 5723 loans could have been exposed by accessing a specific URL. Details such as the IBAN, loan number, loan total and the bank contact person could have been used to perform spear phishing attacks.

I reported this privacy flaw to the CSIRT Italia and the platform’s DPO. The issue was fixed after 2 days, but I still haven’t heard back from them.

Wednesday, 19 May 2021

Sharing your amortisation schedule with anyone

Last month, my company allowed me to claim some benefits through a dedicated platform. This platform is specifically built for this purpose and allows you to recover these benefits not only in the form of coupons or discount codes, but also as reimbursements for medical visits or interest on mortgage payments.

I wanted to try the latter.

I logged on to the platform and filled in all the (many) details about the loan that the platform asks you for, until I had to upload my amortisation schedule, which contains a lot of sensitive data. In fact, a strange thing happened at this step: my file was named document.pdf, but after uploading it was renamed to document_2.pdf.

How do I know? Well, let’s have a look at the UI:

Loan details

Loan details hover

It clearly shows the file name and that’s also a hyperlink. Let’s click then.

The PDF opens in my browser. This is expected, but what happens if we take the URL and try to open it in a private window? Guess what?

You guessed it.

Let’s have a look at the URL again. It’s in the form: https://<host>/media/mutuo/file/d/o/document_2.pdf.

That’s tempting, isn’t it?

I wanted to have some fun and I tried the following:

Loan download

Both the curl output and the checksums are enough to tell that some document was downloaded there (but discarded, since I didn’t save them to my disk…).

Thus, since the d and o parent folders match the two initial letters of my file, I successfully tried with stuff like:

  • /c/o/contratto.pdf, /c/o/contratto_2.pdf, …
  • /c/o/contract.pdf, …
  • /p/r/prospetto.pdf, …

and it also works with numbers (to find this out I had to upload a file named 1.pdf 😇), e.g. https://<host>/media/mutuo/file/1/_/1_10.pdf.
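The path scheme above can be sketched in a few lines (the helper name and base path are illustrative; the base path follows the URLs shown above): a file is nested under two single-character directories taken from its first two characters, which makes every upload path guessable from a filename alone.

```python
# Hypothetical helper reproducing the observed media path scheme:
# /media/mutuo/file/<first char>/<second char>/<filename>
def media_path(filename: str, base: str = "/media/mutuo/file") -> str:
    """Return the guessable URL path for an uploaded file."""
    return f"{base}/{filename[0]}/{filename[1]}/{filename}"

# Candidate paths built from a small wordlist of likely document names.
for name in ("document_2.pdf", "contratto.pdf", "prospetto.pdf", "1_10.pdf"):
    print(media_path(name))
# → /media/mutuo/file/d/o/document_2.pdf
# → /media/mutuo/file/c/o/contratto.pdf
# → /media/mutuo/file/p/r/prospetto.pdf
# → /media/mutuo/file/1/_/1_10.pdf
```

Since the scheme is deterministic and the files were served without authentication, enumerating a wordlist of common Italian loan document names was all it took.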

Conclusion

If you have uploaded your amortisation schedule to this platform, which according to its website has more than 300k users from 3k different companies, well, someone may have downloaded it.

I reported this privacy flaw to the CSIRT Italia via a PGP encrypted email; the CSIRT is supposed to write to the company that owns the platform to alert them to the problem, but a week later I still hadn’t heard from either of them. So after a week I pinged the CSIRT again, and they replied with a plain text email telling me that they had opened an internal ticket and were nice enough to embed my initial PGP encrypted email.

Two weeks later (about 21 days since my first mail) the platform fixed the problem (the uploaded file path isn’t deterministic anymore and authentication is in place), but I still haven’t heard from them.

Addendum

Since <host> is a third-level domain in my case, I performed DNS enumeration with tools like Sublist3r and Amass (you can also use the online versions hosted on nmmapper.com) and found ~50 websites, 30 of which are aliases pointing to the same host. In fact, I could replace <host> with each of them and still download my document_2.pdf file.

Sunday, 16 May 2021

Artificial Intelligence safety: embracing checklists

Unfortunately, human errors are bound to happen. Checklists allow one to verify that all the required actions are done correctly, and in the correct order. The military has them, the health care sector has them, professional diving has them, the aviation and space industries have them, software engineering has them. Why not artificial intelligence practitioners?

In October 1935, the two pilots of Boeing’s new B-17 warplane were killed when the aircraft crashed. The crash was caused by an oversight: the pilots forgot to release a lock during the takeoff procedure. Since then, following a checklist during flight operations has been mandatory and has reduced the number of accidents. During the Apollo 13 mission in 1970, carefully written checklists helped mitigate the oxygen tank explosion.

In health care, checklists are widespread too. For example, the World Health Organization released a checklist outlining the required steps before, during, and after surgery. A meta-analysis suggested that using the checklist was associated with reduced mortality and complication rates.

Because artificial intelligence is used for increasingly important matters, accidents can have serious consequences. During a test, a chatbot suggested harmful behaviors to fake patients. The data scientists explained that the AI had no scientific or medical expertise. AI in law enforcement can also cause serious trouble. For example, a facial recognition software mistakenly identified a criminal, resulting in the arrest of an innocent person, and an algorithm used to determine the likelihood of crime recidivism was judged unfair towards black defendants. AI is also used in healthcare, where a simulation of the Covid-19 outbreak in the United Kingdom shaped policy and led to a nation-wide lockdown. However, the simulation was badly programmed, causing serious issues: a root cause analysis determined that it was non-deterministic and poorly tested. The lack of a checklist could have played a role.

Just like in the aforementioned complex and critical industries, checklists should be leveraged to make sure the building and reporting of AI models includes everything required to reproduce results and make an informed judgement, which fosters trust in the AI accuracy. However, checklists for building prediction models like TRIPOD are not often used by data scientists, even though they might help. Possible reasons might be ignorance about the existence of such checklists, perceived lack of usefulness or misunderstandings caused by the use of different vocabularies among AI developers.

Enforcing the use of standardized checklists would lead to better idioms and practices, thereby fostering fair and accurate AI with a robust evaluation, making its adoption easier for sensitive tasks. In particular, a checklist on AI should include points about how the training dataset was constructed and preprocessed, the model specification and architecture and how its efficiency, accuracy and fairness were assessed. A list of all intended purposes of the AI should also be disclosed, as well as known risks and limitations.
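As a sketch of what such a checklist could look like in practice, the points above can even be made machine-checkable. The item names below are illustrative, not taken from TRIPOD or any published standard, and the report contents are invented for the example:

```python
# Illustrative reporting checklist, covering the points suggested above.
REQUIRED_ITEMS = {
    "dataset_construction",   # how the training data was collected
    "preprocessing",          # cleaning, normalisation, splits
    "model_specification",    # architecture and hyperparameters
    "evaluation",             # efficiency, accuracy and fairness assessment
    "intended_purposes",      # what the model may be used for
    "risks_and_limitations",  # known failure modes
}

def missing_items(report: dict) -> set:
    """Return the checklist items a model report leaves unanswered."""
    return {item for item in REQUIRED_ITEMS if not report.get(item)}

# A hypothetical, incomplete model report.
report = {
    "dataset_construction": "10k anonymised loan records, 2015-2020",
    "model_specification": "gradient-boosted trees, depth 6",
    "evaluation": "AUC 0.91; demographic parity gap 0.03",
}
print(sorted(missing_items(report)))
# → ['intended_purposes', 'preprocessing', 'risks_and_limitations']
```

A check like this could run in a CI pipeline, blocking publication of a model until every item is answered, just as a pilot cannot skip a line on a pre-flight checklist.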

Since artificial intelligence is a novel field, one can understand why checklists are not yet widely used there. However, they are used in other fields for well-known reasons, and learning from past mistakes and ideas would be great this time.
