Free Software, Free Society!
Thoughts of the FSFE Community (English)

## KDE Applications 19.08 branches created

Make sure you commit anything you want to end up in the KDE Applications 19.08 release to them.

We're already past the dependency freeze.

The Freeze and Beta is this Thursday, 18 July.

More interesting dates:
August 1, 2019: KDE Applications 19.08 RC (19.07.90) Tagging and Release
August 8, 2019: KDE Applications 19.08 Tagging
August 15, 2019: KDE Applications 19.08 Release

https://community.kde.org/Schedules/Applications/19.08_Release_Schedule

## Notes based on Ubuntu 18.04 LTS

My notes for this k8s blog post are based on an Ubuntu 18.04 LTS KVM virtual machine. The idea is to use nested KVM to run minikube inside a VM; minikube will then create a KVM node.

minikube builds a local kubernetes cluster on a single node with modest resources, enough to run a small kubernetes deployment.

Archlinux -> VM Ubuntu 18.04 LTS running minikube/kubectl -> KVM minikube node

## Prerequisites

### Nested kvm

#### Host

(archlinux)

$ grep ^NAME /etc/os-release
NAME="Arch Linux"

Check that nested-kvm is already supported:

$ cat /sys/module/kvm_intel/parameters/nested
N

If the output is N (No), then remove and reload the kernel module with nesting enabled:

$ sudo modprobe -r kvm_intel
$ sudo modprobe kvm_intel nested=1
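To make nesting survive a reboot (my own addition; the original notes only reload the module), the option can also be set in a modprobe.d fragment, e.g. a hypothetical /etc/modprobe.d/kvm_intel.conf containing:

```
options kvm_intel nested=1
```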

Check that nested-kvm is now enabled:

$ cat /sys/module/kvm_intel/parameters/nested
Y

#### Guest

Inside the virtual machine:

$ grep NAME /etc/os-release
NAME="Ubuntu"
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

$ egrep -o 'vmx|svm|0xc0f' /proc/cpuinfo
vmx

$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

#### LibVirtd

If the above step fails, try editing the libvirt XML configuration of the guest on your host:

# virsh edit ubuntu_18.04

and change cpu mode to passthrough:

from

  <cpu mode='custom' match='exact' check='partial'>
    <model fallback='allow'>Nehalem</model>
  </cpu>

to

  <cpu mode='host-passthrough' check='none'/>

## Install Virtualization Tools

Inside the VM

sudo apt -y install \
    qemu-kvm \
    bridge-utils \
    libvirt-clients \
    libvirt-daemon-system


## Permissions

We need to be included in the libvirt group

sudo usermod -a -G libvirt $(whoami)
newgrp libvirt

## kubectl

kubectl is a command line interface for running commands against Kubernetes clusters.

size: ~41M

$ export VERSION=$(curl -sL https://storage.googleapis.com/kubernetes-release/release/stable.txt)
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$VERSION/bin/linux/amd64/kubectl
$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/kubectl
$ kubectl completion bash | sudo tee -a /etc/bash_completion.d/kubectl
$ kubectl version

If you want to use bash autocompletion without logging out and back in, use this:

source <(kubectl completion bash)

This is what the JSON output of kubectl version looks like:

$ kubectl version -o json | jq .
The connection to the server localhost:8080 was refused - did you specify the right host or port?
{
"clientVersion": {
"major": "1",
"minor": "15",
"gitVersion": "v1.15.0",
"gitCommit": "e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529",
"gitTreeState": "clean",
"buildDate": "2019-06-19T16:40:16Z",
"goVersion": "go1.12.5",
"compiler": "gc",
"platform": "linux/amd64"
}
}

Message:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

It's okay if minikube hasn't started yet.
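As an aside (my own sketch, not from the original notes), if jq is not available the client version can be scraped from that JSON with sed; the sample string below is a trimmed, hypothetical version of the output above:

```shell
# Extract the value of "gitVersion" from a kubectl JSON blob using sed only.
json='{"clientVersion": {"gitVersion": "v1.15.0", "platform": "linux/amd64"}}'
version=$(printf '%s' "$json" | sed -n 's/.*"gitVersion": "\([^"]*\)".*/\1/p')
echo "$version"
```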

## minikube

size: ~40M

$ curl -sLO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ chmod +x minikube-linux-amd64
$ sudo mv minikube-linux-amd64 /usr/local/bin/minikube

$ minikube version
minikube version: v1.2.0

$ minikube update-check
CurrentVersion: v1.2.0
LatestVersion: v1.2.0

$ minikube completion bash | sudo tee -a /etc/bash_completion.d/minikube

To include bash completion without login/logout:

source <(minikube completion bash)

### KVM2 driver

We need a driver so that minikube can build a KVM image/node for our kubernetes cluster.

size: ~36M

$ curl -sLO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2
$ chmod +x docker-machine-driver-kvm2
$ mv docker-machine-driver-kvm2 /usr/local/bin/

### Start minikube

$ minikube start --vm-driver kvm2
* minikube v1.2.0 on linux (amd64)
* Downloading Minikube ISO ...
 129.33 MB / 129.33 MB [============================================] 100.00% 0s
* Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
* Configuring environment for Kubernetes v1.15.0 on Docker 18.09.6
* Downloading kubeadm v1.15.0
* Downloading kubelet v1.15.0
* Pulling images ...
* Launching Kubernetes ...
* Verifying: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"

Check via libvirt; you will find a new VM named minikube:

$ virsh list
Id    Name                           State
----------------------------------------------------
1     minikube                       running

### If something goes wrong

Just delete the VM and configuration directories and start again:

$ minikube delete
$ rm -rf ~/.minikube/ ~/.kube

## kubectl version

Now let's run kubectl version again:

$ kubectl version -o json | jq .
{
  "clientVersion": {
    "major": "1",
    "minor": "15",
    "gitVersion": "v1.15.0",
    "gitCommit": "e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529",
    "gitTreeState": "clean",
    "buildDate": "2019-06-19T16:40:16Z",
    "goVersion": "go1.12.5",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "serverVersion": {
    "major": "1",
    "minor": "15",
    "gitVersion": "v1.15.0",
    "gitCommit": "e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529",
    "gitTreeState": "clean",
    "buildDate": "2019-06-19T16:32:14Z",
    "goVersion": "go1.12.5",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}

## Dashboard

Start the kubernetes dashboard:

$ kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
Starting to serve on [::]:8001

### Friday, 12 July 2019

Today I was wondering which licenses people most commonly use in OpenAPI definitions, so I went and did a quick analysis.

# Results

The top 5 (with count in brackets):

1. Apache-2.0 (421)[1]
2. CC-BY-3.0 (250)
3. MIT (15)
5. “Open Government License – British Columbia” (6)

The struck-out entries are the ones that I would not really consider a proper license.

The license names inside quotation marks are the exact copy-paste from the field. The rest are de-duplicated into their SPDX identifiers.

After the top 5, the long tail drops very quickly to only one license per listed API. Several of those look very odd as well.

# Methodology

Note: Before you start complaining, I realise this is probably a very sub-optimal solution code-wise, but it worked for me. In my defence, I did open up my copy of the Sed & Awk Pocket Reference before my eyes went all glassy and I hacked up the following ugly method. Also note that the shell scripts are in Fish shell and may not work directly in a 100% POSIX shell.

First, I needed to get a data set to work on. Hat-tip to Mike Ralphson for pointing me to APIs Guru as a good resource. I analysed their APIs-guru/openapi-directory repository[2], where they keep a big collection of public APIs in the APIs folder. Most of them follow the OpenAPI (previously Swagger) specification.

git clone https://github.com/APIs-guru/openapi-directory.git
cd openapi-directory/APIs


Next I needed to list all the licenses found there. For this I assumed the name: tag in YAML[4] (the one including the name of the license) to be on the very next line after the license: tag[3] – I relied on people writing OpenAPI files in the same order as laid out in the OpenAPI Specification. I stored the list of all licenses, sorted alphabetically, in a separate api_licenses file:

grep 'license:' **/openapi.yaml **/swagger.yaml -A 1 --no-filename | \
grep 'name:' | sort > api_licenses
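The order assumption above can be relaxed a little with awk, which matches a name: line only after a license: line has been seen, regardless of how far apart they are. This is my own sketch on a made-up snippet, not part of the original method:

```shell
# Print the license name that follows a `license:` line, wherever it appears.
printf 'info:\n  license:\n    name: MIT\n' | \
  awk '/license:/ { want = 1; next }
       want && /name:/ { sub(/.*name:[ ]*/, ""); print; want = 0 }'
```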


Then I generated another file called api_licenses_unique that includes only the unique names of these licenses.

grep 'license:' **/openapi.yaml **/swagger.yaml -A 1 --no-filename | \
grep 'name:' | sort | uniq > api_licenses_unique
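As an aside, the counting done later can be collapsed into this same pipeline with uniq -c, which prefixes each unique line with its occurrence count. A sketch of the idea on sample input (my own addition, not the author's fish workflow):

```shell
# Count duplicate lines in one pass: sort, count unique lines,
# then order by count with the most frequent first.
printf 'Apache-2.0\nMIT\nApache-2.0\n' | sort | uniq -c | sort -rn
```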


Because I was too lazy to figure out how to do this properly[5], I simply wrapped the same one-liner into a loop that goes through all the unique license names and counts how many times each shows up in the full list of all licenses found.

for license in (grep 'license:' **/openapi.yaml **/swagger.yaml -A 1 \
    --no-filename | grep 'name' | sort | uniq)
    grep "$license" api_licenses --count
end

In the end I copied the console output of this last command, opened api_licenses_unique, and pasted said output in the first column (by going into Block Selection Mode in Kate).

# Clarification on what I consider “proper license” and re-count of Creative Commons licenses (12 July 2019 update)

I was asked what I considered a “proper license” above, and specifically why I did not consider “Creative Commons” as one.

First, if the string did not even remotely look like the name of a license, I did not consider it a proper license. This is the case e.g. with “This page was built with the Swagger API.”.

As for the string “Creative Commons”, it – at best – indicates a family of licenses, which span a vast spectrum from CC0-1.0 (basically public domain) on one end to CC-BY-NC-ND-4.0 (basically, you may copy this, but not change anything, nor get money out of it, and you must keep the same license) on the other. For reference, on the SPDX license list you will find 32 Creative Commons licenses, and SPDX lists only the International and Universal versions of them[7].

Admittedly – and this is a caveat in my initial method above – it may be that there is an actual license following the lines after the “Creative Commons” string … or, as it turned out to be true, that the initial 255 count of name: Creative Commons licenses also included valid CC license names such as name: Creative Commons Attribution 3.0. So, obviously, I made a boo-boo, and therefore went and dug deeper ;)

To do so, and after looking at the results a bit more, I noticed that the url: entries of the name: Creative Commons licenses seem to point to actual CC licenses, so I decided to rely on that. Luckily, this turned out to be true.
I broadened the initial search by one extra line to include the url: line, narrowed the next search down to name: Creative Commons, and in the end down to url: only:

grep 'license:' **/openapi.yaml **/swagger.yaml -A 2 --no-filename | \
    grep 'name: Creative Commons' -A 1 | grep 'url' | sort > api_licenses_cc

Next, I searched for the most common license – CC-BY-3.0:

grep --count 'creativecommons.org/licenses/by/3.0' api_licenses_cc

The result was 250, so for the remaining[6] 5 I just opened the api_licenses_cc file and counted them manually.

Using this method, the list of all “Creative Commons” licenses turned out to be as follows:

1. CC-BY-3.0 (250, of which one was specific to the Australian jurisdiction)
2. CC-BY-4.0 (3)
3. CC-BY-NC-4.0 (1)
4. CC-BY-NC-ND-2.0 (1)

In this light, I am amending the results above and removing the bogus “Creative Commons” entry. Apart from removing that entry, it changes neither the ranking nor the counts of the top 5 licenses.

hook out → not proud of the method, but happy with having results

1. This should come as no surprise, as Apache-2.0 is used as the official specification’s example.
2. At the time of this writing, that was commit 506133b.
3. I tried it also with 3 lines, and the few extra results that came up were mostly useless.
4. I did a quick check and the repository seems to include no OpenAPIs in JSON format.
5. I expected for license in api_licenses_unique to work, but it did not.
6. The result of wc -l api_licenses_cc was 255.
7. Prior to version 4.0 of the Creative Commons licenses, each CC license had several versions localised for specific jurisdictions.

### Tuesday, 09 July 2019

## Beware of some of the Qt 5.13 deprecation porting hints

QComboBox::currentIndexChanged(QString) used to have (i.e. in Qt 5.13.0) a deprecation warning that said "Use currentTextChanged() instead".
That has recently been reverted, since the two are not totally equivalent. Sure, you can probably "port" from one to the other, but the "use" wording to me implies "this is the same", and they are not.

Another one of those is QPainter::initFrom, which initialises a painter with the pen, background and font of the given widget. This is deprecated, probably rightfully so ("what is the pen of a widget?"), but the deprecation warning says "Use begin(QPaintDevice*)" – and again, if you look at the implementation, they don't really do the same thing.

I still need to find time to complain to the Qt developers and get it fixed.

Anyhow, as usual: when porting, make sure you do a correct port and not just blind changes.

### Monday, 08 July 2019

## Repair a Faulty Disk in Raid-5

Quick notes.

## Identify slow disk

# hdparm -Tt /dev/sda

/dev/sda:
 Timing cached reads:   2502 MB in  2.00 seconds = 1251.34 MB/sec
 Timing buffered disk reads: 538 MB in  3.01 seconds = 178.94 MB/sec

# hdparm -Tt /dev/sdb

/dev/sdb:
 Timing cached reads:   2490 MB in  2.00 seconds = 1244.86 MB/sec
 Timing buffered disk reads: 536 MB in  3.01 seconds = 178.31 MB/sec

# hdparm -Tt /dev/sdc

/dev/sdc:
 Timing cached reads:   2524 MB in  2.00 seconds = 1262.21 MB/sec
 Timing buffered disk reads: 538 MB in  3.00 seconds = 179.15 MB/sec

# hdparm -Tt /dev/sdd

/dev/sdd:
 Timing cached reads:   2234 MB in  2.00 seconds = 1117.20 MB/sec
 Timing buffered disk reads: read(2097152) returned 929792 bytes

## Set disk to Faulty State and Remove it

# mdadm --manage /dev/md0 --fail /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md0

# mdadm --manage /dev/md0 --remove /dev/sdd
mdadm: hot removed /dev/sdd from /dev/md0

## Verify Status

# mdadm --verbose --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Feb  6 15:06:34 2014
     Raid Level : raid5
     Array Size : 2929893888 (2794.16 GiB 3000.21 GB)
  Used Dev Size : 976631296 (931.39 GiB 1000.07 GB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Mon Jul  8 00:51:14 2019
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : ServerOne:0  (local to host ServerOne)
           UUID : d635095e:50457059:7e6ccdaf:7da91c9b
         Events : 18122

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       6       8       32        1      active sync   /dev/sdc
       4       0        0        4      removed
       4       8        0        3      active sync   /dev/sda

## Format Disk

• a quick format to identify bad blocks,
• a better solution is zeroing the disk

# mkfs.ext4 -cc -v /dev/sdd

• a middle ground is to use -c

-c  Check the device for bad blocks before creating the file system. If this option is specified twice, then a slower read-write test is used instead of a fast read-only test.

# mkfs.ext4 -c -v /dev/sdd

output:

Running command: badblocks -b 4096 -X -s /dev/sdd 244190645
Checking for bad blocks (read-only test):   9.76% done, 7:37 elapsed

### Remove ext headers

# dd if=/dev/zero of=/dev/sdd bs=4096 count=4096

Using dd to remove any ext headers.

## Test disk

# hdparm -Tt /dev/sdd

/dev/sdd:
 Timing cached reads:   2174 MB in  2.00 seconds = 1087.20 MB/sec
 Timing buffered disk reads: 516 MB in  3.00 seconds = 171.94 MB/sec

## Add Disk to Raid

# mdadm --add /dev/md0 /dev/sdd
mdadm: added /dev/sdd

## Speed

# hdparm -Tt /dev/md0

/dev/md0:
 Timing cached reads:   2480 MB in  2.00 seconds = 1239.70 MB/sec
 Timing buffered disk reads: 1412 MB in  3.00 seconds = 470.62 MB/sec

## Status

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd[5] sda[4] sdc[6] sdb[0]
      2929893888 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UU_U]
      [>....................]  recovery =  0.0% (44032/976631296) finish=369.5min speed=44032K/sec

unused devices: <none>

## Verify Raid

# mdadm --verbose --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Feb  6 15:06:34 2014
     Raid Level : raid5
     Array Size : 2929893888 (2794.16 GiB 3000.21 GB)
  Used Dev Size : 976631296 (931.39 GiB 1000.07 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Jul  8 00:58:38 2019
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 0% complete

           Name : ServerOne:0  (local to host ServerOne)
           UUID : d635095e:50457059:7e6ccdaf:7da91c9b
         Events : 18244

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       6       8       32        1      active sync   /dev/sdc
       5       8       48        2      spare rebuilding   /dev/sdd
       4       8        0        3      active sync   /dev/sda

Tag(s): mdadm, raid5

### Sunday, 07 July 2019

## sRGB↔XYZ conversion

In an earlier post, I’ve shown how to calculate an sRGB↔XYZ conversion matrix. It’s only natural to follow up with code for converting between the sRGB and XYZ colour spaces. While the matrix is a significant portion of the algorithm, there is one more step necessary: gamma correction.

## What is gamma correction?

Human perception of light’s brightness approximates a power function of its intensity. This can be expressed as $$P = S^\alpha$$ where $$P$$ is the perceived brightness and $$S$$ is linear intensity. $$\alpha$$ has been experimentally measured to be less than one, which means that people are more sensitive to changes in dark colours than in bright ones.

Based on that observation, a colour space’s encoding can be made more efficient by using higher precision when encoding dark colours and lower precision when encoding bright ones. This is akin to the precision of floating point numbers scaling with a value’s magnitude. In RGB systems, this precision scaling is done by gamma correction.
When colour is captured (for example by a digital camera) it goes through gamma compression, which spaces dark colours apart and packs lighter colours more densely. When an image is displayed, the opposite happens and the encoded value goes through gamma expansion.

Many RGB systems use a simple $$S = E^\gamma$$ expansion formula, where $$E$$ is the encoded (or non-linear) value. With the decoding $$\gamma$$ approximating $$1/\alpha$$, equal steps in encoding space correspond roughly to equal steps in perceived brightness.

The image on the right demonstrates this by comparing two colour gradients. The first one has been generated by increasing the encoded value in equal steps, while the second one has been created by doing the same to light intensity. The former includes many dark colours while the latter contains a sudden jump in brightness from black to the next colour.

sRGB uses a slightly more complicated formula, stitching together two functions:

\begin{align}
  E &= \begin{cases}
    12.92 × S & \text{if } S ≤ S_0 \\
    1.055 × S^{1/2.4} - 0.055 & \text{otherwise}
  \end{cases} \\[0.5em]
  S &= \begin{cases}
    E / 12.92 & \text{if } E ≤ E_0 \\
    ((E + 0.055) / 1.055)^{2.4} & \text{otherwise}
  \end{cases} \\[0.5em]
  S_0 &= 0.00313066844250060782371 \\
  E_0 &= 12.92 × S_0 \\
      &= 0.04044823627710785308233
\end{align}

The formulas assume values are normalised to the [0, 1] range. This is not always how they are expressed, so a scaling step might be necessary.

## sRGB encoding

The most common sRGB encoding uses eight bits per channel, which introduces a scaling step: $$E_8 = ⌊E × 255⌉$$. In an actual implementation, to increase efficiency and accuracy of the gamma operations, it’s best to fuse the multiplication into the aforementioned formulas.
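As a quick worked example (my own addition, not from the original post), encoding linear mid-grey $$S = 0.5$$ with the compression formula above gives:

\begin{align}
  E   &= 1.055 × 0.5^{1/2.4} - 0.055 ≈ 0.7354 \\
  E_8 &= ⌊0.7354 × 255⌉ = 188
\end{align}

which matches the well-known fact that 50% linear intensity encodes to roughly 188 in 8-bit sRGB.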
With that arguably obvious optimisation, the equations become:

\begin{align}
  E_8 &= \begin{cases}
    ⌊3294.6 × S⌉ & \text{if } S ≤ S_0 \\
    ⌊269.025 × S^{1/2.4} - 14.025⌉ & \text{otherwise}
  \end{cases} \\[0.5em]
  S &= \begin{cases}
    E_8 / 3294.6 & \text{if } E_8 ≤ 10 \\
    ((E_8 + 14.025) / 269.025)^{2.4} & \text{otherwise}
  \end{cases}
\end{align}

This isn’t the only way to represent colours, of course. For example, 10-bit colour depth changes the scaling factor to 1023; 16-bit high colour uses five bits for the red and blue channels and five or six for green, producing different scaling factors for different primaries; and HDTV caps the range to [16, 235]. Needless to say, the correct formulas need to be chosen based on the standard in question.

## The implementation

And that’s it. Encoding, gamma correction and the conversion matrix are all the necessary pieces to get the conversion implemented. To keep things interesting, let’s write the code in TypeScript this time:

type Tripple = [number, number, number];
type Matrix = [Tripple, Tripple, Tripple];

/**
 * A conversion matrix from linear sRGB colour space with coordinates
 * normalised to [0, 1] range into an XYZ space.
 */
const xyzFromRgbMatrix: Matrix = [
  [0.4123865632529917,   0.35759149092062537, 0.18045049120356368],
  [0.21263682167732384,  0.7151829818412507,  0.07218019648142547],
  [0.019330620152483987, 0.11919716364020845, 0.9503725870054354]
];

/**
 * A conversion matrix from XYZ colour space to a linear sRGB space with
 * coordinates normalised to [0, 1] range.
 */
const rgbFromXyzMatrix: Matrix = [
  [ 3.2410032329763587,   -1.5373989694887855,  -0.4986158819963629],
  [-0.9692242522025166,    1.875929983695176,    0.041554226340084724],
  [ 0.055639419851975444, -0.20401120612390997,  1.0571489771875335]
];

/**
 * Performs an sRGB gamma expansion of an 8-bit value, i.e. an integer in
 * [0, 255] range, into a floating point value in [0, 1] range.
 */
function gammaExpansion(value255: number): number {
  return value255 <= 10
    ? value255 / 3294.6
    : Math.pow((value255 + 14.025) / 269.025, 2.4);
}

/**
 * Performs an sRGB gamma compression of a floating point value in [0, 1]
 * range into an 8-bit value, i.e. an integer in [0, 255] range.
 */
function gammaCompression(linear: number): number {
  let nonLinear: number = linear <= 0.00313066844250060782371
    ? 3294.6 * linear
    : 269.025 * Math.pow(linear, 5.0 / 12.0) - 14.025;
  return Math.round(nonLinear) | 0;
}

/**
 * Multiplies a 3✕3 matrix by a 3✕1 column matrix. The result is another 3✕1
 * column matrix. The column matrices are represented as single-dimensional
 * 3-element arrays. The matrix is represented as a two-dimensional array of
 * rows.
 */
function matrixMultiplication3x3x1(matrix: Matrix, column: Tripple): Tripple {
  return matrix.map((row: Tripple) => (
    row[0] * column[0] + row[1] * column[1] + row[2] * column[2]
  )) as Tripple;
}

/**
 * Converts sRGB colour given as a triple of 8-bit integers into XYZ colour
 * space.
 */
function xyzFromRgb(rgb: Tripple): Tripple {
  return matrixMultiplication3x3x1(
    xyzFromRgbMatrix, rgb.map(gammaExpansion) as Tripple);
}

/**
 * Converts colour from XYZ space to sRGB colour represented as a triple of
 * 8-bit integers.
 */
function rgbFromXyz(xyz: Tripple): Tripple {
  return matrixMultiplication3x3x1(
    rgbFromXyzMatrix, xyz).map(gammaCompression) as Tripple;
}

### Wednesday, 03 July 2019

## Down the troubleshooting rabbit-hole

## Hardware Details

HP ProLiant MicroServer
AMD Turion(tm) II Neo N54L Dual-Core Processor
Memory Size: 2 GB - DIMM Speed: 1333 MT/s
Maximum Capacity: 8 GB

Running 24×7 since 23/08/2010, so nine years!

## Prologue

The above server started its life on CentOS 5 and ext3. It was later re-formatted to run CentOS 6.x with ext4 on 4 x 1TB OEM hard disks with mdadm raid-5. That provided 3 TB of storage with fault tolerance of one drive failure. And believe me, I used that setup for zeroing broken disks or replacing faulty disks.
As we are reaching the end of CentOS 6.x, there is no official dist-upgrade path for CentOS, and CentOS 8.x is still pending, I made the decision to switch to Ubuntu 18.04 LTS. This would be the 3rd official OS re-installation of this server. I chose Ubuntu so that I can dist-upgrade from LTS to LTS.

This is a backup server; no need for huge RAM, but for a reliable system. On that storage I have 2m files that in retrospect are not very big. So with the re-installation I chose to use the xfs filesystem instead of ext4. I am also running an internal snapshot mechanism to have a delta for every day, and that pushed the storage usage to 87% of the 3Tb. If you do the math: 2m files is about 1.2Tb of usage; we need a full initial backup, so 2.4Tb (80%); and then the daily (rotating) incremental backups are ~210Mb per day. That gave me space for five (5) daily snapshots, aka a work-week.

To remove this impediment, I also replaced the disks with WD Red Pro 6TB 7200rpm disks, and use raid-1 instead of raid-5. Usage is now ~45%.

## Problem

### Frozen System

From time to time, this very new, very clean, very reliable system froze to death!

With a monitor and keyboard attached there was no output. Strangely enough, I can ping the network interfaces, but I can not ssh to the server or even telnet (nc) to the ssh port. Awkward! Okay – hardware cold reboot then. As this system is remote … at random times I need to ask someone to cold-reboot this machine. Awkward again.

### Kernel Panic

If that was not enough, this machine also has random kernel panics.

## Errors

Let’s start troubleshooting this system:

# journalctl -p 3 -x

### Important Errors

ERST: Failed to get Error Log Address Range.
APEI: Can not request [mem 0x7dfab650-0x7dfab6a3] for APEI BERT registers
ipmi_si dmi-ipmi-si.0: Could not set up I/O space

and more important errors:

INFO: task kswapd0:40 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task xfsaild/dm-0:761 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task kworker/u9:2:3612 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task kworker/1:0:5327 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task rm:5901 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task kworker/u9:1:5902 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task kworker/0:0:5906 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task kswapd0:40 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task xfsaild/dm-0:761 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task kworker/u9:2:3612 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

First impressions?

## BootOptions

After a few (hours of) internet research, the suggestion is to disable ACPI and APIC:

• ACPI stands for Advanced Configuration and Power Interface.
• APIC stands for Advanced Programmable Interrupt Controller.
This site is very helpful for ubuntu, although Red Hat still has a huge advantage over Canonical in describing kernel options.

### Grub

# vim /etc/default/grub

GRUB_CMDLINE_LINUX="noapic acpi=off"

then

# update-grub
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/50-curtin-settings.cfg'
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.15.0-54-generic
Found initrd image: /boot/initrd.img-4.15.0-54-generic
Found linux image: /boot/vmlinuz-4.15.0-52-generic
Found initrd image: /boot/initrd.img-4.15.0-52-generic
done

Verify:

# grep noapic /boot/grub/grub.cfg | head -1
linux /boot/vmlinuz-4.15.0-54-generic root=UUID=0c686739-e859-4da5-87a2-dfd5fcccde3d ro noapic acpi=off maybe-ubiquity

reboot and check again:

# journalctl -p 3 -xb
-- Logs begin at Thu 2019-03-14 19:26:12 EET, end at Wed 2019-07-03 21:31:08 EEST. --
Jul 03 21:30:49 servertwo kernel: ipmi_si dmi-ipmi-si.0: Could not set up I/O space

okay !!!

## ipmi_si

Unfortunately I could not find anything useful regarding:

# dmesg | grep -i ipm
[   10.977914] ipmi message handler version 39.2
[   11.188484] ipmi device interface
[   11.203630] IPMI System Interface driver.
[   11.203662] ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
[   11.203665] ipmi_si: SMBIOS: mem 0x0 regsize 1 spacing 1 irq 0
[   11.203667] ipmi_si: Adding SMBIOS-specified kcs state machine
[   11.203729] ipmi_si: Trying SMBIOS-specified kcs state machine at mem address 0x0, slave address 0x20, irq 0
[   11.203732] ipmi_si dmi-ipmi-si.0: Could not set up I/O space

# ipmitool list
Could not open device at /dev/ipmi0 or /dev/ipmi/0 or /dev/ipmidev/0: No such file or directory

# lsmod | grep -i ipmi
ipmi_si                61440  0
ipmi_devintf           20480  0
ipmi_msghandler        53248  2 ipmi_devintf,ipmi_si

## blocked for more than 120 seconds.

But let’s try to fix the timeout warnings:

INFO: task kswapd0:40 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message

If you search for the above message online, most of the sites will suggest tweaking the dirty pages for your system. This is the most common response across different sites:

This is a known bug. By default Linux uses up to 40% of the available memory for file system caching. After this mark has been reached the file system flushes all outstanding data to disk, causing all following IOs to go synchronous. For flushing out this data to disk there is a time limit of 120 seconds by default. In the case here the IO subsystem is not fast enough to flush the data within 120 seconds. This especially happens on systems with a lot of memory.

Okay, this may be the problem, but we do not have a lot of memory; only 2GB RAM and 2GB swap. Even then, our vm.dirty_ratio = 20 setting is 20% instead of 40%.

But I have the ability to cross-check ubuntu 18.04 with CentOS 6.10 to compare notes:

### ubuntu 18.04

# uname -r
4.15.0-54-generic

# sysctl -a | egrep -i 'swap|dirty|raid' | sort
dev.raid.speed_limit_max = 200000
dev.raid.speed_limit_min = 1000
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirtytime_expire_seconds = 43200
vm.dirty_writeback_centisecs = 500
vm.swappiness = 60

### CentOS 6.11

# uname -r
2.6.32-754.15.3.el6.centos.plus.x86_64

# sysctl -a | egrep -i 'swap|dirty|raid' | sort
dev.raid.speed_limit_max = 200000
dev.raid.speed_limit_min = 1000
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
vm.swappiness = 60

### Scheduler for Raid

This is the best online documentation on optimizing raid.

Comparing notes, we see that both systems have the same settings, even though the kernel versions are very different: 2.6.32 Vs 4.15.0 !!!
Researching raid optimization, there is a note on the kernel scheduler.

### Ubuntu 18.04

# for drive in {a..c}; do cat /sys/block/sd${drive}/queue/scheduler; done

noop deadline [cfq] 

### CentOS 6.11

# for drive in {a..d}; do cat /sys/block/sd${drive}/queue/scheduler; done

noop anticipatory deadline [cfq]
noop anticipatory deadline [cfq]
noop anticipatory deadline [cfq]
noop anticipatory deadline [cfq]

## Anticipatory scheduling

CentOS supports anticipatory scheduling on the hard disks, but nowadays the anticipatory scheduler is not supported in modern kernel versions. That said, from the above output we can verify that both systems are running the default scheduler, cfq.

## Disks

### Ubuntu 18.04

• Western Digital Red Pro WDC WD6003FFBX-6

# for i in sd{b..c} ; do hdparm -Tt /dev/$i; done

/dev/sdb:
Timing cached reads:   2344 MB in  2.00 seconds = 1171.76 MB/sec
Timing buffered disk reads: 738 MB in  3.00 seconds = 245.81 MB/sec

/dev/sdc:
Timing cached reads:   2264 MB in  2.00 seconds = 1131.40 MB/sec
Timing buffered disk reads: 774 MB in  3.00 seconds = 257.70 MB/sec

### CentOS 6.11

• Seagate ST1000DX001
/dev/sdb:
Timing cached reads:   2490 MB in  2.00 seconds = 1244.86 MB/sec
Timing buffered disk reads: 536 MB in  3.01 seconds = 178.31 MB/sec

/dev/sdc:
Timing cached reads:   2524 MB in  2.00 seconds = 1262.21 MB/sec
Timing buffered disk reads: 538 MB in  3.00 seconds = 179.15 MB/sec

/dev/sdd:
Timing cached reads:   2452 MB in  2.00 seconds = 1226.15 MB/sec
Timing buffered disk reads: 546 MB in  3.01 seconds = 181.64 MB/sec

## So what am I missing?

My initial feeling was that the problem was the low memory. But after running a manual rsync, I realized that:

### cpu

was load average: 0.87, 0.46, 0.19

### mem

under high load, when usage hit 40% of RAM, the system started to use swap:

KiB Mem :  2008464 total,    77528 free,   635900 used,  1295036 buff/cache
KiB Swap:  2097148 total,  2096624 free,      524 used.  1184220 avail Mem 

So I tweaked the swappiness a bit and reduced it from 60% to 40%

and ran a local snapshot (which is a bit heavy on the disks) while doing an upgrade, trying to increase the CPU load. Still, everything is fine!
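For reference, the change itself is just a couple of commands; a sketch of how such a sysctl tweak can be applied and persisted (the file name under /etc/sysctl.d is my own choice, not from the original post):

```shell
# Inspect the current value (Ubuntu defaults to 60)
sysctl vm.swappiness

# Lower it for the running system only
sudo sysctl -w vm.swappiness=40

# Persist the setting across reboots
echo 'vm.swappiness = 40' | sudo tee /etc/sysctl.d/99-swappiness.conf
```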

I will keep an eye on this story.

## Usability & Productivity Sprint 2019

I [partially, only 2 days out of the 7] attended the Usability & Productivity Sprint 2019 in Valencia two weekends ago.

I was very happy to meet quite some new developer blood, which is something we had been struggling a bit to get lately, so we're starting to get on the right track again :) And I can only imagine it'll get better and better due to the "Onboarding" goal :)

During the sprint we had an interesting discussion about how to get more people to know about usability, and the outcome is that probably we'll try to get some training to members of KDE to increase the knowledge of usability amongst us. Sounds like a good idea to me :)

On the more "what did *you* actually do" side:
* worked on fixing a crash I had in the touchpad kded (already available in the latest released Plasma!)
* finished part of the implementation for Optional Content Group Links support in Okular (I started that 3 years ago and I was almost sure I had done all the work, but I clearly had not)
* did some code reviews on existing Okular phabricator merge requests (so sad I'm still behind though, we need more people reviewing there other than me)
* Together with Nicolas Fella, worked on allowing extra fields from json files to be translated; we even documented it!
* Changed lots of applications released on KDE Applications to follow the KDE Applications versioning scheme, the "winner" was kmag, that had been stuck in version 1.0 for 15 years (and had more than 440 commits since then)
* Fixed a small issue with i18n in kanagram

I would like to thank SLIMBOOK for hosting us in their offices (and providing a shuttle from them to the hotel) and the KDE e.V. for sponsoring my attendance to the sprint, please donate to KDE if you think the work done at sprints is important.

## Dealing with Colors in lower Android versions

I’m currently working on a project which requires me to dynamically set the color of certain UI elements to random RGB values.

Unfortunately and surprisingly, handy methods for dealing with colors in Android are only available since API level 26 (Android O). Luckily though, the developer reference specifies how colors are encoded internally in the Android operating system, so I was able to create a class with the most important color-related methods (for my use case) that works in lower Android versions as well.

I hope I can save some of you from headaches by sharing the code. Feel free to reuse it as you please.

Happy Hacking!

PS: Is there a way to create Github-like gists with Gitea?

## KDE Applications 19.08 Schedule finalized

It is available at the usual place https://community.kde.org/Schedules/Applications/19.08_Release_Schedule

Dependency freeze is in two weeks (July 11) and Feature Freeze a week after that, so make sure you start finishing your stuff!

P.S: Remember, the last day to apply for Akademy Travel Support is this Sunday, June 30!

## A good firewall for a small network

In this article I will outline the setup of my (not so) new firewall at home. I explain how I decided which hardware to get and which software to choose, and I cover the entire process of assembling the machine and installing the operating system. Hopefully this will be helpful to people in similar situations.

# Introduction

While the ability of firewalls to protect against all the evils of the internets is certainly exaggerated, there are some important use cases for them: you want to prevent certain inbound traffic and manipulate certain outbound traffic, e.g. route it through a VPN.
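To make the inbound/outbound distinction concrete, here is a tiny rule sketch in pf syntax (pf is the packet filter of OPNSense's FreeBSD base; the interface names, subnet and VPN gateway below are made up for illustration — OPNSense normally manages its rules through the web UI):

```
# Illustrative pf.conf fragment (hypothetical names: re0 = WAN, re1 = LAN, tun0 = VPN)
block in on re0 all                       # drop unsolicited inbound traffic on the WAN
pass in on re1 from 192.168.1.0/24 to any \
    route-to (tun0 10.8.0.1)              # push selected LAN traffic into the VPN tunnel
```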

For a long time I used my home server (whose main purpose is network attached storage) to also do some basic routing and VPN, but this had a couple of important drawbacks:

• Just one NIC on the server meant traffic to/from the internet wasn’t physically required to go through the server.
• Less reliable due to more complex setup → longer downtimes during upgrades, higher chance of failure due to hard drives.
• I wouldn’t give someone else the root password to my data storage, but I did want my flat-mates to be able to reset and configure basic network components that they depend on (Router/Port-forwarding and WiFi).
• I wanted to isolate the ISP-provided router more strongly from the LAN as they have a history of security vulnerabilities.

The different off-the-shelf routers I had used over the years had also worked only so-so (even those that were customisable) so I decided I needed a proper router. Since WiFi access was already out-sourced to dedicated devices I really only needed a filtering and routing device.

# Hardware

## Board & CPU

The central requirements for the device were:

• low energy consumption
• enough CPU power to route traffic at Gbit-speed, run Tor and OpenVPN (we don’t have Gbit/s internet in Berlin, yet, but I still have hopes for the future)
• hardware crypto support to unburden the CPU for crypto tasks
• two NICs, one for the LAN and one for the WAN

I briefly thought about getting an ARM-based embedded board, but most reviews suggested that the performance wouldn’t be enough to satisfy my requirements and also the *BSD support was mixed at best and I didn’t want to rule out running OpenBSD or FreeBSD.

Back to x86-land: I had previously used PC Engines ALIX boards as routers and was really happy with them at the time. Their new APU boards promised better performance, but thanks to the valuable feedback and some benchmarking done by the community over at BSDForen.de, I came to the conclusion that they wouldn’t be able to push more than 200Mbit/s through an OpenVPN tunnel.

In the end I decided on the Gigabyte J3455N-D3H displayed at the top. It sports a rather atypical Intel CPU (Celeron J3455) with

• four physical cores @ 1.5 GHz
• AES-NI support
• 10W TDP

Having four actual cores (instead of 2 cores + hyper threading) is pretty cool now that many security-minded operating systems have started deactivating hyper threading to mitigate CPU bugs [OpenBSD] [HardenedBSD]. And the power consumption is also quite low.

I would have liked for the two NICs on the mainboard to be from Intel, but I couldn’t find a mainboard at the time that offered this (other than super-expensive SuperMicro boards). At least the driver support on modern Realteks is quite good.

## Storage & Memory

The board has two memory slots and supports a maximum of 4GiB each. I decided 4GiB are enough for now and gave it one module to allow for future extensions (I know that’s suboptimal for speed).

Storage-wise I originally planned on putting a left-over SATA-SSD into the case, but in the end, I decided a tiny USB3-Stick would provide sufficient performance and be much easier to replace/debug/…

## Case & Power

Since I installed a real 19″ rack in my new flat, of course the case for the firewall would have to fit nicely into it. I had a surprisingly difficult time finding a good case, because I wanted one where the board’s ports would be front-facing. That seems to be quite a rare requirement, although I really don’t understand why. Obviously having the network ports, serial ports and USB ports at the front makes changing the setup and debugging so much easier ¯\_(ツ)_/¯

I also couldn’t find a good power supply for such a low-power device, but I still had a 60W PicoPSU supply lying around.

Even though it came with an overpowered PSU and a proprietary IO-Shield (more on that below), I decided on the SuperMicro SC505-203B. It really does look quite good, I have to say!

## Assembly

Mounting the mainboard in the case is pretty straight-forward. The biggest issue was the aforementioned proprietary I/O-Shield that came with the SuperMicro case (and was designed only for SuperMicro boards). It was possible to remove it; however, the resulting open space did not conform to the ATX spec, so it wasn’t possible to just fit the Gigabyte board’s shield into it.

I quickly took the measurements and started cutting away at the shield to make it fit. This worked ok-ish in the end, but it is more dangerous than it looks (be smarter than me, wear gloves ☝ ). In retrospect I also recommend that you do not remove the bottom fold on the shield, only left, right and top; that will make it hold a lot better in the case opening.

The board can be fitted into the case using standard screws in the designated places. As mentioned above, I removed the original (actively cooled) power supply unit and used the 60W PicoPSU that I had lying around from before. Since it doesn’t have the 4-pin CPU cable, I had to improvise. There are adaptors for this, but if you have a left-over power supply, you can also tape something together. I also put the transformer into the case (duct tape, yeah!) so that one can plug in the power cord from the back of the case as usual.

# Software

## Choice

There are many operating systems I could have chosen since I decided to use an x86 platform. My criteria were:

• free software (obviously)
• intuitive web user interface to do at least the basic things
• possibility to login via SSH if things don’t go as planned
• OpenVPN client

I feel better with operating systems based on FreeBSD or OpenBSD, mainly because I have more experience with them than with GNU/Linux distributions nowadays. In previous flats I had also used OpenWRT and dd-wrt based routers, but whenever I needed to tweak something beyond what the web interface offered, it got really painful. In general the whole IPtables based stack on Linux seems overly complicated, but maybe that’s just me.

In any case, there are no OpenBSD-based router operating systems with web interfaces (that I am aware of) so I had the choice between

1. pfsense (FreeBSD-based)
2. OPNSense, fork of pfsense, based on HardenedBSD / FreeBSD

There seem to be historic tensions between the people involved in both operating systems and I couldn’t find out if there were actual distinctions in the goals of the projects. In the end, I asked other people for recommendations and found the interface and feature list of OPNSense more convincing. Also, being based on HardenedBSD sounds good (although I am not sure if the HardenedBSD specifics will ever really matter on the router).

Initially I had some issues with the install, but the OPNSense people were super friendly and responded immediately. Also, the interface was a lot better than I expected, so I am quite sure I made the right decision.

## Install

Setup is very easy:

1. Go to https://opnsense.org/download/, select amd64 and nano and download the image.
2. Unzip the image (easy to forget this).
3. Write the image to the USB-stick with dd (as always with dd: be careful about the target device!)
4. Optionally plug a serial cable into the top serial port (the mainboard has two) and connect your Laptop/Desktop with baud rate 115200
5. Plug the USB-stick into the firewall and boot it up.

There will be some beeping when you start the firewall. Some of this is due to the mainboard complaining that no keyboard is attached (can be ignored) and also OPNSense will play a melody when it is booted. If you are attached to the serial console you can select which interface will be WAN and which will be LAN (and their IP addresses). Otherwise you might need to plug around the LAN cables a bit to find out which is configured as which.

When I built this last year there were some more issues, but all of them have been resolved by the OPNSense people so it really is “plug’n’play”; I verified by doing a re-install!

## Post-install

Go to the configured IP-address (192.168.1.1 by default) and login (root: opnsense by default). If the web-interface comes up everything has worked fine and you can disconnect serial console and do the rest via the web-interface.

After login, I would do the following:

• activate SSH on the LAN interface
• configure internet access and DHCP
• setup any of the other services you want

For me setting up the internet meant doing a “double-NAT” with the ISP-provided router, because I need its modem and nowadays it seems impossible to get a stand-alone VDSL modem. If you do something similar just configure internet as being over DHCP.

If you want hardware accelerated SSL (also OpenVPN), go to System → Firmware → Setting and change the firmware flavour to OpenSSL (instead of LibreSSL). After that check for updates and upgrade. In the OpenVPN profile, under Hardware Crypto, you can now select Intel RDRAND engine - RAND.

Take your time to look through the interface! I found some pretty cool things like automatic backup of the configuration to a nextcloud server! The entire config of the firewall rests in one file so it’s really easy to setup a clean system from scratch.

All-in-all I am very happy with the system. Even though my setup is non-trivial, with only selected outgoing traffic going through the VPN (based on rules), I never had to get my hands dirty on the command line – everything can be done through the Web-UI.

## Information stalls at Linux Week and Veganmania in Vienna

Linux Weeks in Vienna 2018

Veganmania at MQ in Vienna 2018

Linux Weeks in Vienna 2019

Veganmania at MQ in Vienna 2019

Veganmania at MQ in Vienna 2019

As has been tradition for many years now, this year too saw the Viennese FSFE volunteers’ group hold information stalls at the Linuxwochen event and Veganmania in Vienna. Even though the active team has shrunk due to former activists moving away, having children or simply having very demanding jobs, we have still managed to keep up these information stalls in 2019.

## Linux Weeks Vienna 2019

The information stall at the Linux weeks event in May was somewhat limited due to the fact that we didn’t get our usual posters and the roll-up in time. Unfortunately we discovered too late that they had obviously been lent out for another event and hadn’t been returned afterwards. So we could only use our information material. But since the FSFE is very well known at this event, it wasn’t hard at all to run our usual information stall. It’s less about outreach work and more of a who-is-who of the free software community in Vienna anyway. For three days we met old friends and networked. Of course some newbies found their way to the event too, and therefore we could spread our message a little further as well.

In addition, we once again provided well-attended workshops on Inkscape and GIMP. The little talk on the free rally game Trigger Rally even motivated a dedicated Fedora maintainer in attendance to create an up-to-date .rpm package in order to enable distribution of the most recent release to rpm distros.

## Veganmania MQ Vienna 2019

The Veganmania at the Museums Quartier in Vienna is growing bigger every year. In 2019 it took place from 7th to 10th of June. Despite us having a less frequented spot with our information stall at the event due to construction work, it again was a full-blown success. Over the four days in perfect weather, the stall was visited by loads of people. There were times when we were stretched to give some visitors the individual attention they might have wanted. But I think in general we were able to provide almost all people with valuable insights and new ideas for their everyday computing. Once again Veganmania proved to be a very good setting for our FSFE information stall. It is always very rewarding to experience people getting a glimpse for the first time of how they could emancipate themselves from proprietary domination. Our down-to-earth approach seems to be the right way to go.

We do not only explain ethical considerations but also appeal to people’s self-interest concerning independence, reliability and free speech. Edward Snowden’s and WikiLeaks’ revelations clearly show how vulnerable we make ourselves by blindly trusting governments and companies. We describe with practical examples how free software can help us in working together or recovering old files by building on open standards. Of course, pointing to the environmental (and economic) advantages of using old hardware with less resource-hungry free software is a winning argument as well.

### Material

Alongside the introductory Austrian version of the leaflet about the freedoms free software enables (put together as a condensation of RMS’ book Free Software, Free Society), one of our all-time favourite leaflets features 10 popular GNU/Linux distros with just a few words about their defining differences (advantages and disadvantages). I updated the leaflet just a day before the festival. I replaced Linux Mint, openSUSE and gNewSense with the recently even more popular Manjaro, MX Linux and PureOS. I also updated the information on the importance of open standards on the back. We have run out of our end-user business cards for our local association freie.it, which makes knowledgeable people available to others searching for help. Therefore, we decided to use the version we originally designed for inviting experts to the platform. It obviously was wrong to order the same amount of cards for both groups. Our selection of information material seems to work well as an invitation for people to give free software a try. It probably also feels like a safeguard that people can contact me if they want my support – or that of someone else listed on freie.it.

### Experiences

The first day was rather windy and we had to carefully manage our material if we didn’t want our leaflets flying all over the place. In the very early morning of the second day the wind was so strong that some tents were blown away and destroyed. There was even a storm warning which could have forced the organisers to cancel the event. Fortunately our material was well stored and the wind died down over the day. We also had to hold on firmly to our sunshade because it was very hot, but besides that everything went fine.

It was just coincidence that Richard Matthew Stallman had a talk in Vienna on the evening of the first day of the Veganmania street festival. So at least one of us could take this rare opportunity to see RMS at a live talk while the other carried on with manning the information stall.

As we didn’t have our posters at Linuxwochen, we investigated where they were and got them sent to us via snail mail just in time. We didn’t only get our posters but merchandise too. This was a premiere for our stall. It was clear from the beginning that we wouldn’t sell many shirts since most designs assume prior knowledge of IT-related concepts like binary counting. The general public doesn’t seem very aware of such details and people don’t even get the joke. (If we had had the same merchandise at the Linuxwochen we probably would have sold at least as many items despite having reached a much smaller crowd there.)

## Outlook and thanks

There will be another information stall at the second Veganmania in Vienna this year, which takes place at the end of August. The whole setting there is a little different as there isn’t a shopping street nearby; instead, the location is in the heavily frequented recreational area of Vienna’s Danube Island. Just like last year, it should be a good place for chatting about free software, as long as the weather is on our side.

I want to thank Martin for his incredible patience and ongoing dedication manning our stall. He is extremely reliable, always friendly and it is just a real pleasure working with him.

Thanks to kinderkutsche.at, a local place to rent and buy carrier bicycles, we could transport all our information material in a very environmentally friendly way.

# MariaDB Galera Cluster on Ubuntu 18.04.2 LTS

Last Edit: 2019 06 11
Thanks to Manolis Kartsonakis for the extra info.

Official Notes here:

A Galera Cluster is a synchronous multi-master cluster setup. Each node can act as a master. The XtraDB/InnoDB storage engine can sync its data using rsync. Each data transaction gets a global unique ID, and then, using Write Set REPlication (wsrep), the nodes can sync data across each other. When a new node joins the cluster, State Snapshot Transfers (SSTs) synchronize the full data set, while Incremental State Transfers (ISTs) sync only the missing data.

With this setup we can have:

• Data Redundancy
• Scalability
• Availability

## Installation

In Ubuntu 18.04.2 LTS, three packages need to be installed on every node.
So run the below commands on all of the nodes; change your internal IPs accordingly.

as root

# apt -y install mariadb-server
# apt -y install galera-3
# apt -y install rsync

### host file

as root

# echo 10.10.68.91 gal1 >> /etc/hosts
# echo 10.10.68.92 gal2 >> /etc/hosts
# echo 10.10.68.93 gal3 >> /etc/hosts

## Storage Engine

Start MariaDB/MySQL on one node and check the default storage engine. It should be InnoDB:

MariaDB [(none)]> show variables like 'default_storage_engine';

or

echo "SHOW Variables like 'default_storage_engine';" | mysql
+------------------------+--------+
| Variable_name          | Value  |
+------------------------+--------+
| default_storage_engine | InnoDB |
+------------------------+--------+

## Architecture

A Galera Cluster should be behind a Load Balancer (proxy) and you should never talk with a node directly.

## Galera Configuration

Now copy the below configuration file to all 3 nodes:

/etc/mysql/conf.d/galera.cnf
[mysqld]
binlog_format=ROW
default-storage-engine=InnoDB
innodb_autoinc_lock_mode=2

# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

# Galera Cluster Configuration
wsrep_cluster_name="galera_cluster"
wsrep_cluster_address="gcomm://gal1,gal2,gal3"

# Galera Synchronization Configuration
wsrep_sst_method=rsync

# Galera Node Configuration
wsrep_node_address="10.10.68.91"
wsrep_node_name="gal1"

### Per Node

Be careful: the last two lines must change on each node:

### Node 01

# Galera Node Configuration
wsrep_node_address="10.10.68.91"
wsrep_node_name="gal1"

### Node 02

# Galera Node Configuration
wsrep_node_address="10.10.68.92"
wsrep_node_name="gal2"

### Node 03

# Galera Node Configuration
wsrep_node_address="10.10.68.93"
wsrep_node_name="gal3"

## Galera New Cluster

We are ready to create our galera cluster:

galera_new_cluster

or

mysqld --wsrep-new-cluster

### JournalCTL

Jun 10 15:01:20 gal1 systemd[1]: Starting MariaDB 10.1.40 database server...
Jun 10 15:01:24 gal1 sh[2724]: WSREP: Recovered position 00000000-0000-0000-0000-000000000000:-1
Jun 10 15:01:24 gal1 mysqld[2865]: 2019-06-10 15:01:24 139897056971904 [Note] /usr/sbin/mysqld (mysqld 10.1.40-MariaDB-0ubuntu0.18.04.1) starting as process 2865 ...
Jun 10 15:01:24 gal1 /etc/mysql/debian-start[2906]: Upgrading MySQL tables if necessary.
Jun 10 15:01:24 gal1 systemd[1]: Started MariaDB 10.1.40 database server.
Jun 10 15:01:24 gal1 /etc/mysql/debian-start[2909]: /usr/bin/mysql_upgrade: the '--basedir' option is always ignored
Jun 10 15:01:24 gal1 /etc/mysql/debian-start[2909]: Looking for 'mysql' as: /usr/bin/mysql
Jun 10 15:01:24 gal1 /etc/mysql/debian-start[2909]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
Jun 10 15:01:24 gal1 /etc/mysql/debian-start[2909]: This installation of MySQL is already upgraded to 10.1.40-MariaDB, use --force if you still need to run mysql_upgrade
Jun 10 15:01:24 gal1 /etc/mysql/debian-start[2918]: Checking for insecure root accounts.
Jun 10 15:01:24 gal1 /etc/mysql/debian-start[2922]: WARNING: mysql.user contains 4 root accounts without password or plugin!
Jun 10 15:01:24 gal1 /etc/mysql/debian-start[2923]: Triggering myisam-recover for all MyISAM tables and aria-recover for all Aria tables
# ss -at '( sport = :mysql )'

LISTEN               0                     80                                        127.0.0.1:mysql                                      0.0.0.0:*         
# echo "SHOW STATUS LIKE 'wsrep_%';" | mysql  | egrep -i 'cluster|uuid|ready' | column -t
wsrep_cluster_conf_id     1
wsrep_cluster_size        1
wsrep_cluster_status      Primary
wsrep_gcomm_uuid          d67e5b7c-8b90-11e9-ba3d-23ea221848fd
wsrep_ready               ON

### Second Node

systemctl restart mariadb.service
root@gal2:~# echo "SHOW STATUS LIKE 'wsrep_%';" | mysql  | egrep -i 'cluster|uuid|ready' | column -t

wsrep_cluster_conf_id     2
wsrep_cluster_size        2
wsrep_cluster_status      Primary
wsrep_gcomm_uuid          a5eaae3e-8b91-11e9-9662-0bbe68c7d690
wsrep_ready               ON

### Third Node

systemctl restart mariadb.service
root@gal3:~# echo "SHOW STATUS LIKE 'wsrep_%';" | mysql  | egrep -i 'cluster|uuid|ready' | column -t

wsrep_cluster_conf_id     3
wsrep_cluster_size        3
wsrep_cluster_status      Primary
wsrep_gcomm_uuid          013e1847-8b92-11e9-9055-7ac5e2e6b947
wsrep_ready               ON

## Primary Component (PC)

The last node in the cluster (in theory) has all the transactions. That means it should be the first to start next time after a power-off.

## State

cat /var/lib/mysql/grastate.dat

eg.

# GALERA saved state
version: 2.1
seqno:   -1
safe_to_bootstrap: 0

If safe_to_bootstrap: 1, then you can bootstrap this node as Primary.
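To decide which node to bootstrap after a full shutdown, it is handy to compare grastate.dat across all nodes; a sketch (assuming ssh access to the gal* hosts from this article):

```shell
# Show seqno and safe_to_bootstrap on every node; bootstrap the node
# with the highest seqno (and safe_to_bootstrap: 1)
for h in gal1 gal2 gal3; do
  echo "== $h =="
  ssh "$h" "grep -E 'seqno|safe_to_bootstrap' /var/lib/mysql/grastate.dat"
done
```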

## Common Mistakes

Sometimes DBAs want to set up a new cluster (let's say an upgrade to a new schema, not compatible with the previous one), so they want a clean state/directory. The most common way is to move the current mysql directory:

mv /var/lib/mysql /var/lib/mysql_BAK

If you try to start your galera node, it will fail:

# systemctl restart mariadb
WSREP: Failed to start mysqld for wsrep recovery:
[Warning] Can't create test file /var/lib/mysql/gal1.lower-test
Failed to start MariaDB 10.1.40 database server

You need to create and initialize the mysql directory first:

mkdir -pv /var/lib/mysql
chown -R mysql:mysql /var/lib/mysql
chmod 0755 /var/lib/mysql
mysql_install_db -u mysql

On another node, cluster_size = 2

# echo "SHOW STATUS LIKE 'wsrep_%';" | mysql  | egrep -i 'cluster|uuid|ready' | column -t

wsrep_cluster_conf_id     4
wsrep_cluster_size        2
wsrep_cluster_status      Primary
wsrep_gcomm_uuid          a5eaae3e-8b91-11e9-9662-0bbe68c7d690
wsrep_ready               ON

then:

# systemctl restart mariadb

rsync from the Primary:


Jun 10 15:19:00 gal1 rsyncd[3857]: rsyncd version 3.1.2 starting, listening on port 4444
Jun 10 15:19:01 gal1 rsyncd[3884]: connect from gal3 (192.168.122.93)
Jun 10 15:19:01 gal1 rsyncd[3884]: rsync to rsync_sst/ from gal3 (192.168.122.93)
Jun 10 15:19:01 gal1 rsyncd[3884]: receiving file list
#  echo "SHOW STATUS LIKE 'wsrep_%';" | mysql  | egrep -i 'cluster|uuid|ready' | column -t

wsrep_cluster_conf_id     5
wsrep_cluster_size        3
wsrep_cluster_status      Primary
wsrep_gcomm_uuid          12afa7bc-8b93-11e9-88fc-6f41be61a512
wsrep_ready               ON

Be aware: try to keep your DATA directory on a separate storage disk.

A healthy quorum has an odd number of nodes. So when you scale your galera cluster, consider adding two (2) nodes at every step!

# echo 10.10.68.94 gal4 >> /etc/hosts
# echo 10.10.68.95 gal5 >> /etc/hosts

On the new nodes, pick the donor node explicitly in galera.cnf:

wsrep_sst_donor=gal3

After the synchronization:

• comment-out the above line
• restart mysql service and
• put all three nodes behind the Load Balancer

## Split Brain

Find the node with the max

SHOW STATUS LIKE 'wsrep_last_committed';

and set it as master by

SET GLOBAL wsrep_provider_options='pc.bootstrap=YES';
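Scripted across the three nodes from this article, finding the node with the highest wsrep_last_committed could look like this (a sketch; assumes ssh and passwordless mysql access on each host):

```shell
# Collect each node's last committed transaction number and print the
# node with the highest value -- that is the bootstrap candidate
for h in gal1 gal2 gal3; do
  v=$(ssh "$h" "mysql -N -e \"SHOW STATUS LIKE 'wsrep_last_committed';\"" | awk '{print $2}')
  echo "$h $v"
done | sort -k2 -n | tail -1
```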

## Weighted Quorum for Three Nodes

When configuring quorum weights for three nodes, use the following pattern:

node1: pc.weight = 4
node2: pc.weight = 3
node3: pc.weight = 2
node4: pc.weight = 1
node5: pc.weight = 0

eg.

SET GLOBAL wsrep_provider_options="pc.weight=3";

Within the same VPC, setting pc.weight will avoid a split-brain situation. Across different regions, you can set up something like this:

node1: pc.weight = 2
node2: pc.weight = 2
node3: pc.weight = 2
<->
node4: pc.weight = 1
node5: pc.weight = 1
node6: pc.weight = 1

### Friday, 07 June 2019

And you should too!

Akademy is very important to meet people, discuss future plans, learn about new stuff and make friends for life!

Note that this year the recommended accommodation is a bit on the expensive side, so you may want to hurry and apply for Travel Support. The last round is open until July 1st.

### Thursday, 06 June 2019

Akademy-es 2019 will be happening this June 28-30 in Vigo.

The talks were just announced recently.

Check them out at https://www.kde-espana.org/akademy-es-2019/programa-akademy-es-2019. There are lots of interesting talks, so if you understand Spanish and are interested in KDE or Free Software in general, I'd really recommend attending!

## [Some] KDE Applications 19.04.1 also available in flathub

Thanks to Nick Richards we've been able to convince flathub to temporarily accept our old appdata files as still valid. It's a stopgap workaround, but at least it gives us some breathing room. So the updates are coming in as we speak.

## Partition like a pro with fdisk, sfdisk and cfdisk

• Seravo
• 10:44, Friday, 17 May 2019

Most Linux distributions ship the hard drive partition tool fdisk by default. Knowing how to use it is a good skill for every Linux system administrator since having to rescue a system that has disk issues is a very common task. If the admin is faced with a prompt in a rescue mode boot, often fdisk is the only partitioning tool available and must be used, since if the main root filesystem is broken, one cannot install and use any other partitioning tools.

When installing Debian-based systems (e.g. Ubuntu) with the text-mode server installer, keep in mind that you can at any time during the installation process press Ctrl+Alt+F2 to jump to a text console running a limited shell prompt (Busybox) and manipulate the system as you wish, among other things running fdisk. When done, press Ctrl+Alt+F1 to jump back to the installer screen.

In fact, fdisk is not a single utility; the package actually ships three commands together: fdisk, sfdisk and cfdisk.

## fdisk

Most Linux sysadmins have at some point used fdisk, the classic partitioning tool. There is also a tool with the same name in Windows, but it's not the same tool. Across the Unix ecosystem, however, the fdisk tool is nowadays the same one, even on macOS.

To list the current partition layout one can simply run fdisk -l /dev/sda. Below is an example of the output. One can also run fdisk -l /dev/sd* to print the partition info of all sd devices in one go. The fdisk man page lists all the command line parameters available.

If one runs just fdisk it will launch in interactive mode. Pressing m will show the help. To create a new GPT (for modern disks) partition table (resetting any existing partition table) and add a new Linux partition that uses all available disk space, one can simply enter the commands g, n and w in sequence, pressing Enter at all questions to accept their default values.
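The same g, n, w sequence can also be fed to fdisk non-interactively, which is handy in provisioning scripts (a sketch; /dev/sdX is a placeholder and the empty lines accept the defaults — double-check the target device before running this):

```shell
# g = new GPT label, n = new partition (blank lines accept the default
# partition number, first and last sector), w = write the table
printf 'g\nn\n\n\n\nw\n' | sudo fdisk /dev/sdX
```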

## cfdisk

The command cfdisk serves the same purpose as fdisk, with the difference that it provides a slightly fancier user interface based on ncurses, so there are menus one can browse with the arrow and Tab keys without having to remember the single-letter commands fdisk uses.

## sfdisk

The third tool in the suite is sfdisk. It is designed for scripting, enabling administrators to automate partitioning operations.

The key to sfdisk operations is to first dump the current layout using the -d argument, for example sfdisk -d /dev/sda > partition-table. An example output would be:

$ cat partition-table
label: gpt
label-id: AF7B83C8-CE8D-463D-99BF-E654A68746DD
device: /dev/sda
unit: sectors
first-lba: 34
last-lba: 937703054

/dev/sda1 : start=        2048, size=      997376, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B, uuid=F35A875F-1A53-493E-85D4-870A7A749872
/dev/sda2 : start=      999424, size=   936701952, type=A19D880F-05FC-4D3B-A006-743F0F84911E, uuid=725EAB2A-F2E2-475E-81DC-9A5E18E29678

This text file describes the partition types, the layout, and also includes the device UUIDs. The file above can be considered a backup of the /dev/sda partition table. If something later goes wrong with the partition table and this file was saved at some earlier time, one can recover the partition table by running sfdisk /dev/sda < partition-table.

### Copying the partition table to multiple disks

One neat application of sfdisk is that it can be used to copy the partition layout to many devices. Say you have a big server with 16 hard disks. Once you have partitioned the first disk, you can dump its partition table with sfdisk -d and then edit the dump file (remember, it is just a plain-text file) to remove the device name and the UUIDs, which are unique to a specific device and not something you want to clone to other disks. If the initial dump was the example above, the version with the unique identifiers removed would look like this:

label: gpt
unit: sectors
first-lba: 34
last-lba: 937703054

start=        2048, size=      997376, type=0FC63DAF-8483-4772-8E79-3D69D8477DE4
start=      999424, size=   936701952, type=A19D880F-05FC-4D3B-A006-743F0F84911E

This can be applied to another disk, for example /dev/sdb, simply by running sfdisk /dev/sdb < partition-table. Now all the admin needs to do is run the same command a few more times, changing only one character on each invocation.

### Listing device UUIDs with blkid

Keep in mind that the Linux kernel uses the device UUIDs for identifying partitions and file systems. Be wary not to accidentally give two disks the same UUID with sfdisk. Technically it is possible, and maybe useful in a situation where one wants to replace a hard drive and make the new hard drive 100% identical, but in a running system different disks should all have unique UUIDs. To list all UUIDs, use blkid. Below is an example of the output:

$ blkid
/dev/sda1: UUID="F379-8147" TYPE="vfat" PARTUUID="f35a875f-1a53-493e-85d4-870a7a749872"
/dev/sda2: UUID="5f12f800-1d8d-6192-0881-966a70daa16f" UUID_SUB="2d667c5b-b9f3-6510-cf76-9231122533ce" LABEL="fi-e3:0" TYPE="linux_raid_member" PARTUUID="725eab2a-f2e2-475e-81dc-9a5e18e29678"
/dev/sdb2: UUID="5f12f800-1d8d-6192-0881-966a70daa16f" UUID_SUB="0221fce5-2762-4b06-2d72-4f4f43310ba0" LABEL="fi-e3:0" TYPE="linux_raid_member" PARTUUID="cd2a477f-0b99-4dfc-baa6-f8ebb302cbbb"
/dev/md0: UUID="dcSgSA-m8WA-IcEG-l29Q-W6ti-6tRO-v7MGr1" TYPE="LVM2_member"
/dev/mapper/ssd-ssd--swap: UUID="2f2a93bd-f532-4a6d-bfa4-fcb96fb71449" TYPE="swap"
/dev/mapper/ssd-ssd--root: UUID="660ce473-5ad7-4be9-a834-4f3d3dfc33c3" TYPE="ext4"

## Extra tip: listing all disks with lsblk

While fdisk -l is nice for listing partition tables, admins often also want to know the partition sizes in a human-readable format and what the partitions are used for. For this purpose the command lsblk is handy. While the default output is often enough, supplying the extra arguments -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT makes it even better. See below an example of the output:

$ lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME                  SIZE FSTYPE            TYPE  MOUNTPOINT
sda                 447,1G                   disk
├─sda1                487M vfat              part  /boot/efi
└─sda2              446,7G linux_raid_member part
  └─md0             446,5G LVM2_member       raid1
    ├─ssd-ssd--swap   8,8G swap              lvm   [SWAP]
    └─ssd-ssd--root 437,7G ext4              lvm   /
sdb                 447,1G                   disk
├─sdb1                487M                   part
└─sdb2              446,7G linux_raid_member part
  └─md0             446,5G LVM2_member       raid1
    ├─ssd-ssd--swap   8,8G swap              lvm   [SWAP]
    └─ssd-ssd--root 437,7G ext4              lvm   /
sdc                 447,1G                   disk
├─sdc1                487M                   part
└─sdc2              446,7G                   part

## A word of warning…

Remember that modifying the partition table is a destructive process. It is something the admin does while installing new systems or recovering broken ones. If done wrongly, all data might be lost!

### Wednesday, 15 May 2019

## No KDE Applications 19.04.1 available in flathub

The flatpak and flathub developers have changed what they consider a valid appdata file, so our appdata files that were valid last month are not valid anymore, and thus we can't build the KDE Applications 19.04.1 packages.

### Wednesday, 08 May 2019

## Of elitists and laypeople

Spoilers for Game of Thrones ahead.

I have been watching Game of Thrones with great interest the past few weeks. It has very strongly highlighted a struggle that has been gripping my mind for a while now: that between elitists and laypeople. And I find myself in a strange in-between.

For those not in the know, the latest season of Game of Thrones is controversial, to say the least. If you skip past the internet vitriol, you’ll find a lot of people disliking the season for legitimate reasons: the battle tactics don’t make any sense, characters miraculously survive after the camera cuts away, time and distance stopped being an issue in a setting that used to take it slow, and there’s a weird, forced conflict that would go away entirely if two characters who are already in love would simply marry. And the list goes on, I’m sure.
But on the other hand, there appears to be a large body of laypeople who watch and enjoy the series. Millions of people tune in every week to watch fictional people fight over a fictional throne, and they appear to be enjoying themselves.

And me? Sure, I’m enjoying myself as well. I had muscle aches from the tension of watching The Long Night, and nothing gripped me more than the half-botched assassination attempt at the end of the episode.

So what gives? On the one hand enthusiasts are rightly criticising the writers for some very strange decisions, but on the other hand millions of people are enjoying the series all the same.

## Of power users and newbies

I frankly don’t know the answer to the posed question, but I do know an analogy that prompted me to write this blog post. I am a humble contributor to the GNOME Project, chiefly as a translator for Esperanto, but also minuscule bits and bobs here and there. GNOME faces a similar problem with detractors: they have their complaints about systemd, customisability, missing power user features, themes breaking, and so forth. And I’m sure they have some valid points, but GNOME remains the default desktop environment on many distributions, and many people use and love GNOME as I do. These detractors often run some heavily customised Arch Linux system with some unintuitive-but-efficient window manager, and don’t have any editor other than Vim installed. Or in other words: they run a system that the vast majority of people could not and do not want to use.

And I understand these people, because in one aspect of my digital life, I have been one of them. For at least two years, I ran Spacemacs as my primary editor. For a while I even did my e-mail through that program, and I loved it. Kind of. Sure, everything was customisable, and the keyboard shortcuts were magically fast, but the mental overhead of using that program was slowly grinding me down.
Some menial task that I do infrequently would turn out to involve a non-intuitive sequence of keys that you just need to know, and I would spend far too long figuring that out. Or I would accidentally open Vim inside of the Emacs terminal emulator, and :q would be sent to Emacs instead of the emulator. Sure, if you know enough Emacs wizardry, you can easily escape this situation, but that’s the point, isn’t it? The wizardry involved takes effort that I don’t always want to put in, even if I know that it pays off. Kind of.

These days I use VSCodium, a Free Software version of Visual Studio Code. I like it well enough for a multitude of reasons, but mainly because the mental overhead of using this editor is a lot lower. Even so, is Emacs a better editor? Probably. If I could be bothered to maintain my Emacs wizarding skills, I am fairly certain that it would be the perfect editor. But that’s a big if. So that’s why I settle for VSCodium. And the same line of reasoning can be extended to why I use and love GNOME.

## Back to Westeros

Having made that analogy, can it be mapped onto the kerfuffle surrounding Game of Thrones? Is it a matter of a small group wanting an intricate, advanced plot and a larger group wanting a simple, rudimentary story, because they can’t or don’t want to deal with a complicated story? It seems that way, but the damnedest thing is that I don’t know.

I like the latest season of Game of Thrones for what it is: an archetypical fight of good versus evil. The living gathered together to fight an undead army, and the living won. Such a story is a lot easier to get into as a layperson, and there is nothing wrong with enjoying simple, archetypical stories. But that’s not what Game of Thrones is. Game of Thrones is the derivative of an incredibly intricate series of books with so many details and plots, and the TV series stayed faithful to that for a long time. The latest season is a huge diversion from its roots.
It is, as far as I can tell, like replacing vi with nano. There is nothing wrong with either, but there is a good reason why the two are separate.

## Who are these laypeople, anyway?

This question is difficult to answer, because the layperson isn’t me. It can’t possibly be, because here I am writing about the subject. The layperson must be someone who isn’t particularly interested or informed. I imagine that they just turn on the telly and enjoy it for what it is. No deep thoughts, no deep investment. But why don’t these laypeople care? Why should we care about laypeople? Must we really dumb everything down for the lowest common denominator? Why can’t they just get on my level? This is really frustrating!

Enter cars. I have a driving license, but I don’t really care about cars. I know how to work the pedals and the steering wheel, and that’s pretty much it. I don’t know why I don’t care about cars. I just want to get from my home to my destination and back. If I can put in as little effort as possible to do that, I’m happy. I just don’t have the time or desire to learn all the intricacies of cars. And knowing that, I suppose that I’m the layperson I was so frustrated over a moment ago. When I walk into the garage with a minor problem, I like to imagine that I’m the sort of person who shows up at tech support because I can’t log in: I accidentally pressed Caps Lock.

So the layperson is me. Sometimes.

## Then who are the elitists?

Having said all of that, something throws a wrench in the works: Game of Thrones was also immensely popular when it had all the intricacies and inter-weaving plots of the first few seasons. That appears to indicate to me that laypeople aren’t allergic to the kind of story that the enthusiasts want. But they aren’t allergic to the story that is being told in season 8, either, unlike the elitists. So why do the elitists care? Why can’t they just appreciate the same things that laypeople do? Why must it always be so complex?
Why should the complaints of a few outweigh the enjoyment of many? And this is where I get stuck, because frankly, I don’t know. Shouldn’t everything be as accessible as possible? The more the merrier? Why should vi exist when nano suffices? But you can take my vi key bindings from my cold, dead hands. And I love what Game of Thrones used to be, and am sad that it morphed into the archetypical story that used to be its antithesis. I want complex things, even though I switched from Spacemacs to VSCodium and use GNOME instead of i3. Not for the sake of difficulty, but because complexity gives me something that simplicity cannot.

So the elitist is me. Sometimes.

## Fin

I’m still in a limbo about this clash between elitists and laypeople. Maybe the clash is superficial and the two can exist side by side or separately. Maybe the writers of Game of Thrones just aren’t very good and accidentally made the story for laypeople instead of their target audience of elitists. Maybe it’s a sliding scale instead of a binary. I don’t really know. I just wanted to get these thoughts out of my head and into a text box.

### Sunday, 05 May 2019

## External encrypted disk on LibreELEC

Last year I replaced ArchLinux ARM (with just Kodi installed) on the Raspberry Pi with LibreELEC. Today I plugged in an external disk encrypted with dm-crypt, but to my full surprise this isn’t supported. Luckily the project is open source and sky42 already provides a LibreELEC version with built-in dm-crypt support.

Once I flashed sky42’s version, I set up automated mounting at startup via the autostart.sh script and the corresponding unmount via shutdown.sh this way:

// copy your keyfile into /storage via SSH
$ cat /storage/.config/autostart.sh
cryptsetup luksOpen /dev/sda1 disk1 --key-file /storage/keyfile
mount /dev/mapper/disk1 /media

$ cat /storage/.config/shutdown.sh
umount /media
cryptsetup luksClose disk1

Reboot it and voilà!

## Automount

If you want the disk to be mounted automatically whenever you plug it in, create the following udev rule:

// Find out ID_VENDOR_ID and ID_MODEL_ID for your drive by using udevadm info
$ cat /storage/.config/udev.rules.d/99-automount.rules
ACTION=="add", SUBSYSTEM=="usb", SUBSYSTEM=="block", ENV{ID_VENDOR_ID}=="0000", ENV{ID_MODEL_ID}=="9999", RUN+="cryptsetup luksOpen \$env{DEVNAME} disk1 --key-file /storage/keyfile", RUN+="mount /dev/mapper/disk1 /media"


## Hardening OpenSSH Server

The snippets below go into /etc/ssh/sshd_config; see man 5 sshd_config for the full reference.

## CentOS 6.x

Ciphers aes128-ctr,aes192-ctr,aes256-ctr
KexAlgorithms diffie-hellman-group-exchange-sha256
MACs hmac-sha2-256,hmac-sha2-512


and change the below setting in /etc/sysconfig/sshd:

AUTOCREATE_SERVER_KEYS=RSAONLY

## CentOS 7.x

Ciphers chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256
MACs umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512


## Ubuntu 18.04.2 LTS

Ciphers chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256
MACs umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512
HostKeyAlgorithms ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,rsa-sha2-512,rsa-sha2-256

## Archlinux

Ciphers chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256
MACs umac-128-etm@openssh.com,hmac-sha2-512-etm@openssh.com
HostKeyAlgorithms ssh-ed25519-cert-v01@openssh.com,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa
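Before enabling any of the lists above, it is worth checking which algorithms your local OpenSSH build actually supports; ssh can query its own binaries:

```shell
# List the ciphers, key-exchange algorithms and MACs supported
# by the locally installed OpenSSH
ssh -Q cipher
ssh -Q kex
ssh -Q mac
```

Anything you put in sshd_config that is not in these lists will prevent sshd from starting.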

## Renew SSH Host Keys

rm -f /etc/ssh/ssh_host_* && service sshd restart
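After regenerating, it is worth printing the new fingerprints so users can verify them on first connect. A sketch using a throwaway key in /tmp (on a real host, point -lf at the public keys under /etc/ssh/):

```shell
# Generate a throwaway ed25519 key pair and print its SHA256 fingerprint
ssh-keygen -t ed25519 -N '' -q -f /tmp/demo_host_key
ssh-keygen -lf /tmp/demo_host_key.pub
```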

## Generating an SSH moduli file

First generate candidate DH primes, then screen them for safety (the second step can take a long time):

ssh-keygen -G /tmp/moduli -b 4096

ssh-keygen -T /etc/ssh/moduli -f /tmp/moduli
Tag(s): openssh

## How I put order in my bookmarks and found a better way to organise them


I have gone through several stages of this and so far nothing has stuck as ideal, but I think I am inching towards it.

To start off, I have to confess that while I love the internet and the web, I loathe having everything in the browser. The browser becoming the OS is what seems to be happening, and I hate that thought. I like to keep things locally, having backups, and control over my documents and data. Although I changed my e-mail provider(s) several times, I still have all my e-mail locally stored from 2003 until today.

I also do not like reading longer texts on an LCD, so I usually put longer texts into either Wallabag or Mozilla’s Pocket to read them later on my eInk reader (Kobo Aura). BTW, Wallabag and Pocket both have their pros and cons themselves. Pocket is more popular and better integrated into a lot of things (e.g. Firefox, Kobo, etc.), while Wallabag is fully FOSS (even the server) and offers some extra features that are in Pocket either subject to subscription or completely missing.

Still, an enormous amount of information is (and should be!) on the web, so each of us needs to somehow keep track and make sense of it :)

So, with that intro out of the way, here is how I tackle(d) this mess.

# Historic overview of methods I used so far

## Hierarchy of folders

As many of us, I guess, I started by first putting bookmarks in the bookmark bar, but soon had to start organising them into folders … and subfolders … and subsubfolders … and subsubsubfolders … until the screen could not fit the whole tree any more when expanded.

Pro:

• can be neat and tidy
• easy to sync between devices through e.g. Firefox Sync

Con:

• can become a huge mess, once it grows to a behemoth
• takes several clicks to put a bookmark into the appropriate (sub)folder

Then I decided to keep it flat and use the Firefox search bar to find what I am looking for.

To achieve that, when I bookmarked something, I renamed it to something useful and added tags (e.g.: shop, tea; or python, sql, howto).

This worked kinda OK, but a big downside is that there is a huge amount of clutter which is not easy to navigate and edit once you want to organise all the already existing bookmarks. The bookmark panel is somewhat helpful, but not a lot.

Pro:

• easy to search
• easy to find a relevant bookmark when you are about to search for something through the combined URL/search bar
• easy to sync between devices through e.g. Firefox Sync

Con:

• your search query must match the name, tag(s), or URL of bookmark
• hard to find or navigate other than by searching (for name, tag, URL)

## OneTab

Several years after that, I learnt about OneTab from an onboarding website of a company I applied to (but did not get the job). Its main promise is to turn loads of open tabs into (simple) organised lists on a single page. And all that with a single click (well, two, really).

This worked (and still does) wonders for decluttering my tab list, especially when combined with Tree Style Tabs, which I very warmly recommend trying out. Even if it looks odd and unruly at first, it is very easy to get used to and helps organise tabs immensely. But back to OneTab…

The good side of OneTab is that it really helps keep your tab bar clean and thereby reduces your computer’s resource usage. It is also super for keeping track of tabs that you may (or may not) need to open again later, as you can (re)open a whole group of “tabs” with a single click.

As a practical example, let us say I am travelling to Barcelona in two months. So I book flights and the hotel, and in the process also check out some touristy and other helpful info. Because I will not be needing the touristy and travel stuff for quite some time before the trip, I do not need all the tabs open. But as it is a one-off trip, it is also silly to bookmark it all. So I send them all to OneTab and name the group e.g. “Barcelona trip 2019”. If I stumble upon any new stuff that is relevant, I simply send it to the same Named Group in OneTab. Once I need that info, I either open individual “tabs” or restore the whole group with one click and have it ready. An additional cool thing is that by default if you open a group or a single link “tab” from OneTab, it will remove it from the list. You can decide to keep the links in the list as well.

In practice, I still used tagged bookmarks for links that I wanted to store long-term, while depending on OneTab for short- to mid-term storage.

Pro:

• great for decluttering your tabs
• helps keep your browser’s resource usage low
• great for creating (temporary) lists of tabs that you do not need now, but will in the future
• can easily send a group of “tabs” with others via e-mail

Con:

• no tags, categories or other means of adding meta data – you can only name groups, and cannot even rename links
• no searching other than through the “webpage” list of “tabs”
• the longer the list of “tabs”/bookmarks grows, the harder it is to keep an overview
• cannot sync between devices
• (proprietary plug-in)

# Worldbrain’s Memex

About two months ago, I stumbled upon Worldbrain’s Memex through a FOSDEM talk. It promises to fix bookmarking, searching, note-taking and web history for you … which is quite an impressive lot.

So far, I have to say, I am quite impressed. It is super easy to find stuff you visited, even if you forgot to bookmark it, as it indexes all the websites you visit (unless you tell Memex to ignore that page or domain).

For more order, you can assign tags to websites and/or store them into collections (i.e. groups or folders). What is more, you can do that even later, if you forgot about it the first time. If you want to especially emphasise a specific website, you can also star it.

An excellent feature missing in other bookmarking methods I have seen so far is that it lets you annotate websites – through highlights and comments and tags attached to those highlights. So, not only can you store comments and tags on the websites, but also on annotations within those websites.

One concern I have is that they might have bitten off more than they can chew, but since I started using it, I have seen so much progress that I am (cautiously) optimistic about it.

Pro:

• supports both tags and collections (i.e. groups)
• enables annotations/highlights and comments (as well as tags to both) to websites
• indexes websites, so when you search for something it goes through the website’s text as well as your notes on that website and, of course, its tags
• starring websites you would like to find more easily
• you can also set specific websites or domain names to be ignored
• it offers quite an advanced search, including limiting by date ranges, stars, or domains
• when you search for something (e.g. using DuckDuckGo or Google) it shows suggested websites that you already visited before
• sharing of annotations and comments with others (as long as they also have Memex installed)
• for annotations it uses the W3C Open Annotation spec
• stores everything locally (with the exception of sharing annotations via a link, of course)

Con:

• it consumes more disk space due to running its own index
• needs an external app for backing up data
• so far no syncing of bookmarks between devices (but it is in the making)
• so far it does not sync annotations between different devices (but both mobile apps for iOS/Android, and Pocket integration are in the making)

# Status quo and looking at the future

I currently still have a few dozen bookmarks that I need to tag in Memex and delete from my Firefox bookmarks, and a further several dozen in OneTab.

My most-viewed websites I keep in the “Top Sites” in Firefox.

Most of the “tabs” in OneTab I have already migrated to Memex, and I am very much looking forward to using it instead of OneTab. So far it seems a bit more work, as I need to 1) open all tabs into a tab tree (same as in OneTab), 2) open that tab tree in a separate window (an extra step), 3) use the “Tag all tabs in window” or “Add all tabs in window” option from the extension button (similar to OneTab), and finally 4) close the tabs by closing the window (another extra step). What I usually do is change a Tab Group from OneTab into a Collection in Memex and then take some extra time to add tags or notes, if appropriate.

So, I am quite confident Memex will be able to replace OneTab for me and most likely also (most) normal bookmarks. I may keep some bookmarks of things that I want to always keep track of, like my online bank’s URL, but I am not sure yet.

The annotations are a godsend as well, and will be very hard to give up, as I have already got used to them.

Now, if I could only send stuff to my eInk reader (or phone), annotate it there and have those annotations auto-magically show up in the browser and therefore stored locally on my laptop … :D

Oh, oh, and if I could search through Memex from my KDE Plasma desktop and add/view annotations from other documents (e.g. ePub, ODF, PDF) and other applications (e.g. Okular, Calibre, LibreOffice). One may dream …

hook out → sipping Vin Santo and planning more order in bookmarks

P.S. This blog post was initially a comment to the topic “How do you organize your bookmarks?” in the ~tech group on Tildes where further discussion is happening as well.

## Automated phone backup with Syncthing

How do you backup your phones? Do you?

I used to copy all the photos and videos from my and my wife’s phones to my PC monthly, and then copy them to an external HDD attached to a Raspberry Pi.

However, it’s a tedious job, mainly because:

• I cannot really use the phones during this process;
• MTP works one time in three, so often I have to fall back to ADB;
• I have to unmount the SD cards to speed up the copy;
• after I copy the files, I have to rsync everything to the external HDD.

## The Syncthing way

Syncthing describes itself as:

Syncthing replaces proprietary sync and cloud services with something open, trustworthy and decentralized.

I installed it to our Android phones and on the Raspberry Pi. On the Raspberry Pi I also enabled remote access.

I started the Syncthing application on the Android phones and chose the folders (you can also select the whole internal memory) to back up. Then I shared them with the Raspberry Pi only, and set the folder type to “Send Only” because I don’t want the Android phones to retrieve any files from the Raspberry Pi.

On the Raspberry Pi, I accepted the sharing request from the Android phones, but I also changed the folder type to “Receive Only” because I don’t want the Raspberry Pi to send any file to the Android phones.

All done? Not yet.

Syncthing’s main purpose is to sync, not to back up. This means that, by default, if I delete a photo from my phone, that photo is gone from the Raspberry Pi too, and this isn’t what I need nor what I want.

However, Syncthing supports File Versioning and, best of all, it supports a “trash can”-like file versioning mode which moves your deleted files into a .stversions subfolder. If that isn’t enough, you can also write your own file versioning script.
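As a sketch of what such a custom versioning script could do, here is a small shell function that moves a "deleted" file into a dated backup subfolder instead of removing it. The function name and layout are my own; check Syncthing's File Versioning documentation for the exact arguments it passes to external versioning commands:

```shell
# version_deleted FOLDER RELATIVE_PATH
# Instead of deleting the file, move it into a dated .deleted/ subfolder,
# preserving its relative path inside the synced folder.
version_deleted() {
    folder=$1
    file=$2
    dest="$folder/.deleted/$(date +%Y-%m-%d)/$(dirname "$file")"
    mkdir -p "$dest"
    mv "$folder/$file" "$dest/"
}
```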

All done? Yes! Whenever I connect to my own WiFi, my photos are backed up!

## [Some] KDE Applications 19.04 also available in flathub

The KDE Applications 19.04 release announcement (read it if you haven't, it's very complete) mentions some of the applications are available at the snap store, but forgets to mention flathub.

Just wanted to bring up that there's also some of the applications available in there https://flathub.org/apps/search/org.kde.

All the ones that are released along with KDE Applications 19.04 were updated on release day, except Kubrick, which has a compilation issue and will be updated for 19.04.1, and Kontact, which is a beast and, to be honest, I didn't particularly feel like updating it.

If you feel like helping, there are more applications that need adding and more automation that needs to happen, so get in touch :)

## Closer Look at the Double Ratchet

In the last blog post, I took a closer look at how the Extended Triple Diffie-Hellman Key Exchange (X3DH) is used in OMEMO and which role PreKeys are playing. This post is about the other big algorithm that makes up OMEMO. The Double Ratchet.

The Double Ratchet algorithm can be seen as the gearbox of the OMEMO machine. In order to understand the Double Ratchet, we will first have to understand what a ratchet is.

Before we start: this post makes no guarantee of being 100% correct. It is only meant to explain the inner workings of the Double Ratchet algorithm in a (hopefully) more or less understandable way. Many details are simplified or omitted. If you want to implement this algorithm, please read the Double Ratchet specification.

A ratchet is a tool used to drive nuts and bolts. The distinctive feature of a ratchet tool over an ordinary wrench is that the part that grips the head of the bolt can turn freely in one direction only; it is not possible to turn it in the opposite direction.

In OMEMO, ratchet functions are one-way functions that basically take an input key and derive a new key from it. Doing this in the forward direction is easy (like turning the ratchet tool in the right direction), but it is impossible to reverse the process and calculate the original key from the derived key (analogous to turning the ratchet in the opposite direction).

## Symmetric Key Ratchet

One type of ratchet is the symmetric key ratchet (abbrev. sk ratchet). It takes a key and some input data and produces a new key, as well as some output data. The new key is derived from the old key by using a so-called Key Derivation Function. Repeating the process multiple times creates a Key Derivation Function Chain (KDF-Chain). The fact that it is impossible to reverse a key derivation is what gives the OMEMO protocol the property of Forward Secrecy.

The above image illustrates the process of using a KDF-Chain to generate output keys from input data. In every step, the KDF-Chain takes the input and the current KDF-Key to generate the output key. Then it derives a new KDF-Key from the old one, replacing it in the process.

To summarize once again: Every time the KDF-Chain is used to generate an output key from some input, its KDF-Key is replaced, so if the input is the same in two steps, the output will still be different due to the changed KDF-Key.
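A minimal sketch of such a KDF chain in Python, using HMAC-SHA256 as the derivation function. The real OMEMO/Signal KDF and its domain-separation constants differ; this only illustrates the chain structure:

```python
import hmac
import hashlib

def kdf_step(chain_key: bytes, input_data: bytes):
    """One turn of the ratchet: derive an output key from the current
    chain key and input, and replace the chain key with a new one."""
    output_key = hmac.new(chain_key, b"output" + input_data,
                          hashlib.sha256).digest()
    new_chain_key = hmac.new(chain_key, b"chain", hashlib.sha256).digest()
    return new_chain_key, output_key

# Same input in two consecutive steps, yet different output keys,
# because the chain key has moved on in between.
ck = b"\x00" * 32
ck, out1 = kdf_step(ck, b"message")
ck, out2 = kdf_step(ck, b"message")
```

Because HMAC is one-way, knowing out1 or the current chain key tells an attacker nothing about earlier chain keys, which is exactly the forward-secrecy property described above.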

One issue with this ratchet is that it does not provide future secrecy. That means once an attacker gets access to one of the KDF-Keys of the chain, they can use that key to derive all following keys in the chain from that point on. They basically just have to turn the ratchet forwards.

## Diffie-Hellman Ratchet

The second type of ratchet that we have to take a look at is the Diffie-Hellman Ratchet. This ratchet is basically a repeated Diffie-Hellman Key Exchange with changing key pairs. Every user has a separate DH ratcheting key pair, which is being replaced with new keys under certain conditions. Whenever one of the parties sends a message, they include the public part of their current DH ratcheting key pair in the message. Once the recipient receives the message, they extract that public key and do a handshake with it using their private ratcheting key. The resulting shared secret is used to reset their receiving chain (more on that later).

Once the recipient creates a response message, they create a new random ratchet key and do another handshake with their new private key and the senders public key. The result is used to reset the sending chain (again, more on that later).

As a result, the DH ratchet is stepped forward every time the direction of the message flow changes. The resulting keys are used to reset the sending/receiving chains. This introduces future secrecy into the protocol.
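A toy illustration of the DH ratchet step in Python, using finite-field Diffie-Hellman with deliberately simple, insecure parameters (the real protocol uses X25519; the names below are mine):

```python
import secrets

# Toy group parameters, for illustration only -- never use in practice.
P = 2**127 - 1
G = 3

def new_ratchet_keypair():
    """Generate a fresh DH ratchet key pair."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def dh(priv, peer_pub):
    """Handshake: combine our private key with the peer's public key."""
    return pow(peer_pub, priv, P)

# Alice and Bob derive the same shared secret from their current keys.
a_priv, a_pub = new_ratchet_keypair()
b_priv, b_pub = new_ratchet_keypair()
assert dh(a_priv, b_pub) == dh(b_priv, a_pub)

# When Bob replies he generates a fresh key pair, stepping the ratchet:
# the new handshake yields a new shared secret for resetting his chain.
b2_priv, b2_pub = new_ratchet_keypair()
assert dh(a_priv, b2_pub) == dh(b2_priv, a_pub)
```

An attacker who learns one shared secret gains nothing about the next one, because each step involves a freshly generated private key.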

## Chains

A session between two devices has three chains – a root chain, a sending chain and a receiving chain.

The root chain is a KDF chain which is initialized with the shared secret which was established using the X3DH handshake. Both devices involved in the session have the same root chain. Contrary to the sending and receiving chains, the root chain is only initialized/reset once at the beginning of the session.

The sending chain of the session on device A equals the receiving chain on device B. On the other hand, the receiving chain on device A equals the sending chain on device B. The sending chain is used to generate message keys which are used to encrypt messages. The receiving chain on the other hand generates keys which can decrypt incoming messages.

Whenever the direction of the message flow changes, the sending and receiving chains are reset, meaning their keys are replaced with new keys generated by the root chain.
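The interplay of the three chains can be sketched as follows. The placeholder secrets and HMAC labels are made up for this illustration; they stand in for the X3DH result and a DH ratchet output.

```python
import hashlib
import hmac

def chain_step(key: bytes, data: bytes) -> tuple[bytes, bytes]:
    """Advance a KDF chain by one step: returns (new chain key, output)."""
    new_key = hmac.new(key, b"chain" + data, hashlib.sha256).digest()
    output = hmac.new(key, b"out" + data, hashlib.sha256).digest()
    return new_key, output

# Placeholder secrets standing in for the X3DH result and a DH ratchet output.
shared_secret = hashlib.sha256(b"X3DH placeholder").digest()
dh_output = hashlib.sha256(b"DH ratchet placeholder").digest()

# Both devices initialize their root chain with the same shared secret.
root_a = root_b = shared_secret

# Device A advances its root chain to reset its sending chain ...
root_a, send_chain_a = chain_step(root_a, dh_output)
# ... while device B advances its identical root chain to reset its receiving chain.
root_b, recv_chain_b = chain_step(root_b, dh_output)

# A's sending chain now equals B's receiving chain.
assert send_chain_a == recv_chain_b
```

Since both root chains start from the same secret and are fed the same DH output, they stay in lockstep, which is why A's sending chain and B's receiving chain always generate the same keys.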

## An Example

I think this rather complex protocol is best explained by an example message flow which demonstrates what actually happens during message sending and receiving.

In our example, Obi-Wan and Grievous have a conversation. Obi-Wan starts by establishing a session with Grievous and sends his initial message. Grievous responds by sending two messages back. Unfortunately the first of his replies goes missing.

### Session Creation

In order to establish a session with Grievous, Obi-Wan first has to fetch one of Grievous' key bundles. He uses it to establish a shared secret S between himself and Grievous by executing an X3DH key exchange. More details on this can be found in my previous post. He also extracts Grievous' signed PreKey, which serves as Grievous' public ratcheting key. S is used to initialize the root chain.

Obi-Wan now uses Grievous' public ratchet key and does a handshake with his own private ratchet key to generate another shared secret, which is fed into the root chain. The output is used to initialize the sending chain, and the KDF-Key of the root chain is replaced.

Obi-Wan has now established a session with Grievous without even sending a message. Nice!

### Initial Message

Now that the session is established on Obi-Wan's side, he can start composing a message. He decides to send a classy “Hello there!” as a greeting. He uses his sending chain to generate a message key, which is used to encrypt the message.

Note: In the above image a constant is used as input for the KDF-Chain. This constant is defined by the protocol and isn’t important for understanding what’s going on.

Now Obi-Wan sends over the encrypted message along with his ratcheting public key and some information on what PreKey he used, the current sending key chain index (1), etc.

When Grievous receives Obi-Wan’s message, he completes his X3DH handshake with Obi-Wan in order to calculate the exact same shared secret S that Obi-Wan derived earlier. He too uses S to initialize his root chain.

Now Grievous does a full ratchet step of the Diffie-Hellman Ratchet: He uses his own private key and Obi-Wan's public ratchet key to do a handshake and initializes his receiving chain with the result. Note: The result of the handshake is the exact same value that Obi-Wan calculated earlier when he initialized his sending chain. Fantastic, isn’t it? Next he deletes his old ratchet key pair and generates a fresh one. Using the fresh private key, he does another handshake with Obi-Wan's public key and uses the result to initialize his sending chain. This completes the full DH ratchet step.

### Decrypting the Message

Now that Grievous has finalized his side of the session, he can go ahead and decrypt Obi-Wan's message. Since the message carries the sending chain index 1, Grievous knows that he has to use the first message key generated from his receiving chain to decrypt it. Because his receiving chain equals Obi-Wan's sending chain, it generates the exact same keys, so Grievous can use the first key to successfully decrypt the message.

Grievous is surprised by Obi-Wan's bold actions and promptly goes ahead to send two replies.

He advances his freshly initialized sending chain to generate a fresh message key (with index 1). He uses the key to encrypt his first message “General Kenobi!” and sends it over to Obi-Wan. He includes his public ratchet key in the message.

Unfortunately though the message goes missing and is never received.

He then forwards his sending chain a second time to generate another message key (index 2). Using that key he encrypts the message “You are a bold one.” and sends it to Obi-Wan. This message contains the same public ratchet key as the first one, but has the sending chain index 2. This time the message is received.

When Obi-Wan receives the second message, he does a full ratchet step in order to complete his session with Grievous. First he does a DH handshake between his own private key and Grievous' public ratcheting key from the message. The result is used to set up his receiving chain. He then generates a new ratchet key pair and does a second handshake. The result is used to reset his sending chain.

Obi-Wan notices that the sending chain index of the received message is 2 instead of 1, so he knows that one message must be missing or delayed. To deal with this, he advances his receiving chain twice (meaning he generates two message keys from it) and caches the first key. If the missing message arrives later, the cached key can be used to successfully decrypt it. For now though, only one message has arrived, and Obi-Wan uses the second message key to successfully decrypt it.
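This skipped-key handling can be sketched as follows. The chain construction and HMAC labels are again made-up placeholders for illustration.

```python
import hashlib
import hmac

def next_message_key(chain_key: bytes) -> tuple[bytes, bytes]:
    """Advance the receiving chain by one step: returns (new chain key, message key)."""
    msg_key = hmac.new(chain_key, b"msg", hashlib.sha256).digest()
    new_key = hmac.new(chain_key, b"chain", hashlib.sha256).digest()
    return new_key, msg_key

recv_chain = hashlib.sha256(b"receiving chain placeholder").digest()
skipped: dict[int, bytes] = {}

received_index = 2  # the message that arrived carries sending chain index 2
current_index = 0   # no message keys generated yet

# Advance the chain up to the received index, caching the keys we skip over.
while current_index < received_index:
    current_index += 1
    recv_chain, key = next_message_key(recv_chain)
    if current_index < received_index:
        skipped[current_index] = key  # cached for the missing message

message_key_2 = key         # decrypts the message that did arrive
message_key_1 = skipped[1]  # kept around in case the lost message shows up late
```

Caching the skipped key rather than discarding it is what lets the protocol decrypt out-of-order messages without ever turning a chain backwards.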

## Conclusions

What have we learned from this example?

Firstly, we can see that the protocol guarantees forward secrecy. The KDF-Chains used in the three chains can only be advanced forwards; it is impossible to turn them backwards to generate earlier keys. This means that if an attacker manages to get access to the state of the receiving chain, they cannot decrypt messages sent prior to the compromise.

But what about future messages? Since the Diffie-Hellman ratchet introduces new randomness in every step (new random keys are generated), an attacker is locked out after one step of the DH ratchet. Since the DH ratchet is used to reset the symmetric ratchets of the sending and receiving chain, the window of the compromise is limited by the next DH ratchet step (meaning once the other party replies, the attacker is locked out again).

On top of this, the double ratchet algorithm can deal with missing or out-of-order messages, as keys generated from the receiving chain can be cached for later use. If at some point Obi-Wan receives the missing message, he can simply use the cached key to decrypt its contents.

This self-healing property is what gave the Axolotl protocol (an earlier name of the Signal protocol, on which OMEMO is based) its name.

## Acknowledgements

Thanks to syndace and paul for their feedback and clarification on some points.

## Tenth Anniversary of AltOS

In the early days of the collaboration between Bdale Garbee and Keith Packard that later became Altus Metrum, the software for TeleMetrum was crafted as an application running on top of an existing open source RTOS. It didn't take long to discover that the RTOS was ill-suited to our needs, and Keith had to re-write various parts of it to make things fit in the memory available and work at all.

Eventually, Bdale idly asked Keith how much of the RTOS he'd have to rewrite before it would make sense to just start over from scratch. Keith took that question seriously, and after disappearing for a day or so, the first code for AltOS was committed to revision control on 12 April 2009.

Ten years later, AltOS runs on multiple processor architectures, and is at the heart of all Altus Metrum products.

## Hard drive failure in my zpool 😞

I have a storage box in my house that stores important documents, backups, VM disk images, photos, a copy of the Tor Metrics archive and other odd things. I’ve put a lot of effort into making sure that it is both reliable and performant. When I was working on a modern CollecTor for Tor Metrics recently, I used this to be able to run the entire history of the Tor network through the prototype replacement to see if I could catch any bugs.

I have had my share of data loss events in my life, but since I’ve found ZFS I have hope that it is possible to avoid, or at least seriously minimise the risk of, any catastrophic data loss events ever happening to me again. ZFS has:

• cryptographic checksums to validate data integrity
• mirroring of disks
• “scrub” function that ensures that the data on disk is actually still good even if you’ve not looked at it yourself in a while

ZFS on its own is not the entire solution though. I also mix-and-match hard drive models to ensure that a systematic fault in a particular model won’t wipe out all my mirrors at once, and I also have scheduled SMART self-tests to detect faults before any data loss has occurred.
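For anyone wanting to replicate this, such self-tests can be run and inspected with smartmontools; the device name below is a placeholder for your actual drive.

```shell
# Kick off an extended offline self-test (runs inside the drive's firmware).
smartctl -t long /dev/ada0

# Later, check the overall health verdict and the self-test log.
smartctl -H /dev/ada0
smartctl -l selftest /dev/ada0
```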

One of those SMART self-tests has now reported a failure on one of my drives. This means I have to treat that drive as “going to fail soon”, which means that I don’t have redundancy in my zpool anymore, so I have to act. Fortunately, in September 2017 when my workstation died, I received some donations towards the hardware I use for my open source work and I did buy a spare HDD for this very situation!

At present my zpool setup looks like:

% zpool status flat
pool: flat
state: ONLINE
scan: scrub repaired 0 in 0 days 07:05:28 with 0 errors on Fri Apr  5 07:05:36 2019
config:

	NAME                                            STATE     READ WRITE CKSUM
	flat                                            ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	  mirror-1                                      ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	cache
	  gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx    ONLINE       0     0     0

errors: No known data errors


The drives in the two mirrors are 3TB drives, in each mirror is one WD Red and one Toshiba NAS drive. In this case, it is one of the WD Red drives that has failed and I’ll be replacing it with another WD Red. One important thing to note is that you have to replace the drive with one of equal or greater capacity. In this case it is the same model so the capacity should be the same, but not all X TB drives are going to be the same size.

You’ll notice here that it is saying No known data errors. This is because there haven’t been any issues with the data yet; it is just a SMART failure, and hopefully by replacing the disk any data error can be avoided entirely.

My plan was to move to a new system soon, with 8 bays. In that system I’ll keep the stripe over 2 mirrors but one mirror will run over 3x 6TB drives with the other remaining on 2x 3TB drives. This incident leaves me with only 1 leftover 3TB drive though so maybe I’ll have to rethink this.

My current machine, an HP MicroServer, does not support hot-swapping the drives so I have to start by powering off the machine and replacing the drive.

% zpool status flat
pool: flat
state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://illumos.org/msg/ZFS-8000-2Q
scan: scrub repaired 0 in 0 days 07:05:28 with 0 errors on Fri Apr  5 07:05:36 2019
config:

	NAME                                            STATE     READ WRITE CKSUM
	flat                                            DEGRADED     0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	  mirror-1                                      DEGRADED     0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	    xxxxxxxxxxxxxxxxxxxx                        UNAVAIL      0     0     0  was /dev/gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
	cache
	  gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx    ONLINE       0     0     0

errors: No known data errors


The disk that was part of the mirror is now unavailable, but the pool is still functioning as the other disk is still present. This means that there are still no data errors and everything is still running. The only downtime was due to my SATA controller not supporting hot-swapping.

Through the web interface in FreeNAS, it is possible to now use the new disk to replace the old disk in the mirror: Storage -> View Volumes -> Volume Status (under the table, with the zpool highlighted) -> Replace (with the unavailable disk highlighted).
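For reference, the equivalent can be done from a shell with `zpool replace`; the device names below are placeholders for the real gptids.

```shell
# Replace the failed member with the new disk; ZFS then starts a resilver.
# The first gptid is the unavailable device, the second the fresh disk.
zpool replace flat gptid/OLD-xxxxxxxx gptid/NEW-xxxxxxxx

# Monitor resilver progress:
zpool status flat
```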

Running zpool status again:

% zpool status flat
pool: flat
state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Fri Apr  5 16:55:47 2019
1.30T scanned at 576M/s, 967G issued at 1.12G/s, 4.33T total
4.73G resilvered, 21.82% done, 0 days 00:51:29 to go
config:

	NAME                                            STATE     READ WRITE CKSUM
	flat                                            ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	  mirror-1                                      ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0  (resilvering)
	cache
	  gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx    ONLINE       0     0     0

errors: No known data errors


And everything should be OK again soon, now with the dangerous disk removed and a hopefully more reliable disk installed.

This has put a dent in my plans to upgrade my storage, so for now I’ve added the hard drives I’m looking for to my Amazon wishlist.

As for the drive that failed, I’ll be doing an ATA Secure Erase and then disposing of it. NIST SP 800-88 thinks that ATA Secure Erase is in the same category as degaussing a hard drive and that it is more effective than overwriting the disk with software. ATA Secure Erase is faster too because it’s the hard drive controller doing the work. I just have to hope that my firmware wasn’t replaced with firmware that only fakes the process (or I’ll just do an overwrite anyway to be sure). According to the same NIST document, “for ATA disk drives manufactured after 2001 (over 15 GB) clearing by overwriting the media once is adequate to protect the media from both keyboard and laboratory attack”.
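On Linux, the rough hdparm recipe for an ATA Secure Erase looks like this. The device name and password are placeholders, and you should confirm the drive reports “not frozen” before attempting it.

```shell
# Confirm the drive supports the ATA security feature set and is not frozen.
hdparm -I /dev/sdX | grep -A8 Security

# Set a temporary user password, then issue the secure erase.
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX
```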

This blog post is also a little experiment. I’ve used a Unicode emoji in the title, and I want to see how various feed aggregators and bots handle that. Sorry if I broke your aggregator or bot.

## IETF 104 in Prague

Thanks to support from Article 19, I was able to attend IETF 104 in Prague, Czech Republic this week. Primarily this was to present my Internet Draft which takes safe measurement principles from Tor Metrics work and the Research Safety Board and applies them to Internet Measurement in general.

I attended with a free one-day pass for the IETF and free hackathon registration, so more than just the draft presentation happened. During the hackathon I sat at the MAPRG table and worked on PATHspider with Mirja Kühlewind from ETH Zurich. We have the code running again with the latest libraries available in Debian testing and this may become the basis of a future Tor exit scanner (for generating exit lists, and possibly also some bad exit detection). We ran a quick measurement campaign that was reported in the hackathon presentations.

During the hackathon I also spoke to Watson Ladd from Cloudflare about his Roughtime draft which could be interesting for Tor for a number of reasons. One would be for verifying if a consensus is fresh, another would be for Tor Browser to detect if a TLS cert is valid, and another would be providing archive signatures for Tor Metrics. (We’ve started looking at archive signatures since our recent work on modernising CollecTor).

On the Monday, this was the first “real” day of the IETF. The day started off for me at the PEARG meeting. I presented my draft as the first presentation in that session. The feedback was all positive, it seems like having the document is both desirable and timely.

The next presentation was from Ryan Guest at Salesforce. He was talking about privacy considerations for application level logging. I think this would also be a useful draft that complements my draft on safe measurement, or maybe even becomes part of my draft. I need to follow up with him to see what he wants to do. A future IETF hackathon project might be comparing Tor’s safe logging with whatever guidelines we come up with, and also comparing our web server logs setup.

Nick Sullivan was up next with his presentation on Privacy Pass. It seems like a nice scheme, assuming someone can audit the anti-tagging properties of it. The most interesting thing I took away from it is that federation is being explored which would turn this into a system that isn’t just for Cloudflare.

Amelia Andersdotter and Christoffer Långström then presented on differential privacy. They have been exploring how it can be applied to binary values as opposed to continuous, and how it could be applied to Internet protocols like the QUIC spin bit.

The last research presentation was Martin Schanzenbach presenting on an identity provider based on the GNU Name System. This one was not so interesting for me, but maybe others are interested.

I attended the first part of the Stopping Malware and Researching Threats (SMART) session. There was an update from Symantec based on their ISTR report and I briefly saw the start of a presentation about “Malicious Uses of Evasive Communications and Threats to Privacy” but had to leave early to attend another meeting. I plan to go back and look through all of the slides from this session later.

The next IETF meeting is directly after the next Tor meeting (I had thought for some reason it directly clashed, but I guess I was wrong). I will plan to remotely participate in PEARG again there and move my draft forwards.

## When the Duke walks, you don't notice it

This is my late submission for transgender day of visibility. It comes almost a week late, but I suppose I’ll use this proverb that is popular among the trans community to justify myself:

The best time to plant a tree was 20 years ago. The second best time is now.

Or: The best time to post about transgender day of visibility was 31st of March. The second best time is 5th of April.

I should preface this post by emphasising that all of its contents are exclusively my own experiences, and may not speak for anybody other than myself. It is written in the spirit of visibility, so that the public knows that transgender people exist, and that ultimately we are normal people.

A second justification for my lateness has been my hesitance to broadcast this to the internet. I like privacy, and I take a lot of steps to safeguard it (e.g., by using Free Software). But there is one step that I have made only halfway, and that is anonymity. I use a VPN to hide my IP address, and I take special care not to give internet giants all of my personal data—I am a nobody when I surf the web.

But when I interact with human beings on the internet, I try to be me. This is terrible privacy advice, because the internet never forgets when you make a mistake. But I find it important, because the internet is a very real place. More and more, the internet affects our collective lives. It allows us to do tangible things such as purchasing items, and it does intangible things such as morphing our perception and opinion of the world. Anonymity allows you to enter this space—this real space—as an ethereal ghost, existing perpetually out of sight, but able to interact just the same. It does not take a creative mind to imagine how this can be abused, and people do.

I could abuse such ghostly powers for good, but I am not comfortable with holding that power. So I wish to be myself in spite of knowing better. To exist online under this name, I must self-censor. I must not say things that I imagine will come back to harm my future self, and I must hide aspects about myself that I do not want everybody to know—say it once, and the whole world knows.

And for years, I haven’t said it: I am transgender. By itself this is unimportant (so what?), but the act of saying it is not. The act of saying it means that anybody, absolutely anybody until the end of the digital age, can discover this about me and hold it against me, and there is no shortage of people who would. And that is frankly quite scary.

But the act of saying it is also activism. By saying it, you assert your existence in the face of an ideology that wishes you didn’t. By saying it, you own the narrative of what it means to be trans, rather than ideologues who would paint you in a dehumanised light. By saying it, you make tangible and visible a human experience that many people do not understand. At the risk of sounding self-aggrandising, there is power in that.

The last point I find especially empowering. Until the exact moment I decided to transition, I simply did not know of the mere existence of trans people. I knew about drag queens flamboyantly dancing on boats in the canals of Amsterdam, but those people were otherworldly to me. They weren’t tangibly real, and they weren’t me. Had I known that transgender people were everyday women and men who care about the same things that I do, I would have spared myself a lot of mental anguish and made the leap a lot sooner.

Instead, something else prompted that realisation. I was reading a Christmas novel in Summer, as you do. The book was called “Let It Snow: Three Holiday Romances” authored by John Green, Lauren Myracle and Maureen Johnson. The book has three POVs in a town struck by a heavy snow storm, and there’s a lot of interplay between the POVs.

I was reading one of John Green’s chapters. His main character had long been great friends with a tomboyish girl (nicknamed “the Duke”) who struggled with her gender expression, and the two embark on a great journey through the snow storm to reach the waffle house. During this trek, there is a scene where he is walking a few paces behind her, and he is looking at her. And while looking, and through the shared experiences, a sudden thought strikes him: “Anyway, the Duke was walking, and there was a certain something to it, and I was kind of disgusted with myself for thinking about that certain something. […] When Brittany the cheerleader walks, you notice it. When the Duke walks, you don’t. Usually.”

These two had been friends for the longest time, and for the first time, he entertained the thought that it might be something more than that. And you read on, and on, and this wayward thought starts to become quite real and serious. And suddenly he becomes self-aware of the thought absorbing him:

“Once you think a thought, it is extremely difficult to unthink it. And I had thought the thought.”

It hit me like a brick. In that very same instance, I, too, thought the thought. What had been a feeling for so long, I finally thought out loud in my head, and it was impossible for me to unthink it. It had nothing to do with the novel, and I have no idea how I made that leap, but in that moment I realised for the first time, truly realised, that I did not want to be a boy—that I wanted to be a girl. And I was miserable for it, but eventually better off. A quick search later and I discovered that trans people exist, and that transition is an actual thing that normal people do.

So I did it. And it has been good. Whatever ailed me prior to transition is mostly gone, and I have become a functioning adult who does many non-transgender-related things such as translating GNOME into Esperanto and creating cat monsters for Dungeons & Dragons. But I never really included being transgender in any of my online activities, and I want to change that. I want to be more like the Duke. I want to walk like her, and while people may not always see that walk, I want to call attention to it every now and then. And maybe it will help someone be struck by the thought, whatever spark of madness it is that they need.

Happy transgender day of visibility.

## The Power of Workflow Scripts

Nextcloud has the ability to define conditions under which external scripts are executed. The app which makes this possible is called “Workflow Script”. I always knew that this powerful tool existed, yet I never really had a use case for it. This changed last week.

### Task

I heavily rely on text files for note taking. I organize them in folders; for example, I have a “Projects” folder with sub-folders for each project I am currently working on.