Free Software, Free Society!
Thoughts of the FSFE Community (English)

Saturday, 17 August 2019

Building Archlinux Packages in Gitlab

GitLab is my favorite online git hosting provider, and I really love the CI feature (which most online project-hosting providers are now also starting to support).

Archlinux uses git and you can find everything here: Arch Linux git repositories

There are almost 2,500 packages there! There are 6,500 in core/extra/community (the primary repos) and almost 55k packages in AUR, the Arch Linux User Repository.

We are going to use git to retrieve a PKGBUILD from the Arch Linux AUR as an example.
The same can be done with one of the core packages by using the above git repositories.

So here is a very simple .gitlab-ci.yml file that we can use to build an Arch Linux package in GitLab:

image: archlinux/base:latest

before_script:
    - export PKGNAME=tallow

run-build:
  stage: build
  artifacts:
    paths:
    - "*.pkg.tar.xz"
    expire_in: 1 week
  script:
      # Create "Bob the Builder" !
    - groupadd bob && useradd -m -c "Bob the Builder" -g bob bob
      # Update archlinux and install git
    - pacman -Syy && pacman -Su --noconfirm --needed git base-devel
      # Git Clone package repository
    - git clone https://aur.archlinux.org/$PKGNAME.git
    - chown -R bob:bob $PKGNAME/
      # Read PKGBUILD
    - source $PKGNAME/PKGBUILD
      # Install Dependencies
    - pacman -Syu --noconfirm --needed --asdeps "${makedepends[@]}" "${depends[@]}"
      # Let Bob the Builder, build package
    - su - bob -s /bin/sh -c "cd $(pwd)/$PKGNAME/ && makepkg"
      # Get artifact
    - mv $PKGNAME/*.pkg.tar.xz ./

You can use this link to verify the above example: tallow at gitlab

But let me explain the steps:

  • First we create a user, Bob the Builder, because on Arch Linux we cannot build a package as root for security reasons (makepkg refuses to run as root).
  • Then we update our container and install git and the base-devel group. This group contains all the relevant Arch Linux packages for building a new one.
  • After that, we git clone the package repository.
  • Next we install any dependencies. This is a neat trick I found in the Arch Linux forum: sourcing the PKGBUILD creates shell variables (arrays) such as depends and makedepends (see the sketch after this list).
  • Now it is time for Bob to build the package!
  • Finally, we move the artifact into our local folder.
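Here is a minimal sketch of that source trick on its own, assuming the tallow PKGBUILD from the example above (the exact variables printed are whatever the PKGBUILD happens to define):

git clone https://aur.archlinux.org/tallow.git
# Sourcing the PKGBUILD exposes its metadata as plain shell variables and arrays
source tallow/PKGBUILD
echo "package: ${pkgname} ${pkgver}-${pkgrel}"
printf 'dependency: %s\n' "${depends[@]}" "${makedepends[@]}"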
Tag(s): archlinux, gitlab

Thursday, 15 August 2019

MinIO Intro Notes

MinIO is a high performance object storage server compatible with Amazon S3 APIs

In a previous article, I mentioned minio as an S3 gateway between my system and backblaze b2. I was impressed by minio. So in this blog post, I would like to investigate the primary use of minio as an S3 storage provider!

Install Minio

MinIO is also written in Go, which means we can simply use the static binary executable on our machine.

Download

The latest release of minio is here:

curl -sLO https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio

Version

./minio version

$ ./minio version

Version: 2019-08-01T22:18:54Z
Release-Tag: RELEASE.2019-08-01T22-18-54Z
Commit-ID: c5ac901e8dac48d45079095a6bab04674872b28b

Operating System

Although we can use the static binary from minio’s site, I would propose installing minio through your distribution’s package manager; on Arch Linux that is:

$ sudo pacman -S minio

This method will also provide you with a simple systemd service unit and a configuration file.

/etc/minio/minio.conf

# Local export path.
MINIO_VOLUMES="/srv/minio/data/"
# Access Key of the server.
# MINIO_ACCESS_KEY=Server-Access-Key
# Secret key of the server.
# MINIO_SECRET_KEY=Server-Secret-Key
# Use if you want to run Minio on a custom port.
# MINIO_OPTS="--address :9199"
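A minimal sketch of using the packaged service, assuming the package ships a minio systemd unit that reads /etc/minio/minio.conf (as the Arch Linux package does):

# create the export path from the config; depending on the package you may
# also need to chown it to the service user (e.g. minio:minio)
sudo mkdir -p /srv/minio/data
sudo systemctl enable --now minio
sudo systemctl status minio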

Docker

Or if you like docker, you can use docker!

docker pull minio/minio
docker run -p 9000:9000 minio/minio server /data

Standalone

We can run minio standalone:

$ minio server /data

Create a test directory to use as storage:

$ mkdir -pv minio_data/
mkdir: created directory 'minio_data/'

$ /usr/bin/minio server ./minio_data/

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ You are running an older version of MinIO released 1 week ago ┃
┃ Update: Run `minio update`                                    ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

Endpoint:  http://192.168.1.3:9000  http://192.168.42.1:9000  http://172.17.0.1:9000  http://172.18.0.1:9000  http://172.19.0.1:9000  http://192.168.122.1:9000  http://127.0.0.1:9000
AccessKey: KYAS2LSSPXRZFH9P6RHS
SecretKey: qPZnIBJDe6GTRrUWcfdtKk7GPL4fGyqANDzJxkur 

Browser Access:
   http://192.168.1.3:9000  http://192.168.42.1:9000  http://172.17.0.1:9000  http://172.18.0.1:9000  http://172.19.0.1:9000  http://192.168.122.1:9000  http://127.0.0.1:9000        

Command-line Access: https://docs.min.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.1.3:9000 KYAS2LSSPXRZFH9P6RHS qPZnIBJDe6GTRrUWcfdtKk7GPL4fGyqANDzJxkur

Object API (Amazon S3 compatible):
   Go:         https://docs.min.io/docs/golang-client-quickstart-guide
   Java:       https://docs.min.io/docs/java-client-quickstart-guide
   Python:     https://docs.min.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide

Update Minio

Okay, our package is one week old, but that's fine. We can overwrite the packaged binary (although this is not recommended) with:

$ sudo curl -sLo /usr/bin/minio https://dl.min.io/server/minio/release/linux-amd64/minio

again, NOT recommended.

Check version

minio version

Version: 2019-08-01T22:18:54Z
Release-Tag: RELEASE.2019-08-01T22-18-54Z
Commit-ID: c5ac901e8dac48d45079095a6bab04674872b28b

minio update

An alternative way is to use the built-in update method:

$ sudo minio update

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ You are running an older version of MinIO released 5 days ago    ┃
┃ Update: https://dl.min.io/server/minio/release/linux-amd64/minio ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

Update to RELEASE.2019-08-07T01-59-21Z ? [y/n]: y
MinIO updated to version RELEASE.2019-08-07T01-59-21Z successfully.

minio version

Version: 2019-08-07T01:59:21Z
Release-Tag: RELEASE.2019-08-07T01-59-21Z
Commit-ID: 930943f058f01f37cfbc2265d5f80ea7026ec55d

Run minio

Run minio standalone on localhost (not exposing our system to the outside):

minio server --address 127.0.0.1:9000 ~/./minio_data/

output

$ minio server --address 127.0.0.1:9000 ~/./minio_data/

Endpoint:  http://127.0.0.1:9000
AccessKey: KYAS2LSSPXRZFH9P6RHS
SecretKey: qPZnIBJDe6GTRrUWcfdtKk7GPL4fGyqANDzJxkur 

Browser Access:
   http://127.0.0.1:9000

Command-line Access: https://docs.min.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://127.0.0.1:9000 KYAS2LSSPXRZFH9P6RHS qPZnIBJDe6GTRrUWcfdtKk7GPL4fGyqANDzJxkur

Object API (Amazon S3 compatible):
   Go:         https://docs.min.io/docs/golang-client-quickstart-guide
   Java:       https://docs.min.io/docs/java-client-quickstart-guide
   Python:     https://docs.min.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide

Web Dashboard

minio comes with its own web dashboard!

minio_localhost.png

minio_dashboard.png

New Bucket

Let’s create a new bucket for testing purposes:

minio_create_new_bucket.png

minio_new_bucket.png

minio_new_bucket_name.png

minio_bucket0001.png

Minio Client

minio comes with its own client, mc (minio client).

Install minio client

Binary Download

curl -sLO https://dl.min.io/client/mc/release/linux-amd64/mc

or better through your package manager:

sudo pacman -S minio-client

Access key / Secret Key

Now export our AK/SK in our environment:

export -p MINIO_ACCESS_KEY=KYAS2LSSPXRZFH9P6RHS
export -p MINIO_SECRET_KEY=qPZnIBJDe6GTRrUWcfdtKk7GPL4fGyqANDzJxkur

minio host

or you can configure the minio server as a host:

./mc config host add myminio http://127.0.0.1:9000 KYAS2LSSPXRZFH9P6RHS qPZnIBJDe6GTRrUWcfdtKk7GPL4fGyqANDzJxkur

I prefer this way, because I don't have to export the keys every time.

List buckets

$ mc ls myminio
[2019-08-05 20:44:42 EEST]      0B bucket0001/

$ mc ls myminio/bucket0001
(empty)

List Policy

mc admin policy list myminio

$ mc admin policy list myminio
readonly
readwrite
writeonly

Credentials

If we do not want to get random Credentials every time, we can define them in our environment:

export MINIO_ACCESS_KEY=admin
export MINIO_SECRET_KEY=password
minio server --address 127.0.0.1:9000 .minio_data{1...10}

with minio client:

$ mc config host add myminio http://127.0.0.1:9000 admin password

mc: Configuration written to `/home/ebal/.mc/config.json`. Please update your access credentials.
mc: Successfully created `/home/ebal/.mc/share`.
mc: Initialized share uploads `/home/ebal/.mc/share/uploads.json` file.
mc: Initialized share downloads `/home/ebal/.mc/share/downloads.json` file.
Added `myminio` successfully.

mc admin config get myminio/ | jq .credential

$ mc admin config get myminio/ | jq .credential
{
  "accessKey": "8RMC49VEC1IHYS8FY29Q",
  "expiration": "1970-01-01T00:00:00Z",
  "secretKey": "AY+IjQZomX6ZClIBJrjgxRJ6ugu+Mpcx6rD+kr13",
  "status": "enabled"
}

s3cmd

Let’s configure s3cmd to use our minio data server:

$ sudo pacman -S s3cmd

Configure s3cmd

s3cmd --configure

$ s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: KYAS2LSSPXRZFH9P6RHS
Secret Key: qPZnIBJDe6GTRrUWcfdtKk7GPL4fGyqANDzJxkur
Default Region [US]: 

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: http://127.0.0.1:9000
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]: 
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: n
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name: 
New settings:
  Access Key: KYAS2LSSPXRZFH9P6RHS
  Secret Key: qPZnIBJDe6GTRrUWcfdtKk7GPL4fGyqANDzJxkur
  Default Region: US
  S3 Endpoint: http://127.0.0.1:9000
  DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.s3.amazonaws.com
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
ERROR: Test failed: [Errno -2] Name or service not known

Retry configuration? [Y/n] n

Save settings? [y/N] y
Configuration saved to '/home/ebal/.s3cfg'

Test it

$ s3cmd ls
2019-08-05 17:44  s3://bucket0001
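We can also do a quick put/get round-trip to be sure writes work (a small sketch, reusing the bucket created earlier):

echo "hello minio" > hello.txt
s3cmd put hello.txt s3://bucket0001/
s3cmd ls s3://bucket0001/
s3cmd get s3://bucket0001/hello.txt hello.copy.txt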

Distributed

Let’s make a more complex example and test the distributed capabilities of minio

Create folders

mkdir -pv .minio_data{1..10}

$ mkdir -pv .minio_data{1..10}

mkdir: created directory '.minio_data1'
mkdir: created directory '.minio_data2'
mkdir: created directory '.minio_data3'
mkdir: created directory '.minio_data4'
mkdir: created directory '.minio_data5'
mkdir: created directory '.minio_data6'
mkdir: created directory '.minio_data7'
mkdir: created directory '.minio_data8'
mkdir: created directory '.minio_data9'
mkdir: created directory '.minio_data10'

Start Server

Be aware that you have to use 3 dots (...) to enable erasure-code distribution (see below).

and start minio server like this:

minio server --address 127.0.0.1:9000 .minio_data{1...10}

$ minio server --address 127.0.0.1:9000 .minio_data{1...10}

Waiting for all other servers to be online to format the disks.

Status:         10 Online, 0 Offline.
Endpoint:  http://127.0.0.1:9000
AccessKey: CDSBN216JQR5B3F3VG71
SecretKey: CE+ti7XuLBrV3uasxSjRyhAKX8oxtZYnnEwRU9ik 

Browser Access:
   http://127.0.0.1:9000

Command-line Access: https://docs.min.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://127.0.0.1:9000 CDSBN216JQR5B3F3VG71 CE+ti7XuLBrV3uasxSjRyhAKX8oxtZYnnEwRU9ik

Object API (Amazon S3 compatible):
   Go:         https://docs.min.io/docs/golang-client-quickstart-guide
   Java:       https://docs.min.io/docs/java-client-quickstart-guide
   Python:     https://docs.min.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide

configure mc

$ ./mc config host add myminio http://127.0.0.1:9000 WWFUTUKB110NS1V70R27 73ecITehtG2rOF6F08rfRmbF+iqXjNr6qmgAvdb2
Added `myminio` successfully.

admin info

mc admin info myminio

$ mc admin info myminio
●  127.0.0.1:9000
   Uptime: 3 minutes
  Version: 2019-08-07T01:59:21Z
  Storage: Used 25 KiB
   Drives: 0/0 OK

Create files

Creating random files

for i in $(seq 10000) ;do echo $RANDOM > file$i ; done

and by the way, we can use mc to list our local files also!

$ mc ls file* | head

[2019-08-05 21:27:01 EEST]      6B file1
[2019-08-05 21:27:01 EEST]      5B file10
[2019-08-05 21:27:01 EEST]      5B file100
[2019-08-05 21:27:01 EEST]      6B file11
[2019-08-05 21:27:01 EEST]      6B file12
[2019-08-05 21:27:01 EEST]      6B file13
[2019-08-05 21:27:01 EEST]      6B file14
[2019-08-05 21:27:01 EEST]      5B file15
[2019-08-05 21:27:01 EEST]      5B file16

Create bucket

mc mb myminio/bucket0002

$ mc mb myminio/bucket0002
Bucket created successfully `myminio/bucket0002`.

$ mc ls myminio
[2019-08-05 21:41:35 EEST]      0B bucket0002/

Copy files

mc cp file* myminio/bucket0002/

minio_copy_files.png

Be patient; even on a local filesystem this will take a long time.

minio_copy_files_finish.png

Erasure Code

Copying from the MinIO docs:

you may lose up to half (N/2) of the total drives
MinIO shards the objects across N/2 data and N/2 parity drives

Here is the disk usage after the copy:

$ du -sh .minio_data*

79M    .minio_data1
79M    .minio_data10
79M    .minio_data2
79M    .minio_data3
79M    .minio_data4
79M    .minio_data5
79M    .minio_data6
79M    .minio_data7
79M    .minio_data8
79M    .minio_data9

But what size did our files have?

$ du -sh files/
40M     files

Very interesting: besides the N/2 data + N/2 parity shards roughly doubling the raw usage, each of the 10,000 tiny objects also carries its own metadata (and occupies at least one filesystem block) on every drive, which is where most of the difference comes from.

$ tree .minio_data*

Here is a shorter list, to get an idea of how objects are structured: minio_data_tree.txt

$ mc ls myminio/bucket0002 | wc -l
10000

minio_dashboard_tree.txt

Delete a folder

Let's see how minio handles corrupted disks, but before that let's keep a hash of our files:

md5sum file* > /tmp/files.before

now remove:

$ rm -rf .minio_data10 

$ ls -la
total 0
drwxr-x---  1 ebal ebal    226 Aug 15 20:25 .
drwx--x---+ 1 ebal ebal   3532 Aug 15 19:13 ..
drwxr-x---  1 ebal ebal     40 Aug 15 20:25 .minio_data1
drwxr-x---  1 ebal ebal     40 Aug 15 20:25 .minio_data2
drwxr-x---  1 ebal ebal     40 Aug 15 20:25 .minio_data3
drwxr-x---  1 ebal ebal     40 Aug 15 20:25 .minio_data4
drwxr-x---  1 ebal ebal     40 Aug 15 20:25 .minio_data5
drwxr-x---  1 ebal ebal     40 Aug 15 20:25 .minio_data6
drwxr-x---  1 ebal ebal     40 Aug 15 20:25 .minio_data7
drwxr-x---  1 ebal ebal     40 Aug 15 20:25 .minio_data8
drwxr-x---  1 ebal ebal     40 Aug 15 20:25 .minio_data9

Notice that the folder .minio_data10 is no longer there.

This is the message in the minio server console:

API: SYSTEM()
Time: 20:23:50 EEST 08/15/2019
DeploymentID: 7852c1e1-146a-4ce9-8a05-50ad7b925fef
Error: unformatted disk found
       endpoint=.minio_data10
       3: cmd/prepare-storage.go:40:cmd.glob..func15.1()
       2: cmd/xl-sets.go:212:cmd.(*xlSets).connectDisks()
       1: cmd/xl-sets.go:243:cmd.(*xlSets).monitorAndConnectEndpoints()

Error: unformatted disk found

We can also see that minio tries to re-create the disk/volume/folder on our system:

$ du -sh .minio_data*
79M    .minio_data1
0       .minio_data10
79M    .minio_data2
79M    .minio_data3
79M    .minio_data4
79M    .minio_data5
79M    .minio_data6
79M    .minio_data7
79M    .minio_data8
79M    .minio_data9

Heal

Minio comes with a healing ability:

$ mc admin heal --recursive myminio/

minio_heal.png

$ du -sh .minio_data*

79M     .minio_data1
79M     .minio_data10
79M     .minio_data2
79M     .minio_data3
79M     .minio_data4
79M     .minio_data5
79M     .minio_data6
79M     .minio_data7
79M     .minio_data8
79M     .minio_data9
$ mc admin heal --recursive myminio/
 ◐  bucket0002/file9999
    10,000/10,000 objects; 55 KiB in 58m21s
    ┌────────┬────────┬─────────────────────┐
    │ Green  │ 10,004 │ 100.0% ████████████ │
    │ Yellow │      0 │   0.0%              │
    │ Red    │      0 │   0.0%              │
    │ Grey   │      0 │   0.0%              │
    └────────┴────────┴─────────────────────┘
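To double-check that the healed objects are intact, one could pull them back and compare them against the hashes we kept earlier (a sketch; mc mirror copies the bucket contents into a local folder):

mc mirror myminio/bucket0002 restored/
cd restored && md5sum --quiet -c /tmp/files.before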
Tag(s): minio, s3

Friday, 09 August 2019

Order your Akademy t-shirt *NOW*

If you want an Akademy 2019 t-shirt you have until Monday 12th Aug at 1100CEST (i.e. in 2 days and a bit) to order it.

Head over to https://akademy.kde.org/2019/akademy-2019-t-shirt and get yourself one of the exclusive t-shirts with Jen's awesome design :)

Sunday, 28 July 2019

My KDE Onboarding Sprint 2019 report

This week I took part in the KDE Onboarding Sprint 2019 (part of what's been known as the Nuremberg Megasprint, i.e. KDEConnect + KWin + Onboarding) in, you guessed it, Nuremberg.

The goal of the sprint was "how do we make it easier for people to start contributing". We mostly focused on the "start contributing *code*" side, though we briefly touched on artists and translators too.

This is *my* summary; a more official one will appear somewhere else, so don't get annoyed at me if this post is a bit opinionated (though I'll try not to be).

The main issues we've identified when trying to contribute to KDE software are:
* Getting dependencies is [sometimes] hard
* Actually running the software is [sometimes] hard

Dependencies are hard

Say you want to build Dolphin from the git master branch. For that (at the time of writing) you need KDE Frameworks 5.57; this means that if you run the latest Ubuntu or the latest openSUSE you can't build it, because they ship older versions.

Our current answer for that is kdesrc-build, but it's not the easiest script to use, and sometimes you may end up building QtWebEngine or QtWebKit, which as a newbie is something you most likely don't want to do.

Running is hard

Running the software you have just built (once you've passed the dependencies problem) is not trivial either.

Most of our software can't be run uninstalled (KDE Frameworks are a notable exception here, but newbies rarely start developing KDE Frameworks).

This means that you may try to run make install. If you didn't pass -DCMAKE_INSTALL_PREFIX pointing somewhere in your home, you'll probably have to run make install as root, since it defaults to /usr/local (this will be fixed in the next extra-cmake-modules release to point to a somewhat better prefix). That isn't very useful either, since none of your software looks for stuff in /usr/local. Newbies may be tempted to use -DCMAKE_INSTALL_PREFIX=/usr, but that's *VERY* dangerous since it can easily mess up your own system.

For applications, our typical answer is to use -DCMAKE_INSTALL_PREFIX=/home/something/else at the cmake stage, run make install, and then set the environment variables to pick up things from /home/something/else. At this stage a newbie will probably say "which variables?" (and not only newbies; I don't think I remember them all). To help with that we generate a prefix.sh in the build dir, and after the next extra-cmake-modules release we will tell users that they need to run it for things to work.

But still, that's quite convoluted, and I know from experience answering people on IRC that lots of people get stuck there. It's also very IDE-unfriendly, since IDEs don't usually have the "install" concept; it's build & run for them.

Solutions

We ended up focusing on two possible solutions:

* Conan: Conan, "the C/C++ Package Manager for Developers" (or so they say), is something like pip in the Python world but for C/C++. The idea is that by using Conan to get the dependencies we will solve most of the problems in that area. Whether it can help or not with the running side is still unclear, but some of our people involved in the Conan effort think they may either be able to come up with a solution or get the Conan devs to help us with it. Note that Conan is not my speciality by far, so this may not be totally correct.

* Flatpak: Flatpak is "a next-generation technology for building and distributing desktop applications on Linux" (or so they say). The benefits of using Flatpak are multiple, but focusing on onboarding they are: "Getting dependencies is solved" - dependencies are either part of the Flatpak SDK and you already have them, or the Flatpak manifest for the application says how to get and build them, and that will automagically work for you as it works for everyone else using the same manifest. "Running is solved" - when you build a Flatpak it gets built into a self-contained artifact, so running it is just running it; no installing or environment variable fiddling is needed. We also have [preliminary] support in KDevelop (or you can use Gnome Builder if you want a more Flatpak-centric experience for now). The main problem we have with Flatpak at this point is that most of our apps are not totally Flatpak-ready (e.g. Okular can't print). But that's something we need to fix anyway, so it shouldn't be counted as a problem (IMHO).

Summary

*Personally* I think Flatpak is the way to go here, but that means that collectively we need to say "let's do it". It's something we all have to take into account, and thus we have to streamline the manifest handling/updating, focus on fixing the Flatpak-related issues that our software may have, etc.

Thanks

I would like to thank SUSE for hosting us in their offices and the KDE e.V. for sponsoring my attendance to the sprint, please donate to KDE if you think the work done at sprints is important.

Saturday, 20 July 2019

A Dead Simple VPN

DSVPN is designed to address the most common use case for using a VPN

Works with TCP, blocks IPv6 leaks, redirect-gateway out-of-the-box!

 

last updated: 20190810

  • iptables rules example added
  • change vpn.key to dsvpn.key
  • add base64 example for easy copy/transfer across machines

 

dsvpn.png

 

dsvpn binary

I keep a personal gitlab CI for dsvpn here: DSVPN

Compile

Notes on the latest ubuntu:18.04 docker image:

# git clone https://github.com/jedisct1/dsvpn.git
Cloning into 'dsvpn'...
remote: Enumerating objects: 88, done.
remote: Counting objects: 100% (88/88), done.
remote: Compressing objects: 100% (59/59), done.
remote: Total 478 (delta 47), reused 65 (delta 29), pack-reused 390
Receiving objects: 100% (478/478), 93.24 KiB | 593.00 KiB/s, done.
Resolving deltas: 100% (311/311), done.

# cd dsvpn

# ls
LICENSE  Makefile  README.md  include  logo.png  src

# make
cc -march=native -Ofast -Wall -W -Wshadow -Wmissing-prototypes -Iinclude -o dsvpn src/dsvpn.c src/charm.c src/os.c
strip dsvpn

# ldd dsvpn
linux-vdso.so.1 (0x00007ffd409ba000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fd78480b000)
/lib64/ld-linux-x86-64.so.2 (0x00007fd784e03000)

# ls -l dsvpn
-rwxr-xr-x 1 root root 26840 Jul 20 15:51 dsvpn

Just copy the dsvpn binary to your machines.

 

Symmetric Key

dsvpn uses symmetric-key cryptography, which means both machines use the same secret key.

dsvpn_key.png

dd if=/dev/urandom of=dsvpn.key count=1 bs=32

Copy the key to both machines using a secure medium, like ssh.
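For example, with scp (a sketch; the host and destination path are placeholders):

scp dsvpn.key root@server.example.com:/root/dsvpn.key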

base64

An easy way is to convert the key to base64:

cat dsvpn.key | base64

ZqMa31qBLrfjjNUfhGj8ADgzmo8+FqlyTNJPBzk/x4k=

on the other machine:

echo ZqMa31qBLrfjjNUfhGj8ADgzmo8+FqlyTNJPBzk/x4k= | base64 -d > dsvpn.key

 

Server

It is very easy to run dsvpn in server mode:

eg.

dsvpn server dsvpn.key auto

Interface: [tun0]
net.ipv4.ip_forward = 1
Listening to *:443

ip addr show tun0

4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none
    inet 192.168.192.254 peer 192.168.192.1/32 scope global tun0
       valid_lft forever preferred_lft forever

I prefer to use 10.8.0.0/24 CIDR in my VPNs, so in my VPN setup:

dsvpn server /root/dsvpn.key auto 443 auto 10.8.0.254 10.8.0.2

Using 10.8.0.254 as the VPN Server IP.

systemd service unit - server

I’ve created a simple systemd script dsvpn_server.service

or you can copy it from here:

/etc/systemd/system/dsvpn.service

[Unit]
Description=Dead Simple VPN - Server

[Service]
ExecStart=/usr/local/bin/dsvpn server /root/dsvpn.key auto 443 auto 10.8.0.254 10.8.0.2
Restart=always
RestartSec=20

[Install]
WantedBy=network.target

and then:

systemctl enable dsvpn.service
systemctl  start dsvpn.service

Client

It is also easy to run dsvpn in client mode:

eg.

dsvpn client dsvpn.key 93.184.216.34

# dsvpn client dsvpn.key 93.184.216.34
Interface: [tun0]
Trying to reconnect
Connecting to 93.184.216.34:443...
net.ipv4.tcp_congestion_control = bbr
Connected

ip addr show tun0

4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none
    inet 192.168.192.1 peer 192.168.192.254/32 scope global tun0
       valid_lft forever preferred_lft forever

dsvpn works in redirect-gateway mode,
so it will apply routing rules to pass all of the network traffic through the VPN.

ip route list

0.0.0.0/1 via 192.168.192.254 dev tun0
default via 192.168.122.1 dev eth0 proto static
93.184.216.34 via 192.168.122.1 dev eth0
128.0.0.0/1 via 192.168.192.254 dev tun0
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.69
192.168.192.254  dev tun0 proto kernel scope link src 192.168.192.1

As I mentioned above, I prefer to use 10.8.0.0/24 CIDR in my VPNs, so in my VPN client:

dsvpn client /root/dsvpn.key 93.184.216.34 443 auto 10.8.0.2 10.8.0.254

Using 10.8.0.2 as the VPN Client IP.

ip addr show tun0

11: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none
    inet 10.8.0.2 peer 10.8.0.254/32 scope global tun0
       valid_lft forever preferred_lft forever

systemd service unit - client

I’ve also created a simple systemd script for the client dsvpn_client.service

or you can copy it from here:

/etc/systemd/system/dsvpn.service

[Unit]
Description=Dead Simple VPN - Client

[Service]
ExecStart=/usr/local/bin/dsvpn client /root/dsvpn.key 93.184.216.34 443 auto 10.8.0.2 10.8.0.254
Restart=always
RestartSec=20

[Install]
WantedBy=network.target

and then:

systemctl enable dsvpn.service
systemctl  start dsvpn.service

and here is an MTR from the client:

dsvpn_mtr.png

 

Enjoy !

 

firewall

It is important to protect your traffic from network leaks. That means that sometimes we do not want our network traffic to pass through our provider if the VPN server/client goes down. To prevent any network leak, here is an example of iptables rules for a virtual machine:

# Empty iptables rule file
*filter
:INPUT   ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT  ACCEPT [0:0]

-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -m conntrack --ctstate INVALID -j DROP
-A INPUT -p icmp --icmp-type 8 -m conntrack --ctstate NEW -j ACCEPT

# LibVirt
-A INPUT -i eth0 -s 192.168.122.0/24 -j ACCEPT

# Reject incoming traffic
-A INPUT -j REJECT

# DSVPN
-A OUTPUT -p tcp -m tcp -o eth0 -d 93.184.216.34 --dport 443 -j ACCEPT
# LibVirt
-A OUTPUT -o eth0 -d 192.168.122.0/24 -j ACCEPT
# Allow tun
-A OUTPUT -o tun+ -j ACCEPT

# Reject outgoing traffic
-A OUTPUT -p tcp -j REJECT --reject-with tcp-reset
-A OUTPUT -p udp -j REJECT --reject-with icmp-port-unreachable

COMMIT
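These rules are in iptables-save format, so one way to load them is with iptables-restore (a sketch; the file path is just an example):

iptables-restore < /etc/iptables/iptables.rules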

Here is the preferable output:

dsvpn_ping.png

 

Tag(s): vpn, dsvpn

Friday, 19 July 2019

Popular licenses in OpenAPI

Today I was wondering what the most commonly used license in OpenAPI is, so I went and did a quick analysis.

Results

The top 5 (with count and percentage; n=552):

License name                                   count  percentage
CC-BY-3.0                                        250      45,29%
Apache-2.0 [1]                                   218      39,49%
MIT                                               15       2,71%
“This page was built with the Swagger API.”        8       1,44%
“Open Government License – British Columbia”       6       1,09%

The struck-out entries are the ones that I would not really consider a proper license.

The license names inside quotation marks are the exact copy-paste from the field. The rest are de-duplicated into their SPDX identifiers.

After the top 5, the long tail very quickly drops to only one license per listed API. Several of those seem very odd as well.

Methodology

Note: Before you start complaining, I realise this is probably a very sub-optimal solution code-wise, but it worked for me. In my defence, I did open up my copy of the Sed & Awk Pocket Reference before my eyes went all glassy and I hacked up the following ugly method. Also note that the shell scripts are in Fish shell and may not work directly in a 100% POSIX shell.

First, I needed to get a data set to work on. Hat-tip to Mike Ralphson for pointing me to APIs Guru as a good resource. I analysed their APIs-guru/openapi-directory repository [2], where in the APIs folder they keep a big collection of public APIs, most of them following the OpenAPI (previously Swagger) specification.

git clone https://github.com/APIs-guru/openapi-directory.git
cd openapi-directory/APIs

Next I needed to list all the licenses found there. For this I assumed the name: tag in YAML [4] (the one including the name of the license) to be in the very next line after the license: tag [3] – I relied on people writing OpenAPI files in the same order as it is laid out in the OpenAPI Specification. I stored the list of all licenses, sorted alphabetically, in a separate api_licenses file:

grep 'license:' **/openapi.yaml **/swagger.yaml -A 1 --no-filename | \
grep 'name:' | sort > api_licenses

Then I generated another file called api_licenses_unique that would include only the unique names of these licenses.

grep 'license:' **/openapi.yaml **/swagger.yaml -A 1 --no-filename | \
grep 'name:' | sort | uniq > api_licenses_unique

Because I was too lazy to figure out how to do this properly [5], I simply wrapped the same one-liner into a script to go through all the unique license names and count how many times they show up in the (non-deduplicated) list of all licenses found.

for license in (grep 'license:' **/openapi.yaml **/swagger.yaml -A 1 \
    --no-filename | grep 'name' | sort | uniq)
    grep "$license" api_licenses --count
end

In the end I copied the console output of this last command, opened api_licenses_unique, and pasted said output in the first column (by going into Block Selection Mode in Kate).

Clarification on what I consider “proper license” and re-count of Creative Commons licenses (12 July 2019 update)

I was asked what I considered as a “proper license” above, and specifically why I did not consider “Creative Commons” as such.

First, if the string did not even remotely look like a name of a license, I did not consider that as a proper license. This is the case e.g. with “This page was built with the Swagger API.”.

As for the string “Creative Commons”, it – at best – indicates a family of licenses, which span a vast spectrum from CC0-1.0 (basically public domain) on one end to CC-BY-NC-CA-4.0 (basically, you may copy this, but not change anything, nor get money out of it, and you must keep the same license) on the other. For reference, on the SPDX license list you will find 32 Creative Commons licenses. And SPDX lists only the International and Universal versions of them [7].

Admittedly – and this is a caveat in my initial method above – it may be that there is an actual license following the lines after the “Creative Commons” string … or, as it turned out to be true, that the initial 255 count of name: Creative Commons licenses also included valid CC license names such as name: Creative Commons Attribution 3.0.

So, obviously I made a boo-boo, and therefore went and dug deeper ;)

To do so, and after looking at the results a bit more, I noticed that the url: entries of the name: Creative Commons licenses seem to point to actual CC licenses, so I decided to rely on that. Luckily, this turned out to be true.

I broadened up the initial search to one extra line, to include the url: line, narrowed down the next search to name: Creative Commons, and in the end only to url:

grep 'license:' **/openapi.yaml **/swagger.yaml -A 2 --no-filename | \
grep 'name: Creative Commons' -A 1 | grep 'url' | sort > api_licenses_cc

Next, I searched for the most common license – CC-BY-3.0:

grep --count 'creativecommons.org/licenses/by/3.0' api_licenses_cc

The result was 250, so for the remaining 5 [6] I just opened the api_licenses_cc file and counted them manually.

Using this method, the list of all “Creative Commons” licenses turned out to be as follows:

  1. CC-BY-3.0 (250, of which one was specific to Australian jurisdiction)
  2. CC-BY-4.0 (3)
  3. CC-BY-NC-4.0 (1)
  4. CC-BY-NC-ND-2.0 (1)

In this light, I am amending the results above, and removing the bogus “Creative Commons” entry. Apart from removing the bogus entry, it does not change the ranking, nor the counts, of the top 5 licenses.

Further clean-up of Apache (17 July 2019 update)

Upon further inspection it looked odd that I was getting so many Apache-2.0 matches – if you added all the Apache-2.0 hits (initially 421) to all the CC-BY-3.0 hits (250), you already reached a higher number than all the occurrences of the license: field in all the files (552). Clearly something was off.

So I re-counted the Apache hits by limiting myself only to the url: field of the license:, instead of the name:, and came to half of the original number, which brought it from first down to second place. Basically I applied the same method as above for counting Creative Commons licenses.

Better method (25 July 2019 update)

I just learnt from Jaka “Lynx” Kranjc of a better solution. Basically, I could cut down quite a bit by simply using uniq --count, which produces a unique list and prepends a column of how many times it found that occurrence – super useful!

I will not edit my findings above again, but am mentioning the better method below, together with the attached results, so others can simply check.

grep 'license:' **/openapi.yaml **/swagger.yaml -A 1 --no-filename | \
grep 'name:' |  uniq -c | sort > OpenAPI_grouped_by_license_name.txt

… produces OpenAPI_grouped_by_license_name.txt

grep 'license:' **/openapi.yaml **/swagger.yaml -A 2 --no-filename | \
grep 'url:' |  uniq -c | sort > OpenAPI_grouped_by_license_url.txt

… produces OpenAPI_grouped_by_license_url.txt

hook out → not proud of the method, but happy with having results


  1. This should come as no surprise, as Apache-2.0 is used as the official specification’s example

  2. At the time of this writing, that was commit 506133b

  3. I tried it also with 3 lines, and the few extra results that came up where mostly useless. 

  4. I did a quick check and the repository seems to include no OpenAPIs in JSON format. 

  5. I expected for license in api_licenses_unique to work, but it did not. 

  6. The result of wc -l api_licenses_cc was 255. 

  7. Prior to version 4.0 of Creative Commons licenses each CC license had several versions localised for specific jurisdictions. 

Thursday, 18 July 2019

slack-desktop and xdg-open

Notes from archlinux

xdg-open - opens a file or URL in the user’s preferred application

When you are trying to authenticate to a new workspace (with 2FA) using slack-desktop, it will open your default browser, and after the authentication the browser will redirect you back to slack-desktop using something like this:

slack://6f69f7c8b/magic-login/t3bnakl6qabc-16869c6603bdb64f3a6f69f7c8b2d920fa26149f990e0556b4e5c6f26984db0a

We can query the MIME handler for this URL scheme:

$ xdg-mime query default x-scheme-handler/slack
slack.desktop

$ locate slack.desktop
/usr/share/applications/slack.desktop
$ more /usr/share/applications/slack.desktop

[Desktop Entry]
Name=Slack
Comment=Slack Desktop
GenericName=Slack Client for Linux
Exec=/usr/bin/slack --disable-gpu %U
Icon=/usr/share/pixmaps/slack.png
Type=Application
StartupNotify=true
Categories=GNOME;GTK;Network;InstantMessaging;
MimeType=x-scheme-handler/slack;

I had to change the Exec entry above to point to my slack-desktop binary.
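Something along these lines works (a sketch; the binary path is hypothetical):

# keep a local copy so package updates do not overwrite the change
cp /usr/share/applications/slack.desktop ~/.local/share/applications/
# edit Exec= in the local copy, e.g. Exec=/opt/slack/slack --disable-gpu %U
# then (re-)register the handler for the slack:// scheme
xdg-mime default slack.desktop x-scheme-handler/slack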

Tag(s): slack, xdg

Monday, 15 July 2019

KDE Applications 19.08 branches created

Make sure you commit anything you want to end up in the KDE Applications 19.08 release to them

We're already past the dependency freeze.

The Freeze and Beta is this Thursday 18 of July.

More interesting dates
August 1, 2019: KDE Applications 19.08 RC (19.07.90) Tagging and Release
August 8, 2019: KDE Applications 19.08 Tagging
August 15, 2019: KDE Applications 19.08 Release

https://community.kde.org/Schedules/Applications/19.08_Release_Schedule

Sunday, 14 July 2019

kubernetes with minikube - Intro Notes

Notes based on Ubuntu 18.04 LTS

My notes for this k8s blog post are based upon an Ubuntu 18.04 LTS KVM virtual machine. The idea is to use nested KVM to run minikube inside a VM, so that minikube can then create a KVM node.

minikube builds a local kubernetes cluster on a single node, with a small set of resources, to run a small kubernetes deployment.

Archlinux -> VM Ubuntu 18.04 LTS running minikube/kubectl -> KVM minikube node

 

Pre-requirements

Nested kvm

Host

(archlinux)

$ grep ^NAME /etc/os-release
NAME="Arch Linux"

Check that nested-kvm is already supported:

$ cat /sys/module/kvm_intel/parameters/nested
N

If the output is N (No) then remove & enable kernel module again:

$ sudo modprobe -r kvm_intel
$ sudo modprobe kvm_intel nested=1

Check that nested-kvm is now enabled:

$ cat /sys/module/kvm_intel/parameters/nested
Y

 

Guest

Inside the virtual machine:

$ grep NAME /etc/os-release
NAME="Ubuntu"
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
$ egrep -o 'vmx|svm|0xc0f' /proc/cpuinfo

vmx
$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

 

LibVirtd

If the above step fails, try to edit the xml libvirtd configuration file in your host:

# virsh edit ubuntu_18.04

and change cpu mode to passthrough:

from

  <cpu mode='custom' match='exact' check='partial'>
    <model fallback='allow'>Nehalem</model>
  </cpu>

to

  <cpu mode='host-passthrough' check='none'/>

 

Install Virtualization Tools

Inside the VM

 

sudo apt -y install \
  qemu-kvm \
  bridge-utils \
  libvirt-clients \
  libvirt-daemon-system

Permissions

We need to be included in the libvirt group

sudo usermod -a -G libvirt $(whoami)
newgrp libvirt
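A quick check that libvirt is now reachable as our user (it should print an empty table rather than a permission error):

virsh list --all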

 

kubectl

kubectl is a command line interface for running commands against Kubernetes clusters.

size: ~41M

$ export VERSION=$(curl -sL https://storage.googleapis.com/kubernetes-release/release/stable.txt)
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$VERSION/bin/linux/amd64/kubectl

$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/kubectl

$ kubectl completion bash | sudo tee -a /etc/bash_completion.d/kubectl
$ kubectl version

If you want to use bash autocompletion without logging out/in, use this:

source <(kubectl completion bash)

What the json output of kubectl version looks like:

$ kubectl version -o json | jq .
The connection to the server localhost:8080 was refused - did you specify the right host or port?
{
  "clientVersion": {
    "major": "1",
    "minor": "15",
    "gitVersion": "v1.15.0",
    "gitCommit": "e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529",
    "gitTreeState": "clean",
    "buildDate": "2019-06-19T16:40:16Z",
    "goVersion": "go1.12.5",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}

Message:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

It's okay if minikube hasn't started yet.

 

minikube

size: ~40M

$ curl -sLO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

$ chmod +x minikube-linux-amd64

$ sudo mv minikube-linux-amd64 /usr/local/bin/minikube

$ minikube version
minikube version: v1.2.0

$ minikube update-check
CurrentVersion: v1.2.0
LatestVersion: v1.2.0

$ minikube completion bash | sudo tee -a /etc/bash_completion.d/minikube 

To include bash completion without login/logout:

source <(minikube completion bash)

 

KVM2 driver

We need a driver so that minikube can build a kvm image/node for our kubernetes cluster.

size: ~36M

$ curl -sLO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2

$ chmod +x docker-machine-driver-kvm2

$ sudo mv docker-machine-driver-kvm2 /usr/local/bin/

 

Start minikube

$ minikube start --vm-driver kvm2

* minikube v1.2.0 on linux (amd64)
* Downloading Minikube ISO ...
 129.33 MB / 129.33 MB [============================================] 100.00% 0s
* Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
* Configuring environment for Kubernetes v1.15.0 on Docker 18.09.6
* Downloading kubeadm v1.15.0
* Downloading kubelet v1.15.0
* Pulling images ...
* Launching Kubernetes ...
* Verifying: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"

Check via libvirt, you will find out a new VM, named: minikube

$ virsh list
 Id    Name                           State
----------------------------------------------------
 1     minikube                       running

 

If something has gone wrong:

Just delete the VM and configuration directories and start again:

$ minikube delete
$ rm -rf ~/.minikube/ ~/.kube

kubectl version

Now let’s run kubectl version again

$ kubectl version -o json | jq .

{
  "clientVersion": {
    "major": "1",
    "minor": "15",
    "gitVersion": "v1.15.0",
    "gitCommit": "e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529",
    "gitTreeState": "clean",
    "buildDate": "2019-06-19T16:40:16Z",
    "goVersion": "go1.12.5",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "serverVersion": {
    "major": "1",
    "minor": "15",
    "gitVersion": "v1.15.0",
    "gitCommit": "e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529",
    "gitTreeState": "clean",
    "buildDate": "2019-06-19T16:32:14Z",
    "goVersion": "go1.12.5",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}
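As a quick smoke test (a sketch; the deployment name and image are only examples), we can inspect the node and run a throwaway deployment:

kubectl get nodes -o wide
kubectl create deployment hello --image=k8s.gcr.io/echoserver:1.4
kubectl get pods
# clean up afterwards
kubectl delete deployment hello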

 

Dashboard

Start kubernetes dashboard

$ kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
Starting to serve on [::]:8001

minikube_dashboard.png

 

Tuesday, 09 July 2019

Beware of some of the Qt 5.13 deprecation porting hints

QComboBox::currentIndexChanged(QString) used to have (i.e. in Qt 5.13.0) a deprecation warning that said "Use currentTextChanged() instead".

That has recently been reverted, since the two are not totally equivalent; sure, you can probably "port" from one to the other, but the "use" wording to me sounds like "this is the same", and they are not.

Another one of those is QPainter::initFrom, which initializes a painter's pen, background and font to those of the given widget. This is deprecated, probably because it's conceptually wrong ("what is the pen of a widget?"), but the deprecation warning says "Use begin(QPaintDevice*)", and again, if you look at the implementation, they don't really do the same thing. I still need to find time to complain to the Qt developers and get it fixed.

Anyhow, as usual, when porting make sure you do a correct port and not just blind changes.

Monday, 08 July 2019

Repair a Faulty Disk in Raid-5

Quick notes

Identify slow disk

# hdparm -Tt /dev/sda

/dev/sda:
 Timing cached reads:   2502 MB in  2.00 seconds = 1251.34 MB/sec
 Timing buffered disk reads: 538 MB in  3.01 seconds = 178.94 MB/sec

# hdparm -Tt /dev/sdb

/dev/sdb:
 Timing cached reads:   2490 MB in  2.00 seconds = 1244.86 MB/sec
 Timing buffered disk reads: 536 MB in  3.01 seconds = 178.31 MB/sec

# hdparm -Tt /dev/sdc

/dev/sdc:
 Timing cached reads:   2524 MB in  2.00 seconds = 1262.21 MB/sec
 Timing buffered disk reads: 538 MB in  3.00 seconds = 179.15 MB/sec

# hdparm -Tt /dev/sdd

/dev/sdd:
 Timing cached reads:   2234 MB in  2.00 seconds = 1117.20 MB/sec
 Timing buffered disk reads: read(2097152) returned 929792 bytes

 

Set disk to Faulty State and Remove it

#  mdadm --manage /dev/md0 --fail /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md0

#  mdadm --manage /dev/md0 --remove  /dev/sdd
mdadm: hot removed /dev/sdd from /dev/md0

Verify Status

# mdadm --verbose --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Feb  6 15:06:34 2014
     Raid Level : raid5
     Array Size : 2929893888 (2794.16 GiB 3000.21 GB)
  Used Dev Size : 976631296 (931.39 GiB 1000.07 GB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Mon Jul  8 00:51:14 2019
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : ServerOne:0  (local to host ServerOne)
           UUID : d635095e:50457059:7e6ccdaf:7da91c9b
         Events : 18122

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       6       8       32        1      active sync   /dev/sdc
       4       0        0        4      removed
       4       8        0        3      active sync   /dev/sda

Format Disk

  • quick format to identify bad blocks,
  • better solution: zeroing the disk,

# mkfs.ext4 -cc -v  /dev/sdd

  • middle ground: use -c

-c Check the device for bad blocks before creating the file system. If this option is specified twice, then a slower read-write test is used instead of a fast read-only test.

# mkfs.ext4 -c -v  /dev/sdd 

output:

Running command: badblocks -b 4096 -X -s /dev/sdd 244190645
Checking for bad blocks (read-only test):   9.76% done, 7:37 elapsed

Remove ext headers

# dd if=/dev/zero of=/dev/sdd bs=4096 count=4096

Using dd to remove any ext headers

Test disk


# hdparm -Tt /dev/sdd

/dev/sdd:
 Timing cached reads:   2174 MB in  2.00 seconds = 1087.20 MB/sec
 Timing buffered disk reads: 516 MB in  3.00 seconds = 171.94 MB/sec

Add Disk to Raid


# mdadm --add /dev/md0 /dev/sdd
mdadm: added /dev/sdd

Speed

# hdparm -Tt /dev/md0

/dev/md0:
 Timing cached reads:   2480 MB in  2.00 seconds = 1239.70 MB/sec
 Timing buffered disk reads: 1412 MB in  3.00 seconds = 470.62 MB/sec

Status


# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd[5] sda[4] sdc[6] sdb[0]
      2929893888 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UU_U]
      [>....................]  recovery =  0.0% (44032/976631296) finish=369.5min speed=44032K/sec

unused devices: <none>
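To follow the rebuild as it progresses, we can simply watch mdstat:

watch -n 30 cat /proc/mdstat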

Verify Raid


# mdadm --verbose --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Feb  6 15:06:34 2014
     Raid Level : raid5
     Array Size : 2929893888 (2794.16 GiB 3000.21 GB)
  Used Dev Size : 976631296 (931.39 GiB 1000.07 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Jul  8 00:58:38 2019
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 0% complete

           Name : ServerOne:0  (local to host ServerOne)
           UUID : d635095e:50457059:7e6ccdaf:7da91c9b
         Events : 18244

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       6       8       32        1      active sync   /dev/sdc
       5       8       48        2      spare rebuilding   /dev/sdd
       4       8        0        3      active sync   /dev/sda
Tag(s): mdadm, raid5

Sunday, 07 July 2019

sRGB↔XYZ conversion

In an earlier post, I’ve shown how to calculate an sRGB↔XYZ conversion matrix. It’s only natural to follow up with a code for converting between sRGB and XYZ colour spaces. While the matrix is a significant portion of the algorithm, there is one more step necessary: gamma correction.

What is gamma correction?

Human perception of light’s brightness approximates a power function of its intensity. This can be expressed as \(P = S^\alpha\) where \(P\) is the perceived brightness and \(S\) is linear intensity. \(\alpha\) has been experimentally measured to be less than one, which means that people are more sensitive to changes in dark colours than in bright ones.

Based on that observation, colour space’s encoding can be made more efficient by using higher precision when encoding dark colours and lower when encoding bright ones. This is akin to precision of floating point numbers scaling with value’s magnitude. In RGB systems, the role of precision scaling is done by gamma correction. When colour is captured (for example from a digital camera) it goes through gamma compression which spaces dark colours apart and packs lighter colours more densely. When displaying an image, the opposite happens and encoded value goes through gamma expansion.

[Figure: encoded value versus light intensity, with two colour gradients for comparison.]

Many RGB systems use a simple \(S = E^\gamma\) expansion formula, where \(E\) is the encoded (or non-linear) value. With decoding \(\gamma\) approximating \(1/\alpha\), equal steps in encoding space correspond roughly to equal steps in perceived brightness. Image on the right demonstrates this by comparing two colour gradients. The first one has been generated by increasing encoded value in equal steps and the second one has been created by doing the same to light intensity. The former includes many dark colours while the latter contains a sudden jump in brightness from black to the next colour.

sRGB uses slightly more complicated formula stitching together two functions: $$ \begin{align} E &= \begin{cases} 12.92 × S & \text{if } S ≤ S_0 \\ 1.055 × S^{1/2.4} - 0.055 & \text{otherwise} \end{cases} \\[0.5em] S &= \begin{cases} E / 12.92 & \text{if } E ≤ E_0 \\ ((E + 0.055) / 1.055)^{2.4} & \text{otherwise} \end{cases} \\[0.5em] S_0 &= 0.00313066844250060782371 \\ E_0 &= 12.92 × S_0 \\ &= 0.04044823627710785308233 \end{align} $$

The formulas assume values are normalised to [0, 1] range. This is not always how they are expressed so a scaling step might be necessary.

sRGB encoding

Most common sRGB encoding uses eight bits per channel which introduces a scaling step: \(E_8 = ⌊E × 255⌉\). In an actual implementation, to increase efficiency and accuracy of gamma operations, it’s best to fuse the multiplication into the aforementioned formulas. With that arguably obvious optimisation, the equations become: $$ \begin{align} E_8 &= \begin{cases} ⌊3294.6 × S⌉ & \text{if } S ≤ S_0 \\ ⌊269.025 × S^{1/2.4} - 14.025⌉ & \text{otherwise} \end{cases} \\[0.5em] S &= \begin{cases} E_8 / 3294.6 & \text{if } E_8 ≤ 10 \\ ((E_8 + 14.025) / 269.025)^{2.4} & \text{otherwise} \end{cases} \end{align} $$
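As a quick sanity check of these formulas, take the 8-bit value 128 (a mid grey); the round trip recovers the original value: $$ \begin{align} S &= ((128 + 14.025) / 269.025)^{2.4} ≈ 0.2159 \\ E_8 &= ⌊269.025 × 0.2159^{1/2.4} - 14.025⌉ = ⌊128.0⌉ = 128 \end{align} $$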

This isn’t the only way to represent colours of course. For example, 10-bit colour depth changes the scaling factor to 1024; 16-bit high colour uses five bits for red and blue channels while five or six for green producing different scaling factors for different primaries; and HDTV caps the range to [16, 235]. Needless to say, correct formulas need to be chosen based on the standard in question.

The implementation

And that’s it. Encoding, gamma correction and the conversion matrix are all the necessary pieces to get the conversion implemented. To keep things interesting, let's this time write the code in TypeScript:

type Tripple = [number, number, number];
type Matrix = [Tripple, Tripple, Tripple];

/**
 * A conversion matrix from linear sRGB colour space with coordinates normalised
 * to [0, 1] range into an XYZ space.
 */
const xyzFromRgbMatrix: Matrix = [
	[0.4123865632529917,   0.35759149092062537, 0.18045049120356368],
	[0.21263682167732384,  0.7151829818412507,  0.07218019648142547],
	[0.019330620152483987, 0.11919716364020845, 0.9503725870054354]
];

/**
 * A conversion matrix from XYZ colour space to a linear sRGB space with
 * coordinates normalised to [0, 1] range.
 */
const rgbFromXyzMatrix: Matrix = [
	[ 3.2410032329763587,   -1.5373989694887855,  -0.4986158819963629],
	[-0.9692242522025166,    1.875929983695176,    0.041554226340084724],
	[ 0.055639419851975444, -0.20401120612390997,  1.0571489771875335]
];

/**
 * Performs an sRGB gamma expansion of an 8-bit value, i.e. an integer in [0,
 * 255] range, into a floating point value in [0, 1] range.
 */
function gammaExpansion(value255: number): number {
	return value255 <= 10
		? value255 / 3294.6
		: Math.pow((value255 + 14.025) / 269.025, 2.4);
}

/**
 * Performs an sRGB gamma compression of a floating point value in [0, 1] range
 * into an 8-bit value, i.e. an integer in [0, 255] range.
 */
function gammaCompression(linear: number): number {
	let nonLinear: number = linear <= 0.00313066844250060782371
		? 3294.6 * linear
		: (269.025 * Math.pow(linear, 5.0 / 12.0) - 14.025);
	return Math.round(nonLinear) | 0;
}

/**
 * Multiplies a 3✕3 matrix by a 3✕1 column matrix.  The result is another 3✕1
 * column matrix.  The column matrices are represented as single-dimensional
 * 3-element array.  The matrix is represented as a two-dimensional array of
 * rows.
 */
function matrixMultiplication3x3x1(matrix: Matrix, column: Tripple): Tripple {
	return matrix.map((row: Tripple) => (
		row[0] * column[0] + row[1] * column[1] + row[2] * column[2]
	)) as Tripple;
}

/**
 * Converts sRGB colour given as a triple of 8-bit integers into XYZ colour
 * space.
 */
function xyzFromRgb(rgb: Tripple): Tripple {
	return matrixMultiplication3x3x1(
		xyzFromRgbMatrix, rgb.map(gammaExpansion) as Tripple);
}

/**
 * Converts colour from XYZ space to sRGB colour represented as a triple of
 * 8-bit integers.
 */
function rgbFromXyz(xyz: Tripple): Tripple {
	return matrixMultiplication3x3x1(
		rgbFromXyzMatrix, xyz).map(gammaCompression) as Tripple;
}

Wednesday, 03 July 2019

Down the troubleshooting rabbit-hole

Hardware Details

HP ProLiant MicroServer
AMD Turion(tm) II Neo N54L Dual-Core Processor
Memory Size: 2 GB - DIMM Speed: 1333 MT/s
Maximum Capacity: 8 GB

Running 24×7 since 23/08/2010, so nine years!

N54L

 

Prologue

The above server started its life on CentOS 5 and ext3. I later re-formatted it to run CentOS 6.x with ext4 on 4 x 1TB OEM hard disks with mdadm RAID-5. That provided 3TB of storage with fault tolerance for one drive failure. And believe me, I used that setup: zeroing broken disks or replacing faulty disks.

 

As we are reaching the end of CentOS 6.x, there is no official dist-upgrade path for CentOS, and we are still waiting for CentOS 8.x, I made the decision to switch to Ubuntu 18.04 LTS. At that point this would be the 3rd official OS re-installation of this server. I chose Ubuntu so that I can dist-upgrade from LTS to LTS.

 

This is a backup server: no need for huge RAM, but a definite need for a reliable system. On that storage I have 2m files that, in retrospect, are not very big. So with the re-installation I chose to use the xfs filesystem instead of ext4.

 

I am also running an internal snapshot mechanism to have a delta for every day, and that pushed the storage usage to 87% of the 3TB. If you do the math, 2m files is about 1.2TB of usage; we need a full initial backup, so 2.4TB (80%), and then the daily (rotating) incremental backups are ~210MB per day. That gave me space for five (5) daily snapshots, aka a work-week.

To remove this impediment, I also replaced the disks with WD Red Pro 6TB 7200rpm disks and now use RAID-1 instead of RAID-5. Usage is now ~45%.

 

Problem

Frozen System

From time to time, this very new, very clean, very reliable system froze to death!

With a monitor & keyboard attached there is no output. Strangely enough, I can ping the network interfaces, but I cannot ssh to the server or even telnet (nc) to the ssh port. Awkward! Okay, hardware cold reboot then.

As this system is remote, at random times I need to ask someone to cold-reboot this machine. Awkward again.

Kernel Panic

If that was not enough, this machine also has random kernel panics.

damn_disk.jpeg

 

Errors

Let’s start troubleshooting this system

# journalctl -p 3 -x

 

Important Errors

ERST: Failed to get Error Log Address Range.
APEI: Can not request [mem 0x7dfab650-0x7dfab6a3] for APEI BERT registers
ipmi_si dmi-ipmi-si.0: Could not set up I/O space

and more important Errors:

INFO: task kswapd0:40 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task xfsaild/dm-0:761 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task kworker/u9:2:3612 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task kworker/1:0:5327 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task rm:5901 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task kworker/u9:1:5902 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task kworker/0:0:5906 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task kswapd0:40 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task xfsaild/dm-0:761 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
INFO: task kworker/u9:2:3612 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

 

First impressions ?


 

BootOptions

After a few hours of internet research, the suggestion is to disable:

  • ACPI stands for Advanced Configuration and Power Interface.
  • APIC stands for Advanced Programmable Interrupt Controller.

This site is very helpful for Ubuntu, although Red Hat is still way ahead of Canonical when it comes to documenting kernel options.

Grub

# vim /etc/default/grub
GRUB_CMDLINE_LINUX="noapic acpi=off"

then

# update-grub
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/50-curtin-settings.cfg'
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.15.0-54-generic
Found initrd image: /boot/initrd.img-4.15.0-54-generic
Found linux image: /boot/vmlinuz-4.15.0-52-generic
Found initrd image: /boot/initrd.img-4.15.0-52-generic
done

Verify

# grep noapic /boot/grub/grub.cfg | head -1

        linux   /boot/vmlinuz-4.15.0-54-generic root=UUID=0c686739-e859-4da5-87a2-dfd5fcccde3d ro noapic acpi=off maybe-ubiquity

reboot and check again:

#  journalctl -p 3 -xb
-- Logs begin at Thu 2019-03-14 19:26:12 EET, end at Wed 2019-07-03 21:31:08 EEST. --
Jul 03 21:30:49 servertwo kernel: ipmi_si dmi-ipmi-si.0: Could not set up I/O space

okay !!!

 

ipmi_si

Unfortunately I could not find anything useful regarding this ipmi_si error:

# dmesg | grep -i ipm
[   10.977914] ipmi message handler version 39.2
[   11.188484] ipmi device interface
[   11.203630] IPMI System Interface driver.
[   11.203662] ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
[   11.203665] ipmi_si: SMBIOS: mem 0x0 regsize 1 spacing 1 irq 0
[   11.203667] ipmi_si: Adding SMBIOS-specified kcs state machine
[   11.203729] ipmi_si: Trying SMBIOS-specified kcs state machine at mem address 0x0, slave address 0x20, irq 0
[   11.203732] ipmi_si dmi-ipmi-si.0: Could not set up I/O space

# ipmitool list
Could not open device at /dev/ipmi0 or /dev/ipmi/0 or /dev/ipmidev/0: No such file or directory

# lsmod | grep -i ipmi
ipmi_si                61440  0
ipmi_devintf           20480  0
ipmi_msghandler        53248  2 ipmi_devintf,ipmi_si

 

blocked for more than 120 seconds.

But let’s try to fix the timeout warnings:

INFO: task kswapd0:40 blocked for more than 120 seconds.
      Not tainted 4.15.0-54-generic #58-Ubuntu
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message

If you search online for the above message, most of the sites will suggest tweaking the dirty pages settings of your system.

This is the most common response across different sites:

This is a known bug. By default Linux uses up to 40% of the available memory for file system caching. After this mark has been reached the file system flushes all outstanding data to disk, causing all following IOs to go synchronous. For flushing out this data to disk there is a time limit of 120 seconds by default. In the case here the IO subsystem is not fast enough to flush the data within 120 seconds. This especially happens on systems with a lot of memory.

Okay, this may be the problem, but we do not have a lot of memory, only 2GB RAM and 2GB swap. And even then, our vm.dirty_ratio = 20 setting is already 20% instead of 40%.
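For reference, the tweak those sites usually suggest looks roughly like the sketch below. The values and the 99-dirty.conf file name are just the common recommendation, shown here for reference rather than something applied on this server.

# lower the dirty-page thresholds at runtime
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_ratio=10

# make the change persistent across reboots
cat > /etc/sysctl.d/99-dirty.conf << EOF
vm.dirty_background_ratio = 5
vm.dirty_ratio = 10
EOF
sysctl --system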

 

But I have the ability to cross-check ubuntu 18.04 with CentOS 6.10 to compare notes:

 

ubuntu 18.04

# uname -r
4.15.0-54-generic

# sysctl -a | egrep -i  'swap|dirty|raid'|sort
dev.raid.speed_limit_max = 200000
dev.raid.speed_limit_min = 1000
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirtytime_expire_seconds = 43200
vm.dirty_writeback_centisecs = 500
vm.swappiness = 60

 

CentOS 6.10

#  uname -r
2.6.32-754.15.3.el6.centos.plus.x86_64

# sysctl -a | egrep -i  'swap|dirty|raid'|sort
dev.raid.speed_limit_max = 200000
dev.raid.speed_limit_min = 1000
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
vm.swappiness = 60

 

Scheduler for Raid

This is the best online documentation I have found on how to
optimize raid

Comparing notes we see that both systems have the same settings, even though the kernel versions are very different, 2.6.32 vs 4.15.0!

Researching raid optimization, there is a note about the kernel I/O scheduler.

 

Ubuntu 18.04

# for drive in {a..c}; do cat /sys/block/sd${drive}/queue/scheduler; done

noop deadline [cfq]
noop deadline [cfq]
noop deadline [cfq] 

 

CentOS 6.10

# for drive in {a..d}; do cat /sys/block/sd${drive}/queue/scheduler; done

noop anticipatory deadline [cfq]
noop anticipatory deadline [cfq]
noop anticipatory deadline [cfq]
noop anticipatory deadline [cfq] 

 

Anticipatory scheduling

CentOS 6 still offers the anticipatory scheduler for these hard disks, but the anticipatory scheduler is no longer available in modern kernel versions.

That said, from the above output we can verify that both systems are running the default scheduler cfq.
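For completeness, if one wanted to experiment with a different scheduler on a raid member, it can be switched at runtime; a quick sketch, where sdb is just an example device and the change does not survive a reboot:

# switch sdb to the deadline scheduler and verify
echo deadline > /sys/block/sdb/queue/scheduler
cat /sys/block/sdb/queue/scheduler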

Disks

Ubuntu 18.04

  • Western Digital Red Pro WDC WD6003FFBX-6
# for i in sd{b..c} ; do hdparm -Tt  /dev/$i; done

/dev/sdb:
 Timing cached reads:   2344 MB in  2.00 seconds = 1171.76 MB/sec
 Timing buffered disk reads: 738 MB in  3.00 seconds = 245.81 MB/sec

/dev/sdc:
 Timing cached reads:   2264 MB in  2.00 seconds = 1131.40 MB/sec
 Timing buffered disk reads: 774 MB in  3.00 seconds = 257.70 MB/sec

CentOS 6.10

  • Seagate ST1000DX001
/dev/sdb:
 Timing cached reads:   2490 MB in  2.00 seconds = 1244.86 MB/sec
 Timing buffered disk reads: 536 MB in  3.01 seconds = 178.31 MB/sec

/dev/sdc:
 Timing cached reads:   2524 MB in  2.00 seconds = 1262.21 MB/sec
 Timing buffered disk reads: 538 MB in  3.00 seconds = 179.15 MB/sec

/dev/sdd:
 Timing cached reads:   2452 MB in  2.00 seconds = 1226.15 MB/sec
 Timing buffered disk reads: 546 MB in  3.01 seconds = 181.64 MB/sec

 

So what am I missing?

My initial personal feeling was that the low memory was to blame. But after running a manual rsync I realized that:

cpu

the load average was: 0.87, 0.46, 0.19

mem

under high load, when usage hit about 40% of RAM, the system started to use swap:

KiB Mem :  2008464 total,    77528 free,   635900 used,  1295036 buff/cache
KiB Swap:  2097148 total,  2096624 free,      524 used.  1184220 avail Mem 

So I tweaked the swappiness a bit and reduced it from 60 to 40, along the lines of the sketch below.
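A minimal sketch of the change (the sysctl.d file name is arbitrary):

# apply immediately
sysctl -w vm.swappiness=40
# persist across reboots
echo 'vm.swappiness = 40' > /etc/sysctl.d/99-swappiness.conf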

I then ran a local snapshot (which is a bit heavy on the disks), did an upgrade and tried to increase the CPU load. Still, everything is fine!

I will keep an eye on this story.


 

Monday, 01 July 2019

Usability & Productivity Sprint 2019

I [partially, only 2 days out of the 7] attended the Usability & Productivity Sprint 2019 in Valencia two weekends ago.

I was very happy to meet quite some new developer blood, which is something we had been struggling a bit to get lately, so we're starting to get on the right track again :) And I can only imagine it'll get better and better due to the "Onboarding" goal :)

During the sprint we had an interesting discussion about how to get more people to know about usability, and the outcome is that probably we'll try to get some training to members of KDE to increase the knowledge of usability amongst us. Sounds like a good idea to me :)

On the more "what did *you* actually do" side:
* Worked on fixing a crash I had in the touchpad kded (already available in the latest released Plasma!)
* Finished part of the implementation for Optional Content Group Links support in Okular (I started that 3 years ago and I was almost sure I had done all the work, but I clearly had not)
* Did some code reviews on existing Okular phabricator merge requests (I'm sadly still behind though, we need more people reviewing there other than me)
* Together with Nicolas Fella, worked on allowing extra fields from json files to be translated; we even documented it!
* Changed lots of applications released with KDE Applications to follow the KDE Applications versioning scheme; the "winner" was kmag, which had been stuck at version 1.0 for 15 years (and had more than 440 commits since then)
* Fixed a small issue with i18n in kanagram

I would like to thank SLIMBOOK for hosting us in their offices (and providing a shuttle from them to the hotel) and the KDE e.V. for sponsoring my attendance to the sprint, please donate to KDE if you think the work done at sprints is important.

Sunday, 30 June 2019

Dealing with Colors in lower Android versions

I’m currently working on a project which requires me to dynamically set the color of certain UI elements to random RGB values.

Unfortunately and surprisingly, handy methods for dealing with colors in Android are only available since API level 26 (Android O). Luckily though, the developer reference specifies how colors are encoded internally in the Android operating system, so I was able to create a class with the most important color-related methods (for my use-case) which works on lower Android versions as well.

Link to the gist

I hope I can save some of you from headaches by sharing the code. Feel free to reuse as you please 🙂

Happy Hacking!

PS: Is there a way to create Github-like gists with Gitea?

Thursday, 27 June 2019

KDE Applications 19.08 Schedule finalized

It is available at the usual place https://community.kde.org/Schedules/Applications/19.08_Release_Schedule

Dependency freeze is in two weeks (July 11) and Feature Freeze a week after that, so make sure you start finishing your stuff!


P.S.: Remember that the last day to apply for Akademy Travel Support is this Sunday, 30 June!

Sunday, 23 June 2019

A good firewall for a small network

In this article I will outline the setup of my (not so) new firewall at home. I explain how I decided which hardware to get and which software to choose, and I cover the entire process of assembling the machine and installing the operating system. Hopefully this will be helpful to people in similar situations.

Introduction

While the ability of firewalls to protect against all the evils of the internets is certainly exaggerated, there are some important use cases for them: you want to prevent certain inbound traffic and manipulate certain outbound traffic e.g. route it through a VPN.

For a long time I used my home server (whose main purpose is network attached storage) to also do some basic routing and VPN, but this had a couple of important drawbacks:

  • Just one NIC on the server meant traffic to/from the internet wasn’t physically required to go through the server.
  • Less reliable due to more complex setup → longer downtimes during upgrades, higher chance of failure due to hard drives.
  • I wouldn’t give someone else the root password to my data storage, but I did want my flat-mates to be able to reset and configure basic network components that they depend on (Router/Port-forwarding and WiFi).
  • I wanted to isolate the ISP-provided router more strongly from the LAN as they have a history of security vulnerabilities.

The different off-the-shelf routers I had used over the years had also worked only so-so (even those that were customisable) so I decided I needed a proper router. Since WiFi access was already out-sourced to dedicated devices I really only needed a filtering and routing device.

Hardware

Board & CPU

The central requirements for the device were:

  • low energy consumption
  • enough CPU power to route traffic at Gbit-speed, run Tor and OpenVPN (we don’t have Gbit/s internet in Berlin, yet, but I still have hopes for the future)
  • hardware crypto support to unburden the CPU for crypto tasks
  • two NICs, one for the LAN and one for the WAN

I briefly thought about getting an ARM-based embedded board, but most reviews suggested that the performance wouldn’t be enough to satisfy my requirements and also the *BSD support was mixed at best and I didn’t want to rule out running OpenBSD or FreeBSD.

Back to x86-land: I had previously used PC Engines ALIX boards as routers and was really happy with them at the time. Their new APU boards promised better performance, but thanks to the valuable feedback and some benchmarking done by the community over at BSDForen.de, I came to the conclusion that they wouldn’t be able to push more than 200Mbit/s through an OpenVPN tunnel.

In the end I decided on the Gigabyte J3455N-D3H displayed at the top. It sports a rather atypical Intel CPU (Celeron J3455) with

  • four physical cores @ 1.5Ghz
  • AESNI support
  • 10W TDP

Having four actual cores (instead of 2 cores + hyper threading) is pretty cool now that many security-minded operating systems have started deactivating hyper threading to mitigate CPU bugs [OpenBSD] [HardenedBSD]. And the power consumption is also quite low.

I would have liked for the two NICs on the mainboard to be from Intel, but I couldn’t find a mainboard at the time that offered this (other than super-expensive SuperMicro boards). At least the driver support on modern Realteks is quite good.

Storage & Memory

The board has two memory slots and supports a maximum of 4GiB each. I decided 4GiB are enough for now and gave it one module to allow for future extensions (I know that’s suboptimal for speed).

Storage-wise I originally planned on putting a left-over SATA-SSD into the case, but in the end, I decided a tiny USB3-Stick would provide sufficient performance and be much easier to replace/debug/…

Case & Power

Since I installed a real 19” rack in my new flat, of course the case for the firewall would have to fit nicely into that. I had a surprisingly difficult time finding a good case, because I wanted one where the board’s ports would be front-facing. That seems to be quite a rare requirement, although I really don’t understand why. Obviously having the network ports, serial ports and USB ports at the front makes changing the setup and debugging so much easier ¯\_(ツ)_/¯

I also couldn’t find a good power supply for such a low-power device, but I still had a 60W PicoPSU supply lying around.

Even though it came with an overpowered PSU and a proprietary IO-Shield (more on that below), I decided on the SuperMicro SC505-203B. It really does look quite good, I have to say!

Assembly

Mounting the mainboard in the case is pretty straight-forward. The biggest issue was the aforementioned proprietary I/O-Shield that came with the SuperMicro case (and was designed only for SuperMicro-boards). It was possible to remove it, however, the resulting open space did not conform to ATX spec so it wasn’t possible to just fit the Gigabyte board’s shield into it.

I quickly took the measurements and started cutting away at the shield to make it fit. This worked ok-ish in the end, but it is more dangerous than it looks (be smarter than me, wear gloves ☝ ). In retrospect I also recommend that you do not remove the bottom fold on the shield, only the left, right and top; that will make it hold a lot better in the case opening.

The board can be fitted into the case using standard screws in the designated places. As mentioned above, I removed the original (actively cooled) power supply unit and used the 60W PicoPSU that I had lying around from before. Since it doesn’t have the 4-pin CPU cable I had to improvise. There are adaptors for this, but if you have a left-over power supply, you can also tape something together. I also put the transformer into the case (duct tape, yeah!) so that one can plug in the power cord from the back of the case as usual.

Software

OPNSense logo

Choice

There are many operating systems I could have chosen since I decided to use an x86 platform. My criteria were:

  • free software (obviously)
  • intuitive web user interface to do at least the basic things
  • possibility to login via SSH if things don’t go as planned
  • OpenVPN client

I feel better with operating systems based on FreeBSD or OpenBSD, mainly because I have more experience with them than with GNU/Linux distributions nowadays. In previous flats I had also used OpenWRT and dd-wrt based routers, but whenever I needed to tweak something beyond what the web interface offered, it got really painful. In general the whole IPtables based stack on Linux seems overly complicated, but maybe that’s just me.

In any case, there are no OpenBSD-based router operating systems with web interfaces (that I am aware of) so I had the choice between

  1. pfsense (FreeBSD-based)
  2. OPNSense, fork of pfsense, based on HardenedBSD / FreeBSD

There seem to be historic tensions between the people involved in both operating systems and I couldn’t find out if there were actual distinctions in the goals of the projects. In the end, I asked other people for recommendations and found the interface and feature list of OPNSense more convincing. Also, being based on HardenedBSD sounds good (although I am not sure if HardenedBSD-specifica will really ever play out on the router).

Initially I had some issues with the install and OPNSense people were super-friendly and responded immediately. Also the interface was a lot better than I expected so I am quite sure I made the right decision.

Install

Setup is very easy:

  1. Go to https://opnsense.org/download/, select amd64 and nano and download the image.
  2. Unzip the image (easy to forget this).
  3. Write the image to the USB-stick with dd (as always with dd: be careful about the target device!); see the sketch after this list.
  4. Optionally plug a serial cable into the top serial port (the mainboard has two) and connect your Laptop/Desktop with baud rate 115200
  5. Plug the USB-stick into the firewall and boot it up.
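For step 3, the dd invocation might look like the sketch below; the image file name is just an example of the unzipped download, and /dev/sdX stands for whatever device name your USB-stick gets (double-check it, dd will happily overwrite the wrong disk):

# identify the USB-stick first
lsblk
# write the unzipped image to it (example file name)
dd if=OPNsense-nano-amd64.img of=/dev/sdX bs=4M status=progress conv=fsync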

There will be some beeping when you start the firewall. Some of this is due to the mainboard complaining that no keyboard is attached (can be ignored) and also OPNSense will play a melody when it is booted. If you are attached to the serial console you can select which interface will be WAN and which will be LAN (and their IP addresses). Otherwise you might need to plug around the LAN cables a bit to find out which is configured as which.

When I built this last year there were some more issues, but all of them have been resolved by the OPNSense people so it really is “plug’n’play”; I verified by doing a re-install!

Post-install

Go to the configured IP-address (192.168.1.1 by default) and login (root: opnsense by default). If the web-interface comes up everything has worked fine and you can disconnect serial console and do the rest via the web-interface.

After login, I would do the following:

  • change the password
  • activate SSH on the LAN interface
  • configure internet access and DHCP
  • setup any of the other services you want

For me setting up the internet meant doing a “double-NAT” with the ISP-provided router, because I need its modem and nowadays it seems impossible to get a stand-alone VDSL modem. If you do something similar just configure internet as being over DHCP.

If you want hardware accelerated SSL (also OpenVPN), go to System → Firmware → Setting and change the firmware flavour to OpenSSL (instead of LibreSSL). After that check for updates and upgrade. In the OpenVPN profile, under Hardware Crypto, you can now select Intel RDRAND engine - RAND.

Take your time to look through the interface! I found some pretty cool things like automatic backup of the configuration to a nextcloud server! The entire config of the firewall rests in one file so it’s really easy to setup a clean system from scratch.

All-in-all I am very happy with the system. Even though my setup is non-trivial, with only selected outgoing traffic going through the VPN (based on rules), I never had to get my hands dirty on the command line – everything can be done through the Web-UI.

Sunday, 16 June 2019

Information stalls at Linux Week and Veganmania in Vienna

Linuxwochen Vienna 2018
Linux Weeks in Vienna 2018

Veganmania Vienna 2018
Veganmania at MQ in Vienna 2018

Linuxwochen Vienna 2019
Linux Weeks in Vienna 2019

Information stall at Veganmania 2019
Veganmania at MQ in Vienna 2019

Information stall at Veganmania 2019
Veganmania at MQ in Vienna 2019

As has been tradition for many years now, this year too saw the Viennese FSFE volunteers’ group hold information stalls at the Linuxwochen event and Veganmania in Vienna. Even though the active team has shrunk due to former activists moving away, having children or simply having very demanding jobs, we have still managed to keep up these information stalls in 2019.

Linux Weeks Vienna 2019

The information stall at the Linux weeks event in May was somewhat limited due to the fact that we didn’t get our usual posters and the roll-up in time. Unfortunately we discovered too late that they had obviously been lent out for another event and hadn’t been returned afterwards. So we could only use our information material. But since the FSFE is very well known at this event, it wasn’t hard at all to run our usual information stall. It’s less about outreach work and more of a who-is-who of the free software community in Vienna anyway. For three days we met old friends and networked. Of course some newbies found their way to the event too, and therefore we could spread our messages a little further as well.

In addition, we once again provided well visited workshops for Inkscape and Gimp. The little talk on the free rally game Trigger Rally even motivated an attending dedicated Fedora maintainer to create an up-to-date .rpm package in order to enable distribution of the most recent release to rpm distros.

Veganmania MQ Vienna 2019

The Veganmania at the Museums Quartier in Vienna is growing bigger every year. In 2019 it took place from 7th to 10th of June. Despite us having a less frequented spot with our information stall at the event due to construction work, it again was a full-blown success. Over the four days in perfect weather, the stall was visited by loads of people. There were times when we were stretched to give some visitors the individual attention they might have wanted. But I think in general we were able to provide almost all people with valuable insights and new ideas for their everyday computing. Once again Veganmania proved to be a very good setting for our FSFE information stall. It is always very rewarding to experience people getting a glimpse for the first time of how they could emancipate themselves from proprietary domination. Our down-to-earth approach seems to be the right way to go.

We do not only explain ethical considerations but also appeal to the self-interest of people concerning independence, reliability and free speech. Edward Snowden’s and Wikileaks discoveries clearly show how vulnerable we make ourselves in blindly trusting governments and companies. We describe with practical examples how free software can help us in working together or recovering old files by building on open standards. Of course pointing to the environmental (and economic) advantages of using old hardware with less resource hungry free software is a winning argument also.

Material

Alongside the introductory Austrian version of the leaflet about the freedoms free software enables, which was put together as a condensation of RMS’ book Free Software, Free Society, one of our all-time favourite leaflets features 10 popular GNU/Linux distros with just a few words about their defining differences (advantages and disadvantages). I updated the leaflet just a day before the festival. I replaced Linux Mint, Open Suse and gNewSense with the recently even more popular Manjaro, MX Linux and PureOS. I also updated the information on the importance of open standards on the back. We have run out of our end-user business cards for our local association freie.it, which makes knowledgeable people available to others searching for help. Therefore, we decided to use the version we originally designed for inviting experts to the platform. It obviously was wrong to order the same amount of cards for both groups. Our selection of information material seems to work well as an invitation for people to give free software a try. It probably also feels like a safeguard that people can contact me if they want my support – or that of someone else listed on freie.it.

Experiences

The first day was rather windy and we had to carefully manage our material if we didn’t want our leaflets flying all over the place. In the very early morning of the second day the wind was so strong that some tents were blown away and destroyed. There was even a storm warning which could have forced the organisers to cancel the event. Fortunately our material was well stored and the wind died down over the day. We also had to hold on firmly to our sunshade because it was very hot, but besides that everything went fine.

It was just coincidence that Richard Matthew Stallman had a talk in Vienna on the evening of the first day of the Veganmania street festival. So at least one of us could take this rare opportunity to see RMS at a live talk while the other carried on with manning the information stall.

As we didn’t have our posters on Linuxwochen we investigated where they were and got them sent to us via snail mail just in time. We didn’t only get our posters but merchandise too. This was a premiere for our stall. It was clear from the beginning that we wouldn’t sell many shirts since most designs assumed prior knowledge of IT related concepts like binary counting. The general public doesn’t seem very aware of such details and people don’t even get the joke. (If we had had the same merchandise at the Linuxwochen we probably would have sold at least as many items despite having reached a much smaller crowd there.)

Outlook and thanks

There will be another information stall at the second Veganmania in Vienna this year, which takes place at the end of August. The whole setting there is a little different as there isn’t a shopping street nearby; instead, the location is in the heavily frequented recreational area of Vienna’s Danube Island. Just like last year, it should be a good place for chatting about free software, as long as the weather is on our side.

I want to thank Martin for his incredible patience and ongoing dedication manning our stall. He is extremely reliable, always friendly and it is just a real pleasure working with him.

Thanks to kinderkutsche.at, a local place to rent and buy carrier bicycles, we could transport all our information material in a very environmentally friendly way.

Monday, 10 June 2019

MariaDB Galera Cluster on Ubuntu 18.04.2 LTS


Last Edit: 2019 06 11
Thanks to Manolis Kartsonakis for the extra info.

 

Official Notes here:
MariaDB Galera Cluster

A Galera Cluster is a synchronous multi-master cluster setup: each node can act as a master. It works with the XtraDB/InnoDB storage engine and, in this setup, uses rsync to transfer state between nodes. Each transaction gets a globally unique ID and the nodes replicate data to each other using Write-Set Replication (wsrep). When a new node joins the cluster, a State Snapshot Transfer (SST) synchronizes the full data set, while an Incremental State Transfer (IST) syncs only the missing data.

With this setup we can have:

  • Data Redundancy
  • Scalability
  • Availability

galeracluster.png

 

Installation

In Ubuntu 18.04.2 LTS, three packages need to be installed on every node.
Run the commands below on all of the nodes; change your internal IPs accordingly.

as root

# apt -y install mariadb-server
# apt -y install galera-3
# apt -y install rsync

host file

as root

# echo 10.10.68.91 gal1 >> /etc/hosts
# echo 10.10.68.92 gal2 >> /etc/hosts
# echo 10.10.68.93 gal3 >> /etc/hosts

 

Storage Engine

Start MariaDB/MySQL on one node and check the default storage engine. It should be InnoDB:

MariaDB [(none)]> show variables like 'default_storage_engine';

or

echo "SHOW Variables like 'default_storage_engine';" | mysql
+------------------------+--------+
| Variable_name          | Value  |
+------------------------+--------+
| default_storage_engine | InnoDB |
+------------------------+--------+

 

Architecture

A Galera Cluster should be behind a Load Balancer (proxy) and you should never talk with a node directly.

galeracluster_elb.png

Galera Configuration

Now copy the configuration file below to all 3 nodes:

/etc/mysql/conf.d/galera.cnf
[mysqld]
binlog_format=ROW
default-storage-engine=InnoDB
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

# Galera Cluster Configuration
wsrep_cluster_name="galera_cluster"
wsrep_cluster_address="gcomm://10.10.68.91,10.10.68.92,10.10.68.93"

# Galera Synchronization Configuration
wsrep_sst_method=rsync

# Galera Node Configuration
wsrep_node_address="10.10.68.91"
wsrep_node_name="gal1"

Per Node

Be careful: the last 2 lines must be changed on each node:

Node 01

# Galera Node Configuration
wsrep_node_address="10.10.68.91"
wsrep_node_name="gal1"

Node 02

# Galera Node Configuration
wsrep_node_address="10.10.68.92"
wsrep_node_name="gal2"

Node 03

# Galera Node Configuration
wsrep_node_address="10.10.68.93"
wsrep_node_name="gal3"

 

Galera New Cluster

We are ready to create our galera cluster:

galera_new_cluster

or

mysqld --wsrep-new-cluster

JournalCTL

Jun 10 15:01:20 gal1 systemd[1]: Starting MariaDB 10.1.40 database server...
Jun 10 15:01:24 gal1 sh[2724]: WSREP: Recovered position 00000000-0000-0000-0000-000000000000:-1
Jun 10 15:01:24 gal1 mysqld[2865]: 2019-06-10 15:01:24 139897056971904 [Note] /usr/sbin/mysqld (mysqld 10.1.40-MariaDB-0ubuntu0.18.04.1) starting as process 2865 ...
Jun 10 15:01:24 gal1 /etc/mysql/debian-start[2906]: Upgrading MySQL tables if necessary.
Jun 10 15:01:24 gal1 systemd[1]: Started MariaDB 10.1.40 database server.
Jun 10 15:01:24 gal1 /etc/mysql/debian-start[2909]: /usr/bin/mysql_upgrade: the '--basedir' option is always ignored
Jun 10 15:01:24 gal1 /etc/mysql/debian-start[2909]: Looking for 'mysql' as: /usr/bin/mysql
Jun 10 15:01:24 gal1 /etc/mysql/debian-start[2909]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
Jun 10 15:01:24 gal1 /etc/mysql/debian-start[2909]: This installation of MySQL is already upgraded to 10.1.40-MariaDB, use --force if you still need to run mysql_upgrade
Jun 10 15:01:24 gal1 /etc/mysql/debian-start[2918]: Checking for insecure root accounts.
Jun 10 15:01:24 gal1 /etc/mysql/debian-start[2922]: WARNING: mysql.user contains 4 root accounts without password or plugin!
Jun 10 15:01:24 gal1 /etc/mysql/debian-start[2923]: Triggering myisam-recover for all MyISAM tables and aria-recover for all Aria tables
# ss -at '( sport = :mysql )'

State                Recv-Q                Send-Q                                Local Address:Port                                  Peer Address:Port
LISTEN               0                     80                                        127.0.0.1:mysql                                      0.0.0.0:*         
# echo "SHOW STATUS LIKE 'wsrep_%';" | mysql  | egrep -i 'cluster|uuid|ready' | column -t
wsrep_cluster_conf_id     1
wsrep_cluster_size        1
wsrep_cluster_state_uuid  8abc6a1b-8adc-11e9-a42b-c6022ea4412c
wsrep_cluster_status      Primary
wsrep_gcomm_uuid          d67e5b7c-8b90-11e9-ba3d-23ea221848fd
wsrep_local_state_uuid    8abc6a1b-8adc-11e9-a42b-c6022ea4412c
wsrep_ready               ON

 

Second Node

systemctl restart mariadb.service
root@gal2:~# echo "SHOW STATUS LIKE 'wsrep_%';" | mysql  | egrep -i 'cluster|uuid|ready' | column -t

wsrep_cluster_conf_id     2
wsrep_cluster_size        2
wsrep_cluster_state_uuid  8abc6a1b-8adc-11e9-a42b-c6022ea4412c
wsrep_cluster_status      Primary
wsrep_gcomm_uuid          a5eaae3e-8b91-11e9-9662-0bbe68c7d690
wsrep_local_state_uuid    8abc6a1b-8adc-11e9-a42b-c6022ea4412c
wsrep_ready               ON

 

Third Node

systemctl restart mariadb.service
root@gal3:~# echo "SHOW STATUS LIKE 'wsrep_%';" | mysql  | egrep -i 'cluster|uuid|ready' | column -t

wsrep_cluster_conf_id     3
wsrep_cluster_size        3
wsrep_cluster_state_uuid  8abc6a1b-8adc-11e9-a42b-c6022ea4412c
wsrep_cluster_status      Primary
wsrep_gcomm_uuid          013e1847-8b92-11e9-9055-7ac5e2e6b947
wsrep_local_state_uuid    8abc6a1b-8adc-11e9-a42b-c6022ea4412c
wsrep_ready               ON

 

Primary Component (PC)

The last node to leave the cluster has, in theory, all the transactions. That means it should be the first one to start next time after a power-off.

State

cat /var/lib/mysql/grastate.dat

eg.

# GALERA saved state
version: 2.1
uuid:    8abc6a1b-8adc-11e9-a42b-c6022ea4412c
seqno:   -1
safe_to_bootstrap: 0

If safe_to_bootstrap is 1, then you can bootstrap this node as Primary.
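As a sketch, recovery after a full cluster shutdown usually looks like this, assuming you have already verified that this node is the most advanced one:

# check the flag on the node you want to bootstrap
grep safe_to_bootstrap /var/lib/mysql/grastate.dat

# if it is 0 but you are certain this node has the latest data, flip it
sed -i 's/^safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat

# bootstrap the cluster from this node
galera_new_cluster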

 

Common Mistakes

Sometimes DBAs want to set up a new cluster (let's say upgrading to a new schema that is not compatible with the previous one), so they want a clean state/directory. The most common way is to move the current mysql directory:

mv /var/lib/mysql /var/lib/mysql_BAK

If you try to start your galera node, it will fail:

# systemctl restart mariadb
WSREP: Failed to start mysqld for wsrep recovery:
[Warning] Can't create test file /var/lib/mysql/gal1.lower-test
Failed to start MariaDB 10.1.40 database server

You need to create and initialize the mysql directory first:

mkdir -pv /var/lib/mysql
chown -R mysql:mysql /var/lib/mysql
chmod 0755 /var/lib/mysql
mysql_install_db -u mysql

On another node you will now see cluster_size = 2:

# echo "SHOW STATUS LIKE 'wsrep_%';" | mysql  | egrep -i 'cluster|uuid|ready' | column -t

wsrep_cluster_conf_id     4
wsrep_cluster_size        2
wsrep_cluster_state_uuid  8abc6a1b-8adc-11e9-a42b-c6022ea4412c
wsrep_cluster_status      Primary
wsrep_gcomm_uuid          a5eaae3e-8b91-11e9-9662-0bbe68c7d690
wsrep_local_state_uuid    8abc6a1b-8adc-11e9-a42b-c6022ea4412c
wsrep_ready               ON

then:

# systemctl restart mariadb

rsync from the Primary:


Jun 10 15:19:00 gal1 rsyncd[3857]: rsyncd version 3.1.2 starting, listening on port 4444
Jun 10 15:19:01 gal1 rsyncd[3884]: connect from gal3 (192.168.122.93)
Jun 10 15:19:01 gal1 rsyncd[3884]: rsync to rsync_sst/ from gal3 (192.168.122.93)
Jun 10 15:19:01 gal1 rsyncd[3884]: receiving file list
#  echo "SHOW STATUS LIKE 'wsrep_%';" | mysql  | egrep -i 'cluster|uuid|ready' | column -t

wsrep_cluster_conf_id     5
wsrep_cluster_size        3
wsrep_cluster_state_uuid  8abc6a1b-8adc-11e9-a42b-c6022ea4412c
wsrep_cluster_status      Primary
wsrep_gcomm_uuid          12afa7bc-8b93-11e9-88fc-6f41be61a512
wsrep_local_state_uuid    8abc6a1b-8adc-11e9-a42b-c6022ea4412c
wsrep_ready               ON

Be aware: try to keep your data directory on a separate storage disk.

 

Adding new Nodes

A healthy quorum has an odd number of nodes. So when you scale your Galera cluster, consider adding two (2) nodes at every step!

# echo 10.10.68.94 gal4 >> /etc/hosts
# echo 10.10.68.95 gal5 >> /etc/hosts

Data replication will lock your donor node, so it is best to take the donor node out of your load balancer:

galeracluster_elb_donor.png

Then explicitly point your new nodes to the donor node by adding the line below to their configuration file:

wsrep_sst_donor= gal3
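For a new node such as gal4, the Galera-specific part of /etc/mysql/conf.d/galera.cnf might then look like this (a sketch based on the example configuration above, with the cluster address extended to include the new members):

wsrep_cluster_address="gcomm://10.10.68.91,10.10.68.92,10.10.68.93,10.10.68.94,10.10.68.95"

# Galera Node Configuration
wsrep_node_address="10.10.68.94"
wsrep_node_name="gal4"

# temporary, remove after the initial SST
wsrep_sst_donor=gal3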

After the synchronization:

  • comment-out the above line
  • restart mysql service and
  • put all the nodes back behind the Load Balancer

 

Split Brain

Find the node with the max

SHOW STATUS LIKE 'wsrep_last_committed';

and set it as master by

SET GLOBAL wsrep_provider_options='pc.bootstrap=YES';
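To compare wsrep_last_committed across all nodes in one go, a loop like the following can help; it is only a sketch and assumes you can ssh as root to the nodes:

for h in gal1 gal2 gal3; do
  echo -n "$h: "
  ssh "$h" "mysql -N -e \"SHOW STATUS LIKE 'wsrep_last_committed';\""
done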

 

Weighted Quorum for Three Nodes

When configuring quorum weights for three nodes, use the following pattern:

node1: pc.weight = 4
node2: pc.weight = 3
node3: pc.weight = 2
node4: pc.weight = 1
node5: pc.weight = 0

eg.

SET GLOBAL wsrep_provider_options="pc.weight=3";

Setting pc.weight this way within the same VPC will avoid a split-brain situation. Across different regions, you can set up something like this:

node1: pc.weight = 2
node2: pc.weight = 2
node3: pc.weight = 2
  <->
node4: pc.weight = 1
node5: pc.weight = 1
node6: pc.weight = 1

 

Friday, 07 June 2019

I'm going to Akademy!

And you should too!

Akademy is free to attend; however, you need to register to reserve your space, so head to https://akademy.kde.org/2019/register and press the buttons.

Akademy is very important to meet people, discuss future plans, learn about new stuff and make friends for life!

Note that this year the recommended accommodations are a bit on the expensive side, so you may want to hurry and apply for Travel support. The last round is open until July 1st.


I'm going to Akademy 2019

Thursday, 06 June 2019

Akademy-es 2019 talks announced!

Akademy-es 2019 will be happening this June 28-30 in Vigo.

The talks were just announced recently.

Check them out at https://www.kde-espana.org/akademy-es-2019/programa-akademy-es-2019; there are lots of interesting talks, so if you understand Spanish and are interested in KDE or Free Software in general I'd really recommend attending!

Friday, 17 May 2019

[Some] KDE Applications 19.04.1 also available in flathub

Thanks to Nick Richards we've been able to convince flathub to temporarily accept our old appdata files as still valid. It's a stopgap workaround, but at least it gives us some breathing time. So the updates are coming in as we speak.

Partition like a pro with fdisk, sfdisk and cfdisk

  • Seravo
  • 10:44, Friday, 17 May 2019

Most Linux distributions ship the hard drive partition tool fdisk by default. Knowing how to use it is a good skill for every Linux system administrator since having to rescue a system that has disk issues is a very common task. If the admin is faced with a prompt in a rescue mode boot, often fdisk is the only partitioning tool available and must be used, since if the main root filesystem is broken, one cannot install and use any other partitioning tools.

When installing Debian based systems (e.g. Ubuntu) in the text mode server installer, keep in mind that you can at any time during the installation process press Ctrl+Alt+F2 to jump to a text console running a limited shell prompt (Busybox) and manipulate the systems as you wish, among others run fdisk. When done press Ctrl+Alt+F1 to jump back to the installer screen.

In fact, fdisk is not a single utility but actually a tool that ships with three commands together: fdisk, sfdisk and cfdisk.

fdisk

Most Linux sysadmins have at some point used fdisk, the classic partitioning tool. There is also a tool with the same name in Windows, but it's not the same tool. Across the Unix ecosystem, however, the fdisk tool is nowadays the same one, even on MacOS X.

To list the current partition layout one can simply run fdisk -l /dev/sda. Below is an example of the output. One can also run fdisk -l /dev/sd* to print the partition info of all sd devices in one go. The fdisk man page lists all the command line parameters available.

Example output of fdisk -l

If one runs just fdisk it will launch in interactive mode. Pressing m will show the help. To create a new GPT partition table (for modern disks; this resets any existing partition table) and add a new Linux partition that uses all available disk space, one can simply enter the commands g, n and w in sequence, pressing enter at all questions to accept their default values.

In-command help in fdisk, printed when pressing m
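The same g, n, w result can also be achieved non-interactively with sfdisk (covered in more detail below); a sketch, where /dev/sdX is a placeholder and the existing partition table is wiped:

# preview without writing anything
printf 'label: gpt\n,,L\n' | sfdisk --no-act /dev/sdX
# write a fresh GPT label with one Linux partition spanning the whole disk
printf 'label: gpt\n,,L\n' | sfdisk /dev/sdX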

cfdisk

The command cfdisk serves the same purpose as fdisk, with the difference that it provides a slightly fancier user interface based on ncurses, so there are menus one can browse with the arrow and tab keys without having to remember the single-letter commands fdisk uses.

Screenshot of cfdisk

sfdisk

The third tool in the suite is sfdisk. This tool is designed to be scripted, enabling administrators to script and automate partitioning operations.

The key to sfdisk operations is to first dump the current layout using the -d argument, for example sfdisk -d /dev/sda > partition-table. An example output would be:

$ cat partition-table
label: gpt
label-id: AF7B83C8-CE8D-463D-99BF-E654A68746DD
device: /dev/sda
unit: sectors
first-lba: 34
last-lba: 937703054

/dev/sda1 : start=        2048, size=      997376, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B, uuid=F35A875F-1A53-493E-85D4-870A7A749872
/dev/sda2 : start=      999424, size=   936701952, type=A19D880F-05FC-4D3B-A006-743F0F84911E, uuid=725EAB2A-F2E2-475E-81DC-9A5E18E29678

This text file describes the partition type, the layout and also includes the device UUIDs. The file above can be considered a backup of the /dev/sda partition table. If something has gone wrong with the partition table and this file was saved at some earlier time, one can recover the partition table by running: sfdisk /dev/sda < partition-table.

Copying the partition table to multiple disks

One neat application of sfdisk is that it can be used to copy the partition layout to many devices. Say you have a big server computer with 16 hard disks. Once you have partitioned the first disk, you can dump the partition table of the first disk with sfdisk -d and then edit the dump file (remember, it is just a plain-text file) to remove references to the device name and UUIDs, which are unique to a specific device and not something you want to clone to other disks. If the initial dump was the example above, the version with unique identifiers removed would look like this:

label: gpt
unit: sectors
first-lba: 34
last-lba: 937703054

start=        2048, size=      997376, type=0FC63DAF-8483-4772-8E79-3D69D8477DE4
start=      999424, size=   936701952, type=A19D880F-05FC-4D3B-A006-743F0F84911E

This can be applied to another disk, for example /dev/sdb, simply by running sfdisk /dev/sdb < partition-table. Now all the admin needs to do is run this same command a couple of times with only one character changed on each invocation; see the sketch below.
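As a sketch, applying the cleaned-up table to the remaining fifteen disks of the 16-disk example could then be reduced to a small loop (double-check the device range before running, since this overwrites partition tables):

for dev in /dev/sd{b..p}; do
    sfdisk "$dev" < partition-table
done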

Listing device UUIDs with blkid

Keep in mind that the Linux kernel uses the device UUIDs for identifying partitions and file systems. Be wary not to accidentally make two disks have the same UUID with sfdisk. Technically it is possible, and maybe useful in some situation where one wants to replace a hard drive and make the new hard drive 100% identical, but in a running system different disks should all have unique UUIDs.

To list all UUIDs use blkid. Below is an example of the output:

$ blkid
/dev/sda1: UUID="F379-8147" TYPE="vfat" PARTUUID="f35a875f-1a53-493e-85d4-870a7a749872"
/dev/sda2: UUID="5f12f800-1d8d-6192-0881-966a70daa16f" UUID_SUB="2d667c5b-b9f3-6510-cf76-9231122533ce" LABEL="fi-e3:0" TYPE="linux_raid_member" PARTUUID="725eab2a-f2e2-475e-81dc-9a5e18e29678"
/dev/sdb2: UUID="5f12f800-1d8d-6192-0881-966a70daa16f" UUID_SUB="0221fce5-2762-4b06-2d72-4f4f43310ba0" LABEL="fi-e3:0" TYPE="linux_raid_member" PARTUUID="cd2a477f-0b99-4dfc-baa6-f8ebb302cbbb"
/dev/md0: UUID="dcSgSA-m8WA-IcEG-l29Q-W6ti-6tRO-v7MGr1" TYPE="LVM2_member"
/dev/mapper/ssd-ssd--swap: UUID="2f2a93bd-f532-4a6d-bfa4-fcb96fb71449" TYPE="swap"
/dev/mapper/ssd-ssd--root: UUID="660ce473-5ad7-4be9-a834-4f3d3dfc33c3" TYPE="ext4"

Extra tip: listing all disks with lsblk

While fdisk -l is nice for listing partition tables, often admins also want to know the partition sizes in human-readable format and what the partitions are used for. For this purpose the command lsblk is handy. While the default output is often enough, supplying the extra arguments -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT makes it even better. See below an example of the output:

$ lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME                  SIZE FSTYPE            TYPE  MOUNTPOINT
sda                 447,1G                   disk  
├─sda1                487M vfat              part  /boot/efi
└─sda2              446,7G linux_raid_member part  
  └─md0             446,5G LVM2_member       raid1 
    ├─ssd-ssd--swap   8,8G swap              lvm   [SWAP]
    └─ssd-ssd--root 437,7G ext4              lvm   /
sdb                 447,1G                   disk  
├─sdb1                487M                   part  
└─sdb2              446,7G linux_raid_member part  
  └─md0             446,5G LVM2_member       raid1 
    ├─ssd-ssd--swap   8,8G swap              lvm   [SWAP]
    └─ssd-ssd--root 437,7G ext4              lvm   /
sdc                 447,1G                   disk  
├─sdc1                487M                   part  
└─sdc2              446,7G                   part  

A word of warning…

Remember that modifying the partition table is a destructive process. It is something the admin does while installing new systems or recovering broken ones. If done wrongly, all data might be lost!

Wednesday, 15 May 2019

No KDE Applications 19.04.1 available in flathub

The flatpak and flathub developers have changed what they consider a valid appdata file, so our appdata files that were valid last month are not valid anymore and thus we can't build the KDE Applications 19.04.1 packages.

Wednesday, 08 May 2019

Of elitists and laypeople

Spoilers Game of Thrones.

I have been watching Game of Thrones with great interest the past few weeks. It has very strongly highlighted a struggle that has been gripping my mind for a while now: That between elitists and laypeople. And I find myself in a strange in-between.

For those not in the know, the latest season of Game of Thrones is a bit controversial to say the least. If you skip past the internet vitriol, you’ll find a lot of people disliking the season for legitimate reasons: The battle tactics don’t make any sense, characters miraculously survive after the camera cuts away, time and distance stopped being an issue in a setting that used to take it slow, and there’s a weird, forced conflict that would go away entirely if these two characters that are already in love would simply marry. And the list goes on, I’m sure.

But on the other hand, there appears to be a large body of laypeople who watch and enjoy the series. Millions of people tune in every week to watch fictional people fight over a fictional throne, and they appear to be enjoying themselves. And me? Sure, I’m enjoying myself as well. I had muscle aches from the tension of watching The Long Night, and nothing gripped me more than the half-botched assassination attempt at the end of the episode.

So what gives? On the one hand enthusiasts are rightly criticising the writers for some very strange decisions, but on the other hand millions of people are enjoying the series all the same.

Of power users and newbies

I frankly don’t know the answer to the posed question, but I do know an analogy that prompted me to write this blog post. I am a humble contributor to the GNOME Project, chiefly as translator for Esperanto, but also miniscule bits and bobs here and there. GNOME faces a similar problem with detractors: They have their complaints about systemd, customisability, missing power user features, themes breaking, and so forth. And I’m sure they have some valid points, but GNOME remains the default desktop environment on many distributions, and many people use and love GNOME as I do.

These detractors often run some heavily customised Arch Linux system with some unintuitive-but-efficient window manager, and don’t have any editor other than Vim installed. Or in other words: They run a system that the vast majority of people could not and do not want to use.

And I understand these people, because in one aspect of my digital life, I have been one of them. For at least two years, I ran Spacemacs as my primary editor. For a while I even did my e-mail through that program, and I loved it. Kind of. Sure, everything was customisable, and the keyboard shortcuts were magically fast, but the mental overhead of using that program was slowly grinding me down. Some menial task that I do infrequently would turn out to involve a non-intuitive sequence of keys that you just simply need to know, and I would spend far too long on figuring that out. Or I would accidentally open Vim inside of the Emacs terminal emulator, and :q would be sent to Emacs instead of the emulator. Sure, if you know enough Emacs wizardry, you could easily escape this situation, but that’s the point, isn’t it? The wizardry involved takes effort that I don’t always want to put in, even if I know that it pays off. Kind of.

These days I use VSCodium, a Free Software version of Visual Studio Code. I like it well enough for a multitude of reasons, but mainly because the mental overhead of using this editor is a lot lower. Even so, is Emacs a better editor? Probably. If I could be bothered to maintain my Emacs wizarding skills, I am fairly certain that it would be the perfect editor. But that’s a big if. So that’s why I settle for VSCodium. And the same line of reasoning can be extended to why I use and love GNOME.

Back to Westeros

Having made that analogy, can it be mapped onto the kerfuffle surrounding Game of Thrones? Is it a matter of a small group wanting an intricate, advanced plot and a larger group wanting a simple, rudimentary story, because they can’t or don’t want to deal with a complicated story?

It seems that way, but the damnedest thing is that I don’t know. I like the latest season of Game of Thrones for what it is: An archetypical fight of good versus evil. The living gathered together to fight an undead army, and the living won. Such a story is a lot easier to get into as a layperson, and there is nothing wrong with enjoying simple, archetypical stories.

But that’s not what Game of Thrones is. Game of Thrones is the derivative of an incredibly intricate series of books with so many details and plots, and the TV series stayed faithful to that for a long time. The latest season is a huge diversion from its roots. It is, as far as I can tell, replacing vi with nano. There is nothing wrong with either, but there is a good reason why the two are separate.

Who are these laypeople, anyway?

This question is difficult to answer, because the layperson isn’t me. It can’t possibly be, because here I am writing about the subject. The layperson must be someone who isn’t particularly interested or informed. I imagine that they just turn on the telly and enjoy it for what it is. No deep thoughts, no deep investment.

But why don’t these laypeople care? Why should we care about laypeople? Must we really dumb everything down for the lowest common denominator? Why can’t they just get on my level? This is really frustrating!

Enter cars.

I have a driving license, but I don’t really care about cars. I know how to work the pedals and the steering wheel, and that’s pretty much it. I don’t know why I don’t care about cars. I just want to get from my home to my destination and back. If I can put in as little effort as possible to do that, I’m happy. I just don’t have the time or desire to learn all the intricacies of cars.

And knowing that, I suppose that I’m the layperson I was so frustrated over a moment ago. When I walk into the garage with a minor problem, I like to imagine that I’m the sort of person who shows up at tech support because I can’t log in: I accidentally pressed Caps Lock.

So the layperson is me. Sometimes.

Then who are the elitists?

Having said all of that, something throws a wrench in the works: Game of Thrones was also immensely popular when it had all the intricacies and inter-weaving plots of the first few seasons. That appears to indicate to me that laypeople aren’t allergic to the kind of story that the enthusiasts want. But they aren’t allergic to the story that is being told in season 8, either, unlike the elitists.

So why do the elitists care? Why can’t they just appreciate the same things that laypeople do? Why must it always be so complex? Why should the complaints of a few outweigh the enjoyment of many?

And this is where I get stuck. Because frankly, I don’t know. Shouldn’t everything be as accessible as possible? The more the merrier? Why should vi exist when nano suffices?

But you can take my vi key bindings from my cold, dead hands. And I love what Game of Thrones used to be, and am sad that it morphed into an archetypical story that used to be its antithesis. I want complex things, even though I switched from Spacemacs to VSCodium and use GNOME instead of i3. Not for the sake of difficulty, but because complexity gives me something that simplicity cannot.

So the elitist is me. Sometimes.

Fin

I’m still in a limbo about this clash between elitists and laypeople. Maybe the clash is superficial and the two can exist side-by-side or separately. Maybe the writers of Game of Thrones just aren’t very good and accidentally made the story for laypeople instead of their target audience of elitists. Maybe it’s a sliding scale instead of a binary.

I don’t really know. I just wanted to get these thoughts out of my head and into a text box.

Sunday, 05 May 2019

External encrypted disk on LibreELEC

Last year I replaced, on the Raspberry Pi, the ArchLinux ARM with just Kodi installed with LibreELEC.

Today I plugged in an external disk encrypted with dm-crypt, but to my full surprise this isn’t supported.

Luckily the project is open source and sky42 already provides a LibreELEC version with dm-crypt built-in support.

Once I flashed sky42’s version, I set up an automated mount at startup via the autostart.sh script and the corresponding unmount via shutdown.sh, this way:

// copy your keyfile into /storage via SSH
$ cat /storage/.config/autostart.sh
cryptsetup luksOpen /dev/sda1 disk1 --key-file /storage/keyfile
mount /dev/mapper/disk1 /media

$ cat /storage/.config/shutdown.sh
umount /media
cryptsetup luksClose disk1

Reboot it and voilà!

Automount

If you want to automatically mount the disk whenever you plug it, then create the following udev rule:

// Find out ID_VENDOR_ID and ID_MODEL_ID for your drive by using `udevadm info`
$ cat /storage/.config/udev.rules.d/99-automount.rules
ACTION=="add", SUBSYSTEM=="usb", SUBSYSTEM=="block", ENV{ID_VENDOR_ID}=="0000", ENV{ID_MODEL_ID}=="9999", RUN+="cryptsetup luksOpen $env{DEVNAME} disk1 --key-file /storage/keyfile", RUN+="mount /dev/mapper/disk1 /media"

Saturday, 04 May 2019

Hardening OpenSSH Server

start by reading:

man 5 sshd_config
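Before pinning algorithm lists, it can also help to see what the local OpenSSH build actually supports and to validate the resulting configuration; a quick sketch:

# list the supported ciphers, key exchange, MAC and key algorithms
ssh -Q cipher
ssh -Q kex
ssh -Q mac
ssh -Q key
# check sshd_config for errors before restarting the daemon
sshd -t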

 

CentOS 6.x

Ciphers aes128-ctr,aes192-ctr,aes256-ctr
KexAlgorithms diffie-hellman-group-exchange-sha256
MACs hmac-sha2-256,hmac-sha2-512

and change the below setting in /etc/sysconfig/sshd:

AUTOCREATE_SERVER_KEYS=RSAONLY

 

CentOS 7.x

Ciphers chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256
MACs umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512

 

Ubuntu 18.04.2 LTS

Ciphers chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256
MACs umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512
HostKeyAlgorithms ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,rsa-sha2-512,rsa-sha2-256


Archlinux

Ciphers chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256
MACs umac-128-etm@openssh.com,hmac-sha2-512-etm@openssh.com
HostKeyAlgorithms ssh-ed25519-cert-v01@openssh.com,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa
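
Whichever distribution you run, it is a good idea to validate the new configuration before restarting the daemon, so a typo does not lock you out, and to keep an existing session open until you confirm you can still log in. A minimal check (the service name may differ on your system):

# run as root; -t only tests the configuration file and exits
sshd -t -f /etc/ssh/sshd_config && systemctl restart sshd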


Renew SSH Host Keys

rm -f /etc/ssh/ssh_host_* && service sshd restart
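
Depending on the distribution, the service restart may not regenerate the deleted host keys on its own. In that case they can be recreated explicitly (run as root):

ssh-keygen -A              # generate any missing default host keys
systemctl restart sshd     # or: service sshd restart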


Generating ssh moduli file

For advanced users only.

ssh-keygen -G /tmp/moduli -b 4096

ssh-keygen -T /etc/ssh/moduli -f /tmp/moduli 
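
The first command generates candidate primes and the second screens them for safety, which can take several hours on modest hardware. Newer OpenSSH releases replace these flags with the -M interface; a rough equivalent (check ssh-keygen(1) on your system) would be:

ssh-keygen -M generate -O bits=4096 /tmp/moduli.candidates
ssh-keygen -M screen -f /tmp/moduli.candidates /etc/ssh/moduli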
Tag(s): openssh

How I put order in my bookmarks and found a better way to organise them


I have gone through several stages of this and so far nothing has stuck as ideal, but I think I am inching towards it.

To start off, I have to confess that while I love the internet and the web, I loathe having everything in the browser. The browser becoming the OS is what seems to be happening, and I hate that thought. I like to keep things locally, having backups, and control over my documents and data. Although I changed my e-mail provider(s) several times, I still have all my e-mail locally stored from 2003 until today.

I also do not like reading longer texts on an LCD, so I usually put longer texts into either Wallabag or Mozilla’s Pocket to read them later on my eInk reader (Kobo Aura). BTW, Wallabag and Pocket both have their pros and cons themselves. Pocket is more popular and better integrated into a lot of things (e.g. Firefox, Kobo, etc.), while Wallabag is fully FOSS (even the server) and offers some extra features that are in Pocket either subject to subscription or completely missing.

Still, an enormous amount of information is (and should be!) on the web, so each of us needs to somehow keep track and make sense of it :)

So, with that intro out of the way, here is how I tackle(d) this mess.

Historic overview of methods I used so far

Hierarchy of folders

Like many of us, I guess, I started by putting bookmarks in the bookmark bar, but soon had to start organising them into folders … and subfolders … and subsubfolders … and subsubsubfolders … until the whole tree no longer fit on the screen when expanded.

Pro:

  • can be neat and tidy
  • easy to sync between devices through e.g. Firefox Sync

Con:

  • can become a huge mess once it grows into a behemoth
  • takes several clicks to put a bookmark into the appropriate (sub)folder

Then I decided to keep it flat and use the Firefox search bar to find what I am looking for.

To achieve that, when I bookmarked something, I renamed it to something useful and added tags (e.g.: shop, tea; or python, sql, howto).

This worked kinda OK, but a big downside is that there is a huge amount of clutter which is not easy to navigate and edit once you want to organise all the already existing bookmarks. The bookmark panel is somewhat helpful, but not a lot.

Pro:

  • easy to search
  • easy to find a relevant bookmark when you are about to search for something through the combined URL/search bar
  • easy to sync between devices through e.g. Firefox Sync

Con:

  • your search query must match the name, tag(s), or URL of the bookmark
  • hard to find or navigate other than by searching (for name, tag, URL)

OneTab

Several years after that, I learnt about OneTab from the onboarding website of a company I applied to (but did not get the job). Its main promise is to turn loads of open tabs into (simple) organised lists on a single page. And all that with a single click (well, two really).

This worked (and still does) wonders for decluttering my tab list, especially when combined with Tree Style Tabs, which I very warmly recommend trying out. Even if it looks odd and unruly at first, it is very easy to get used to and helps organise tabs immensely. But back to OneTab…

The good side of OneTab is that it really helps keep your tab bar clean and therefore reduces your computer’s resource usage. It is also super for keeping track of tabs that you may (or may not) need to open again later, as you can (re)open a whole group of “tabs” with a single click.

As a practical example, let us say I am travelling to Barcelona in two months. So I book flights and the hotel, and in the process also check out some touristy and other helpful info. Because I will not be needing the touristy and travel stuff for quite some time before the trip, I do not need all the tabs open. But as it is a one-off trip, it is also silly to bookmark it all. So I send them all to OneTab and name the group e.g. “Barcelona trip 2019”. If I stumble upon any new stuff that is relevant, I simply send it to the same Named Group in OneTab. Once I need that info, I either open individual “tabs” or restore the whole group with one click and have it ready. An additional cool thing is that by default if you open a group or a single link “tab” from OneTab, it will remove it from the list. You can decide to keep the links in the list as well.

In practice, I still used tagged bookmarks for links that I wanted to store long-term, while depending on OneTab for short- to mid-term storage.

Pro:

  • great for decluttering your tabs
  • helps keep your browser’s resource usage low
  • great for creating (temporary) lists of tabs that you do not need now, but will in the future
  • can easily share a group of “tabs” with others via e-mail

Con:

  • no tags, categories or other means of adding meta data – you can only name groups, and cannot even rename links
  • no searching other than through the “webpage” list of “tabs”
  • the longer the list of “tabs”/bookmarks grows, the harder it is to keep an overview
  • cannot sync between devices
  • (proprietary plug-in)

Worldbrain’s Memex

About two months ago, I stumbled upon Worldbrain’s Memex through a FOSDEM talk. It promises to fix bookmarking, searching, note-taking and web history for you … which is quite an impressive lot.

So far, I have to say, I am quite impressed. It is super easy to find stuff you visited, even if you forgot to bookmark it, as it indexes all the websites you visit (unless you tell Memex to ignore that page or domain).

For more order, you can assign tags to websites and/or store them into collections (i.e. groups or folders). What is more, you can do that even later, if you forgot about it the first time. If you want to especially emphasise a specific website, you can also star it.

An excellent feature missing in other bookmarking methods I have seen so far is that it lets you annotate websites – through highlights and comments and tags attached to those highlights. So, not only can you store comments and tags on the websites, but also on annotations within those websites.

One concern I have is that they might have bitten off more than they can chew, but since I started using it, I have seen so much progress that I am (cautiously) optimistic about it.

Pro:

  • supports both tags and collections (i.e. groups)
  • enables annotations/highlights and comments (as well as tags to both) to websites
  • indexes websites, so when you search for something it goes through both the website’s text, as well as your notes to that website and, of course, tags
  • starring websites you would like to find more easily
  • you can also set specific websites or domain names to be ignored
  • it offers quite an advanced search, including limiting by date ranges, stars, or domains
  • when you search for something (e.g. using DuckDuckGo or Google) it shows suggested websites that you already visited before
  • sharing of annotations and comments with others (as long as they also have Memex installed)
  • for annotations it uses the W3C Open Annotation spec
  • stores everything locally (with the exception of sharing annotations via a link, of course)

Con:

  • it consumes more disk space due to running its own index
  • needs an external app for backing up data
  • so far no syncing of bookmarks between devices (but it is in the making)
  • so far it does not sync annotations between different devices (but both mobile apps for iOS/Android, and Pocket integration are in the making)

Status quo and looking at the future

I currently still have a few dozen bookmarks that I need to tag in Memex and delete from my Firefox bookmarks, and a further several dozen in OneTab.

The most viewed websites I keep in “Top Sites” in Firefox.

Most of the “tabs” in OneTab I have already migrated to Memex, and I am very much looking forward to trying to use it instead of OneTab. So far it seems a bit more work, as I need to 1) open all tabs into a tab tree (same as in OneTab), 2) open that tab tree in a separate window (extra step), then 3) use the “Tag all tabs in window” or “Add all tabs in window” option from the extension button (similar to OneTab), and finally 4) close the tabs by closing the window (extra step). What I usually do is to turn a Tab Group from OneTab into a Collection in Memex and then take some extra time to add tags or notes, if appropriate.

So, I am quite confident Memex will be able to replace OneTab for me and most likely also (most) normal bookmarks. I may keep some bookmarks of things that I want to always keep track of, like my online bank’s URL, but I am not sure yet.

The annotations are a godsend as well, and will be very hard to give up, as I have already got used to them.

Now, if I could only send stuff to my eInk reader (or phone), annotate it there and have those annotations auto-magically show up in the browser and therefore stored locally on my laptop … :D

Oh, oh, and if I could search through Memex from my KDE Plasma desktop and add/view annotations from other documents (e.g. ePub, ODF, PDF) and other applications (e.g. Okular, Calibre, LibreOffice). One may dream …

hook out → sipping Vin Santo and planning more order in bookmarks


P.S. This blog post was initially a comment to the topic “How do you organize your bookmarks?” in the ~tech group on Tildes where further discussion is happening as well.

Automated phone backup with Syncthing

How do you backup your phones? Do you?

I used to copy all the photos and videos from my and my wife’s phones to my PC every month, and then copy them to an external HDD attached to a Raspberry Pi.

However, it’s a tedious job, mainly because:

  • I cannot really use the phones during this process;
  • MTP works one time in three – often I have to fall back to ADB;
  • I have to unmount the SD cards to speed up the copy;
  • after I copy the files, I have to rsync everything to the external HDD.

The Syncthing way

Syncthing describes itself as:

Syncthing replaces proprietary sync and cloud services with something open, trustworthy and decentralized.

I installed it on our Android phones and on the Raspberry Pi. On the Raspberry Pi, I also enabled remote access.
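
For reference, the Raspberry Pi side can be set up roughly like this (the package name, systemd unit, user name and config path are assumptions based on a Debian/Raspbian-style system; check your distribution’s documentation):

# install Syncthing and start it for the "pi" user
sudo apt install syncthing
sudo systemctl enable --now syncthing@pi.service

# the web GUI listens on 127.0.0.1:8384 by default; to reach it from another
# machine on the LAN, change the <gui> listen address in
# ~/.config/syncthing/config.xml to 0.0.0.0:8384 and restart the service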

I started the Syncthing application on the Android phones and chose the folders to back up (you can also select the whole internal memory). Then I shared them with the Raspberry Pi only and set the folder type to “Send Only”, because I don’t want the Android phones to retrieve any files from the Raspberry Pi.

On the Raspberry Pi, I accepted the sharing request from the Android phones, but I also changed the folder type to “Receive Only” because I don’t want the Raspberry Pi to send any file to the Android phones.

All done? Not yet.

Syncthing’s main purpose is to sync, not to back up. This means that, by default, if I delete a photo from my phone, that photo is gone from the Raspberry Pi too, and this isn’t what I need nor what I want.

However, Syncthing supports File Versioning and, better yet, it supports “trash can”-like file versioning, which moves deleted files into a .stversions subfolder. If this isn’t enough, you can also write your own file versioning script.

All done? Yes! Whenever I connect to my own WiFi, my photos are backed up!

Monday, 22 April 2019

[Some] KDE Applications 19.04 also available in flathub

The KDE Applications 19.04 release announcement (read it if you haven't, it's very complete) mentions some of the applications are available at the snap store, but forgets to mention flathub.

Just wanted to point out that some of the applications are also available there: https://flathub.org/apps/search/org.kde.

All the ones that are released along with KDE Applications 19.04 were updated on release day (except kubrick, which has a compilation issue and will be updated for 19.04.1, and kontact, which is a beast and, to be honest, I didn’t particularly feel like updating it).

If you feel like helping, there are more applications that need adding and more automation that needs to happen, so get in touch :)
