Free Software, Free Society!
Thoughts of the FSFE Community (English)

Saturday, 11 July 2020

20.08 releases branches created

Make sure you commit anything you want to end up in the 20.08 releases to those branches.

We're already past the dependency freeze.

The Feature Freeze and Beta is this Thursday, 16 July.

More interesting dates
    July 30, 2020: 20.08 RC (20.07.90) Tagging and Release
  August  6, 2020: 20.08 Tagging
  August 13, 2020: 20.08 Release

https://community.kde.org/Schedules/release_service/20.08_Release_Schedule

Cheers,
  Albert

Friday, 10 July 2020

Light OpenStreetMapping with GPS

Now that lockdown is lifting a bit in Scotland, I’ve been going a bit further for exercise. One location I’ve been to a few times is Tyrebagger Woods. In theory, I can walk here from my house via Brimmond Hill although I’m not yet fit enough to do that in one go.

Instead of following the main path, I took a detour along some route that looked like it wanted to be a path but it hadn’t been maintained for a while. When I decided I’d had enough of this, I looked for a way back to the main path but OpenStreetMap didn’t seem to have the footpaths mapped out here yet.

I’ve done some OpenStreetMap surveying before, so I thought I’d take a look at improving this and moving some of the tracks on the map closer to where they are in reality. In the past I’ve used OSMTracker, which was great, but now that I’m on iOS there doesn’t seem to be anything that matches up.

My new handheld radio, a Kenwood TH-D74, has the ability to record GPS logs, so I thought I’d give this a go. It records the logs to the SD card with one file per session. It’s a very simple logger that records the NMEA strings as they are received. The only sentences I see in the file are GPGGA (Global Positioning System Fix Data) and GPRMC (Recommended Minimum Specific GPS/Transit Data).
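
For reference, these two sentence types look something like this (the usual textbook examples, not lines from my log):

$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47
$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A

GPGGA carries the fix itself (time, latitude, longitude, fix quality, satellite count, altitude), while GPRMC adds speed and track over ground.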

I tried to import this directly with JOSM but it seemed to throw an error and crash. I’ve not investigated this, but I thought a way around could be to convert this to GPX format. This was easier than expected:

apt install gpsbabel
gpsbabel -i nmea -f "/sdcard/KENWOOD/TH-D74/GPS_LOG/25062020_165017.nme" \
                 -o gpx,gpxver=1.1 -F "/tmp/tyrebagger.gpx"

This imported into JOSM just fine and I was able to adjust some of the tracks to better fit where they actually are.

I’ll take the radio with me when I go in future and explore some of the other paths, to see if I can get the whole woods mapped out nicely. It is fun to just dive into the trees sometimes, along the paths that look a little forgotten and overgrown, but it’s also nice to be able to find your way out again when you get lost.

Tuesday, 23 June 2020

How to build an SSH Bastion host

[this is a technical blog post, but easy to follow]

Recently I had to set up and present my idea of an SSH bastion host. You may have already heard of this as a jump host, an SSH hopping station, an SSH gateway, or something else.

The main concept

SSH bastion

Disclaimer: This is just a proof of concept (PoC). May need a few adjustments.

The destination VM may be on another VPC; perhaps it does not have a public DNS name or even a public IP. Think of this VM as not accessible. Only the ssh bastion server can reach this VM. So we need to reach the bastion first.

SSH Config

To begin with, I will share my initial sshd_config to get an idea of my current ssh setup

AcceptEnv LANG LC_*
ChallengeResponseAuthentication no
Compression no
MaxSessions 3
PasswordAuthentication no
PermitRootLogin no
Port 12345
PrintMotd no
Subsystem sftp  /usr/lib/openssh/sftp-server
UseDNS no
UsePAM yes
X11Forwarding no
AllowUsers ebal
  • I only allow the user ebal to connect via ssh.
  • I do not allow the root user to login via ssh.
  • I use ssh keys instead of passwords.

This configuration is almost identical on both VMs:

  • bastion (the name of the VM that acts as a bastion server)
  • VM (the name of the destination VM that is behind a DMZ/firewall)

~/.ssh/config

I am using the ssh config file to have an easier user experience when using ssh

Host bastion
     Hostname 127.71.16.12
     Port 12345
     IdentityFile ~/.ssh/id_ed25519.vm

Host vm
     Hostname 192.13.13.186
     Port 23456

Host *
    User ebal
    ServerAliveInterval 300
    ServerAliveCountMax 10
    ConnectTimeout=60
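
A quick way to double-check what the client will actually use for a host is ssh -G (OpenSSH 6.8 or newer), which prints the fully resolved configuration:

ssh -G vm | grep -E '^(hostname|port|user) '

This should show the Hostname, Port and User picked up from the matching Host blocks above.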

Create a new user to test this

Let us create a new user for testing.

User/Group

$ sudo groupadd ebal_test
$ sudo useradd -g ebal_test -m ebal_test

$ id ebal_test
uid=1000(ebal_test) gid=1000(ebal_test) groups=1000(ebal_test)

Perms

Copy .ssh directory from current user (<== lazy sysadmin)

$ sudo cp -ravx /home/ebal/.ssh/ /home/ebal_test/
$ sudo chown -R ebal_test:ebal_test /home/ebal_test/.ssh

$ sudo ls -la ~ebal_test/.ssh/
total 12
drwxr-x---. 2 ebal_test ebal_test 4096 Sep 20  2019 .
drwx------. 3 ebal_test ebal_test 4096 Jun 23 15:56 ..
-r--r-----. 1 ebal_test ebal_test  181 Sep 20  2019 authorized_keys

$ sudo ls -ld ~ebal_test/.ssh/
drwxr-x---. 2 ebal_test ebal_test 4096 Sep 20  2019 /home/ebal_test/.ssh/

bastion sshd config

Edit the ssh daemon configuration file to append the entries below:

cat /etc/ssh/sshd_config

AllowUsers ebal ebal_test

Match User ebal_test
   AllowAgentForwarding no
   AllowTcpForwarding yes
   X11Forwarding no
   PermitTunnel no
   GatewayPorts no
   ForceCommand echo 'This account can only be used for ProxyJump (ssh -J)'

Don’t forget to restart sshd

systemctl restart sshd

As you have seen above, I now allow two (2) users to access the ssh daemon (AllowUsers). This can also work with AllowGroups, as sketched below.
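
For example, a minimal sketch with a group instead of a list of users (the group name here is just a placeholder; pick whatever fits your setup):

groupadd ssh-bastion-users
usermod -aG ssh-bastion-users ebal_test

and in sshd_config:

AllowGroups ssh-bastion-users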

Testing bastion

Let’s try to connect to this bastion VM

$ ssh bastion -l ebal_test uptime
This account can only be used for ProxyJump (ssh -J)
$ ssh bastion -l ebal_test
This account can only be used for ProxyJump (ssh -J)
Connection to 127.71.16.12 closed.

Interesting …

We cannot log in to this machine.

Let’s try with our personal user

$ ssh bastion -l ebal uptime
 18:49:14 up 3 days,  9:14,  0 users,  load average: 0.00, 0.00, 0.00

Perfect.

Let’s try from windows (mobaxterm)

mobaxterm is putty on steroids! There is also a portable version, so there is no need for installation. You can just download and extract it.

mobaxterm_proxyjump.png

Interesting…

Destination VM

Now it is time to test our access to the destination VM

$ ssh VM
ssh: connect to host 192.13.13.186 port 23456: Connection refused

bastion

$ ssh -J ebal_test@bastion ebal@vm uptime
 19:07:25 up 22:24,  2 users,  load average: 0.00, 0.01, 0.00
$ ssh -J ebal_test@bastion ebal@vm
Last login: Tue Jun 23 19:05:29 2020 from 94.242.59.170
ebal@vm:~$
ebal@vm:~$ exit
logout

Success!

Explain Command

Using this command

ssh -J ebal_test@bastion ebal@vm

  • tells the ssh client to use the ProxyJump feature,
  • using the user ebal_test on the bastion machine, and
  • connecting as the user ebal on vm.

So we can have different users!
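
If your ssh client is older than OpenSSH 7.3 (where -J was introduced), the same thing can be done with ProxyCommand:

ssh -o ProxyCommand="ssh -W %h:%p ebal_test@bastion" ebal@vm

Here -W asks the bastion to forward standard input/output to the given host and port, which is what ProxyJump does under the hood.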

ssh/config

Now, it is time to put everything into our ~/.ssh/config file

Host bastion
     Hostname 127.71.16.12
     Port 12345
     User ebal_test
     IdentityFile ~/.ssh/id_ed25519.vm

Host vm
     Hostname 192.13.13.186
     ProxyJump bastion
     User ebal
     Port 23456

and try again

$ ssh vm uptime
 19:17:19 up 22:33,  1 user,  load average: 0.22, 0.11, 0.03
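
Since scp (and sftp, or rsync over ssh) reads the same ~/.ssh/config, copying files through the bastion now also works transparently (/tmp/somefile is just an example path):

scp /tmp/somefile vm:/tmp/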

mobaxterm with bastion

mobaxterm_settings.png

mobaxterm_proxyjump_final.png

Monday, 15 June 2020

Keyboard cleaning 101

  • Seravo
  • 06:00, Monday, 15 June 2020

You wake up and the sun is shining. Everything is great and you sit down at your desk, ready to join the first video call of the day. And then – OOPS! You spill your breakfast smoothie over your keyboard. Does this sound familiar?

If you’ve ever encountered a situation like the one described above and didn’t know how to act, after this post you will. We’ve written easy-to-follow, step-by-step instructions on how to clean your keyboard.

Steps for salvaging the situation

  1. Turn off your device, place it upside down and wait for the liquid to drain.
  2. Turn the device around and clean away all visible traces of the liquid.
  3. Take a photo of your keyboard for future reference.
  4. Find a flat but sturdy tool that can be inserted under the keycap. A flat-head screwdriver is an option, but it might scratch the device and its components. Use a plastic tool, if possible.
  5. Some keyboards require the keys to be removed from the top, some from the bottom. Insert the tool under the keycap and gently apply upward pressure by pressing the tool. Continue to do so until you hear a snap and the keycap comes off. Also remove the small plastic parts underneath. Notice: If there is unusual resistance when detaching a key, stop! The keys might require a different method for removal.
  6. Place all the plastic parts in a bowl of warm water with mild detergent, such as dishwasher soap. Let them soak for a while to allow the dirt to come off.
  7. Rinse and repeat until all the plastic parts look clean. If the parts are extra dirty, use your fingertips to rub off the dirt. Notice: Be careful not to rinse the plastic parts down the drain.
  8. Place the plastic parts on a paper towel and let them dry.

Cleaning the bare keyboard

While the plastic parts are drying, clean the rest of the keyboard. You will need two clean cloths (preferably microfiber) and cotton swabs. The cloth used for cleaning should be damp. You can use either water or isopropyl alcohol; alcohol evaporates quickly and is less likely to leave any moisture around. Cotton swabs are great for hard-to-reach areas. Finish the cleaning with a dry cloth.

Use a cotton swab to wipe the dirt away.

Reassembly

Once everything has dried up, you can start reassembling the keyboard. Now is a good time to look at that photo you took in the beginning. If for some reason you didn’t take a photo or you lost it, no worries – YouTube and Google are your friends.

When reassembling a key, do not use excessive force. It should snap easily into place, and once it does, test that it feels correct. If not, disassemble and reassemble until it does.

When done, the keyboard should look just like new. Issue fixed!

Done!

Maintenance in the future

Cleaning your keyboard every once in a while is a good practice to keep it working properly. Dust, grease and crumbs can easily get under the keycaps and potentially harm the device. A thorough clean-up takes about an hour, but it ensures that your keyboard will be in the best working condition. If you want to try and speed up the process, use compressed air to blow out any loose dirt.

Thursday, 11 June 2020

20.08 releases schedule finalized

It is available at the usual place https://community.kde.org/Schedules/release_service/20.08_Release_Schedule

Dependency freeze is in four weeks (July 9) and Feature Freeze a week after that, so make sure you start finishing your stuff!

Wednesday, 10 June 2020

How to use cloud-init with Edis

It is a known fact that my favorite hosting provider is edis. I’ve seen them improving their services all these years without forgetting their customers. Their support is great and I am really happy working with them.

That said, they don’t offer (yet) a public infrastructure API like hetzner, linode or digitalocean, but they do offer an Auto Installer option to configure your VPS via a post-install shell script, add your ssh key and select your basic OS image.

edis_auto_installer.png

I have been experimenting with this option for the last few weeks, but I wanted to use my current cloud-init configuration file without making many changes. The goal is to produce a VPS image that, when finished, will be ready to accept my ansible roles without any additional change, or even logging in to the VPS.

So here is my current solution on how to use the post-install option to provide my current cloud-init configuration!

cloud-init

cloud-init documentation

cloud-init.png
Josh Powers @ DebConf17

I will not get into cloud-init details in this blog post, but tl;dr: it has stages, it has modules, you provide your own user-data file (yaml), and it supports datasources. All these things tell cloud-init what to do, what to configure, and when to configure it (in which stage).

NoCloud Seed

I am going to use the NoCloud datasource for this experiment.

So I need to configure these two (2) files:

/var/lib/cloud/seed/nocloud-net/meta-data
/var/lib/cloud/seed/nocloud-net/user-data

Install cloud-init

My first entry in the post-install shell script should be

apt-get update && apt-get -y install cloud-init

thus I can be sure of two (2) things: first, that the VPS already has network access (so I don’t need to configure it), and second, that cloud-init is installed, just to be sure it is there.

Variables

I try to keep my user-data file very simple, but I would like to configure the hostname and the sshd port.

HOSTNAME="complimentary"
SSHDPORT=22422

Users

Add a single user, provide a public ssh key for authentication and enable sudo access to this user.

users:
  - name: ebal
    ssh_import_id:
      - gh:ebal
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL

Hardening SSH

  • Change sshd port
  • Disable Root
  • Disconnect Idle Sessions
  • Disable Password Auth
  • Disable X Forwarding
  • Allow only User (or group)
write_files:
  - path: /etc/ssh/sshd_config
    content: |
        Port $SSHDPORT
        PermitRootLogin no
        ChallengeResponseAuthentication no
        ClientAliveInterval 600
        ClientAliveCountMax 3
        UsePAM yes
        UseDNS no
        X11Forwarding no
        PrintMotd no
        AcceptEnv LANG LC_*
        Subsystem sftp  /usr/lib/openssh/sftp-server
        PasswordAuthentication no
        AllowUsers ebal

enable firewall

ufw allow $SSHDPORT/tcp && ufw enable

remove cloud-init

and last but not least, I need to remove cloud-init in the end

apt-get -y autoremove --purge cloud-init lxc lxd snapd

Post Install Shell script

let’s put everything together

#!/bin/bash

apt-get update && apt-get -y install cloud-init

HOSTNAME="complimentary"
SSHDPORT=22422

mkdir -p /var/lib/cloud/seed/nocloud-net

# Meta Data
cat > /var/lib/cloud/seed/nocloud-net/meta-data <<EOF
dsmode: local
EOF

# User Data
cat > /var/lib/cloud/seed/nocloud-net/user-data <<EOF
#cloud-config

disable_root: true
ssh_pwauth: no

users:
  - name: ebal
    ssh_import_id:
      - gh:ebal
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL

write_files:
  - path: /etc/ssh/sshd_config
    content: |
        Port $SSHDPORT
        PermitRootLogin no
        ChallengeResponseAuthentication no
        UsePAM yes
        UseDNS no
        X11Forwarding no
        PrintMotd no
        AcceptEnv LANG LC_*
        Subsystem sftp  /usr/lib/openssh/sftp-server
        PasswordAuthentication no
        AllowUsers ebal

# Set TimeZone
timezone: Europe/Athens

hostname: $HOSTNAME

# Install packages
packages:
  - figlet
  - mlocate
  - python3-apt
  - bash-completion

# Update/Upgrade & Reboot if necessary
package_update: true
package_upgrade: true
package_reboot_if_required: true

# PostInstall
runcmd:
  - figlet $HOSTNAME > /etc/motd
  - updatedb
  # Firewall
  - ufw allow $SSHDPORT/tcp && ufw enable
  # Remove cloud-init
  - apt-get -y autoremove --purge cloud-init lxc lxd snapd
  - apt-get -y --purge autoremove
  - apt -y autoclean
  - apt -y clean all

EOF

cloud-init clean --logs

cloud-init init --local

That’s it!

After a while (it needs a few reboots) our VPS is up & running, and we can use ansible to configure it.
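
For example, a first smoke test with ansible could look something like this (203.0.113.10 is a placeholder for the VPS address, and ansible_port matches the sshd port we configured):

ansible all -i '203.0.113.10,' -u ebal -e ansible_port=22422 -m ping

If this returns pong, the image is ready for the actual roles.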

Tag(s): edis, cloud-init

Monday, 08 June 2020

Racism is bad - a Barcelona centered reflection

I hope most people reading this will not find "Racism is bad" controversial.

The problem is that even if most (I'd hope to say all) of us think racism is bad, some of us are still consciously or unconsciously racist.

I am not going to speak about the Black Lives Matter movement, because it's mostly USA centered (which is kind of far for me) and there are much better people to listen to than me, so go and listen to them.

In this case I'm going to speak about the Romani/Roma/gypsies in Barcelona (and, from what I can see in this Pew Research article, most of Europe).

Institutionalized hate against them is so deep that definition 2 of 6 in the Catalan dictionary for the word for a Romani person (gitano) is "those that act egoistically and try to deceive others", and definition 5 of 8 in the Spanish dictionary is "those that with wits and lies try to cheat someone in a particular matter".

Here it is "common" to hear people say "don't be a gitano" meaning to say "don't cheat/play me" and nobody bats an eye when hearing that phrase.

It's a fact that this "community" tends to live in ghettos and has a higher crime rate. I've heard people who probably think of themselves as non-racist say "that's just the lifestyle they like".

SARCASM: Sure, people love living in unsafe neighbourhoods and crappy houses and risking going to jail just to be able to eat.

The thing is, when 50% of the population has an unfavourable view of you just because of who your parents are or what your surname is, it's hard to get a [nice] job, a place to live outside the ghetto, etc.

So please, in addition to saying that you're not a racist (which is a good first step), try to actually not be racist too.

Thursday, 28 May 2020

Send your talks for Akademy 2020 *now*

The Call for Participation is still open for two more weeks, but please do us a favour and send yours *now*.

This way we don't have to panic about whether we will need to go chasing people, or whether we're going to have too few or too many proposals.

Also, if you ask the talks committee for a review, we can review your talk early, give you feedback, and you can improve it, so it's a win-win.

So head over to https://akademy.kde.org/2020/cfp, find the submit link in the middle of that wall of text and click it ;)

Sunday, 24 May 2020

chmk a simple CHM viewer

Okular can view CHM files; to do so it uses KHTML, which makes sense since CHM is basically HTML with images, all compressed into a single file.

This is somewhat problematic, since KHTML is largely unmaintained and I doubt it'll get a Qt6 port.

The problem is that the only other Qt-based HTML rendering engine is QtWebEngine, and while great, it doesn't support things we would need in order to use it in Okular: Okular needs access to the rendered image of the page and also to the text, since it uses the same API for all formats, be it CHM, PDF, epub, whatever.

The easiest plan to move forward is probably to drop CHM from Okular, but that means no more CHM viewing in KDE software, which would be a bit sad.

So I thought: OK, maybe I can do a quick CHM viewer based just on QtWebEngine, without trying to fit it into the Okular backend that supports different formats.

And ChmK was born https://invent.kde.org/aacid/chmk.

It's still very simple, but the basics work: if you give it a file on the command line, it'll open it and you'll be able to browse it.



As you can see it doesn't have *any* UI yet, so Merge Requests are more than welcome.

Saturday, 16 May 2020

Network Namespaces - Part Three

Previously on … Network Namespaces - Part Two we provided internet access to the namespace, enabled a different DNS than our system’s, and ran a graphical application (xterm/firefox) from within it.

The scope of this article is to run a VPN service from within this namespace. We will run a VPN client and try to enable firewall rules inside.

ip-netns07

dsvpn

My VPN of choice is dsvpn, and you can read how to set it up in a separate blog post.

dsvpn is a TCP, point-to-point VPN, using a symmetric key.

The instructions in this article should give you an understanding of how to run any other VPN service the same way.

Find your external IP

Before running the VPN client, let’s see what our current external IP address is:

ip netns exec ebal curl ifconfig.co

62.103.103.103

The above IP is an example.

IP address and route of the namespace

ip netns exec ebal ip address show v-ebal

375: v-ebal@if376: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether c2:f3:a4:8a:41:47 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.10.20/24 scope global v-ebal
       valid_lft forever preferred_lft forever
    inet6 fe80::c0f3:a4ff:fe8a:4147/64 scope link
       valid_lft forever preferred_lft forever

ip netns exec ebal ip route show

default via 10.10.10.10 dev v-ebal
10.10.10.0/24 dev v-ebal proto kernel scope link src 10.10.10.20

Firefox

Open firefox (see part two) and visit ifconfig.co. We notice that the location of our IP is Athens, Greece.

ip netns exec ebal bash -c "XAUTHORITY=/root/.Xauthority firefox"

ip-netns-ifconfig-before.png

Run VPN client

We have the symmetric key dsvpn.key and we know the VPN server’s IP.

ip netns exec ebal dsvpn client dsvpn.key 93.184.216.34 443

Interface: [tun0]
Trying to reconnect
Connecting to 93.184.216.34:443...
net.ipv4.tcp_congestion_control = bbr
Connected

Host

We cannot see this VPN tunnel interface from our host machine:

# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 94:de:80:6a:de:0e brd ff:ff:ff:ff:ff:ff

376: v-eth0@if375: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1a:f7:c2:fb:60:ea brd ff:ff:ff:ff:ff:ff link-netns ebal

netns

but it exists inside the namespace; we can see the tun0 interface there:

ip netns exec ebal ip link

1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

3: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 500
    link/none

375: v-ebal@if376: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether c2:f3:a4:8a:41:47 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Find your external IP again

Checking your external internet IP from within the namespace

ip netns exec ebal curl ifconfig.co

93.184.216.34

Firefox netns

Running firefox again, we will notice that the location of our IP is now Helsinki (the VPN server’s location).

ip netns exec ebal bash -c "XAUTHORITY=/root/.Xauthority firefox"

ip-netns-ifconfig-after.png

systemd

We can wrap the dsvpn client in a systemd service:

[Unit]
Description=Dead Simple VPN - Client

[Service]
ExecStart=ip netns exec ebal /usr/local/bin/dsvpn client /root/dsvpn.key 93.184.216.34 443
Restart=always
RestartSec=20

[Install]
WantedBy=network.target
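
A minimal sketch of installing this unit, assuming we save it as dsvpn.service:

cp dsvpn.service /etc/systemd/system/dsvpn.service
systemctl daemon-reload
systemctl enable dsvpn.service

Note that, depending on your systemd version, ExecStart may need the full path to the ip binary (e.g. /usr/sbin/ip).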

Start systemd service

systemctl start dsvpn.service

Verify

ip -n ebal a

1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none
    inet 192.168.192.1 peer 192.168.192.254/32 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 64:ff9b::c0a8:c001 peer 64:ff9b::c0a8:c0fe/96 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::ee69:bdd8:3554:d81/64 scope link stable-privacy
       valid_lft forever preferred_lft forever

375: v-ebal@if376: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether c2:f3:a4:8a:41:47 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.10.20/24 scope global v-ebal
       valid_lft forever preferred_lft forever
    inet6 fe80::c0f3:a4ff:fe8a:4147/64 scope link
       valid_lft forever preferred_lft forever

ip -n ebal route

default via 10.10.10.10 dev v-ebal
10.10.10.0/24 dev v-ebal proto kernel scope link src 10.10.10.20
192.168.192.254 dev tun0 proto kernel scope link src 192.168.192.1

Firewall

We can also have different firewall policies for each namespace

outside

# iptables -nvL | wc -l
127

inside

ip netns exec ebal iptables -nvL

Chain INPUT (policy ACCEPT 9 packets, 2547 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain OUTPUT (policy ACCEPT 2 packets, 216 bytes)
 pkts bytes target     prot opt in     out     source        destination

So for the VPN service running inside the namespace, we can REJECT all network traffic except the traffic towards our VPN server and, of course, the veth interface (point-to-point) to our host machine.

iptable rules

Enter the namespace

inside

ip netns exec ebal bash

Before

verify that the iptables rules are empty

iptables -nvL

Chain INPUT (policy ACCEPT 25 packets, 7373 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain OUTPUT (policy ACCEPT 4 packets, 376 bytes)
 pkts bytes target     prot opt in     out     source        destination

Enable firewall

./iptables.netns.ebal.sh

The content of this file

## iptable rules

iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -m conntrack --ctstate INVALID -j DROP
iptables -A INPUT -p icmp --icmp-type 8 -m conntrack --ctstate NEW -j ACCEPT

## netns - incoming
iptables -A INPUT -i v-ebal -s 10.0.0.0/8 -j ACCEPT

## Reject incoming traffic
iptables -A INPUT -j REJECT

## DSVPN
iptables -A OUTPUT -p tcp -m tcp -o v-ebal -d 93.184.216.34 --dport 443 -j ACCEPT

## net-ns outgoing
iptables -A OUTPUT -o v-ebal -d 10.0.0.0/8 -j ACCEPT

## Allow tun
iptables -A OUTPUT -o tun+ -j ACCEPT

## Reject outgoing traffic
iptables -A OUTPUT -p tcp -j REJECT --reject-with tcp-reset
iptables -A OUTPUT -p udp -j REJECT --reject-with icmp-port-unreachable

After

iptables -nvL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination
    0     0 ACCEPT     all  --  lo     *       0.0.0.0/0     0.0.0.0/0
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0     0.0.0.0/0     ctstate RELATED,ESTABLISHED
    0     0 DROP       all  --  *      *       0.0.0.0/0     0.0.0.0/0     ctstate INVALID
    0     0 ACCEPT     icmp --  *      *       0.0.0.0/0     0.0.0.0/0     icmptype 8 ctstate NEW
    1   349 ACCEPT     all  --  v-ebal *       10.0.0.0/8    0.0.0.0/0
    0     0 REJECT     all  --  *      *       0.0.0.0/0     0.0.0.0/0     reject-with icmp-port-unreachable

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination
    0     0 ACCEPT     tcp  --  *      v-ebal  0.0.0.0/0     93.184.216.34 tcp dpt:443
    0     0 ACCEPT     all  --  *      v-ebal  0.0.0.0/0     10.0.0.0/8
    0     0 ACCEPT     all  --  *      tun+    0.0.0.0/0     0.0.0.0/0
    0     0 REJECT     tcp  --  *      *       0.0.0.0/0     0.0.0.0/0     reject-with tcp-reset
    0     0 REJECT     udp  --  *      *       0.0.0.0/0     0.0.0.0/0     reject-with icmp-port-unreachable

PS: We reject tcp/udp traffic (the last two lines), but allow icmp (ping).

ip-netns08

End of part three.

Friday, 15 May 2020

Improve your security with two-factor authentication

  • Seravo
  • 09:54, Friday, 15 May 2020

Two-factor authentication (or simply 2FA) is a way of authentication where a user must provide additional verification after logging in with a username and password. The form of verification can be a string of characters delivered via text message or generated with a TOTP client. Two-factor authentication improves security because a compromised username and password are not enough to breach the account.

This article will explain how to use TOTP clients for two-factor authentication and why TOTP is better than many other two-factor methods. As an example, I will show how to enable and set up TOTP client Google Authenticator in Google’s services.

What is TOTP?

TOTP is a Time-based One-Time Password algorithm, and it can be used in two-factor authentication instead of SMS or paper-list verification codes. A TOTP client periodically generates a new code that can be used to verify the login.

TOTP is based on cryptographic functions. When enabling TOTP, the service gives you a shared secret that is entered into the client, either by scanning a QR code or by typing it in as text. Cryptographic functions use this shared secret and the current time to generate the codes that verify the login. The code generation also works without a connection to the internet.
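
If you want to see the algorithm in action outside a phone app, oathtool (from the oath-toolkit package) can generate the same codes from a base32 shared secret. A quick sketch, using a well-known demo secret rather than a real one:

oathtool --totp -b JBSWY3DPEHPK3PXP

Run it twice, more than 30 seconds apart, and you will get two different codes, since the current time is part of the input.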

The TOTP client on a smartphone is safer than many other two-factor authentication methods, like codes sent via SMS or email. Getting hold of someone’s SMS is somewhat troublesome but still possible. An email account can also be compromised – especially if the password is also used in other places.

It’s a good idea to back up all the shared secrets that are used to generate TOTP codes, either in text format or as a screenshot of the QR code. That way, you don’t get locked out of your accounts even if your phone gets lost. Backups of TOTP shared secrets should be stored with the same care as passwords, because anyone who obtains the shared secrets can generate the same codes. Not all services provide the shared secret in text format, which should be considered when choosing a backup method.

Google Authenticator is a popular and user-friendly TOTP client, but it doesn’t have functionality for backups. The user needs to take care of backing up the shared secrets someplace else when adding them to the client. The client has import and export functionality, but it’s suitable only for transferring the data from an old phone to a new one. The old phone generates a QR code for the transfer, but if taking a screenshot is disabled, that can’t easily be utilized as a backup. Google Authenticator is available for Android and iOS.

There are also other TOTP clients, for example Authy, Duo Mobile, FreeOTP, and andOTP. Authy and Duo Mobile have a cloud backup feature. FreeOTP has no backup functionality, but andOTP allows exporting shared secrets in JSON format (plain-text or encrypted) and storing them in a place of your choice.

Preparing Google account for 2FA

Next, I will show you how to enable two-factor authentication in Google services. After that, we will install Google Authenticator and enable 2FA on a Google account. In this guide, I will log in to a Google account with a desktop browser; the process is very similar for other services.

Log in to your Google Account and proceed in the menu to Security > Signing into Google > 2-step verification.

If two-step verification is enabled on your Google account, you should already see an option for Google Authenticator on this page, and you can continue to the next part of this article (Installing Google Authenticator). Otherwise, continue with this part. Google has now opened a window where two-step verification is introduced. You can read it through and then click forward.

On the next page, you need to select the phone you would like to use. If you can’t see your preferred phone there, you need to add the Google account in question to that phone:

Android: Go to Settings > Accounts > Add account. Choose Google and log in.

iOS: Install the Google app and log in with your Google account.

Google doesn’t let you choose Google Authenticator right away. First, you need to enable some other way of two-step verification: Google Prompts (default), security key, text message, or a phone call. Google Prompts is a window that will open on your phone and let you select yes or no. Keep the default, and move forward.

Select ”yes” in the prompt when it appears on your phone. After that, you still need to set a backup option for getting verification codes. You can add a secondary phone number to get into your account even if your phone is lost. You can use your primary phone number here if you promise to back up the shared secret, which we will get to later.

Select the text message or a call, and write the code you get into the window that opens after pressing ”send.” Text messages can sometimes take a long time to arrive, and a phone call could be faster (in this alternative, a robot voice reads you the code).

After that, you are asked one more time if you want to enable 2-step verification. Select yes and continue.

Installing Google Authenticator

Installing Google Authenticator to a smartphone is straightforward. It can be installed just like any other app from the app store. The app is free and will also work offline after the installation.

When opening the app for the first time, it will show you a few introduction slides. Browse through them and press forward. Then you will see the view where accounts (shared secrets) can be added to the app. Google Authenticator is now installed, and it’s ready to receive shared secrets. Leave your phone waiting in this view.

Your desktop browser should now be on the Google account’s 2-step verification page (if not, go to Security > Signing into Google > 2-step verification). Scroll down until you see the Authenticator app and press ”set up.” In the window that opens you can choose your phone’s OS.

After this, a window opens where you can scan the QR code. Choose the option ”scan a barcode” on your phone and scan the code.

After scanning, you can press the link ”Can’t scan it?” to get the shared secret in text format for backup purposes. After the scan, your Google Authenticator should show the new account with a rotating code.

On the right side of the code, you can see a diminishing pie icon, which indicates how long the current code is valid. A new code is generated every 30 seconds by default.

After this, proceed in the browser window and click next. The next page asks you to insert a verification code. Write the code you see in the app and hit verify before a new code is generated.

After this, Google Authenticator has been successfully enabled.

From now on, Google will ask for a verification code when you log in to Google. Just opening the app on your phone is enough to see the code. Other accounts can be added to the app from the plus-sign button in the bottom right.

Please note that if you want to disable two-factor authentication, you must first disable it in Google’s settings (or whichever service is in question). Just removing Google Authenticator or the shared secret does not disable two-factor authentication; you might only end up locking yourself out of your account.

You have now significantly improved the security of your Google account. Next, you should check which other services you use offer two-factor authentication.

Tuesday, 12 May 2020

How fun does my Ring Fit Adventure² look after some time

So after a full “Ring Fit” month – i.e. 30 days played1 – let us see how this workout game held up for me.

Historic context: COVID-19 and Ring Fit Adventure

This might be less interesting for those reading now, but might be relevant information for anyone reading this in a few years.

The first part of the year 2020 has been marked by COVID-19. This disease has since spread to a pandemic, impacting the whole globe, with many countries going into a more or less full lockdown. This situation is slowly changing, as in May many countries in Europe are removing the lockdown.

As everyone who could had to stay at home, the demand for Ring Fit Adventure rose drastically. Not only is it almost impossible to get one, due to it being out of stock pretty much everywhere, there are even rumours that in China you can only get one on the black/grey market for 250 $ (i.e. more than 3x the MSRP).

30 workout days in 76 days

The main question is obviously how much did Ring Fit Adventure get me to work out.

First off, I have to admit that I did not train every work day, as I hoped, but still much more than usual.

One reason I skipped two full weeks was that I had to send in the Joy-Cons for repair, as their notorious drift started to show quite badly. This prevented me from working out, as without Joy-Cons I could not play it.

I took another two-week break from it before that, due to not feeling so well.

Now let us look at some data.

According to Ring Fit Adventure itself:

  • I reached world 9 of the story3
  • my character is now at level 75
  • and my difficulty setting is at level 20
  • the total time I exercised2 is 8h 18'
  • in which I burned 2670 kcal of energy
  • and ran a total of 28 km

Looking at my own log, this is how much I trained in the past few months:

yoga gymnastics rowing Ring Fit
2019-12 3
2020-01 1 1
2020-02 2 10
2020-03 12
2020-04 7
2020-05 3 1

This table needs some comment:

  • I got my Ring Fit Adventure in early February. But as you can see, my training regime was a bit lackluster in the two months before it. It was somewhat better the previous year, but I lost that piece of paper.
  • Rowing is something I did go to last year, but I have not mustered the courage to go back in several months … and then COVID-19 kicked in, so I am not even allowed to go (ha! another good excuse).
  • We are only half-way through May and I just got my joy-cons back from repair yesterday, so I was not able to play Ring Fit Adventure for the whole first half of the month. And the rest of the month is still ahead of me.

In my opinion, this clearly shows that Ring Fit Adventure did trick me into training more often. Including all the hiccups, it took me 76 days to work out on 30 days (or 36, if you ask the game1), which is roughly every second day. As a comparison, to reach my goal of exercising every work day, I would have had to play it on 55 days in the same time frame.

In sum, I am happy with the results, but will try to train more in the future, with the hope to soon be able to go work out with others (e.g. go rowing).

Adventure content so far

As mentioned, in those 30 days I reached World 93 – in fact, I could have taken on the boss today and reached World 10, but decided to instead revisit some old levels and side quests.

Following is a short summary of what Worlds 5 to 9 newly introduce to the game. I have to say that so far every world has brought something new and refreshing, so the gameplay never got stale. In fact, I have the small problem that I now have almost too many fighting skills to choose from. Good thing that by now old fighting skills are getting levelled up, so they are again a reasonable choice to bring to battle.

What also proved as a great feature – especially after a longer break – is that when you start up Adventure mode, it summarises the story so far.

Update: Voices and languages

In the update, they also included more voice languages and the choice for Ring to have either a male or female voice.

I find that a neat addition, and I am currently using a male French voice with English subtitles to brush up on my French while I am working out.

World 5 & 6

  • introduce days that target only specific skills/muscle groups, and do so in a very nice, fun and varied way
  • introduce skill points and a skill tree
  • gyms now also have skill sets – a great and easy way to get some targeted workout in Adventure mode as well

World 7

  • the plot twist is becoming apparent and the story is becoming more varied – while nothing award-winning, it is actually enjoyable
  • unlocks All Sets as a choice for setting fighting skills – i.e. choosing to go to fights with workout skills that focus on a certain muscle group, posture, cardio workout, etc.
  • new enemy that can buff defense

World 8 & 9

  • expand the skill tree, including upgrades to previous fighting skills
  • add a new movement skill that lets you reach alternative paths (also in previous levels), adding to the replayability of past levels; if timed right, it also doubles as battle evasion

Other modes

Since my last blog entry about Ring Fit Adventure, the game has received an update that introduces two new modes:

  • Rhythm game – in this mode you press and pull the RingCon and/or twist your torso to the beat of a song, while also occasionally having to squat or stand up. It seems like a pretty fun party game, especially if they introduce more songs. The only issue I have with it is that I have trouble keeping the RingCon in place when doing the fast twists.
  • Running mode – in this mode, you simply run through a level, without any mob encounters or similar. I have not played this mode much, but for a more lazy day, it seems fitting and relaxing.

Final general thoughts

To start with the negative: in the past weeks my RingCon did not recognise the transition from presses to pulls very well. But that was easily fixed with a simple recalibration from the options menu. I have not had those issues since.

So, after a full Ring Fit month, how do I feel about it?

While it did not make me lose much weight or gain any major muscles, my back has not ached since. Well, except maybe a little bit when I did not train for two weeks. But even then it took only one or two days of playing for it to stop aching again. I also do feel more fit in general.

In that regard I would say it is a great success.

Did it trick me into training more? Perhaps not as often as my ambitious expectations called for, but it has undeniably improved my workout regime immensely.

I do think the story mode is a great way to pull me in, and apart from the fun story and overall atmosphere, I think this is in great part due to the RPG elements. I am not a big fan of the gamification of everything, and “adding RPG elements” is usually just a euphemism for introducing some levels, badges and collectibles to trick the player into investing more time. But in my opinion this game does it right.

That being said, I am continuing to work out using Ring Fit Adventure and will keep bumping up the difficulty to keep my workout at “Moderate Workout”.

hook out → very happy to be able to move and play again


  1. The game claims I have been playing for 36 days, but I suspect this includes non-adventure modes. 

  2. Ring Fit Adventure records only time actually spent exercising. For comparison, my Switch records that I played the game for 20 h. 

  3. Apparently there are 20 worlds in total (encompassing 100 levels all together) with a NG+ and NG++ as well. So with 30 full days, I am probably not even half way through. If the trend continues, the worlds are only going to get longer, not shorter. 

Network Namespaces - Part Two

Previously on… Network Namespaces - Part One we discussed how to create an isolated network namespace and use a veth interface pair to talk between the host system and the namespace.

In this article we continue our story and we will try to connect that namespace to the internet.

recap previous commands

ip netns add ebal
ip link add v-eth0 type veth peer name v-ebal
ip link set v-ebal netns ebal
ip addr add 10.10.10.10/24 dev v-eth0
ip netns exec ebal ip addr add 10.10.10.20/24 dev v-ebal
ip link set v-eth0 up
ip netns exec ebal ip link set v-ebal up

Access namespace

ip netns exec ebal bash

# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

3: v-ebal@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e2:07:60:da:d5:cf brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.10.20/24 scope global v-ebal
       valid_lft forever preferred_lft forever
    inet6 fe80::e007:60ff:feda:d5cf/64 scope link
       valid_lft forever preferred_lft forever

# ip r
10.10.10.0/24 dev v-ebal proto kernel scope link src 10.10.10.20

Ping Veth

It’s not a gateway, this is a point-to-point connection.

# ping -c3 10.10.10.10

PING 10.10.10.10 (10.10.10.10) 56(84) bytes of data.
64 bytes from 10.10.10.10: icmp_seq=1 ttl=64 time=0.415 ms
64 bytes from 10.10.10.10: icmp_seq=2 ttl=64 time=0.107 ms
64 bytes from 10.10.10.10: icmp_seq=3 ttl=64 time=0.126 ms

--- 10.10.10.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2008ms
rtt min/avg/max/mdev = 0.107/0.216/0.415/0.140 ms

ip-netns03

Ping internet

trying to access anything else …

ip netns exec ebal ping -c2 192.168.122.80
ip netns exec ebal ping -c2 192.168.122.1
ip netns exec ebal ping -c2 8.8.8.8
ip netns exec ebal ping -c2 google.com
root@ubuntu2004:~# ping 192.168.122.80
ping: connect: Network is unreachable

root@ubuntu2004:~# ping 8.8.8.8
ping: connect: Network is unreachable

root@ubuntu2004:~# ping google.com
ping: google.com: Temporary failure in name resolution

root@ubuntu2004:~# exit
exit

exit from namespace.

Gateway

We need to define a gateway route from within the namespace

ip netns exec ebal ip route add default via 10.10.10.10

root@ubuntu2004:~# ip netns exec ebal ip route list
default via 10.10.10.10 dev v-ebal
10.10.10.0/24 dev v-ebal proto kernel scope link src 10.10.10.20

test connectivity - system

we can reach the host system, but we cannot reach anything else

# ip netns exec ebal ping -c1 192.168.122.80
PING 192.168.122.80 (192.168.122.80) 56(84) bytes of data.
64 bytes from 192.168.122.80: icmp_seq=1 ttl=64 time=0.075 ms

--- 192.168.122.80 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms

# ip netns exec ebal ping -c3 192.168.122.80
PING 192.168.122.80 (192.168.122.80) 56(84) bytes of data.
64 bytes from 192.168.122.80: icmp_seq=1 ttl=64 time=0.026 ms
64 bytes from 192.168.122.80: icmp_seq=2 ttl=64 time=0.128 ms
64 bytes from 192.168.122.80: icmp_seq=3 ttl=64 time=0.126 ms

--- 192.168.122.80 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2033ms
rtt min/avg/max/mdev = 0.026/0.093/0.128/0.047 ms

# ip netns exec ebal ping -c3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2044ms

root@ubuntu2004:~# ip netns exec ebal ping -c3 google.com
ping: google.com: Temporary failure in name resolution

ip-netns05

Forward

What is the issue here?

We added a default route to the network namespace. Traffic starts from v-ebal (the veth interface inside the namespace), goes to v-eth0 (the veth interface on our system) and then … then nothing.

The eth0 interface receives the network packets but does not know what to do with them. We need to enable forwarding on our host, so that traffic from the namespace will be forwarded to the next hop.

echo 1 > /proc/sys/net/ipv4/ip_forward

or

sysctl -w net.ipv4.ip_forward=1

permanent forward

If we want to permanently enable forwarding, we need to edit /etc/sysctl.conf and add the line below:

net.ipv4.ip_forward = 1

To enable this option without rebooting our system:

sysctl -p /etc/sysctl.conf

verify

root@ubuntu2004:~# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

Masquerade

But if we test again, we will notice that nothing happened. Actually something indeed happened, but not what we expected. At this moment eth0 knows how to forward network packets to the next hop (perhaps the next hop is the router or internet gateway), but the next hop will get a packet from an unknown network.

Remember that our internal address is 10.10.10.20, with a point-to-point connection to 10.10.10.10. So there is no way for the 192.168.122.0/24 network to know how to talk to 10.0.0.0/8.

We have to masquerade all packets that come from 10.0.0.0/8, and the easiest way to do this is via iptables, using the POSTROUTING chain of the nat table. That means outgoing traffic with source 10.0.0.0/8 will be masked: it will pretend to be from 192.168.122.80 (eth0) before going to the next hop (gateway).

# iptables --table nat --flush
iptables --table nat --append POSTROUTING --source 10.0.0.0/8 --jump MASQUERADE
iptables --table nat --list
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  10.0.0.0/8           anywhere
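
Keep in mind that this rule lives only in memory. If you want it to survive a reboot, one way (among others, on ubuntu) is the iptables-persistent package:

apt-get install iptables-persistent
netfilter-persistent save

which stores the current rules under /etc/iptables/rules.v4.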

Test connectivity

test the namespace connectivity again

# ip netns exec ebal ping -c2 192.168.122.80
PING 192.168.122.80 (192.168.122.80) 56(84) bytes of data.
64 bytes from 192.168.122.80: icmp_seq=1 ttl=64 time=0.054 ms
64 bytes from 192.168.122.80: icmp_seq=2 ttl=64 time=0.139 ms

--- 192.168.122.80 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1017ms
rtt min/avg/max/mdev = 0.054/0.096/0.139/0.042 ms

# ip netns exec ebal ping -c2 192.168.122.1
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
64 bytes from 192.168.122.1: icmp_seq=1 ttl=63 time=0.242 ms
64 bytes from 192.168.122.1: icmp_seq=2 ttl=63 time=0.636 ms

--- 192.168.122.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1015ms
rtt min/avg/max/mdev = 0.242/0.439/0.636/0.197 ms

# ip netns exec ebal ping -c2 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=51 time=57.8 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=51 time=58.0 ms

--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 57.805/57.896/57.988/0.091 ms

# ip netns exec ebal ping -c2 google.com
ping: google.com: Temporary failure in name resolution

success

ip-netns06.png

DNS

almost!

If you looked carefully above, pinging an IP works, but name resolution does not.

netns - resolv

Reading ip-netns manual

# man ip-netns | tbl | grep resolv

  resolv.conf for a network namespace used to isolate your vpn you would name it /etc/netns/myvpn/resolv.conf.

we can create a resolver configuration file at this location:
/etc/netns/<namespace>/resolv.conf

mkdir -pv /etc/netns/ebal/
echo nameserver 88.198.92.222 > /etc/netns/ebal/resolv.conf

I am using radicalDNS for this namespace.

Verify DNS

# ip netns exec ebal cat /etc/resolv.conf
nameserver 88.198.92.222

Connect to the namespace

ip netns exec ebal bash

root@ubuntu2004:~# cat /etc/resolv.conf
nameserver 88.198.92.222

root@ubuntu2004:~# ping -c 5 ipv4.balaskas.gr
PING ipv4.balaskas.gr (158.255.214.14) 56(84) bytes of data.
64 bytes from ns14.balaskas.gr (158.255.214.14): icmp_seq=1 ttl=50 time=64.3 ms
64 bytes from ns14.balaskas.gr (158.255.214.14): icmp_seq=2 ttl=50 time=64.2 ms
64 bytes from ns14.balaskas.gr (158.255.214.14): icmp_seq=3 ttl=50 time=66.9 ms
64 bytes from ns14.balaskas.gr (158.255.214.14): icmp_seq=4 ttl=50 time=63.8 ms
64 bytes from ns14.balaskas.gr (158.255.214.14): icmp_seq=5 ttl=50 time=63.3 ms

--- ipv4.balaskas.gr ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 63.344/64.502/66.908/1.246 ms

root@ubuntu2004:~# ping -c3 google.com
PING google.com (172.217.22.46) 56(84) bytes of data.
64 bytes from fra15s16-in-f14.1e100.net (172.217.22.46): icmp_seq=1 ttl=51 time=57.4 ms
64 bytes from fra15s16-in-f14.1e100.net (172.217.22.46): icmp_seq=2 ttl=51 time=55.4 ms
64 bytes from fra15s16-in-f14.1e100.net (172.217.22.46): icmp_seq=3 ttl=51 time=55.2 ms

--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 55.209/55.984/57.380/0.988 ms

bonus - run firefox from within namespace

xterm

start with something simple first, like xterm

ip netns exec ebal xterm

or

ip netns exec ebal xterm -fa liberation -fs 11

ipnetns_xterm.png

test firefox

Trying to run firefox within this namespace will produce an error:

# ip netns exec ebal firefox
Running Firefox as root in a regular user's session is not supported.  ($XAUTHORITY is /home/ebal/.Xauthority which is owned by ebal.)

and xauth info will inform us that the current Xauthority file is owned by our local user:

# xauth info
Authority file:       /home/ebal/.Xauthority
File new:             no
File locked:          no
Number of entries:    4
Changes honored:      yes
Changes made:         no
Current input:        (argv):1

okay, get inside this namespace

ip netns exec ebal bash

and provide a new authority file for firefox

XAUTHORITY=/root/.Xauthority firefox

# XAUTHORITY=/root/.Xauthority firefox

No protocol specified
Unable to init server: Could not connect: Connection refused
Error: cannot open display: :0.0

xhost

xhost provides access control to the Xorg graphical server.
By default it should look like this:

$ xhost
access control enabled, only authorized clients can connect

We can also disable access control

xhost +

but what we need to do is disable access control only for local connections:

xhost +local:
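
When we are done, we can revoke this again with:

xhost -local: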

firefox

and if we do all that

ip netns exec ebal bash -c "XAUTHORITY=/root/.Xauthority firefox"

ipnetns-firefox.png

End of part two

Saturday, 09 May 2020

Network Namespaces - Part One

Have you ever wondered how containers work on the network level? How do they isolate resources and network access? Linux namespaces are the magic behind all this, and in this blog post I will try to explain how to set up your own private, isolated network stack on your linux box.

notes based on ubuntu:20.04, root access.

current setup

Our current setup is similar to this

ip-netns00

List ethernet cards

ip address list

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:ea:50:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.80/24 brd 192.168.122.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feea:5087/64 scope link
       valid_lft forever preferred_lft forever

List routing table

ip route list

default via 192.168.122.1 dev eth0 proto static
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.80

ip-netns01

Checking internet access and dns

ping -c 5 libreops.cc

PING libreops.cc (185.199.111.153) 56(84) bytes of data.
64 bytes from 185.199.111.153 (185.199.111.153): icmp_seq=1 ttl=54 time=121 ms
64 bytes from 185.199.111.153 (185.199.111.153): icmp_seq=2 ttl=54 time=124 ms
64 bytes from 185.199.111.153 (185.199.111.153): icmp_seq=3 ttl=54 time=182 ms
64 bytes from 185.199.111.153 (185.199.111.153): icmp_seq=4 ttl=54 time=162 ms
64 bytes from 185.199.111.153 (185.199.111.153): icmp_seq=5 ttl=54 time=168 ms

--- libreops.cc ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4004ms
rtt min/avg/max/mdev = 121.065/151.405/181.760/24.299 ms

linux network namespace management

In this article we will mainly use the ip netns and ip link commands from the iproute2 package.

so, let us start working with network namespaces.

list

To view the network namespaces, we can type:

ip netns
ip netns list

This will return nothing, an empty list.

help

So let’s quickly view the help of ip-netns:

# ip netns help

Usage:  ip netns list
  ip netns add NAME
  ip netns attach NAME PID
  ip netns set NAME NETNSID
  ip [-all] netns delete [NAME]
  ip netns identify [PID]
  ip netns pids NAME
  ip [-all] netns exec [NAME] cmd ...
  ip netns monitor
  ip netns list-id [target-nsid POSITIVE-INT] [nsid POSITIVE-INT]
NETNSID := auto | POSITIVE-INT

monitor

To monitor in real time any changes, we can open a new terminal and type:

ip netns monitor

Add a new namespace

ip netns add ebal

List namespaces

ip netns list

root@ubuntu2004:~# ip netns add ebal
root@ubuntu2004:~# ip netns list
ebal

We have one namespace

Delete Namespace

ip netns del ebal

Full example

root@ubuntu2004:~# ip netns
root@ubuntu2004:~# ip netns list
root@ubuntu2004:~# ip netns add ebal
root@ubuntu2004:~# ip netns list
ebal
root@ubuntu2004:~# ip netns
ebal
root@ubuntu2004:~# ip netns del ebal
root@ubuntu2004:~#
root@ubuntu2004:~# ip netns
root@ubuntu2004:~# ip netns list
root@ubuntu2004:~#

monitor

root@ubuntu2004:~# ip netns monitor
add ebal
delete ebal

Directory

When we create a new network namespace, it creates an object under /var/run/netns/.

root@ubuntu2004:~# ls -l /var/run/netns/
total 0
-r--r--r-- 1 root root 0 May  9 16:44 ebal

exec

We can run commands inside a namespace.

eg.

ip netns exec ebal ip a

root@ubuntu2004:~# ip netns exec ebal ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

bash

We can also open a shell inside the namespace and run commands through it.
eg.

root@ubuntu2004:~# ip netns exec ebal bash

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

root@ubuntu2004:~# exit
exit

ip-netns02

as you can see, the namespace is isolated from our system. It has only one local interface and nothing else.

We can bring the loopback interface up:

root@ubuntu2004:~# ip link set lo up

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

root@ubuntu2004:~# ip r

veth

The veth devices are virtual Ethernet devices. They can act as tunnels between network namespaces to create a bridge to a physical network device in another namespace, but can also be used as standalone network devices.

Think of a veth pair as a physical cable that connects two different computers. Each veth device is one end of this cable.

So we need 2 virtual interfaces to connect our system and the new namespace.

ip link add v-eth0 type veth peer name v-ebal

ip-netns03

eg.

root@ubuntu2004:~# ip link add v-eth0 type veth peer name v-ebal

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:ea:50:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.80/24 brd 192.168.122.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feea:5087/64 scope link
       valid_lft forever preferred_lft forever

3: v-ebal@v-eth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d6:86:88:3f:eb:42 brd ff:ff:ff:ff:ff:ff

4: v-eth0@v-ebal: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 3e:85:9b:dd:c7:96 brd ff:ff:ff:ff:ff:ff

Attach veth0 to namespace

Now we are going to move one virtual interface (one end of the cable) into the new network namespace:

ip link set v-ebal netns ebal

ip-netns03

We will see that the interface no longer shows up on our system:

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:ea:50:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.80/24 brd 192.168.122.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feea:5087/64 scope link
       valid_lft forever preferred_lft forever

4: v-eth0@if3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 3e:85:9b:dd:c7:96 brd ff:ff:ff:ff:ff:ff link-netns ebal

inside the namespace

root@ubuntu2004:~# ip netns exec ebal ip a 

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

3: v-ebal@if4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d6:86:88:3f:eb:42 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Connect the two virtual interfaces

outside

ip addr add 10.10.10.10/24 dev v-eth0

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:ea:50:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.80/24 brd 192.168.122.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feea:5087/64 scope link
       valid_lft forever preferred_lft forever

4: v-eth0@if3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 3e:85:9b:dd:c7:96 brd ff:ff:ff:ff:ff:ff link-netns ebal
    inet 10.10.10.10/24 scope global v-eth0
       valid_lft forever preferred_lft forever

inside

ip netns exec ebal ip addr add 10.10.10.20/24 dev v-ebal

root@ubuntu2004:~# ip netns exec ebal ip a 

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

3: v-ebal@if4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d6:86:88:3f:eb:42 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.10.20/24 scope global v-ebal
       valid_lft forever preferred_lft forever

Both Interfaces are down!

Both interfaces are down, so we need to bring them up:

outside

ip link set v-eth0 up

root@ubuntu2004:~# ip link set v-eth0 up 

root@ubuntu2004:~# ip link show v-eth0
4: v-eth0@if3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN mode DEFAULT group default qlen 1000
    link/ether 3e:85:9b:dd:c7:96 brd ff:ff:ff:ff:ff:ff link-netns ebal

inside

ip netns exec ebal ip link set v-ebal up

root@ubuntu2004:~# ip netns exec ebal ip link set v-ebal up

root@ubuntu2004:~# ip netns exec ebal ip link show v-ebal
3: v-ebal@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether d6:86:88:3f:eb:42 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Did it work?

Let’s first see our routing table:

outside

root@ubuntu2004:~# ip r
default via 192.168.122.1 dev eth0 proto static
10.10.10.0/24 dev v-eth0 proto kernel scope link src 10.10.10.10
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.80

inside

root@ubuntu2004:~# ip netns exec ebal ip r
10.10.10.0/24 dev v-ebal proto kernel scope link src 10.10.10.20

Ping !

outside

root@ubuntu2004:~# ping -c 5 10.10.10.20
PING 10.10.10.20 (10.10.10.20) 56(84) bytes of data.
64 bytes from 10.10.10.20: icmp_seq=1 ttl=64 time=0.028 ms
64 bytes from 10.10.10.20: icmp_seq=2 ttl=64 time=0.042 ms
64 bytes from 10.10.10.20: icmp_seq=3 ttl=64 time=0.052 ms
64 bytes from 10.10.10.20: icmp_seq=4 ttl=64 time=0.042 ms
64 bytes from 10.10.10.20: icmp_seq=5 ttl=64 time=0.071 ms

--- 10.10.10.20 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4098ms
rtt min/avg/max/mdev = 0.028/0.047/0.071/0.014 ms

inside

root@ubuntu2004:~# ip netns exec ebal bash
root@ubuntu2004:~#
root@ubuntu2004:~# ping -c 5 10.10.10.10
PING 10.10.10.10 (10.10.10.10) 56(84) bytes of data.
64 bytes from 10.10.10.10: icmp_seq=1 ttl=64 time=0.046 ms
64 bytes from 10.10.10.10: icmp_seq=2 ttl=64 time=0.042 ms
64 bytes from 10.10.10.10: icmp_seq=3 ttl=64 time=0.041 ms
64 bytes from 10.10.10.10: icmp_seq=4 ttl=64 time=0.044 ms
64 bytes from 10.10.10.10: icmp_seq=5 ttl=64 time=0.053 ms

--- 10.10.10.10 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4088ms
rtt min/avg/max/mdev = 0.041/0.045/0.053/0.004 ms
root@ubuntu2004:~# exit
exit

It worked !!

ip-netns03
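To recap, here is part one condensed into a handful of commands (a sketch using the same names and addresses as above):

ip netns add ebal
ip netns exec ebal ip link set lo up

ip link add v-eth0 type veth peer name v-ebal
ip link set v-ebal netns ebal

ip addr add 10.10.10.10/24 dev v-eth0
ip netns exec ebal ip addr add 10.10.10.20/24 dev v-ebal

ip link set v-eth0 up
ip netns exec ebal ip link set v-ebal up

# verify connectivity in both directions
ping -c 1 10.10.10.20
ip netns exec ebal ping -c 1 10.10.10.10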

End of part one.

Thursday, 07 May 2020

HamBSD Development Log 2020-05-07

I worked on HamBSD today, still looking at improvements to aprsisd(8). My focus today was on writing unit tests for aprsisd.

I’ve added a few unit tests to test the generation of the TNC2 format packets from AX.25 packets to upload to APRS-IS. There’s still some todo entries there as I’ve not made up packets for all the cases I wanted to check yet.

These are the first unit tests I’ve written for HamBSD and it’s definitely a different experience compared to writing Python unit tests for example. The framework for the tests uses bsd.regress.mk(5). The tests are C programs that include functions from aprsisd.

In order to do this I’ve had to split the function that converts AX.25 packets to TNC2 packets out into a separate file. This is the sort of thing that I’d be more comfortable doing if I had more unit test coverage. It seemed to go OK and hopefully the coverage will improve as I get more used to writing tests in this way.

I also corrected a bug from yesterday where AX.25 3rd-party packets would have their length over-reported, leaking stack memory to RF.

I’ve been reading up on the station capabilities packet and it seems a lot of fields have been added by various software over time. Successful APRS IGate Operation (WB2OSZ, 2017) has a list of some of the fields and where they came from under “IGate Status Beacon” which seems to be what this packet is used for, not necessarily advertising the capabilities of the whole station.

If this packet were to become quite long, there is the possibility of an amplification attack. Someone with a low power transmitter can send an IGate query, and then get the IGate to respond with a much longer packet at higher power. It’s not even clear from the reply why the packet was sent, as the requestor is not mentioned.

I think this will be the first place where I implement some rate limiting and see how that works. Collecting some simple statistics like the number of packets uploaded/downloaded would also be useful for monitoring.

Next steps:

  • Keep track of number of packets uploaded and downloaded
  • Add those statistics to station capabilities packet

Wednesday, 06 May 2020

cloudflared as a doh client with libredns

Cloudflare has released an Argo Tunnel client named: cloudflared. It’s also a DNS over HTTPS (DoH) client and in this blog post, I will describe how to use cloudflared with LibreDNS, a public encrypted DNS service that people can use to maintain the secrecy of their DNS traffic, but also circumvent censorship.

Notes based on ubuntu 20.04, as root

cloudflared.png

Download and install latest stable version

curl -sLO https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-amd64.tgz

tar xf cloudflared-stable-linux-amd64.tgz

ls -l
total 61160
-rwxr-xr-x 1 root root 43782944 May  6 03:45 cloudflared
-rw-r--r-- 1 root root 18839814 May  6 19:42 cloudflared-stable-linux-amd64.tgz

mv cloudflared /usr/local/bin/

check version

# cloudflared --version
cloudflared version 2020.5.0 (built 2020-05-06-0335 UTC)

doh support

# cloudflared proxy-dns --help
NAME:
   cloudflared proxy-dns - Run a DNS over HTTPS proxy server.

USAGE:
   cloudflared proxy-dns [command options]

LibreDNS Endpoints

LibreDNS has two endpoints:

  • dns-query
  • ads

The latter blocks trackers/ads etc.

standalone

We can use cloudflared standalone for testing; here it is on a non-standard TCP port:

cloudflared proxy-dns --upstream https://doh.libredns.gr/ads --port 5454
INFO[0000] Adding DNS upstream                   url="https://doh.libredns.gr/ads"
INFO[0000] Starting DNS over HTTPS proxy server  addr="dns://localhost:5454"
INFO[0000] Starting metrics server               addr="127.0.0.1:41717/metrics"

Testing ads endpoint

$ dig @127.0.0.1 -p 5454 +short analytics.google.com
0.0.0.0
$ dig @127.0.0.1 -p 5454 +short google.com
216.58.210.14
$ dig @127.0.0.1 -p 5454 +short test.libredns.gr
116.202.176.26

conf

We have verified that cloudflared works with libredns, so let us create a configuration file.

By default, cloudflared is trying to find one of the below files (replace root with your user):

  • /root/.cloudflared/config.yaml
  • /root/.cloudflared/config.yml
  • /root/.cloudflare-warp/config.yaml
  • /root/cloudflare-warp/config.yaml
  • /root/.cloudflare-warp/config.yml
  • /root/cloudflare-warp/config.yml
  • /usr/local/etc/cloudflared/config.yml

The most promising file is:

  • /usr/local/etc/cloudflared/config.yml

Create the configuration file

mkdir -pv /usr/local/etc/cloudflared
cat > /usr/local/etc/cloudflared/config.yml << EOF
proxy-dns: true
proxy-dns-upstream:
 - https://doh.libredns.gr/dns-query
EOF

or for ads endpoint

mkdir -pv /usr/local/etc/cloudflared
cat > /usr/local/etc/cloudflared/config.yml << EOF
proxy-dns: true
proxy-dns-upstream:
 - https://doh.libredns.gr/ads
EOF

Testing

# cloudflared

INFO[0000] Version 2020.5.0
INFO[0000] GOOS: linux, GOVersion: go1.12.7, GoArch: amd64
INFO[0000] Flags                                         proxy-dns=true proxy-dns-upstream="https://doh.libredns.gr/ads"
INFO[0000] Adding DNS upstream                           url="https://doh.libredns.gr/ads"
INFO[0000] Starting DNS over HTTPS proxy server          addr="dns://localhost:53"
INFO[0000] Starting metrics server                       addr="127.0.0.1:33519/metrics"
INFO[0000] cloudflared will not automatically update when run from the shell. To enable auto-updates, run cloudflared as a service: https://developers.cloudflare.com/argo-tunnel/reference/service/
$ dig test.libredns.gr +short
116.202.176.26
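If you also want the whole system to resolve through the local proxy, one option is to point /etc/resolv.conf at it. A sketch (on systemd-resolved setups you would reconfigure the stub resolver instead):

cp -v /etc/resolv.conf /etc/resolv.conf.backup   # keep a copy to restore
echo "nameserver 127.0.0.1" > /etc/resolv.conf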

Service

If you are a user of Argo Tunnel and you have a cloudflare account, then you can log in and get your cert.pem key. Then (and only then) you can install cloudflared as a service by:

 cloudflared service install

and you can use /etc/cloudflared or /usr/local/etc/cloudflared/, which must contain two files:

  • cert.pem and
  • config.yml (the above file)

That’s it !

HamBSD Development Log 2020-05-06

I worked on HamBSD today, still looking at improvements to aprsisd(8). My focus today was on gating from the Internet to RF.

In the morning I cleaned up the mess I made yesterday with escaping the non-printable characters in packets before uploading them. I ran some test packets through and both Xastir and aprs.fi could decode them so that must be the correct way to do it.

I also added filtering of generic station queries (?APRS?) and IGate queries (?IGATE?). When an IGate query is seen, aprsisd will now respond with a station capabilities packet. The packet is not very exciting as it only contains the “IGATE” capability right now, but at least it does that.

Third-party packets are also identified, and have their RF headers stripped, and then are unconditionally thrown away. I need to add a check there to see if TCPIP is in the path and gate it if it’s not, but I didn’t get to that today.

I added a new flag to aprsisd, -b, to allow the operator to indicate whether or not this station is a bi-directional IGate station. This currently only affects the q construct added to the path for uploaded packets (there wasn’t one before today) to indicate to APRS-IS whether or not this station can be used for two-way messaging with RF stations in range. Later I’ll make this also drop incoming messages if it’s set to receive-only instead of attempting to gate them anyway.

I noticed that when I connect to aprsc servers with TLS, they have actually been appending a qAS (message generated by server) instead of qAR (message is from RF) which I think is a bug, so I filed a GitHub issue.

The big thing today was generating third-party headers. Until now, aprsisd has tried to stuff the TNC2 header back into an AX.25 packet with a lot of truncated path entries. It’s now building the headers correctly(ish) and it’s possible to have bi-directional messaging through a HamBSD IGate. The path in the encapsulated header is currently entirely missing but it still works.

As this is a completely different way of handling these packets, it meant a rewrite of a good chunk of code. The skeleton is there now, just need to fill it in.

APRS message from MM0ROR: Hello from APRS-IS!

Next steps:

  • Generate proper paths for 3rd-party packets
  • Include a path for RF headers for station capabilities and 3rd-party packets
  • Add the new -b flag to the man page

Tuesday, 05 May 2020

HamBSD Development Log 2020-05-05

I worked on HamBSD today, still looking at improvements to aprsisd(8). My focus today was on converting AX.25 packets to the TNC2 format used by APRS-IS.

I fixed the path formatting to include the asterisks for used path entries. Before, packets would always appear to APRS-IS to have been heard directly, which gave some impressive range statistics for packets that had in fact been through one or two digipeaters.

A little more filtering is now implemented for packets. The control field and PID are verified to ensure the packets are APRS packets.

The entire path for AX.25 packet read from axtap(4) interface to TNC2 formatted string going out the TCP/TLS connection has bounds checks, with almost all string functions replaced with the mem* equivalents.

It wasn’t clear if it’s necessary to escape the non-printable characters in packets before sending to APRS-IS, and it turns out that actually you’re not meant to do that. I’d implemented this with the following (based roughly on how the KISS output escaping works in kiss(4)):

icp = AX25_INFO_PTR(pkt_ax25, pi);  /* cursor into the AX.25 info field */
iep = pkt_ax25 + ax25_len;          /* end of the packet */
while (icp < iep) {
        ibp = icp;                  /* start of the current printable run */
        while (icp < iep) {
                if (!isprint(*icp++)) {
                        icp--;      /* step back to the offending byte */
                        break;
                }
        }
        if (icp > ibp) {
                /* copy the printable run verbatim */
                if (tp + (icp - ibp) > TNC2_MAXLINE)
                        /* Too big for a TNC2 format line */
                        return 0;
                memcpy(&pkt_tnc2[tp], ibp, icp - ibp);
                tp += icp - ibp;
        }
        if (icp < iep) {
                /* escape the non-printable byte as "<0xNN>" */
                if (tp + 6 > TNC2_MAXLINE)
                        /* Too big for a TNC2 format line */
                        return 0;
                pkt_tnc2[tp++] = '<';
                pkt_tnc2[tp++] = '0';
                pkt_tnc2[tp++] = 'x';
                pkt_tnc2[tp++] = hex_chars[(*icp >> 4) & 0xf];
                pkt_tnc2[tp++] = hex_chars[*icp & 0xf];
                pkt_tnc2[tp++] = '>';
                icp++;
        }
}

I can now probably replace this with just a single bounds check and memcpy, but then I need to worry about logging. There is a debug log for every packet, for which I’ll probably just call strvis(3).

This did throw up something interesting though, so maybe this wasn’t a complete waste of time. I noticed that a “<0x0d>” was getting appended to packets coming out of my Yaesu VX-8DE. It turns out that this wasn’t a bug in my code or in aprsc (the APRS-IS server software I was connected to) but it’s actually a real byte that is tagged on the end of every APRS packet generated by the radio’s firmware. I never saw it before because aprsc would interpret this byte (ASCII carriage return) as the end of a packet, so it would just be lost.

Next steps:

  • Removing the non-printable character escaping again
  • Filtering generic APRS queries (to avoid packet floods)
  • Filtering 3rd-party packets

Sunday, 03 May 2020

kwallet-pam >= 5.18.4 and ecryptfs homes

If you are using kwallet-pam to unlock your kwallet wallets *and* have an ecryptfs home, the automatic unlocking probably stopped working for you with Plasma 5.18.4 and you are getting a "Wallet failed to get opened by PAM, error code is -9" in the system log.

Why?

kwallet-pam uses something called a salt file.

Before Plasma 5.18.4 the salt file was read (or created if it didn't exist) in the "authenticate" step of pam. Now that happens in the "open_session" step of pam.

This is very important because during the open_session step the encrypted home is already mounted, while in the authenticate step it is not.

What that means is that before Plasma 5.18.4 there was a /home/youruser/.local/share/kwalletd/kdewallet.salt *outside* your encrypted home (that was created/read on the "authenticate" step).

Now with Plasma >= 5.18.4 the /home/youruser/.local/share/kwalletd/kdewallet.salt is created/read correctly inside your encrypted home like the rest of your files.

This is all nice for new users, but if you have an existing user, the kwallet auto unlocking will stop working.

Why?

Because your wallet was salted with the file that was outside your encrypted home folder; since kwallet-pam can no longer read that file, it fails.

How to fix it?

* Reboot your system
* Login as root (or as a different user)
* See that there is a /home/youruser/.local/share/kwalletd/kdewallet.salt (FILE_A)
* Copy that file somewhere safe
* Now login as the youruser user
* If you have a /home/youruser/.local/share/kwalletd/kdewallet.salt copy it somewhere else safe too (you shouldn't need it but just in case)
* Copy the FILE_A you stashed somewhere safe to /home/youruser/.local/share/kwalletd/kdewallet.salt
* Reboot your system and now everything should work
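The same steps as a shell sketch, assuming the affected user is "youruser" (everything runs from a root shell):

# step 1: while youruser is NOT logged in (encrypted home unmounted),
#         stash the "outside" salt file (FILE_A):
cp -v /home/youruser/.local/share/kwalletd/kdewallet.salt /root/kdewallet.salt.outside

# step 2: log in as youruser so the encrypted home gets mounted, then from
#         the root shell back up the "inside" salt file and restore FILE_A:
cp -v /home/youruser/.local/share/kwalletd/kdewallet.salt /root/kdewallet.salt.inside.bak
cp -v /root/kdewallet.salt.outside /home/youruser/.local/share/kwalletd/kdewallet.salt
chown youruser: /home/youruser/.local/share/kwalletd/kdewallet.salt

# step 3: reboot; auto-unlock should work again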

Hetzner Dedicated Server Reverse DNS + Ansible

Continuing on the path towards all my stuff being managed by Ansible, I’ve figured out a method of managing the reverse DNS entries for subnets on the Hetzner Dedicated Server.

There’s a bunch of Ansible modules for handling Hetzner Cloud, but these servers are managed in Robot which the Cloud API doesn’t cover. Instead, you need to use the Robot Webservice.
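Before wiring this into Ansible, the endpoint can be exercised by hand. A sketch (the IP address is a placeholder; the environment variables are assumed to hold your webservice credentials):

curl -u "$HETZNER_WS_USER:$HETZNER_WS_PASSWORD" \
     "https://robot-ws.your-server.de/rdns/192.0.2.10"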

Ansible does have a module for doing pretty arbitrary things with web APIs though, so using that I’ve got the following playbook figured out to keep the reverse DNS entries in sync:

---
- hosts:
  - vmhost_vm1
  gather_facts: False
  tasks:
  - name: import hetzner webservice credentials
    include_vars:
      file: "hetzner_ws.yml"
  - name: set rdns for hetzner hosts
    block:
    - name: get current rdns entry
      uri:
        user: "{{ hetzner_ws_user }}"
        password: "{{ hetzner_ws_password }}"
        url: "https://robot-ws.your-server.de/rdns/{{ vmip4 }}"
        status_code: [200, 404]
      register: rdns_result
    - name: update incorrect rdns entry
      uri:
        user: "{{ hetzner_ws_user }}"
        password: "{{ hetzner_ws_password }}"
        url: "https://robot-ws.your-server.de/rdns/{{ vmip4 }}"
        method: "POST"
        body_format: form-urlencoded
        body:
          ptr: "{{ inventory_hostname }}"
        status_code: [200, 201]
      when: '"rdns" not in rdns_result.json or inventory_hostname != rdns_result.json.rdns.ptr'
      changed_when: True
    delegate_to: localhost

The host groups this runs on are currently hardcoded as the VMs that live in the Hetzner Dedicated Server. A future iteration of this might use some inventory plugin to look up the subnets that are managed on Hetzner and create a group for all of those. Right now it won’t be setting the reverse DNS for the “router” interface on that subnet, and won’t automatically include new servers’ subnets.

Gathering facts is disabled because all of these run locally. There is at least one VM running on this server that I can’t log in to because I host it for someone else, so running locally is a necessity.

The webservice credentials are stored in an Ansible Vault encrypted YAML file and loaded explicitly. An important note: the webservice username and password are not the same as your regular Robot username and password. You need to create a webservice user in Robot under “Settings; Webservice and app settings”.

If you attempt to authenticate with an incorrect username and password 3 times in a row, your IP address will be blocked for 10 minutes. There are 6 hosts in this group, so I did this a few times before I realised there was a different user account I needed to create. I’d suggest limiting to a single host while you’re testing to get the authentication figured out.
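Running the playbook then looks something like this (a sketch, assuming it is saved as rdns.yml):

ansible-playbook rdns.yml --ask-vault-pass --limit vmhost_vm1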

The actual calls to the webservice take place in a block just to avoid having to specify the delegate_to: localhost twice. The first step looks up the current PTR record, and accepts success if it gives either a 200 or 404 status code (it will be 404 if there is no existing pointer record). The result of this is used to conditionally create/update the PTR record in the next task.

If nothing needs to be done, nothing will be changed and the second task will be skipped. If a change is needed, the second step is successful if either the PTR record is updated (status 200) or created (status 201).

This was actually a lot easier than I thought it would be, and the uri module looks to be really flexible, so I bet there are other things that I could easily integrate into my Ansible playbooks.

Monday, 27 April 2020

Run your CI test with GitLab-Runner on your system

GitLab is a truly wonderful devops platform. It has a complete CI/CD toolchain, it’s opensource (GitLab Community Edition) and it can also be self-hosted. One of its greatest features is GitLab Runner, which is used in the CI/CD pipelines.

The GitLab Runner is also an opensource project written in Go and handles CI jobs of a pipeline. GitLab Runner implements Executors to run the continuous integration builds for different scenarios, and the most used of them is the docker executor, although nowadays most sysadmins are migrating to the kubernetes executor.

I have a few personal projects in GitLab under https://gitlab.com/ebal but I would like to run GitLab Runner locally on my system for testing purposes. GitLab Runner has to register to a GitLab instance, but I do not want to install the entire GitLab application. I want to use the docker executor and run my CI tests locally.

Here are my notes on how to run GitLab Runner with the docker executor. No root access is needed as long as your user is in the docker group. To give a sense of what this blog post covers, the below image will act as a reference.

gitlabrunner.png

GitLab Runner

The docker executor comes in two flavors:

  • alpine
  • ubuntu

In this blog post, I will use the ubuntu flavor.

Get the latest ubuntu docker image

docker pull gitlab/gitlab-runner:ubuntu

Verify

$ docker run --rm -ti gitlab/gitlab-runner:ubuntu --version
Version:      12.10.1
Git revision: ce065b93
Git branch:   12-10-stable
GO version:   go1.13.8
Built:        2020-04-22T21:29:52+0000
OS/Arch:      linux/amd64

exec help

We are going to use the exec command to spin up the docker executor. With exec we will not need to register with a token.

$ docker run --rm -ti gitlab/gitlab-runner:ubuntu exec --help

Runtime platform arch=amd64 os=linux pid=6 revision=ce065b93 version=12.10.1
NAME:
   gitlab-runner exec - execute a build locally

USAGE:
   gitlab-runner exec command [command options] [arguments...]

COMMANDS:
     shell       use shell executor
     ssh         use ssh executor
     virtualbox  use virtualbox executor
     custom      use custom executor
     docker      use docker executor
     parallels   use parallels executor

OPTIONS:
   --help, -h  show help

Git Repo - tmux

Now we need to download the git repo we would like to test. Inside the repo, a .gitlab-ci.yml file should exist. The gitlab-ci file describes the CI pipeline, with all the stages and jobs. In this blog post, I will use a simple repo that builds the latest version of tmux for centos6 & centos7.

git clone https://gitlab.com/rpmbased/tmux.git
cd tmux

Docker In Docker

GitLab Runner will itself run inside a docker container, and its docker executor needs to communicate with our local docker service to spawn the CentOS docker image and run the CI job.

So we need to pass the docker socket from our local docker service to GitLab Runner docker container.

To test dind (docker-in-docker) we can try one of the below commands:

docker run --rm -ti \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker:latest sh

or

docker run --rm -ti \
    -v /var/run/docker.sock:/var/run/docker.sock \
    ubuntu:20.04 bash

Limitations

There are some limitations of gitlab-runner exec:

  • stages: we can not run them
  • artifacts: we can not download them

Jobs

So we have to adapt. As we can not run stages, we will tell gitlab-runner exec to run one specific job.
In the tmux repo, build-centos-6 is the build job for centos6 and build-centos-7 for centos7.

Artifacts

GitLab Runner will use the /builds as the build directory. We need to mount this directory as read-write to a local directory to get the artifact.

mkdir -pv artifacts/

The docker executor has many docker options, there are options to setup a different cache directory. To see all the docker options type:

$ docker run --rm -ti gitlab/gitlab-runner:ubuntu exec docker --help | grep docker

Bash Script

We can put everything from above to a bash script. The bash script will mount our current git project directory to the gitlab-runner, then with the help of dind it will spin up the centos docker container, passing our code and gitlab-ci file, run the CI job and then save the artifacts under /builds.

#!/bin/bash

# This will be the directory to save our artifacts
rm -rf artifacts
mkdir -p artifacts

# JOB="build-centos-6"
JOB="build-centos-7"

DOCKER_SOCKET="/var/run/docker.sock"

docker run --rm                              \
  -v "$DOCKER_SOCKET":"$DOCKER_SOCKET"       \
  -v "$PWD":"$PWD"                           \
  --workdir "$PWD"                           \
  gitlab/gitlab-runner:ubuntu                \
  exec docker                                \
      --docker-volumes="$PWD/artifacts":/builds:rw \
      $JOB

That’s it.

You can try this with your own gitlab repos, but don’t forget to edit the gitlab-ci file accordingly, if needed.

Full example output

Last, but not least, here is the entire walkthrough

ebal@myhomepc:tmux(master)$ git remote -v
origin   git@gitlab.com:rpmbased/tmux.git (fetch)
origin   git@gitlab.com:rpmbased/tmux.git (push)
$ ./gitlab.run.sh

Runtime platform           arch=amd64 os=linux pid=6 revision=ce065b93 version=12.10.1
Running with gitlab-runner 12.10.1 (ce065b93)
Preparing the "docker" executor
Using Docker executor with image centos:6 ...
Pulling docker image centos:6 ...
Using docker image sha256:d0957ffdf8a2ea8c8925903862b65a1b6850dbb019f88d45e927d3d5a3fa0c31 for centos:6 ...
Preparing environment
Running on runner--project-0-concurrent-0 via 42b751e35d01...
Getting source from Git repository
Fetching changes...
Initialized empty Git repository in /builds/0/project-0/.git/
Created fresh repository.
From /home/ebal/gitlab-runner/tmux
 * [new branch]      master     -> origin/master
Checking out 6bb70469 as master...

Skipping Git submodules setup
Restoring cache
Downloading artifacts
Running before_script and script
$ export -p NAME=tmux
$ export -p VERSION=$(awk '/^Version/ {print $NF}' tmux.spec)
$ mkdir -p rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
$ yum -y update &> /dev/null
$ yum -y install rpm-build curl gcc make automake autoconf pkg-config &> /dev/null
$ yum -y install libevent2-devel ncurses-devel &> /dev/null
$ cp $NAME.spec                  rpmbuild/SPECS/$NAME.spec
$ curl -sLo rpmbuild/SOURCES/$NAME-$VERSION.tar.gz   https://github.com/tmux/$NAME/releases/download/$VERSION/$NAME-$VERSION.tar.gz
$ curl -sLo rpmbuild/SOURCES/bash-it.completion.bash https://raw.githubusercontent.com/Bash-it/bash-it/master/completion/available/bash-it.completion.bash
$ rpmbuild --define "_topdir ${PWD}/rpmbuild/" --clean -ba rpmbuild/SPECS/$NAME.spec &> /dev/null
$ cp rpmbuild/RPMS/x86_64/$NAME*.x86_64.rpm $CI_PROJECT_DIR/
Running after_script
Saving cache
Uploading artifacts for successful job
Job succeeded

artifacts

and here is the tmux-3.1-1.el6.x86_64.rpm

$ ls -l artifacts/0/project-0
total 368
-rw-rw-rw- 1 root root    374 Apr 27 09:13 README.md
drwxr-xr-x 1 root root     70 Apr 27 09:17 rpmbuild
-rw-r--r-- 1 root root 365836 Apr 27 09:17 tmux-3.1-1.el6.x86_64.rpm
-rw-rw-rw- 1 root root   1115 Apr 27 09:13 tmux.spec

docker processes

if we run docker ps -a from another terminal, we see something like this:

$ docker ps -a
CONTAINER ID  IMAGE                        COMMAND                  CREATED        STATUS                   PORTS  NAMES
b5333a7281ac  d0957ffdf8a2                 "sh -c 'if [ -x /usr…"   3 minutes ago  Up 3 minutes                    runner--project-0-concurrent-0-e6ee009d5aa2c136-build-4
70491d10348f  b6b00e0f09b9                 "gitlab-runner-build"    3 minutes ago  Exited (0) 3 minutes ago        runner--project-0-concurrent-0-e6ee009d5aa2c136-predefined-3
7be453e5cd22  b6b00e0f09b9                 "gitlab-runner-build"    4 minutes ago  Exited (0) 4 minutes ago        runner--project-0-concurrent-0-e6ee009d5aa2c136-predefined-2
1046287fba5d  b6b00e0f09b9                 "gitlab-runner-build"    4 minutes ago  Exited (0) 4 minutes ago        runner--project-0-concurrent-0-e6ee009d5aa2c136-predefined-1
f1ebc99ce773  b6b00e0f09b9                 "gitlab-runner-build"    4 minutes ago  Exited (0) 4 minutes ago        runner--project-0-concurrent-0-e6ee009d5aa2c136-predefined-0
42b751e35d01  gitlab/gitlab-runner:ubuntu  "/usr/bin/dumb-init …"   4 minutes ago  Up 4 minutes                    vigorous_goldstine

Sunday, 26 April 2020

Upgrading from Ubuntu 18.04 LTS to Ubuntu 20.04 LTS

Server Edition

disclaimer: at this moment the upgrade to 20.04 LTS is not yet offered “officially” for servers, so we will use the development 20.04 release.

Maintenance

If this is a production server, do not forget to inform customers/users/clients that this machine is under maintenance before you start.

backup

When was the last time you took a backup?
Now is a good time.
Try to verify your backup; otherwise do not proceed.
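A minimal sketch of what that could look like (backuphost and paths are placeholders):

dpkg --get-selections > /root/pkg-selections.txt   # record installed packages
rsync -a /etc /root/pkg-selections.txt backuphost:/backup/ubuntu1804/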

Update your current system

Before continuing with the dist upgrade to 20.04 LTS, we need to update & upgrade our current LTS version.

Login to your system:

~> ssh ubuntu1804

apt update
apt -y upgrade

reboot is necessary.

update

root@ubuntu:~# apt update
Hit:1 http://gr.archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://gr.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:3 http://gr.archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:4 http://gr.archive.ubuntu.com/ubuntu bionic-security InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
51 packages can be upgraded. Run 'apt list --upgradable' to see them.

upgrade

# apt -y upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  bsdutils distro-info-data dmidecode fdisk grub-common grub-pc grub-pc-bin grub2-common landscape-common libblkid1 libfdisk1 libmount1 libnss-systemd
  libpam-systemd libsmartcols1 libsystemd0 libudev1 libuuid1 linux-firmware mount open-vm-tools python3-update-manager sosreport systemd systemd-sysv udev
  unattended-upgrades update-manager-core util-linux uuid-runtime
51 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 85.6 MB of archives.
After this operation, 751 kB of additional disk space will be used.
Get:1 http://gr.archive.ubuntu.com/ubuntu bionic-updates/main amd64 bsdutils amd64 1:2.31.1-0.4ubuntu3.6 [60.3 kB]
...

reboot

# reboot

Do release upgrade

root@ubuntu:~# which do-release-upgrade
/usr/bin/do-release-upgrade

help

do-release-upgrade --help
root@ubuntu:~# do-release-upgrade --help
Usage: do-release-upgrade [options]

Options:
  -h, --help            show this help message and exit
  -V, --version         Show version and exit
  -d, --devel-release   If using the latest supported release, upgrade to the
                        development release
  --data-dir=DATA_DIR   Directory that contains the data files
  -p, --proposed        Try upgrading to the latest release using the upgrader
                        from $distro-proposed
  -m MODE, --mode=MODE  Run in a special upgrade mode. Currently 'desktop' for
                        regular upgrades of a desktop system and 'server' for
                        server systems are supported.
  -f FRONTEND, --frontend=FRONTEND
                        Run the specified frontend
  -c, --check-dist-upgrade-only
                        Check only if a new distribution release is available
                        and report the result via the exit code
  --allow-third-party   Try the upgrade with third party mirrors and
                        repositories enabled instead of commenting them out.
  -q, --quiet

do-release-upgrade

# do-release-upgrade -m server
root@ubuntu:~# do-release-upgrade -m server
Checking for a new Ubuntu release
There is no development version of an LTS available.
To upgrade to the latest non-LTS development release
set Prompt=normal in /etc/update-manager/release-upgrades.

server

do-release-upgrade -m server -d
root@ubuntu:~# do-release-upgrade -m server -d
Checking for a new Ubuntu release
Get:1 Upgrade tool signature [1,554 B]

Get:2 Upgrade tool [1,344 kB]

Fetched 1,346 kB in 0s (0 B/s)

authenticate 'focal.tar.gz' against 'focal.tar.gz.gpg'
extracting 'focal.tar.gz'

at this moment, we will switch to a gnu/screen session

Reading cache

Checking package manager

Continue running under SSH?

This session appears to be running under ssh. It is not recommended
to perform a upgrade over ssh currently because in case of failure it
is harder to recover.

If you continue, an additional ssh daemon will be started at port
'1022'.
Do you want to continue?

Continue [yN]

Press: y

Starting additional sshd

To make recovery in case of failure easier, an additional sshd will
be started on port '1022'. If anything goes wrong with the running
ssh you can still connect to the additional one.
If you run a firewall, you may need to temporarily open this port. As
this is potentially dangerous it's not done automatically. You can
open the port with e.g.:
'iptables -I INPUT -p tcp --dport 1022 -j ACCEPT'

To continue please press [ENTER]

Press Enter

update repos

Reading package lists... Done
Building dependency tree
Reading state information... Done
Hit http://gr.archive.ubuntu.com/ubuntu bionic InRelease
Get:1 http://gr.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]

Get:2 http://gr.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]

Get:3 http://gr.archive.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:4 http://gr.archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [916 kB]
Fetched 1,168 kB in 0s (0 B/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done

Updating repository information
Get:1 http://gr.archive.ubuntu.com/ubuntu focal InRelease [265 kB]
...

...
Get:32 http://gr.archive.ubuntu.com/ubuntu focal-security/multiverse amd64 c-n-f Metadata [116 B]
Fetched 57.3 MB in 6s (1,247 kB/s)

Checking package manager
Reading package lists... Done
Building dependency tree
Reading state information... Done

Calculating the changes

Calculating the changes

Do you want to start the upgrade?

3 packages are going to be removed. 105 new packages are going to be
installed. 428 packages are going to be upgraded.

You have to download a total of 306 M. This download will take about
3 minutes with your connection.

Installing the upgrade can take several hours. Once the download has
finished, the process cannot be canceled.

 Continue [yN]  Details [d]

Press y

(or review by pressing d )

Fetching packages

Fetching

...
Get:3 http://gr.archive.ubuntu.com/ubuntu focal/main amd64 libcrypt1 amd64 1:4.4.10-10ubuntu4 [78.2 kB]
Get:4 http://gr.archive.ubuntu.com/ubuntu focal/main amd64 libc6 amd64 2.31-0ubuntu9 [2,713 kB]
...

services

at some point a question will pop up:

  • Restart services during package upgrade without asking ?

I answered Yes but you should answer this the way you prefer.

f782394f.png

patience is a virtue

Get a coffee or tea. Read a magazine.

Patience is a virtue

till you see a jumping animal.

resolved

Configuration file '/etc/systemd/resolved.conf'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** resolved.conf (Y/I/N/O/D/Z) [default=N] ?

I answered this Y, I will change it later.

vim

same here

Configuration file '/etc/vim/vimrc'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** vimrc (Y/I/N/O/D/Z) [default=N] ? Y

ssh conf

b302c319.png

Remove obsolete packages

and finally

Progress: [ 99%]
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Processing triggers for initramfs-tools (0.136ubuntu6) ...
update-initramfs: Generating /boot/initrd.img-5.4.0-26-generic
Processing triggers for dbus (1.12.16-2ubuntu2) ...
Reading package lists... Done
Building dependency tree
Reading state information... Done

Searching for obsolete software
Reading state information... Done

Remove obsolete packages?

59 packages are going to be removed.

 Continue [yN]  Details [d]

Press y to continue

Restart

are you ready to restart your machine ?

System upgrade is complete.

Restart required

To finish the upgrade, a restart is required.
If you select 'y' the system will be restarted.

Continue [yN]

Press y to restart

LTS 20.04

Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-26-generic x86_64)

  System information as of Sun 26 Apr 2020 10:34:43 AM UTC

  System load:  0.52               Processes:               135
  Usage of /:   24.9% of 19.56GB   Users logged in:         0
  Memory usage: 3%                 IPv4 address for enp1s0: 192.168.122.77
  Swap usage:   0%

 * Ubuntu 20.04 LTS is out, raising the bar on performance, security,
   and optimisation for Intel, AMD, Nvidia, ARM64 and Z15 as well as
   AWS, Azure and Google Cloud.

     https://ubuntu.com/blog/ubuntu-20-04-lts-arrives

0 updates can be installed immediately.
0 of these updates are security updates.

Last login: Sun Apr 26 07:50:39 2020 from 192.168.122.1
$ cat /etc/os-release

NAME="Ubuntu"
VERSION="20.04 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
Tag(s): ubuntu, 18.04, 20.04, LTS

Saturday, 25 April 2020

Ubuntu Server 20.04 LTS walkthrough

basic server installation

01_ubuntu_2004.png

02_ubuntu_2004.png

03_ubuntu_2004.png

04_ubuntu_2004.png

05_ubuntu_2004.png

06_ubuntu_2004.png

07_ubuntu_2004.png

08_ubuntu_2004.png

09_ubuntu_2004.png

10_ubuntu_2004.png

11_ubuntu_2004.png

12_ubuntu_2004.png

13_ubuntu_2004.png

14_ubuntu_2004.png

15_ubuntu_2004.png

16_ubuntu_2004.png

17_ubuntu_2004.png

18_ubuntu_2004.png

19_ubuntu_2004.png

20_ubuntu_2004.png

Tag(s): ubuntu, 20.04

Monday, 20 April 2020

👾 Game-streaming without the "cloud" 🌩️

With increasing bandwidths, live-streaming of video games is becoming more and more popular – and might further accelerate the demise of the desktop computer. Most options are “cloud-gaming” services based on subscriptions where you don’t own the games and are likely to be tracked and monetised for your data. In this blog post I present the solution I built at home to replace my “living room computer”.

TL; DR

I describe how to stream a native Windows game (Divinity Original Sin 2 – installed DRM-free from GOG) via Steam Remote Play from my Desktop (running Devuan GNU/Linux) in 4K resolution and maximum quality directly to my TV (running Android). On the TV I then play the game with my flatmate using two controllers: one Xbox Controller connected via USB, one Steam-Controller connected via Bluetooth. The process of getting there is not trivial or flawless, but gaming itself works perfectly without artefacts or lag.

(The computer in the above photo is not used any longer.)

Motivation

The big “cloud” streaming services for audio and video are incredibly convenient. In my opinion they address the following problems very well:

  1. They make content available on hardware that wouldn’t have the capability to store the content.
  2. This includes all devices of the user, not just a single (desktop/laptop) computer.
  3. They provide access to a lot of content at a fixed monthly price (especially useful for people who haven’t had the possibility to build up their “library”).

On the other hand, they usually force you to use software that is non-free and/or not trustworthy (if they support your OS at all). You risk losing access to all content when the subscription is terminated. And – often overlooked – they monitor you very closely: what you watch/listen to, when and where you do it, when you press pause and for how long etc. I suspect that at some point this data will be more valuable than the income generated from subscription fees, but this is not important here.

It was only a matter of time before video-gaming would also become part of the streaming business and there are now multiple contenders: Google Stadia and Playstation Now are typical “streaming services” like Netflix for video; the games are included in the monthly fee. They have all the benefits and problems discussed above. Other services like GeForce Now and Blade Shadow provide computation and streaming infrastructure but leave it to you to provide the games (manually or via gaming platforms). This has slightly different implications, but I don’t want to discuss these further, because I haven’t used any of them and likely won’t in the future.

In any case, I am of course not a big fan of being tracked and anything I can set up with my own infrastructure at home, I try to do! [I actually don’t really spend that much time gaming nowadays, but setting this up was still fun]. In the past I have had an extra (older) computer beside the TV that I used for casual gaming. This has become too old (and noisy) to play current games, so instead I want to stream the game from my own desktop computer in a different room.

Disclaimer: It should be noted that this “only” provides feature 2. above, i.e. convergence of different devices. Also this setup includes using various non-free software components, i.e. it also does not solve all of the problems mentioned above.

Setting up the host

The “host” is the computer that renders the game, in my case the desktop computer. Before attempting to stream anything, we need to make sure that everything works on the host, i.e. we need to be able to play the game, use the controllers etc.

Hardware

CPU:  AMD Ryzen 6c/12t @ 3.6GHz
RAM:  32GB
GPU:  GeForce RTX 2060S

Since the host needs to render the game, it needs decent hardware. Note that it needs to be able to encode the video-stream at the same time as rendering the game which means the requirements are even higher than usual. But of course all of this also depends on the exact games that you are playing and which resolution you want to stream. My specs are shown above.

The game

I chose Divinity Original Sin 2 for this article because that’s what I am playing right now and because I wanted to demonstrate that this even works with games that are not native to Linux (although I usually don’t buy games that don’t have native versions). If you buy it, I recommend getting it on GOG, because games are DRM-free there (they work without internet connection and stay working even if GOG goes bankrupt). The important thing here is that the game does not need to be a Steam game even though we will use Steam Remote Play for streaming. Buying it on Steam will make the process a little easier though.

Install the game using wine (wine in Devuan/Debian repos). I assume that you have installed it to /games/DOS2 and that you have set up a “windows disk G:" for /games. If not, adjust paths accordingly.
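A sketch of those two steps (the installer filename is a placeholder):

ln -s /games ~/.wine/dosdevices/g:   # map /games as windows drive G:
wine setup_dos2.exe                  # install the game, choosing G:\DOS2 as target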

Steam

If you haven’t done so already, install Steam. It is available in Debian/Devuan non-free repositories as steam but will install and self-update itself in a hidden subfolder of your home-directory upon first start. You need a steam-account (free) and you need to be logged in for everything to work. I really dislike this and it means that Steam quite likely does gather data about you. I suspect that using non-Steam games makes it more difficult to track you, but I have not done any research on this. See the end of this post for possible alternatives.
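The installation itself is straightforward; a sketch (assumes the non-free section is enabled in your apt sources):

dpkg --add-architecture i386   # steam is packaged for i386 on Debian/Devuan
apt update
apt install steam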

The first important thing to know about Steam on Linux is that it ships many system libraries, but it doesn’t ship everything that it needs and it gives you no diagnostic about missing stuff nor are UI elements correctly disabled when the respective feature is not available. This includes support for hardware video encoding and for playing windows games.

Hardware-video encoding is a feature you really want because it reduces latency and load on your CPU. For reasons I don’t understand, Steam uses neither libva (the standard interface for video acceleration on free operating systems) nor vdpau (NVIDIA-specific but also free/open). Instead it uses the proprietary NVENC interface. On Debian / Devuan this has been patched out of all the libraries and applications, so you need to make sure that you get your libraries and apps like ffmpeg from the Debian multimedia project which has working versions. I am not entirely sure which set of libraries/apps is required; for me it was sufficient to install libnvidia-encode1 and do an apt upgrade after adding the debian multimedia repo. Note that only installing libnvidia-encode1 from the official repo was not sufficient. See the troubleshooting section on how to diagnose problems with hardware video encoding.
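Condensed, that step looks roughly like this (adding the deb-multimedia repository itself is not shown):

apt install libnvidia-encode1
apt upgrade   # pulls in the NVENC-enabled ffmpeg/libav* builds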

To play Windows games with Steam, you can use Steam’s builtin windows emulator called Proton (here’s a current article on it). It’s a fork of Wine with additional patches (most of which are upstreamed to official wine later). Unfortunately it is not installed after a fresh install of Steam on Linux and I found no explicit way of installing it (the interface still suggests it’s there, though!). To get it, select “Enable Steam Play for all titles” under the “Advanced” “Steam Play Settings” in the settings. This activates usage of Proton for Windows games not officially supported. Then install the free Windows game 1982 from inside Steam. This will automatically install Proton, which is then listed as an installed application and updated automatically in the future. You can try the game to make sure Proton works as expected. Alternatively, buy the actual game on Steam and skip the next paragraph.

Now go to “Games” → “Add a Non-Steam Game to my Library…", then go to “BROWSE”, show all extensions and find the executable file of the game. In our case this is /games/DOS2/DefEd/bin/EoCApp.exe or G:\DOS2\DefEd\bin\EoCApp.exe (yes, it’s not in the top-level directory of the game). If your path contains spaces, it will break (without diagnostics). To fix this, simply edit the shortcut created for the game again and make sure the path is right and “set launch options” is empty (part of your path may have ended up there). In this dialog you can also explicitly state that you wish to use Proton to run the game (confusingly it will show multiple versions even if none of those are installed). You can also give the game a nicer name or icon if desired.

Test the game

You are now ready to test the game. Simply click on the respective button. Now is also a good time for testing the controller(s), because if they don’t work now, they likely won’t later on. There are many tutorials on using the (Steam) controller on Linux and there are also some notes in the troubleshooting section.

This is also a good point in time to update the firmware of the steam controller to the newest version; we will need that later on.

Setting up the client

The client is my TV, because it runs Android TV and there is a SteamLink application for Android. If your TV has a different OS, you could probably use a small set-top-box built around a cheap embedded board and run Android (or Linux) as the client OS on that. I suppose all of this would be possible on an Android tablet as well. I recommend connecting the TV to the host via wired network, but WiFi works at lower resolutions, too.

Controllers and Android

The Xbox controller is just plugged into the USB-port of the TV and required no further configuration. The Steam controller was a little more tricky, and I had no luck getting it to work via USB. However, it comes with Bluetooth support (only after upgrading the firmware!).

Establishing the initial Bluetooth connection between the TV and the controller was surprisingly difficult. Start the Steam-controller with Steam+Y pressed or alternatively with Steam+B pressed and only do so after initiating device search on the TV. If it does not work immediately, keep trying! After a few attempts, the TV should state that it found the device; it calls it SteamController but recognises it as a Keyboard+Mouse. That’s ok.

After the connection has been established once, the controller will auto-connect when turned off and on again (do not press Y or B during start!). The controller’s mouse feature is surprisingly useful in the Android TV when you use Android apps that are not optimised for the TV.

You need to install the SteamLink application from GooglePlay or through another channel like the Aurora Store (my Android TV is not associated with a Google account so I cannot use GooglePlay). After opening the app, Android will ask whether it should allow the app to access the Xbox controller, which you have to agree to every time (the “do this in the future” checkbox has no effect). Confusingly the TV then notifies you that the controller has been disconnected. This just means that the app controls it, I think. The app takes control of the Steam controller without asking and switches it from Keyboard+Mouse mode into Controller mode, so you can use the Joystick to navigate the buttons in the app.

It should automatically detect Steam running on the host and offer to make a connection. Press A on the controller or select “Start Playing”. The connection has to be verified once on the host for obvious reasons, but if everything works well you should now see the “Steam Big Picture Mode” interface on you TV. Your Desktop has switched to this at the same time (in general, the TV will now mirror your desktop). Maybe first try a native Steam game like the aforementioned “1982” to see if everything works.

Next try to start the Game we setup above via its regular entry in your Steam Library. Note that upon starting, the screen will initially flash black and return to Steam; the game is starting in the background, do not start it again, it just needs a second!

Everything should work now! If you hear audio but your screen stays black and shows a “red antenna symbol”, see the troubleshooting section below.

You can press the Steam-Button to return to Steam (although this sometimes breaks for non-native Steam games). You can also long-press the “back/select”-button on the controller to get a SteamLink specific menu provided by the Client. It can be used to launch an on-screen keyboard and force-quit the connection to the host and return to the SteamLink interface.

Video resolutions and codecs

If everything has worked so far, you are likely playing in 1080p. To change this to 4k resolution, go to the SteamLink app’s settings (wheel symbol), then to “streaming settings” and then to “advanced”. You can increase the resolution limit there and also enable “HEVC Video”, which improves video quality and reduces bitrate. If your desktop does not support streaming HEVC, SteamLink will not establish a connection at all. See the troubleshooting section below if you get a black screen and the “red antenna symbol”.

Alternatives

The only viable alternative to Steam’s Remote Play for streaming your own games from your own hardware is NVIDIA GameStream – not to be confused with NVIDIA GeForce Now, the “cloud gaming” service. The advantages of GameStream over Steam Remote Play seem to be the following:

  1. The protocol is documented and there are good Free and Open Source client implementations.
  2. It claims better performance by tying closer into the host’s drivers.
  3. No online-connection or sign-up required like with Steam.

Also NVIDIA is a hardware company and even if the software is proprietary, it might be less likely to spy on you. The disadvantages are:

  1. The host software is only available for Windows.
  2. Only works with NVIDIA GPUs on the host machine, not AMD.

I could not try this, because I don’t have a Windows host and I wanted to stream from GNU/Linux.

Post scriptum

As you have seen, the process really still has some rough edges, but I am honestly quite surprised that I managed to set this up and that it works with really good quality in the end. Although I don’t like Steam because of its DRM, I have to admit that it’s impressive how much work they have put into improving the driver situation on GNU/Linux and supporting setups like the one discussed here. Consider that I didn’t even buy a game from Steam!

I would still love to see a host implementation of NVIDIA GameStream that runs on GNU/Linux. Even better would of course be a fully Free and Open Source solution.

On a side note: the setup I showed here can be used to stream any kind of application, even just the regular desktop if you want to (check out the advanced client options!).

Hopefully Steam Remote Play (and NVIDIA GameStream) can delay the full transition to “cloud gaming” a little.

Troubleshooting

No hardware video encoding

To see if Steam is actually using your hardware encoder, look in the following log file:

~/.steam/debian-installation/logs/streaming_log.txt
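
If you only care about that one line, a quick grep on the same file does the job:

% grep CaptureDescriptionID ~/.steam/debian-installation/logs/streaming_log.txt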

You should have something like:

"CaptureDescriptionID"  "Desktop OpenGL NV12 + NVENC HEVC"

The NVENC part is important. If you instead have the following:

"CaptureDescriptionID"  "Desktop OpenGL NV12 + libx264 main (4 threads)"

It means you have software encoding. Play around with ffmpeg to verify that NVENC works on your system. The following two commands should work:

% ffmpeg -h encoder=h264_nvenc
% ffmpeg -h encoder=hevc_nvenc

The second one is for the HEVC codec. If you are told that the selected encoder is not available, something is broken. Check to see if you correctly upgraded to the libraries from the Debian multimedia project.
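
To go one step further, you can try an actual test encode. This is just a generic ffmpeg smoke test with a synthetic input (not something Steam itself runs); if NVENC is healthy, it should produce a short clip without errors:

% ffmpeg -f lavfi -i testsrc2=size=1920x1080:rate=60 -t 5 -c:v h264_nvenc /tmp/nvenc_test.mp4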

Controller not working at all on host

Likely your user account does not have permission to access the device. It worked for me after making sure that my user was in the following groups:

audio dip video plugdev games scanner netdev input
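
A sketch for adding them all in one go (group names as above; I have not verified which of them is the minimal set, input is the usual suspect). Log out and back in afterwards:

% sudo usermod -aG audio,dip,video,plugdev,games,scanner,netdev,input $USER
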
Controller working in Steam but not in game

When you start Steam but leave Big Picture mode (Alt+Tab or minimise), Steam usually switches the controller config back to “Desktop mode”, which means Keyboard+Mouse emulation. If a game is started in this state, it will not detect any controller. The same thing seems to happen for certain non-Steam games started through Steam. A workaround is to go into the Steam controller settings and select the “Gamepad” configuration as the default for “Desktop mode” as well.

Black screen and "red antenna symbol"

This happens when there is a resolution mismatch somewhere. Note that there are multiple places and layers where the resolution can be set, and that Steam doesn’t always manage to sync these: the host (operating system, Steam, in-game) and the client (operating system, SteamLink). Additionally, Steam can apparently stream at a lower resolution than is set on either host or client.

I initially had this problem when streaming from a 4k host onto a SteamLink app that was configured to only accept 1080p. Manually changing the host resolution to 1080p before starting Steam solved this problem.

Now that everything (host, in-game, SteamLink on the client) is configured to 4k, I still get this problem, because apparently Steam still attempts to reduce the transmission resolution when starting the game. For obscure reasons the following workaround is possible: after starting the game, long-press “back/select” on the controller to get to the SteamLink menu and select “Stop Game” there. This fixes the black screen and the regular game screen appears at 4k resolution. I suspect that “Stop Game” terminates Steam’s wrapper around the game’s process, which is what screws up the resolution. The devil knows why this does not terminate the game itself.

SteamLink produces black screen and nothing else

When switching in and out of SteamLink via Android (e.g. with the TV remote), SteamLink sometimes doesn’t recover. Probably something goes wrong when the app is put into standby, but it’s also not something you would typically do; just quit the app properly instead.

In any case, only a full reboot of the TV seems to fix this and make SteamLink usable again.

Sunday, 19 April 2020

Terminator 1.92 released

  • Mäh?
  • 11:58, Sunday, 19 April 2020

Do you still remember the project Terminator? People around the world are still using this tool as their terminal emulator of choice for Linux- and Unix-based systems (including Mac OS).

Unfortunately, development has stagnated a bit since 2017, and over the last three years a lot of work piled up. Terminator is written in Python, and it had to be migrated to Python 3, for example, as distributions started to think about dropping support for Python 2. Until now, packagers of several distributions have been maintaining their own patches to support Terminator with Python 3.

Two weeks ago, things changed. A project at GitHub has been created, the source code has been migrated from Bazaar to Git, and even some package maintainers from Arch Linux and Fedora contributed and worked hard towards what happened this weekend: there is a Terminator 1.92 release available, and you can find Terminator at its new home here: https://github.com/gnome-terminator/terminator.

The Terminator 1.92 update includes a lot of interesting changes you were surely waiting for, including support for Python 3 and a bunch of bug fixes; for example, you can now open links with just Ctrl+Click. You can find a detailed changelog right here: https://github.com/gnome-terminator/terminator/releases.

Of course, Fedora is one of the first distributions to update Terminator to 1.92, and if you're using Fedora or a RHEL8-based system, you can shortly update to it via:

dnf -y --enablerepo=updates-testing upgrade terminator

As usual, you can leave your feedback and some karma for the update.

The Terminator development team at GitHub will be happy if you want to support the project and join. You don't need to be a programmer to contribute; in an open source project there is always something to do.

Friday, 17 April 2020

Should KDE fork CHMLib?

CHMLib is a library to handle CHM files.

It is used by Okular and other applications to show those files.

It hasn't had a release in 11 years.

It is packaged by all major distributions.

A few weeks ago I got annoyed because we need to carry a patch in the Okular flathub, because the code is not great and defines its own int types.

I tried contacting the upstream author, but unsurprisingly, after 11 years he doesn't seem to care much, and I got no answer.

Then I looked and saw that various distributions carry different random sets of patches. Not great.

So I said, OK, maybe we should just fork it, and emailed the 14 people listed at Repology as package maintainers for CHMLib, asking how they would react to KDE forking CHMLib in an effort to do some basic maintenance, i.e. get it to compile without needing patches, make sure all the patches the different distributions carry are in the same place, etc.

1 packager answered saying "great"
1 packager answered "nah, we want to follow upstream" (... which part of upstream being dead did they not understand?)
1 person answered saying "I haven't been packaging for $DISTRO for ages" (makes sense given how old the package is)
1 person answered saying "not a maintainer anymore, but I think you should not break the API/ABI of CHMLib" (which makes sense; I never said that was planned)

And that's it: only 4 answers out of 14, and only one of them encouraging.

So I'm asking *YOU*: should we push for a fork, or should I stop this craziness and do something more productive?

Wednesday, 08 April 2020

Geany and Geany-Plugins for EPEL8

  • Mäh?
  • 20:49, Wednesday, 08 April 2020

If you're a lucky user of a Red Hat Enterprise Linux based system, you're probably already aware of the Extra Packages for Enterprise Linux (EPEL) from the Fedora Project. In case you've missed the flyweight IDE Geany and its plugins there, this is probably some good news for you: Geany is coming to EPEL8 soon!

The update will hit the testing repositories in the next few days and needs your karma here:

https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2020-5392d2e45e

Tuesday, 31 March 2020

System Hackers meeting - Lyon edition

For the 4th time, and less than 5 months after the last meeting, the FSFE System Hackers met in person to coordinate their activities, work on complex issues, and exchange know-how. This time, we chose yet another town familiar to one of our team members as venue – Lyon in France. What follows is a report of this gathering that happened shortly before #stayhome became the order of the day.

For those who do not know this less visible but important team: The System Hackers are responsible for the maintenance and development of a large number of services. From the fsfe.org website’s deployment to the mail servers and blogs, from Git to internal services like DNS and monitoring, all these services, virtual machines and physical servers are handled by this friendly group that is always looking forward to welcoming new members.

Interestingly, we gathered in the same constellation as at the hackathon before, so Albert, Florian, Francesco, Thomas, Vincent and I tackled large and small challenges in the FSFE’s systems. We also used the time to exchange knowledge about complex tasks and some interconnected systems. The official part was conducted in the fascinating Astech Fablab, but word has it that Ninkasi, an excellent pub in Lyon, was the actual epicentre of this year’s meeting.

Sharing is caring

On Saturday morning, after reviewing open tasks and setting our priorities, we started to share more knowledge about our services to reduce bottlenecks. For this, I drew a few diagrams to explain how we deploy our Docker containers, how our community database interacts with the mail and list servers, and how DNS works at the FSFE.

To also help the non-present system hackers and “future generations”, I’ve added this information to a public wiki page. This could also be the starting point to transfer more internal knowledge to public pages to make maintenance and onboarding easier.

Todo? Done!

Afterwards, we focused on closing tasks that had been open for a longer time:

  • The DNS has been a big issue for a long time. Over the past months we’ve migrated the source for our nameserver entries from SVN to Git, rewritten our deployment scripts, and eventually upgraded the two very sensitive systems to Debian 10. During the meeting, we came closer to perfection: the whole BIND configuration has been cleaned of old entries, is uniformly formatted, and now features SPF, DMARC and CAA records (see the sketch after this list).
  • For better security monitoring of the 100+ mailing lists the FSFE hosts, we’ve finalised weekly automatic checks for sane and safe settings, and a tool that makes it easy to update the internal documentation.
  • Speaking of monitoring: we lacked proper monitoring of our 20+ hosts for availability, disk usage, TLS certificates, service status and more. After trying for quite a while to get Prometheus and Grafana to do what we need, we performed a 180° turn: now there is an Icinga2 installation running that already monitors a few hosts and their services, deployed with Ansible. In the following weeks we will add more hosts and services to the watched targets.
  • We plan to migrate our user-unfriendly way to share files between groups to Nextcloud, including using some more of the software’s capabilities. During the weekend, we’ve tested the instance thoroughly, and created some more LDAP groups that are automatically transposed to groups in Nextcloud. In the same run, Albert shared some more knowledge about LDAP with Vincent and me, so we get rid of more bottlenecks.
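
For illustration, such records look roughly like this in a BIND zone file (example.org and the policy values are placeholders, not our actual configuration):

example.org.        IN TXT "v=spf1 mx -all"
_dmarc.example.org. IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"
example.org.        IN CAA 0 issue "letsencrypt.org"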

Then, it was time to deal with other urgent issues:

  • Some of us worked on making our systems more resilient against DDoS attacks. Over the Christmas season, we became the target of an attack. The idea is to come up with solutions that are easy to deploy on all our web services while keeping complexity low. We’ve tested some approaches and will keep working on solutions.
  • Regarding web servers, we’ve updated the TLS configurations on various services to the recommended settings (see the sketch after this list), and also improved some other settings while touching the configuration files.
  • We intend to make it easier for people to encrypt their emails with GnuPG. That is why we experimented with WKD/WKS and will work on setting up this service. As it requires some interconnection with other services, this will unfortunately take us some more time.
  • On the maintenance side of things, we have upgraded all servers except one to the latest Debian version, and also updated many of our Docker images and containers to make use of the latest security and stability improvements.
  • The FSFE hosts a few third party services, and unfortunately they have been running on unmaintained systems. That is why we set up a brand new host for our sister organisation in Latin America so they can eventually migrate, and moved the fossmarks.org website to our automatic CI/CD setup via Drone/Docker.
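
To give an idea of what the “recommended settings” look like in practice, here is a sketch of the core TLS directives for nginx, along the lines of Mozilla’s intermediate profile (our actual web servers and settings may differ):

ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_session_tickets off;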

The next steps and developments

As you can see, we completed a lot of issues and started to tackle new ones, so it won’t become boring in our team any time soon. However, although we should know better, we intend to “change a running system”!

While the in-person meetings have been highly important and also fun, we are now in a state where knowledge and mutual trust are better distributed among the members, the tasks are separated more clearly, and the systems are mostly well documented. So part of our feedback session was the question of whether these meetings in a 6-12 month rhythm are still necessary.

Yes, they are, but not more often than once a year. Instead, we would like to try virtual meetings and sprints. Before a sprint session, we would discuss all tasks (basically go through our internal Kanban board), plan the challenges, ask for input if necessary, and resolve blockers as early as possible. Then we would be prepared for a sprint day or afternoon during which everyone can work on their tasks while being able to directly contact other members. All that should happen over a video conference to create a more personal atmosphere.

For the analogue meetings, it was requested to plan tasks and priorities together beforehand as well, and to focus on tasks that require more people from the group. Also, we want to have more training and system introductions like the ones we’ve just had, to reduce dependencies on single persons.

All in all, this gathering has been another successful meeting and sets a cornerstone for exciting new improvements for both the systems and the team. Thanks to everyone who participated, and a big applause for Vincent, who organised the venue and the social activities!

Sunday, 29 March 2020

RSIBreak 0.12.12 released!

All of you who use a computer for long stretches of time should use it!

https://userbase.kde.org/RSIBreak

Changes from 0.12.11:
* Don't reset pause counter on very short inputs that can just be accidental.
* Improve high dpi support
* Translation improvements
* Compile with Qt 5.15 beta
* Minor code cleanup

http://download.kde.org/stable/rsibreak/0.12/rsibreak-0.12.12.tar.xz

Friday, 27 March 2020

Cruising sailboat electronics setup with Signal K

I haven’t mentioned this on the blog earlier, but in the end of 2018 we bought a small cruising sailboat. After some looking, we went with a Van de Stadt designed Oceaan 25, a Dutch pocket cruiser from the early 1980s. S/Y Curiosity is an affordable and comfortable boat for cruising with 2-4 people, but also needed major maintenance work.

Curiosity sailing on Havel with Royal Louise

The refit has so far included osmosis repair, some fixes to the standing rigging, engine maintenance, and many structural improvements. But this post will focus on the electronics and navigation aspects of the project.

12V power

When we got it, the boat’s electrics were quite barebones. There was a small lead-acid battery, charged only when running the outboard. Light control was pretty much all or nothing: either we were running both the interior and the navigation lights, or none. Everything was wired with 80s-spec components, using energy-inefficient light bulbs.

Looking at the state of the setup, it was also unclear when the electrics had last been used for anything other than starting the engine.

Before going further with the electronics setup, all of this would have to be rebuilt. We made a plan, and scheduled two weekends in summer 2019 for rewiring and upgrading the electricity setup of the boat.

The first step was to test all existing wiring with a multimeter, and to label and document all of it. Surprisingly, there were only a couple of bad connections from the main distribution panel to consumers, so for the most part we decided to reuse that wiring, just with a modern terminal block setup.

All wires labeled and being reconnected

For the most part we used a Dymo label printer, with the labels covered by transparent heat shrink.

We replaced the old main control panel with a modern one with the capability to power different parts of the boat separately, and added some 12V and USB sockets next to it.

New battery charger and voltmeter

All internal lighting was replaced with energy-efficient LEDs, and we added the option of using red lights all through the cabin for preserving night vision. A car charger was added to the system for easier battery charging while in harbour.

Next steps for power

With this, we had a workable lighting and power setup for overnight sailing. But the next obvious step will be to increase the range of our boat.

For that, we’re adding a solar panel. We already have most parts for the setup, but are still waiting for the customized NOA mounting hardware to arrive. And of course the current COVID-19 curfews need to lift before we can install it.

Until we have actual data from our Victron MPPT charge controller, I’ve run some simulations using NASA’s insolation data for Berlin on how much the panel ought to increase our cruising range.

Range estimates for Curiosity solar setup

The basis for boat navigation is still the combination of a clock, a compass, and a paper chart (as well as a sextant on the open ocean). However, most modern cruising boats utilize some electrical tools to aid the process of running the boat. These typically come in the form of a chartplotter and a set of sensors to measure things like GPS position, speed, and water depth.

Commercial marine navigation equipment is a bit like computer networking in the 90s - everything is expensive, and you pretty much have to buy the whole kit from a single vendor to make it work. Standards like NMEA 0183 exist, but “embrace and extend” is typical vendor behaviour.

Signal K

Being open source hackerspace people, that was obviously not the way we wanted to do things. Instead of getting locked into an expensive proprietary single-vendor marine instrumentation setup, we decided to roll our own using off-the-shelf IoT components. To serve as the heart of the system, we picked Signal K.

Signal K is first of all a specification for how marine instruments can exchange data. It also has an open source implementation in Node.js. This allows piping in data from all of the relevant marine data buses, as well as setting up custom data providers. Signal K then harmonizes the data and makes it available both via modern web APIs and in traditional NMEA formats. This enables instruments like chartplotters to utilize the Signal K enriched data as well.
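
Getting the reference server running for a first test is pleasantly simple; a minimal sketch, assuming Node.js and npm are already installed (by default the web interface then listens on port 3000):

% sudo npm install -g signalk-server
% signalk-server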

We’re running Signal K on a Raspberry Pi 3B+ powered by the boat battery. With a GPS dongle, this was already enough to give some basic navigation capabilities like charts and anchor watch. We also added a WiFi hotspot with a LTE uplink to the boat.
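
Before wiring a GPS dongle into Signal K, it is worth sanity-checking it: most dongles show up as a USB serial device that emits raw NMEA 0183 sentences you can simply read (the device path is an assumption, check dmesg for yours):

% cat /dev/ttyUSB0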

Tracking some basic sailing exercises via Signal K

To make the system robust, installation is automated via Ansible, and easy to reproduce. Our boat GitHub repo also has the needed functionality to run a clone of our boat’s setup on our laptops via Docker, which is great when developing new features.
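
If you want to play with a Signal K server without any boat hardware, the project also publishes a Docker image, so something along these lines should give you a local playground (the image name is an assumption on my part):

% docker run -it --rm -p 3000:3000 signalk/signalk-server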

Signal K has a very active developer community, which has been great for figuring out how to extend the capabilities of our system.

Chartplotter

We’re using regular tablets for navigation. The main chartplotter is a cheap old waterproof Samsung Galaxy Tab Active 8.0 tablet that can both show the Freeboard web-based chartplotter with OpenSeaMap charts and run the Navionics Boating app to display commercial charts. Navionics is also able to receive some Signal K data over the boat WiFi to show things like AIS targets, and to utilize the boat GPS.

Samsung T360 with Curiosity logo

As a backup we have our personal smartphones and tablets.

Anchor watch with Freeboard and a tablet

Inside the cabin we also have an e-ink screen showing the primary statistics relevant to the current boat state.

e-ink dashboard showing boat statistics

Environmental sensing

Monitoring air pressure changes is important for dealing with the weather. For this, we added a cheap barometer-temperature-humidity sensor module wired to the Raspberry Pi, driven with the Signal K BME280 plugin. With this we were able to get all of this information from our cabin into Signal K.

However, there was more environmental information we wanted to get. For instance, the outdoor temperature, the humidity in our foul weather gear locker, and the temperature of our icebox. For these we found the Ruuvi tags produced by a Finnish startup. These are small weatherproofed Bluetooth environmental sensors that can run for years with a coin cell battery.

Ruuvi tags for Curiosity with handy pouches

With Ruuvi tags and the Signal K Ruuvi tag plugin we were able to bring a rich set of environmental data from all around the boat into our dashboards.

Anchor watch

Like every cruising boat, we spend quite a lot of nights at anchor. One important safety measure with a shorthanded crew is to run an automated anchor watch. This monitors the boat’s distance to the anchor, and raises an alarm if we start dragging.

For this one, we’re using the Signal K anchor alarm plugin. We added a Bluetooth speaker to get these alarms in an audible way.

To make starting and stopping the anchor watch easier, I utilized a simple Bluetooth remote camera shutter button together with some scripts. This way the person dropping the anchor can also start the anchor watch immediately from the bow.
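
I won’t reproduce the scripts here, but the idea is simple enough to sketch: watch the button’s input events and poke the anchor alarm plugin over HTTP. The device path, key code and endpoint below are placeholders that depend on the concrete button and plugin version:

# requires the evtest package; device path, key code and endpoint are placeholders
sudo evtest /dev/input/by-id/shutter-button-kbd | while read -r line; do
  case "$line" in
    *KEY_VOLUMEUP*"value 1"*)
      # hypothetical endpoint; check the plugin's documentation
      curl -X POST http://localhost:3000/plugins/anchoralarm/dropAnchor
      ;;
  esac
done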

Camera shutter button for starting anchor watch

AIS

Automatic Identification System is a radio protocol used by most bigger vessels to tell others about their course and position. It can be used for collision avoidance. Having an active transponder on a small boat like Curiosity is a bit expensive, but we decided we’d at least want to see commercial traffic in our chartplotter in order to navigate safely.

For this we bought an RTL-SDR USB stick that can tune into the AIS frequency, and with the rtl_ais software, receive and forward all AIS data into Signal K.
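
For reference, the receiving side can be a one-liner. If memory serves, rtl_ais forwards the decoded sentences as UDP to 127.0.0.1:10110 by default, which just needs a matching NMEA 0183 UDP connection configured in Signal K; -n additionally logs the sentences to the console:

% rtl_ais -n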

Tracking AIS targets in Freeboard

This setup is still quite new, so we haven’t been able to test it live yet. But it should allow us to see all nearby bigger ships in our chartplotter in realtime, assuming that we have a good-enough antenna.

Putting it all together

All together this is quite a lot of hardware. To house all of it, we built a custom backing plate with 3D-printed brackets to hold the various components. The whole setup is called Voronoi-1 onboard computer. This is a setup that should be easy to duplicate on any small sailing vessel.

The Voronoi-1 onboard computer

The total cost so far for the full boat navigation setup has been around 600€, which is less than a commercial chartplotter alone would cost. And the system we have is both easy to extend and easy to fix, even on the go. And we get a set of capabilities that would normally require a whole suite of proprietary parts to put together.

Next steps for navigation setup

We of course have plenty of ideas on what to do next to improve the navigation setup. Here are some projects we’ll likely tackle over the coming year:

  • Adding a timeseries database and some data visualization
  • 9 degrees of freedom sensor to track the compass course, as well as boat heel
  • Instrumenting our outboard motor to get RPMs into Signal K and track the engine running time
  • Wind sensor, either open source or commercial

If you have ideas for suitable components or projects, please get in touch!

Source code

Huge thanks to both the Signal K and Hackerfleet communities and the Curiosity crew for making all this happen.

Now we just wait for the curfews to lift so that we can get back to sailing!

Curiosity Crew Badge
