Free Software, Free Society!
Thoughts of the FSFE Community (English)

Monday, 21 September 2020

Pimp your bash with Liquid Prompt

Linux users, software developers and power users alike spend a lot of time at the command-line prompt. The default Bash shell that ships in most Linux distributions has stayed pretty much unchanged for years and years. We wanted to explore how to improve the everyday command-line experience, and eventually became very fond of Liquid Prompt, which we recommend all Linux users check out.

Why Liquid Prompt?

A lot of developers, in particular Mac users, seem to be using zsh nowadays, along with extensions such as oh-my-zsh and powerline. While these look very fancy, installing zsh with a lot of plugins quickly leads to a messy collection of scripts. At first it can feel cool, but one quickly runs into upgrade and interoperability issues. Also, changing the default shell from bash to zsh has profound effects on how shell scripts behave, and the overall experience of using SSH and working on multiple hosts can degrade.

To cut through the flashy blog posts and marketing fluff we turned to the Debian Developers mailing list, asking for thoughtful advice from true command-line professionals. From this advice we learned about Liquid Prompt, a Bash extension that has been around since 2012 and packaged since 2016 in Debian (and its derivatives, such as Ubuntu). Check out the threads in May, June and July if you want to learn about the other tips shared on the mailing list.

Main innovation: progressive disclosure

The default prompt in Liquid Prompt is very simple:

It shows the username (on this test machine it is ‘vagrant’), the hostname (on this machine it is ‘wordpress’), the path, and the prompt sign $. This is basically the same as the default Bash prompt, but with colors, which will be explained later.

However, it will automatically expand to tell the user more information when there is something special going on, hence the name ‘liquid’:

In the screenshot above, there is now a lot of information available:

  • the 3& tells that there are 3 processes running in the background
  • the 2z tells that there are 2 stopped processes
  • the path is expanded, and the master after it tells the git branch name
  • the +2/-0 tells that there are local commits that have not been pushed to the remote
  • the * tells that there are also uncommitted changes
  • the 15s is the unusually long duration of the previous command
  • the 147 is the non-zero exit code from the previous command and
  • the ± as the prompt tells that the directory uses git for version control

The next screenshot shows another situation:

Here the prompt text and colors tell that:

  • the user is now ‘mysql’
  • the @-character is yellow, which means that there is no X11 forwarding available
  • the hostname ‘wordpress’ is bold and colored, which tells that the prompt is used over SSH
  • the colon : is red as an indication that the current user does not have write permissions to the current path
  • the long path name has been shortened with ... to save space for the prompt

This is a great design principle, because extra information is revealed only when it is relevant, and otherwise the prompt stays short and sweet. The fact that Liquid Prompt is an extension to Bash means that there is no custom shell language and all of your Bash scripts work just as they should. There is no right-aligned extra stuff that is prone to rendering errors, and the whole layout works well at any screen size/resolution or in screen/tmux use without getting messed up. Liquid Prompt is an interoperable, stable and clean piece of software we can warmly recommend as your new daily tool.

To learn about all features, check out Liquid Prompt’s README on GitHub.

Installation

Liquid Prompt is available by default in most Linux distributions. On Debian/Ubuntu you can simply run apt install liquidprompt. Each user that wants to activate it should run liquidprompt_activate. This command adds the following lines to .bashrc:

# Only load liquidprompt in interactive shells, not from a script or from scp
echo $- | grep -q i 2>/dev/null && . /usr/share/liquidprompt/liquidprompt

The new prompt will be active starting from the next command-line session the user starts.

To also have the laptop battery status and temperature showing, one needs the extra packages acpi and lm-sensors installed.
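
On Debian/Ubuntu that would be, for example (the two commands afterwards are just a quick sanity check using the standard tools from these packages):

sudo apt install acpi lm-sensors
acpi -b    # battery status
sensors    # temperature readings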

Customize settings

It is possible to fine tune how Liquid Prompt works by editing the settings file at ~/.config/liquidpromptrc. While the default settings of Liquid Prompt are mostly fine, the following modifications made it feel a tad more usable:

# Show laptop battery level only if below 20%
LP_BATTERY_THRESHOLD=20
# Show CPU load only if above 40%
LP_LOAD_THRESHOLD=40
# Don't show hostname on localhost
LP_HOSTNAME_ALWAYS=0
# Don't show user if it is you
LP_USER_ALWAYS=0
# Always show the wall clock at the beginning of the line.
LP_ENABLE_TIME=1
# Show duration of previous command only if it was over 10s
LP_ENABLE_RUNTIME=10
# Never show laptop temperature
LP_ENABLE_TEMP=0
# Always enable colors if accessed via SSH
LP_ENABLE_SSH_COLORS=1

With the settings above the default prompt will look like this:

Additionally you might want to make these additions to your .bashrc file:

# Ensure Gnome Terminal shows the prompt correctly
PROMPT_COMMAND='echo -ne "\033]2;${USER}@${HOSTNAME}: ${PWD/$HOME/~}\007"'
PROMPT_COMMAND="history -a; history -c; history -r;$PROMPT_COMMAND"

Extra tip: make use of a bigger bash history, the Liquid Prompt compatible way

# Store timestamps in .bash_history
HISTTIMEFORMAT="[%F %T] "
# Collect very long history, disk space is cheap
HISTSIZE=10000000
HISTFILESIZE=10000000
# Append each command to history, don't overwrite
shopt -s histappend
# Append each command to history immediately after it ran, don't wait until the session ends
PROMPT_COMMAND="history -a; history -c; history -r;$PROMPT_COMMAND"
# Ignore duplicates and commands that started with a space character
HISTCONTROL=ignoredups:ignorespace
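
A quick sanity check after opening a fresh shell (nothing Liquid Prompt specific, just verifying the settings took effect):

# recent entries should now carry [YYYY-MM-DD HH:MM:SS] timestamps
history 5
echo "$HISTSIZE $HISTFILESIZE $HISTCONTROL"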

Want to find more hidden gems?

While doing research on this topic we came across a great YouTube video series by Jonathan Carter (highvoltage): Debian Package of the Day, and in particular the Week of Terminal Emulators in episodes 46–51.

Saturday, 12 September 2020

VMs on KVM with Terraform

many thanks to erethon for his help & support on this article.

Working on your home lab, it is quite often that you need to spawn containers or virtual machines to test or develop something. I was doing this kind of testing with public cloud providers, with minimal VMs and for short periods of time, to reduce any costs. In this article I will try to explain how to use libvirt (that means KVM) with Terraform and provide a simple way to run this on your Linux machine.

Be aware this will be a (long) technical article and some experience is needed with kvm/libvirt & terraform but I will try to keep it simple so you can follow the instructions.

Terraform

Install Terraform v0.13 either from your distro or directly from HashiCorp’s site.

$ terraform version
Terraform v0.13.2

Libvirt

same thing for libvirt

$ libvirtd --version
libvirtd (libvirt) 6.5.0

$ sudo systemctl is-active libvirtd
active

verify that you have access to libvirt

$ virsh -c qemu:///system version
Compiled against library: libvirt 6.5.0
Using library: libvirt 6.5.0
Using API: QEMU 6.5.0
Running hypervisor: QEMU 5.1.0
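
If the virsh command above fails with a permission error instead, your user probably cannot talk to the system libvirt daemon yet. On many distributions (an assumption; the group name varies, so check yours) it is enough to join the libvirt group:

sudo usermod -aG libvirt "$USER"
# log out and back in (or run: newgrp libvirt) for the new group to take effect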

Terraform Libvirt Provider

To access the libvirt daemon via terraform, we need the terraform-libvirt provider.

Terraform provider to provision infrastructure with Linux’s KVM using libvirt

The official repo is on GitHub - dmacvicar/terraform-provider-libvirt - and you can download a precompiled version for your distro from the repo, or you can download a generic version from my GitLab repo:

ebal / terraform-provider-libvirt · GitLab

These are my instructions

mkdir -pv ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64/
curl -sLo ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64/terraform-provider-libvirt https://gitlab.com/terraform-provider/terraform-provider-libvirt/-/jobs/artifacts/master/raw/terraform-provider-libvirt/terraform-provider-libvirt?job=run-build
chmod +x ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64/terraform-provider-libvirt

Terraform Init

Let’s create a new directory and test that everything is fine.

mkdir -pv tf_libvirt
cd !$

cat > Provider.tf <<EOF
terraform {
 required_version = ">= 0.13"
 required_providers {
     libvirt = {
       source  = "dmacvicar/libvirt"
       version = "0.6.2"
     }
 }
}
EOF
$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding dmacvicar/libvirt versions matching "0.6.2"...
- Installing dmacvicar/libvirt v0.6.2...
- Installed dmacvicar/libvirt v0.6.2 (unauthenticated)

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

everything seems okay!

We can verify with tree or find

$ tree -a
.
├── Provider.tf
└── .terraform
    └── plugins
        ├── registry.terraform.io
        │   └── dmacvicar
        │       └── libvirt
        │           └── 0.6.2
        │               └── linux_amd64 -> /home/ebal/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64
        └── selections.json

7 directories, 2 files

Provider

but did we actually connect to libvirtd via terraform?
Short answer: No.

We just told terraform to use this specific provider.

How do we connect?
We need to add the libvirt connection URI to the provider section:

provider "libvirt" {
    uri = "qemu:///system"
}

so our Provider.tf looks like this:

terraform {
  required_version = ">= 0.13"
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.6.2"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}

Libvirt URI

libvirt is a virtualization API/toolkit that supports multiple drivers, so you can use libvirt against the virtualization platforms below:

  • LXC - Linux Containers
  • OpenVZ
  • QEMU
  • VirtualBox
  • VMware ESX
  • VMware Workstation/Player
  • Xen
  • Microsoft Hyper-V
  • Virtuozzo
  • Bhyve - The BSD Hypervisor

Libvirt also supports multiple authentication mechanisms, like SSH:

virsh -c qemu+ssh://username@host1.example.org/system

so it is really important to properly define the libvirt URI in Terraform!

In this article, I will limit it to a local libvirt daemon, but keep in mind you can use a remote libvirt daemon as well.
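
Before pointing Terraform at a remote daemon, it is worth testing the URI with virsh first (hypothetical user and host, adjust to your environment):

virsh -c qemu+ssh://ebal@kvm-host.example.org/system list --all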

Volume

Next thing, we need a disk volume!

Volume.tf

resource "libvirt_volume" "ubuntu-2004-vol" {
  name = "ubuntu-2004-vol"
  pool = "default"
  #source = "https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64-disk-kvm.img"
  source = "ubuntu-20.04.img"
  format = "qcow2"
}

I have already downloaded this image and verified its checksum; I will use this local image as the base image for my VM’s volume.
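
If you have not fetched it yet, here is a sketch of downloading and verifying the image (the URLs follow the cloud-images.ubuntu.com directory layout, where a SHA256SUMS file sits next to the images):

curl -sLO https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64-disk-kvm.img
curl -sLO https://cloud-images.ubuntu.com/focal/current/SHA256SUMS
sha256sum -c --ignore-missing SHA256SUMS
mv focal-server-cloudimg-amd64-disk-kvm.img ubuntu-20.04.img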

By running terraform plan we will see this output:

  # libvirt_volume.ubuntu-2004-vol will be created
  + resource "libvirt_volume" "ubuntu-2004-vol" {
      + format = "qcow2"
      + id     = (known after apply)
      + name   = "ubuntu-2004-vol"
      + pool   = "default"
      + size   = (known after apply)
      + source = "ubuntu-20.04.img"
    }

What we expect is that Terraform uses this source image to create a new disk volume (a copy) and puts it in the default libvirt storage pool.

Let’s try to figure out what is happening here:

terraform plan -out terraform.out
terraform apply terraform.out
terraform show
# libvirt_volume.ubuntu-2004-vol:
resource "libvirt_volume" "ubuntu-2004-vol" {
    format = "qcow2"
    id     = "/var/lib/libvirt/images/ubuntu-2004-vol"
    name   = "ubuntu-2004-vol"
    pool   = "default"
    size   = 2361393152
    source = "ubuntu-20.04.img"
}

and

$ virsh -c qemu:///system vol-list default
 Name              Path
------------------------------------------------------------
 ubuntu-2004-vol   /var/lib/libvirt/images/ubuntu-2004-vol

Volume Size

Be aware: with this declaration, the produced disk volume image will have the same size as the original source, in this case ~2G of disk.

We will show later in this article how to expand to something larger.
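
You can confirm the capacity of the produced volume straight from libvirt:

$ virsh -c qemu:///system vol-info --pool default ubuntu-2004-vol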

destroy volume

$ terraform destroy
libvirt_volume.ubuntu-2004-vol: Refreshing state... [id=/var/lib/libvirt/images/ubuntu-2004-vol]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # libvirt_volume.ubuntu-2004-vol will be destroyed
  - resource "libvirt_volume" "ubuntu-2004-vol" {
      - format = "qcow2" -> null
      - id     = "/var/lib/libvirt/images/ubuntu-2004-vol" -> null
      - name   = "ubuntu-2004-vol" -> null
      - pool   = "default" -> null
      - size   = 2361393152 -> null
      - source = "ubuntu-20.04.img" -> null
    }

Plan: 0 to add, 0 to change, 1 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

libvirt_volume.ubuntu-2004-vol: Destroying... [id=/var/lib/libvirt/images/ubuntu-2004-vol]
libvirt_volume.ubuntu-2004-vol: Destruction complete after 0s

Destroy complete! Resources: 1 destroyed.

verify

$ virsh -c qemu:///system vol-list default
 Name             Path
----------------------------------------------------------

reminder: always destroy!

Domain

Believe it or not, we are halfway to our first VM!

We need to create a libvirt domain resource.

Domain.tf

cat > Domain.tf <<EOF
resource "libvirt_domain" "ubuntu-2004-vm" {
  name = "ubuntu-2004-vm"

  memory = "2048"
  vcpu   = 1

  disk {
    volume_id = libvirt_volume.ubuntu-2004-vol.id
  }

}

EOF

Apply the new tf plan

 terraform plan -out terraform.out
 terraform apply terraform.out
$ terraform show

# libvirt_domain.ubuntu-2004-vm:
resource "libvirt_domain" "ubuntu-2004-vm" {
    arch        = "x86_64"
    autostart   = false
    disk        = [
        {
            block_device = ""
            file         = ""
            scsi         = false
            url          = ""
            volume_id    = "/var/lib/libvirt/images/ubuntu-2004-vol"
            wwn          = ""
        },
    ]
    emulator    = "/usr/bin/qemu-system-x86_64"
    fw_cfg_name = "opt/com.coreos/config"
    id          = "3a4a2b44-5ecd-433c-8645-9bccc831984f"
    machine     = "pc"
    memory      = 2048
    name        = "ubuntu-2004-vm"
    qemu_agent  = false
    running     = true
    vcpu        = 1
}

# libvirt_volume.ubuntu-2004-vol:
resource "libvirt_volume" "ubuntu-2004-vol" {
    format = "qcow2"
    id     = "/var/lib/libvirt/images/ubuntu-2004-vol"
    name   = "ubuntu-2004-vol"
    pool   = "default"
    size   = 2361393152
    source = "ubuntu-20.04.img"
}

Verify via virsh:

$ virsh -c qemu:///system list
 Id   Name             State
--------------------------------
 3    ubuntu-2004-vm   running

Destroy them!

$ terraform destroy

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

libvirt_domain.ubuntu-2004-vm: Destroying... [id=3a4a2b44-5ecd-433c-8645-9bccc831984f]
libvirt_domain.ubuntu-2004-vm: Destruction complete after 0s
libvirt_volume.ubuntu-2004-vol: Destroying... [id=/var/lib/libvirt/images/ubuntu-2004-vol]
libvirt_volume.ubuntu-2004-vol: Destruction complete after 0s

Destroy complete! Resources: 2 destroyed.

That’s it !

We have successfully created a new VM from a source image that runs on our libvirt environment.

But we cannot connect to or do anything with this instance yet. We need to add a few more things first: a network interface, a console output and a default cloud-init file to auto-configure the virtual machine.

Variables

Before continuing with the user-data (cloud-init), it is a good time to set up some terraform variables.

cat > Variables.tf <<EOF

variable "domain" {
  description = "The domain/host name of the zone"
  default     = "ubuntu2004"
}

EOF

We are going to use this variable within the user-data YAML file.

Cloud-init

The best way to configure a newly created virtual machine is via cloud-init, by injecting a user-data.yml file into the virtual machine via terraform-libvirt.

user-data

#cloud-config

#disable_root: true
disable_root: false
chpasswd:
  list: |
       root:ping
  expire: False

# Set TimeZone
timezone: Europe/Athens

hostname: "${hostname}"

# PostInstall
runcmd:
  # Remove cloud-init
  - apt-get -qqy autoremove --purge cloud-init lxc lxd snapd
  - apt-get -y --purge autoremove
  - apt -y autoclean
  - apt -y clean all

cloud init disk

Terraform will create a new ISO by reading the above template file and generating a proper user-data YAML file. To use this cloud-init ISO, we need to configure it as a libvirt cloud-init resource.

Cloudinit.tf

data "template_file" "user_data" {
  template = file("user-data.yml")
  vars = {
    hostname = var.domain
  }
}

resource "libvirt_cloudinit_disk" "cloud-init" {
  name           = "cloud-init.iso"
  user_data      = data.template_file.user_data.rendered
}

and we need to modify our Domain.tf accordingly

cloudinit = libvirt_cloudinit_disk.cloud-init.id

Terraform will create and upload this ISO disk image into the default libvirt storage pool, and attach it to the virtual machine in the boot process.

At this moment the tf_libvirt directory should look like this:

$ ls -1
Cloudinit.tf
Domain.tf
Provider.tf
ubuntu-20.04.img
user-data.yml
Variables.tf
Volume.tf

To give you an idea, the abstract design is this:

tf_libvirt.png

apply

terraform plan -out terraform.out
terraform apply terraform.out
$ terraform show

# data.template_file.user_data:
data "template_file" "user_data" {
    id       = "cc82a7db4c6498aee21a883729fc4be7b84059d3dec69b92a210e046c67a9a00"
    rendered = <<~EOT
        #cloud-config

        #disable_root: true
        disable_root: false
        chpasswd:
          list: |
               root:ping
          expire: False

        # Set TimeZone
        timezone: Europe/Athens

        hostname: "ubuntu2004"

        # PostInstall
        runcmd:
          # Remove cloud-init
          - apt-get -qqy autoremove --purge cloud-init lxc lxd snapd
          - apt-get -y --purge autoremove
          - apt -y autoclean
          - apt -y clean all

    EOT
    template = <<~EOT
        #cloud-config

        #disable_root: true
        disable_root: false
        chpasswd:
          list: |
               root:ping
          expire: False

        # Set TimeZone
        timezone: Europe/Athens

        hostname: "${hostname}"

        # PostInstall
        runcmd:
          # Remove cloud-init
          - apt-get -qqy autoremove --purge cloud-init lxc lxd snapd
          - apt-get -y --purge autoremove
          - apt -y autoclean
          - apt -y clean all

    EOT
    vars     = {
        "hostname" = "ubuntu2004"
    }
}

# libvirt_cloudinit_disk.cloud-init:
resource "libvirt_cloudinit_disk" "cloud-init" {
    id        = "/var/lib/libvirt/images/cloud-init.iso;5f5cdc31-1d38-39cb-cc72-971e474ca539"
    name      = "cloud-init.iso"
    pool      = "default"
    user_data = <<~EOT
        #cloud-config

        #disable_root: true
        disable_root: false
        chpasswd:
          list: |
               root:ping
          expire: False

        # Set TimeZone
        timezone: Europe/Athens

        hostname: "ubuntu2004"

        # PostInstall
        runcmd:
          # Remove cloud-init
          - apt-get -qqy autoremove --purge cloud-init lxc lxd snapd
          - apt-get -y --purge autoremove
          - apt -y autoclean
          - apt -y clean all

    EOT
}

# libvirt_domain.ubuntu-2004-vm:
resource "libvirt_domain" "ubuntu-2004-vm" {
    arch        = "x86_64"
    autostart   = false
    cloudinit   = "/var/lib/libvirt/images/cloud-init.iso;5f5ce077-9508-3b8c-273d-02d44443b79c"
    disk        = [
        {
            block_device = ""
            file         = ""
            scsi         = false
            url          = ""
            volume_id    = "/var/lib/libvirt/images/ubuntu-2004-vol"
            wwn          = ""
        },
    ]
    emulator    = "/usr/bin/qemu-system-x86_64"
    fw_cfg_name = "opt/com.coreos/config"
    id          = "3ade5c95-30d4-496b-9bcf-a12d63993cfa"
    machine     = "pc"
    memory      = 2048
    name        = "ubuntu-2004-vm"
    qemu_agent  = false
    running     = true
    vcpu        = 1
}

# libvirt_volume.ubuntu-2004-vol:
resource "libvirt_volume" "ubuntu-2004-vol" {
    format = "qcow2"
    id     = "/var/lib/libvirt/images/ubuntu-2004-vol"
    name   = "ubuntu-2004-vol"
    pool   = "default"
    size   = 2361393152
    source = "ubuntu-20.04.img"
}

Lots of output, but let me explain it really quickly:

Terraform generates a user-data file from the template (populated with our variables), creates a cloud-init ISO, creates a volume disk from the source image, then creates a virtual machine with this new volume disk and boots it with the cloud-init ISO.

Pretty much, that’s it!!!

$ virsh  -c qemu:///system vol-list --details  default

 Name              Path                                      Type   Capacity     Allocation
---------------------------------------------------------------------------------------------
 cloud-init.iso    /var/lib/libvirt/images/cloud-init.iso    file   364.00 KiB   364.00 KiB
 ubuntu-2004-vol   /var/lib/libvirt/images/ubuntu-2004-vol   file   2.20 GiB     537.94 MiB

$ virsh  -c qemu:///system list
 Id   Name             State
--------------------------------
 1    ubuntu-2004-vm   running
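
If you are curious what actually landed on that ISO, you can peek inside it (assuming the isoinfo tool from the genisoimage package is installed; reading the image may require root):

$ sudo isoinfo -i /var/lib/libvirt/images/cloud-init.iso -l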

destroy

$ terraform destroy -auto-approve

data.template_file.user_data: Refreshing state... [id=cc82a7db4c6498aee21a883729fc4be7b84059d3dec69b92a210e046c67a9a00]
libvirt_volume.ubuntu-2004-vol: Refreshing state... [id=/var/lib/libvirt/images/ubuntu-2004-vol]
libvirt_cloudinit_disk.cloud-init: Refreshing state... [id=/var/lib/libvirt/images/cloud-init.iso;5f5cdc31-1d38-39cb-cc72-971e474ca539]
libvirt_domain.ubuntu-2004-vm: Refreshing state... [id=3ade5c95-30d4-496b-9bcf-a12d63993cfa]
libvirt_cloudinit_disk.cloud-init: Destroying... [id=/var/lib/libvirt/images/cloud-init.iso;5f5cdc31-1d38-39cb-cc72-971e474ca539]
libvirt_domain.ubuntu-2004-vm: Destroying... [id=3ade5c95-30d4-496b-9bcf-a12d63993cfa]
libvirt_cloudinit_disk.cloud-init: Destruction complete after 0s
libvirt_domain.ubuntu-2004-vm: Destruction complete after 0s
libvirt_volume.ubuntu-2004-vol: Destroying... [id=/var/lib/libvirt/images/ubuntu-2004-vol]
libvirt_volume.ubuntu-2004-vol: Destruction complete after 0s

Destroy complete! Resources: 3 destroyed.

The most important detail is:

Resources: 3 destroyed.

  • cloud-init.iso
  • ubuntu-2004-vol
  • ubuntu-2004-vm

Console

but there are a few things still missing.

For starters, we need to add a console, so we can connect to this virtual machine!

To do that, we need to re-edit Domain.tf and add a console output:

  console {
    target_type = "serial"
    type        = "pty"
    target_port = "0"
  }
  console {
    target_type = "virtio"
    type        = "pty"
    target_port = "1"
  }

the full file should look like:

resource "libvirt_domain" "ubuntu-2004-vm" {
  name = "ubuntu-2004-vm"

  memory = "2048"
  vcpu   = 1

 cloudinit = libvirt_cloudinit_disk.cloud-init.id

  disk {
    volume_id = libvirt_volume.ubuntu-2004-vol.id
  }

  console {
    target_type = "serial"
    type        = "pty"
    target_port = "0"
  }
  console {
    target_type = "virtio"
    type        = "pty"
    target_port = "1"
  }

}

Create the VM again with:

terraform plan -out terraform.out
terraform apply terraform.out

And test the console:

$ virsh -c qemu:///system console ubuntu-2004-vm
Connected to domain ubuntu-2004-vm
Escape character is ^] (Ctrl + ])

ubuntu_console

wow!

We have actually logged in to this VM using the libvirt console!

Virtual Machine

some interesting details

root@ubuntu2004:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       2.0G  916M  1.1G  46% /
devtmpfs        998M     0  998M   0% /dev
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           200M  392K  200M   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/vda15      105M  3.9M  101M   4% /boot/efi
tmpfs           200M     0  200M   0% /run/user/0

root@ubuntu2004:~# free -hm
              total        used        free      shared  buff/cache   available
Mem:          2.0Gi        73Mi       1.7Gi       0.0Ki       140Mi       1.8Gi
Swap:            0B          0B          0B

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0

Do not forget to destroy

$ terraform destroy -auto-approve

data.template_file.user_data: Refreshing state... [id=cc82a7db4c6498aee21a883729fc4be7b84059d3dec69b92a210e046c67a9a00]
libvirt_volume.ubuntu-2004-vol: Refreshing state... [id=/var/lib/libvirt/images/ubuntu-2004-vol]
libvirt_cloudinit_disk.cloud-init: Refreshing state... [id=/var/lib/libvirt/images/cloud-init.iso;5f5ce077-9508-3b8c-273d-02d44443b79c]
libvirt_domain.ubuntu-2004-vm: Refreshing state... [id=69f75b08-1e06-409d-9fd6-f45d82260b51]
libvirt_domain.ubuntu-2004-vm: Destroying... [id=69f75b08-1e06-409d-9fd6-f45d82260b51]
libvirt_domain.ubuntu-2004-vm: Destruction complete after 0s
libvirt_cloudinit_disk.cloud-init: Destroying... [id=/var/lib/libvirt/images/cloud-init.iso;5f5ce077-9508-3b8c-273d-02d44443b79c]
libvirt_volume.ubuntu-2004-vol: Destroying... [id=/var/lib/libvirt/images/ubuntu-2004-vol]
libvirt_cloudinit_disk.cloud-init: Destruction complete after 0s
libvirt_volume.ubuntu-2004-vol: Destruction complete after 0s

Destroy complete! Resources: 3 destroyed.

extend the volume disk

As mentioned above, the volume’s disk size is exactly the same as the original source image.
In our case it’s 2G.

What we need to do is use the source image as a base for a new volume disk. In our new volume disk, we can declare the size we need.

I would like to make this a user variable: Variables.tf

variable "vol_size" {
  description = "The mac & iP address for this VM"
  # 10G
  default = 10 * 1024 * 1024 * 1024
}

Arithmetic in terraform!!

So the Volume.tf should be:

resource "libvirt_volume" "ubuntu-2004-base" {
  name = "ubuntu-2004-base"
  pool = "default"
  #source = "https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64-disk-kvm.img"
  source = "ubuntu-20.04.img"
  format = "qcow2"
}

resource "libvirt_volume" "ubuntu-2004-vol" {
  name           = "ubuntu-2004-vol"
  pool           = "default"
  base_volume_id = libvirt_volume.ubuntu-2004-base.id
  size           = var.vol_size
}

base image -> volume image

test it

terraform plan -out terraform.out
terraform apply terraform.out
$ virsh -c qemu:///system console ubuntu-2004-vm

Connected to domain ubuntu-2004-vm
Escape character is ^] (Ctrl + ])

ubuntu2004 login: root
Password:
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-1021-kvm x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Sat Sep 12 18:27:46 EEST 2020

  System load: 0.29             Memory usage: 6%   Processes:       66
  Usage of /:  9.3% of 9.52GB   Swap usage:   0%   Users logged in: 0

0 updates can be installed immediately.
0 of these updates are security updates.

Failed to connect to https://changelogs.ubuntu.com/meta-release-lts. Check your Internet connection or proxy settings

Last login: Sat Sep 12 18:26:37 EEST 2020 on ttyS0
root@ubuntu2004:~# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       9.6G  912M  8.7G  10% /
root@ubuntu2004:~#

10G !

destroy

terraform destroy -auto-approve

Swap

I would like to have a swap partition, and I will use cloud-init to create one.

modify user-data.yml

# Create swap partition
swap:
  filename: /swap.img
  size: "auto"
  maxsize: 2G

test it

terraform plan -out terraform.out && terraform apply terraform.out
$ virsh -c qemu:///system console ubuntu-2004-vm

Connected to domain ubuntu-2004-vm
Escape character is ^] (Ctrl + ])

root@ubuntu2004:~# free -hm
              total        used        free      shared  buff/cache   available
Mem:          2.0Gi        86Mi       1.7Gi       0.0Ki       188Mi       1.8Gi
Swap:         2.0Gi          0B       2.0Gi

root@ubuntu2004:~#

success !!

terraform destroy -auto-approve

Network

How about internet? Network?
Yes, what about it?

I guess you need to connect to the internets, okay then.

The easiest way is to add this to your Domain.tf:

  network_interface {
    network_name = "default"
  }

This will use the default libvirt network resource:

$ virsh -c qemu:///system net-list

 Name              State    Autostart   Persistent
----------------------------------------------------
 default           active   yes         yes

If you prefer to directly expose your VM to your local network (be very careful!), then replace the above with a macvtap interface. If your ISP router provides DHCP, then your VM will take a random IP from your router.

network_interface {
  macvtap = "eth0"
}

test it

terraform plan -out terraform.out && terraform apply terraform.out
$ virsh -c qemu:///system console ubuntu-2004-vm

Connected to domain ubuntu-2004-vm
Escape character is ^] (Ctrl + ])

root@ubuntu2004:~#

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:36:66:96 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.228/24 brd 192.168.122.255 scope global dynamic ens3
       valid_lft 3544sec preferred_lft 3544sec
    inet6 fe80::5054:ff:fe36:6696/64 scope link
       valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
4: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0

root@ubuntu2004:~# ping -c 5 google.com
PING google.com (172.217.23.142) 56(84) bytes of data.
64 bytes from fra16s18-in-f142.1e100.net (172.217.23.142): icmp_seq=1 ttl=115 time=43.4 ms
64 bytes from fra16s18-in-f142.1e100.net (172.217.23.142): icmp_seq=2 ttl=115 time=43.9 ms
64 bytes from fra16s18-in-f142.1e100.net (172.217.23.142): icmp_seq=3 ttl=115 time=43.0 ms
64 bytes from fra16s18-in-f142.1e100.net (172.217.23.142): icmp_seq=4 ttl=115 time=43.1 ms
64 bytes from fra16s18-in-f142.1e100.net (172.217.23.142): icmp_seq=5 ttl=115 time=43.4 ms

--- google.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 42.977/43.346/43.857/0.311 ms
root@ubuntu2004:~#

destroy

$ terraform destroy -auto-approve

Destroy complete! Resources: 4 destroyed.

SSH

Okay, now that we have network, it is possible to set up SSH to our virtual machine and also auto-create a user. I would like cloud-init to get my public key from GitHub and set up my user.
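
Under the hood, cloud-init’s ssh_import_id directive uses the same mechanism as the ssh-import-id tool (packaged on Ubuntu/Debian), so you can preview by hand which keys it would fetch:

ssh-import-id gh:ebal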

disable_root: true
ssh_pwauth: no

users:
  - name: ebal
    ssh_import_id:
      - gh:ebal
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL

write_files:
  - path: /etc/ssh/sshd_config
    content: |
        AcceptEnv LANG LC_*
        AllowUsers ebal
        ChallengeResponseAuthentication no
        Compression NO
        MaxSessions 3
        PasswordAuthentication no
        PermitRootLogin no
        Port "${sshdport}"
        PrintMotd no
        Subsystem sftp  /usr/lib/openssh/sftp-server
        UseDNS no
        UsePAM yes
        X11Forwarding no

Notice, I have added a new variable called sshdport

Variables.tf

variable "ssh_port" {
  description = "The sshd port of the VM"
  default     = 12345
}

and do not forget to update your cloud-init tf

Cloudinit.tf

data "template_file" "user_data" {
  template = file("user-data.yml")
  vars = {
    hostname = var.domain
    sshdport = var.ssh_port
  }
}

resource "libvirt_cloudinit_disk" "cloud-init" {
  name           = "cloud-init.iso"
  user_data      = data.template_file.user_data.rendered
}

Update VM

I would also like to update & install specific packages on this virtual machine:

# Install packages
packages:
  - figlet
  - mlocate
  - python3-apt
  - bash-completion
  - ncdu

# Update/Upgrade & Reboot if necessary
package_update: true
package_upgrade: true
package_reboot_if_required: true

# PostInstall
runcmd:
  - figlet ${hostname} > /etc/motd
  - updatedb
  # Firewall
  - ufw allow "${sshdport}"/tcp && ufw enable
  # Remove cloud-init
  - apt-get -y autoremove --purge cloud-init lxc lxd snapd
  - apt-get -y --purge autoremove
  - apt -y autoclean
  - apt -y clean all

Yes, I prefer to uninstall cloud-init at the end.

user-data.yml

the entire user-data.yml looks like this:

#cloud-config
disable_root: true
ssh_pwauth: no

users:
  - name: ebal
    ssh_import_id:
      - gh:ebal
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL

write_files:
  - path: /etc/ssh/sshd_config
    content: |
        AcceptEnv LANG LC_*
        AllowUsers ebal
        ChallengeResponseAuthentication no
        Compression NO
        MaxSessions 3
        PasswordAuthentication no
        PermitRootLogin no
        Port "${sshdport}"
        PrintMotd no
        Subsystem sftp  /usr/lib/openssh/sftp-server
        UseDNS no
        UsePAM yes
        X11Forwarding no

# Set TimeZone
timezone: Europe/Athens

hostname: "${hostname}"

# Create swap partition
swap:
  filename: /swap.img
  size: "auto"
  maxsize: 2G

# Install packages
packages:
  - figlet
  - mlocate
  - python3-apt
  - bash-completion
  - ncdu

# Update/Upgrade & Reboot if necessary
package_update: true
package_upgrade: true
package_reboot_if_required: true

# PostInstall
runcmd:
  - figlet ${hostname} > /etc/motd
  - updatedb
  # Firewall
  - ufw allow "${sshdport}"/tcp && ufw enable
  # Remove cloud-init
  - apt-get -y autoremove --purge cloud-init lxc lxd snapd
  - apt-get -y --purge autoremove
  - apt -y autoclean
  - apt -y clean all

Output

We need to know the IP to log in, so create a new terraform file to get the IP:

Output.tf

output "IP" {
  value = libvirt_domain.ubuntu-2004-vm.network_interface.0.addresses
}

but that means that we need to wait for the DHCP lease. Modify Domain.tf to tell Terraform to wait:

  network_interface {
    network_name = "default"
    wait_for_lease = true
  }
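
As a cross-check, libvirt itself can list the DHCP leases it has handed out on the default network:

$ virsh -c qemu:///system net-dhcp-leases default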

Plan & Apply

$ terraform plan -out terraform.out && terraform apply terraform.out

Outputs:

IP = [
  "192.168.122.79",
]

Verify

$ ssh 192.168.122.79 -p 12345 uptime
 19:33:46 up 2 min,  0 users,  load average: 0.95, 0.37, 0.14
$ ssh 192.168.122.79 -p 12345
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-1023-kvm x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Sat Sep 12 19:34:45 EEST 2020

  System load:  0.31              Processes:             72
  Usage of /:   33.1% of 9.52GB   Users logged in:       0
  Memory usage: 7%                IPv4 address for ens3: 192.168.122.79
  Swap usage:   0%

0 updates can be installed immediately.
0 of these updates are security updates.

        _                 _         ____   ___   ___  _  _
 _   _| |__  _   _ _ __ | |_ _   _|___ \ / _ \ / _ \| || |
| | | | '_ \| | | | '_ \| __| | | | __) | | | | | | | || |_
| |_| | |_) | |_| | | | | |_| |_| |/ __/| |_| | |_| |__   _|
 \__,_|_.__/ \__,_|_| |_|\__|\__,_|_____|\___/ \___/   |_|

Last login: Sat Sep 12 19:34:37 2020 from 192.168.122.1

ebal@ubuntu2004:~$
ebal@ubuntu2004:~$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       9.6G  3.2G  6.4G  34% /

ebal@ubuntu2004:~$ free -hm
              total        used        free      shared  buff/cache   available
Mem:          2.0Gi        91Mi       1.7Gi       0.0Ki       197Mi       1.8Gi
Swap:         2.0Gi          0B       2.0Gi

ebal@ubuntu2004:~$ ping -c 5 libreops.cc
PING libreops.cc (185.199.108.153) 56(84) bytes of data.
64 bytes from 185.199.108.153 (185.199.108.153): icmp_seq=1 ttl=55 time=48.4 ms
64 bytes from 185.199.108.153 (185.199.108.153): icmp_seq=2 ttl=55 time=48.7 ms
64 bytes from 185.199.108.153 (185.199.108.153): icmp_seq=3 ttl=55 time=48.5 ms
64 bytes from 185.199.108.153 (185.199.108.153): icmp_seq=4 ttl=55 time=48.3 ms
64 bytes from 185.199.108.153 (185.199.108.153): icmp_seq=5 ttl=55 time=52.8 ms

--- libreops.cc ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 48.266/49.319/52.794/1.743 ms

what !!!!

awesome

destroy

terraform destroy -auto-approve

Custom Network

One last thing I would like to discuss is how to create a new network and provide a specific IP to your VM. This will separate your VMs/lab, and it is cheap & easy to set up a new libvirt network.

Network.tf

resource "libvirt_network" "tf_net" {
  name      = "tf_net"
  domain    = "libvirt.local"
  addresses = ["192.168.123.0/24"]
  dhcp {
    enabled = true
  }
  dns {
    enabled = true
  }
}

and replace network_interface in Domains.tf

  network_interface {
    network_id     = libvirt_network.tf_net.id
    network_name   = "tf_net"
    addresses      = ["192.168.123.${var.IP_addr}"]
    mac            = "52:54:00:b2:2f:${var.IP_addr}"
    wait_for_lease = true
  }

Look closely: there is a new Terraform variable.

Variables.tf

variable "IP_addr" {
  description = "The mac & iP address for this VM"
  default     = 33
}
$ terraform plan -out terraform.out && terraform apply terraform.out

Outputs:

IP = [
  "192.168.123.33",
]
$ ssh 192.168.123.33 -p 12345
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-1021-kvm x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

 System information disabled due to load higher than 1.0

12 updates can be installed immediately.
8 of these updates are security updates.
To see these additional updates run: apt list --upgradable

Last login: Sat Sep 12 19:56:33 2020 from 192.168.123.1

ebal@ubuntu2004:~$ ip addr show ens3
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:b2:2f:33 brd ff:ff:ff:ff:ff:ff
    inet 192.168.123.33/24 brd 192.168.123.255 scope global dynamic ens3
       valid_lft 3491sec preferred_lft 3491sec
    inet6 fe80::5054:ff:feb2:2f33/64 scope link
       valid_lft forever preferred_lft forever
ebal@ubuntu2004:~$
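
Back on the host, the new network shows up alongside the default one, and you can inspect the XML that was generated for it:

$ virsh -c qemu:///system net-list --all
$ virsh -c qemu:///system net-dumpxml tf_net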

Terraform files

You can find every Terraform example in my GitHub repo:

tf/0.13/libvirt/0.6.2/ubuntu/20.04 at master · ebal/tf · GitHub

That’s it!

If you like this article, consider following me on Twitter: ebalaskas.

3 days to send your talks to Linux App Summit 2020!

Head to https://linuxappsummit.org/cfp/ and talk about all those nice [Linux] Apps you're working on!


The call for papers ends in 3 days (September 15th).

Tuesday, 01 September 2020

PGPainless 0.1.0 released

After two years and a dozen alpha versions I am very glad to announce the first stable release of PGPainless! The release is available on maven central.

PGPainless aims to make using OpenPGP with Bouncycastle fun again by abstracting away most of the complexity and overhead that normally comes with it. At the same time PGPainless remains configurable by making heavy use of the builder pattern for almost everything.

Let’s take a look at how to create a fresh OpenPGP key:

        PGPKeyRing keyRing = PGPainless.generateKeyRing()
                .simpleEcKeyRing("alice@wonderland.lit", "password123");

That is all it takes to generate an OpenPGP keypair that uses ECDH+ECDSA keys for encryption and signatures! You can of course also configure a more complex key pair with different algorithms and attributes:

        PGPainless.generateKeyRing()
                .withSubKey(KeySpec.getBuilder(RSA_ENCRYPT.withLength(RsaLength._4096))
                        .withKeyFlags(KeyFlag.ENCRYPT_COMMS, KeyFlag.ENCRYPT_STORAGE)
                        .withDefaultAlgorithms())
                .withSubKey(KeySpec.getBuilder(ECDH.fromCurve(EllipticCurve._P256))
                        .withKeyFlags(KeyFlag.ENCRYPT_COMMS, KeyFlag.ENCRYPT_STORAGE)
                        .withDefaultAlgorithms())
                .withSubKey(KeySpec.getBuilder(RSA_SIGN.withLength(RsaLength._4096))
                        .withKeyFlags(KeyFlag.SIGN_DATA)
                        .withDefaultAlgorithms())
                .withMasterKey(KeySpec.getBuilder(RSA_SIGN.withLength(RsaLength._8192))
                        .withKeyFlags(KeyFlag.CERTIFY_OTHER)
                        .withDetailedConfiguration()
                        .withPreferredSymmetricAlgorithms(SymmetricKeyAlgorithm.AES_256)
                        .withPreferredHashAlgorithms(HashAlgorithm.SHA512)
                        .withPreferredCompressionAlgorithms(CompressionAlgorithm.BZIP2)
                        .withFeature(Feature.MODIFICATION_DETECTION)
                        .done())
                .withPrimaryUserId("alice@wonderland.lit")
                .withPassphrase(new Passphrase("password123".toCharArray()))
                .build();

The API is designed in a way that makes it hard for the user to make mistakes. Inputs are typed, so that, for example, the user cannot input a wrong key length for an RSA key. The “shortcut” methods (e.g. withDefaultAlgorithms()) use sane, secure defaults.

Now that we have a key, let’s encrypt some data!

        byte[] secretMessage = message.getBytes(UTF8);
        ByteArrayOutputStream envelope = new ByteArrayOutputStream();

        EncryptionStream encryptor = PGPainless.createEncryptor()
                .onOutputStream(envelope)
                .toRecipients(recipientPublicKey)
                .usingSecureAlgorithms()
                .signWith(keyDecryptor, senderSecretKey)
                .asciiArmor();

        Streams.pipeAll(new ByteArrayInputStream(secretMessage), encryptor);
        encryptor.close();
        byte[] encryptedSecretMessage = envelope.toByteArray();

As you can see there is almost no boilerplate code! At the same time, the above code will create a stream that will encrypt and sign all the data that is passed through. In the end the envelope stream will contain an ASCII armored message that can only be decrypted by the intended recipients and that is signed using the sender’s secret key.

Decrypting data and/or verifying signatures works very similarly:

        ByteArrayInputStream envelopeIn = new ByteArrayInputStream(encryptedSecretMessage);
        DecryptionStream decryptor = PGPainless.createDecryptor()
                .onInputStream(envelopeIn)
                .decryptWith(keyDecryptor, recipientSecretKey)
                .verifyWith(senderPublicKey)
                .ignoreMissingPublicKeys()
                .build();

        ByteArrayOutputStream decryptedSecretMessage = new ByteArrayOutputStream();

        Streams.pipeAll(decryptor, decryptedSecretMessage);
        decryptor.close();
        OpenPgpMetadata metadata = decryptor.getResult();

The decryptedSecretMessage stream now contains the decrypted message. The metadata object can be used to get information about the message, e.g. which keys and algorithms were used to encrypt/sign the data and whether those signatures were valid.

In summary, PGPainless is now able to create different types of keys, read encrypted and unencrypted keys, encrypt and/or sign data streams as well as decrypt and/or verify signatures. The latest additions to the API contain support for creating and verifying detached signatures.

PGPainless is already in use in Smack’s OpenPGP module which implements XEP-0373: OpenPGP for XMPP, and it has been designed primarily with the instant messaging use case in mind. So if you want to add OpenPGP support to your application, feel free to give PGPainless a try!

Monday, 24 August 2020

Hacking GOG.com for fun and profit

If you have a GOG account, you might have received an email announcing a Harvest Sale. It’s unusual for a harvest to last only 48 hours, but apart from that naming blunder, the sale is no different from many that came before it. What caught my attention was the somewhat creative spot-the-difference puzzle that accompanied it. Specifically, as a pretext to share some image processing insights.

Portion of the spot-the-difference Harvest Sale puzzle

Each identified difference presents a discount code for an exciting game. Surely, one cannot let it go to waste! Alas, years of sitting in a cave in front of a computer destroyed our eyesight; how could we possibly succeed‽ Simple!

Firstly, make a screenshot of the puzzle. The drawing in the email is rather tall so you might need to take a few of them (each capturing a portion of the picture) or zoom out until the entire illustration fits. To practice, feel free to grab the above image of a portion of the puzzle.

With the screenshot ready, open it in an image manipulation program. To make things easier, crop it — the default key binding for the Crop tool is Shift+C — to discard irrelevant parts. Having done that, duplicate the layer. That can be accomplished through the Layer menu or by pressing Ctrl+Shift+D. Now, flip one of the layers. This operation is accessible through the Layer → Transform → Flip Horizontally menu option as well as inside the Layers dialogue.


The Layers dialogue is what we need now, as that’s where the magic happens. If you haven’t brought it up yet, do it now, for example with the Ctrl+L shortcut. Make sure the top layer is selected and finally choose Difference from the Mode drop-down. Et voilà.

Depending on the cropping done at the beginning, the top layer might need to be moved a little — the key binding for the Move tool is M. If done correctly, the differences should be highlighted with bright colours.
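
If you would rather script the whole thing, ImageMagick can do the same difference blend from the command line (a sketch, assuming the two halves of the puzzle were saved as left.png and right.png):

convert right.png -flop right-flipped.png
convert left.png right-flipped.png -compose difference -composite diff.png
# brighten the result so subtle differences stand out
convert diff.png -auto-level diff-highlighted.png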

Friday, 21 August 2020

Flathub stats for KDE applications [that are part of the release service]

I just discovered https://gitlab.gnome.org/Jehan/gimp-flathub-stats, which tells you the download (including updates) stats for a given Flathub application.


I ran it over the KDE applications that are part of the release service that we have in Flathub.


The results are somewhat surprising:


org.kde.kdenlive
Total:   22649 downloads in 8 days
org.kde.kalzium
Total:   16571 downloads in 9 days
org.kde.kgeography
Total:   16343 downloads in 9 days
org.kde.kbruch
Total:   15744 downloads in 9 days
org.kde.kapman
Total:   15473 downloads in 9 days
org.kde.kblocks
Total:   15426 downloads in 9 days
org.kde.katomic
Total:   15385 downloads in 9 days
org.kde.khangman
Total:   15370 downloads in 9 days
org.kde.kbounce
Total:   15280 downloads in 9 days
org.kde.kdiamond
Total:   15242 downloads in 9 days
org.kde.kwordquiz
Total:   15173 downloads in 9 days
org.kde.ksudoku
Total:   15155 downloads in 9 days
org.kde.kigo
Total:   15125 downloads in 9 days
org.kde.kgoldrunner
Total:   15062 downloads in 9 days
org.kde.knetwalk
Total:   15030 downloads in 9 days
org.kde.palapeli
Total:   14939 downloads in 9 days
org.kde.klickety
Total:   14917 downloads in 9 days
org.kde.klines
Total:   14866 downloads in 9 days
org.kde.knavalbattle
Total:   14848 downloads in 9 days
org.kde.kjumpingcube
Total:   14831 downloads in 9 days
org.kde.ksquares
Total:   14829 downloads in 9 days
org.kde.killbots
Total:   14772 downloads in 9 days
org.kde.kubrick
Total:   14696 downloads in 9 days
org.kde.ktuberling
Total:   14652 downloads in 9 days
org.kde.kontact
Total:   3728 downloads in 458 days
org.kde.kolourpaint
Total:   2542 downloads in 9 days
org.kde.okular
Total:   2304 downloads in 3 days
org.kde.kpat
Total:   1203 downloads in 8 days
org.kde.ktouch
Total:   880 downloads in 9 days
org.kde.kate
Total:   638 downloads in 9 days
org.kde.ark
Total:   571 downloads in 9 days
org.kde.dolphin
Total:   566 downloads in 3 days
org.kde.elisa
Total:   387 downloads in 3 days
org.kde.minuet
Total:   103 downloads in 9 days
org.kde.kwrite
Total:   84 downloads in 9 days
org.kde.kcachegrind
Total:   35 downloads in 9 days
org.kde.kcalc
Total:   23 downloads in 4 days
org.kde.lokalize
Total:   10 downloads in 3 days

kdenlive is the clear winner.

After that there's a compact block of games/edu apps that I think were/are part of the Endless default install, and that shows: after them, the next app has about 6 times fewer downloads.

"New" (in the Flathub sense) apps like lokalize or kcalc are the ones with the fewest downloads; I guess people haven't seen them there yet.

Thursday, 20 August 2020

Curse of knowledge

[Originally published at LinkedIn on October 28, 2018]

 

The curse of knowledge is a cognitive bias that occurs when an individual, communicating with other individuals, unknowingly assumes that the others have the background to understand.

 

Let’s talk about documentation

This is the one big elephant in every team’s room.

TLDR; Increment: Documentation

Documentation empowers users and technical teams to function more effectively, and can promote approachability, accessibility, efficiency, innovation, and more stable development.

Bad technical guides can cause frustration, confusion, and distrust in your software, support channels, and even your brand—and they can hinder progress and productivity internally.

 

so to avoid situations like these:

wisdom_of_the_ancients.png

xkcd - wisdom_of_the_ancients

or/and

TitsMcGee4782

Optipess - TitsMcGee4782

 

documentation must exist!

 

Myths

  • Self-documenting code
  • No time to write documentation
  • There are code examples, what else do you need?
  • There is a wiki page (or 300.000 pages).
  • I’m not a professional writer
  • No one reads the manual

 

Problems

  • Maintaining the documentation (keeping it up to date)
  • Incomplete or Confusing documentation
  • Documentation is platform/version oriented
  • Engineers who may not be fluent in English (or don’t speak your language)
  • Too long
  • Too short
  • Documentation structure

 

Types of documentation

  • Technical Manual (system)
  • Tutorial (mini tutorials)
  • HowTo (mini howto)
  • Library/API documentation (reference)
  • Customer Documentation (end user)
  • Operations manual
  • User manual (support)
  • Team documentation
  • Project/Software documentation
  • Notes
  • FAQ

 

Why Documentation Is Important

Communication is a key to success. Documentation is part of the communication process. We either try to communicate or collaborate with our customers or even within our own team. We use our documentation to inform customers of new features and how to use them, to train our internal team (colleagues), collaborate with them, reach out, help out, connect, and communicate our work with others.

When writing code, documentation should be the “One Truth”, instead of the code repository. I love working with projects that will not deploy a new feature before updating the documentation first. For example, I read the ‘Release Notes for Red Hat’ and the ChangeLog instead of reading myriads of code repositories.

 

Know Your Audience

Try to answer these questions:

  • Who is reading this documentation ?
  • Is it for internal or external users/customers ?
  • Do they have a dev background ?
  • Are they non-technical people ?

Use personas to create different material. Try to remember this one golden rule:

The audience should get value from the documentation (learning something).

 

Guidelines

Here are some guidelines:

  • Tell a story
  • Use a narrative voice
  • Try to solve a problem
  • Simplify - KISS philosophy
  • Focus on approachability and usability

Even on a technical document try to:

  • Write platform-agnostic documentation
  • Reproducibility
  • Not writing in acronyms and technical jargon (explain)
  • Step Approach
  • Towards goal achievement
  • Routines

 

UX

A picture is worth a thousand words

so remember to:

  • visual representation of the information architecture
  • use code examples
  • screencasts
  • CLI --help output
  • usage
  • clear error messages

Customers and users want to write nothing at all.
By reducing user input, your project will be more fault tolerant.

Instead of providing a procedure for a deploy pipeline, give them a deploy bot, a next-next-install GUI/Web user interface, and focus your documentation around that.

 

Content

So, what should be included in the documentation?

  • How technical should it be ?
  • Use cases ?
  • General-Purpose ?
  • Article size (small pages are a more manageable set to maintain).
  • Minimum Viable Content Vs Too much detail
  • Help them to understand

imagine your documentation as microservices instead of a huge monolith project.

Usually a well-defined structure looks like this:

  • Table of Contents (toc)
  • Introduction
  • Short Description
  • Sections / Modules / Chapters
  • Conclusion / Summary
  • Glossary
  • Index
  • References

 

Tools

I prefer wiki pages instead of a Word document, because of the features below:

  • Version
  • History
  • Portability
  • Convertibility
  • Accountability

btw if you are using Confluence, there is a Markdown plugin.

 

Measurements & Feedback

To understand if your documentation is good or not, you need feedback. But first there is an evaluation process of review. It is the same thing as writing code: you can’t review your own code! The first feedback should come from within your team.

Use analytics to measure if people read your documentation, from ‘hits per page’ to more advanced analytics such as Matomo (formerly Piwik). Profiling helps you understand your audience. What do they like in documentation?

Customer satisfaction (CSat) scores are important documentation metrics.

  • Was this page helpful? Yes/No
  • Allowing customers to submit comments.
  • Upvote/Downvote / Like
  • or even let them contribute to your documentation

Make it easy for people to share their feedback, and find a way to include their comments in it.

 

FAQ

Frequently Asked Questions should answer questions in the style of:

  • What would customers ask ?
  • What if
  • How to

FAQ or QA should be really straightforward, as short and simple as it can be. You are writing a FAQ because you are here to help customers learn how to use this specific feature, not the entire software. Use links for more detailed info that direct them to your main documentation.

Conclusion

Sharing knowledge & shaping the culture of your team/end users. Your documentation should reflect your customers’ needs. Everything you do in your business is to satisfy your customers. Documentation is one way to communicate this.

So here are some final points on documentation:

  • User Oriented
  • Readability
  • Comprehensive
  • Keep it up to date

Wednesday, 12 August 2020

Computers can't sustain themselves

The situation where an unskilled user can enjoy a well-working computer only lasts so long.1 Either the user becomes good at maintaining the computer, or it will stop working correctly. That’s because computers are not reliable. If not used carefully, at some point, they will behave unexpectedly or stop working. Therefore, one will have to get their hands dirty and most likely learn something along the way.

Some operating systems are more prone to gathering cruft, though. Windows computers are known to decay over time. This is caused by file system fragmentation, the registry getting cluttered, OS / software updates,2 or unwanted software installation. Furthermore, users can install software from any source. As a result, they have to check the software quality by themselves, whether it’s compatible with their system, and perform the installation procedure and the maintenance. This may create problems if any of these tasks are not done well. Conversely, despite giving users a lot of power, GNU/Linux is more likely to stay stable. Even though it depends on the distributions’ policies, software installation and updates are done through package manager repositories that are administered and curated by skilled maintainers. Breaking the system is thus more difficult, but not impossible. After all, with great power comes great responsibility.

Becoming knowledgeable about computers and being able to manage them efficiently is easier with Free Software than with proprietary software. Free Software creates a healthy relationship between developers and users, whereby the former play nice with the latter because users may redesign the software to better fit their needs or completely stop using it. Its openness allows everyone to dig into technical details and discover how it works. Therefore, Free Software puts users in control, allowing them to better understand technology.

Regardless of the reason, leaving a computer badly managed will result in cruft creeping in. The solution to this is simple: don’t be afraid of your tools, master them!


  1. Unless the computer is managed remotely. ↩︎

  2. Citing only the latest of a long list: Windows 10 printing breaks due to Microsoft June 2020 updates. ↩︎

Tuesday, 28 July 2020

The power of git-sed

In the recent weeks and months, the FSFE Web Team has been doing some heavy work on the FSFE website. We moved and replaced thousands of files and their respective links to improve the structure of a historically grown website (19+ years, 23243 files, almost 39k commits). But how to do that most efficiently in a version controlled system like Git?

In our scenarios, the steps executed often looked like the following:

  1. Move/rename a directory full of XML files representing website pages
  2. Find all links that pointed to this directory, and change them
  3. Create a rewrite rule

For the first step, using the included git mv is perfectly fine.

For the second, we would usually need a combination of grep and sed, e.g.:

grep -lr "/old/page.html" | xargs sed -i 's;/old/page.html;/new/page.html;g'

This has a few major flaws:

  • In a Git repository, this also greps inside the .git directory where we do not want to edit files directly
  • The grep is slow in huge repositories
  • The searched old link has to be mentioned two times, which makes semi-manual replacement of a large number of links tedious
  • Depending on the Regex complexity we need, the command becomes long, and we need to take care of using the correct flags for grep and sed.
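Working around these flaws by hand quickly gets ugly. A hypothetical manual fix, just to illustrate (the pattern lives in a variable so it is only typed once, and the .git directory is excluded):

old="/old/page.html"; new="/new/page.html"
grep -rlZ --exclude-dir=.git "$old" . | xargs -0 sed -i "s;$old;$new;g"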

git-sed to the rescue

After some research, I found git-sed, basically a Bash file in the git-extras project. With some modifications (pull request pending) it’s the perfect tool for mass search and replacement.

It solves all of the above problems:

  • It uses git grep that ignores the .git/ directory, and is much faster because it uses git’s index.
  • The command is much shorter and easier to understand and write
  • Flags are easy to add, and this only has to be done once per command

Install

You can just install the git-extras package which also contains a few other scripts.

I opted for using it standalone, so I downloaded the shell file, put it in a directory which is in my $PATH, and removed one dependency on a script which is only available in git-extras (see my aforementioned PR). So for instance, you could copy git-sed.sh to /usr/local/bin/ and make it executable. To enable calling it via git sed, put this in your ~/.gitconfig:

[alias]
  sed = !sh git-sed.sh

Usage

After installing git-sed, the command above would become:

git sed -f g "/old/page.html" "/new/page.html"

My modifications also allow the use of extended regular expressions, and with them things like reference captures, so I hope they will be merged soon. With this, some more advanced replacements are possible:

# Use reference capture (save absolute link as \1)
git sed -f g "http://fsfe.org(/.*?\.html)" "https://fsfe.org\1"

# Optional tokens (.html is optional here)
git sed -f g "/old/page(\.html)?" "/new/page.html"

And if you would like to limit git-sed to a certain directory, e.g. news/, that’s also no big deal:

git sed -f g "oldstring" "newstring" -- news/

You may have noticed the -f flag with the g argument. People used to sed know that g replaces all occurrences of the searched pattern in a file, not only the first one. You could also make it gi if you want a case-insensitive search and replace.
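For example, a case-insensitive global replace (made-up strings, same syntax as above):

git sed -f gi "free software" "Free Software"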

Conclusion

As you can see, using git-sed is really a time and nerve saver when doing mass changes on your repositories. Of course, there is also room for improvement. For instance, it could be useful to use the Perl Regex library (PCRE) for the grep and sed to also allow for look-aheads or look-behinds. I encourage you to try git-sed and make suggestions to upstream directly to improve this handy tool.

Sunday, 26 July 2020

Nextcloud and OpenID-Connect


If you looked at the Nextcloud app store you could already find OpenID-Connect connectors before, but since Nextcloud 19 it is an officially supported user back-end. So it was time for me to have a closer look and try it out.

Get an OpenID-Connect provider

The first step was to get an OpenID-Connect provider. Sure, I could have chosen one of the public services, but why not have a nice small provider running directly on my machine? Keycloak makes this really simple. By following their Getting Started Guide I could set up an OpenID-Connect provider in just a few minutes and run it directly on my local demo machine. I will show you how I configured Keycloak as an OpenID-Connect provider for Nextcloud, but please keep in mind, this is the first time I configured Keycloak and my main goal was to get it running quickly to test the Nextcloud connector. It is quite likely that I missed some important security setting which you would like to enable for a productive system.
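For reference, with the standalone distribution Keycloak is started like this (a sketch; the exact path depends on where you unpacked it):

# assuming the archive was unpacked into ./keycloak-<version>
cd keycloak-*/bin
./standalone.sh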

After installing Keycloak we go to http://localhost:8080/auth/ which is the default URL in “standalone” mode and login as admin. The first thing we do is to configure a new Realm in the “Realm Settings”. Only “Name” and “Display name” need to be set, the rest can be kept as it is:

Next we move on to the “Clients” tab and create a new client:

Set a “Client ID” of your choice, I chose “nextcloud” in this example, and the root URL of your Nextcloud, which is http://localhost/stable in this case. After that we get redirected to the client configuration page. Most settings are already set correctly; we only need to adjust two more.

First we set the “Access Type” to “confidential”; this is needed in order to get a client secret, which we need for the Nextcloud setup later on. While the “Valid Redirect URIs” setting works as-is with the wildcard, I used the one proposed by the Nextcloud OIDC app: http://localhost/stable/index.php/apps/user_oidc/code. This is the Nextcloud end-point to which Keycloak will redirect the user back after a successful login.

Finally we create a user who should be able to log in to Nextcloud later.

While technically the “Username” is enough, I directly set the e-mail address, first and last name. Nextcloud will reuse this information later to pre-fill the user's profile nicely. Don't forget to go to the “Credentials” tab and set a password for your new user.

That’s it, now we just need to grab a few pieces of information to complete the Nextcloud configuration later on.

First we go back to the “Realm Settings” of the “OIDCDemo”; under “OpenID Endpoint Configuration” we get a JSON document with all the parameters of our OpenID-Connect end-point. For Nextcloud we only need the “authorization_endpoint”, which we find in the second line of the JSON file.
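If you prefer the command line, the same JSON document can be fetched from Keycloak's well-known URL (assuming the realm is named “OIDCDemo”, as above):

curl -s http://localhost:8080/auth/realms/OIDCDemo/.well-known/openid-configuration | python3 -m json.tool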

The second value is the client secret. We can find this in the credential tab of the “nextcloud” client settings:

Nextcloud setup

Before we continue, make sure to have this line in your config.php: 'allow_local_remote_servers' => true, otherwise Nextcloud will refuse to connect to Keycloak on localhost.
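If you have shell access, the same setting can also be applied with occ instead of editing the file by hand (run from the Nextcloud installation directory; your web server user may differ):

sudo -u www-data php occ config:system:set allow_local_remote_servers --type=boolean --value=true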

Now we can move on and configure Nextcloud. As mentioned before, the official OpenID-Connect app was released together with Nextcloud 19, so you need Nextcloud 19 or later. If you go to the Nextcloud apps management and search for “openid” you will not only find the official app but also the community apps. Make sure to choose the app called “OpenID Connect user backend”. Just to avoid misunderstandings at this point: the Nextcloud community does an awesome job! I’m sure the community apps work great too; they may even have more features compared to the new official app. But the goal of this article was to try out the officially supported OpenID-Connect app.

After installing the app we go to the admin settings where we will find a new menu entry called “OpenID Connect” on the left sidebar. The setup is quite simple but contains everything needed:

The app supports multiple OpenID-Connect providers in parallel, so the first thing we do is choose an “Identifier”, which will be shown on the login page to let the user pick the right provider. For the other fields we enter the “Client ID”, “Client secret” and “Discovery endpoint” from Keycloak. After accepting the settings by clicking on “Register” we are done. Now let’s try to log in with OpenID Connect:

As you can see, we now have an additional button called “Login with Keycloak”. Once clicked, we get redirected to Keycloak:

After we successfully log in to Keycloak, we get redirected back to Nextcloud and are logged in. A look into our personal settings shows that all our account details, like the full name and the e-mail address, were added correctly to our Nextcloud account:

Saturday, 18 July 2020

Book your BoF for Akademy 2020 now!

BoF sessions are an integral part of Akademy: they are the "let's discuss and plan how to make things happen" oriented part, after the more "this is what we've done" oriented part of the talks.

So go to https://community.kde.org/Akademy/2020/AllBoF and book a session to discuss with the rest of the community something you're passionate about! :)

Saturday, 11 July 2020

20.08 releases branches created

Make sure you commit anything you want to end up in the 20.08 releases to these branches.

We're already past the dependency freeze.

The Feature Freeze and Beta is this Thursday, 16 July.

More interesting dates
    July 30, 2020: 20.08 RC (20.07.90) Tagging and Release
  August  6, 2020: 20.08 Tagging
  August 13, 2020: 20.08 Release

https://community.kde.org/Schedules/release_service/20.08_Release_Schedule

Cheers,
  Albert

Friday, 10 July 2020

Light OpenStreetMapping with GPS

Now that lockdown is lifting a bit in Scotland, I’ve been going a bit further for exercise. One location I’ve been to a few times is Tyrebagger Woods. In theory, I can walk here from my house via Brimmond Hill although I’m not yet fit enough to do that in one go.

Instead of following the main path, I took a detour along some route that looked like it wanted to be a path but it hadn’t been maintained for a while. When I decided I’d had enough of this, I looked for a way back to the main path but OpenStreetMap didn’t seem to have the footpaths mapped out here yet.

I’ve done some OpenStreetMap surveying before so I thought I’d take a look at improving this, and moving some of the tracks on the map closer to where they are in reality. In the past I’ve used OSMTracker which was great, but now I’m on iOS there doesn’t seem to be anything that matches up.

My new handheld radio, a Kenwood TH-D74 has the ability to record GPS logs so I thought I’d give this a go. It records the logs to the SD card with one file per session. It’s a very simple logger that records the NMEA strings as they are received. The only sentences I see in the file are GPGGA (Global Positioning System Fix Data) and GPRMC (Recommended Minimum Specific GPS/Transit Data).
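For reference, these sentences are plain comma-separated text; a GPGGA fix looks roughly like this (the canonical example from the NMEA documentation, not a line from my own log):

$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47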

I tried to import this directly with JOSM but it seemed to throw an error and crash. I’ve not investigated this, but I thought a way around could be to convert this to GPX format. This was easier than expected:

apt install gpsbabel
gpsbabel -i nmea -f "/sdcard/KENWOOD/TH-D74/GPS_LOG/25062020_165017.nme" \
                 -o gpx,gpxver=1.1 -F "/tmp/tyrebagger.gpx"

This imported into JOSM just fine and I was able to adjust some of the tracks to better fit where they actually are.

I’ll take the radio with me when I go in future and explore some of the other paths, to see if I can get the whole woods mapped out nicely. It is fun to just dive into the trees sometimes, along the paths that look a little forgotten and overgrown, but it’s also nice to be able to find your way out again when you get lost.

Tuesday, 23 June 2020

How to build a SSH Bastion host

[this is a technical blog post, but easy to follow]

Recently I had to set up and present my idea of an SSH bastion host. You may have already heard of this as a jump host, a security SSH hopping station, an SSH gateway, or something else.

The main concept

SSH bastion

Disclaimer: This is just a proof of concept (PoC). May need a few adjustments.

The destination VM may be on another VPC; perhaps it does not have a public DNS entry or even a public IP. Think of this VM as not accessible. Only the SSH bastion server can reach this VM, so we need to reach the bastion first.

SSH Config

To begin with, I will share my initial sshd_config to get an idea of my current ssh setup

AcceptEnv LANG LC_*
ChallengeResponseAuthentication no
Compression no
MaxSessions 3
PasswordAuthentication no
PermitRootLogin no
Port 12345
PrintMotd no
Subsystem sftp  /usr/lib/openssh/sftp-server
UseDNS no
UsePAM yes
X11Forwarding no
AllowUsers ebal
  • I only allow user ebal to connect via SSH.
  • I do not allow the root user to log in via SSH.
  • I use SSH keys instead of passwords.

This configuration is almost identical to both VMs

  • bastion (the name of the VM that acts as a bastion server)
  • VM (the name of the destination VM that is behind a DMZ/firewall)

~/.ssh/config

I am using the SSH config file for an easier user experience when using ssh:

Host bastion
     Hostname 127.71.16.12
     Port 12345
     IdentityFile ~/.ssh/id_ed25519.vm

Host vm
     Hostname 192.13.13.186
     Port 23456

Host *
    User ebal
    ServerAliveInterval 300
    ServerAliveCountMax 10
    ConnectTimeout=60

Create a new user to test this

Let us create a new user for testing.

User/Group

$ sudo groupadd ebal_test
$ sudo useradd -g ebal_test -m ebal_test

$ id ebal_test
uid=1000(ebal_test) gid=1000(ebal_test) groups=1000(ebal_test)

Perms

Copy the .ssh directory from the current user (<== lazy sysadmin):

$ sudo cp -ravx /home/ebal/.ssh/ /home/ebal_test/
$ sudo chown -R ebal_test:ebal_test /home/ebal_test/.ssh

$ sudo ls -la ~ebal_test/.ssh/
total 12
drwxr-x---. 2 ebal_test ebal_test 4096 Sep 20  2019 .
drwx------. 3 ebal_test ebal_test 4096 Jun 23 15:56 ..
-r--r-----. 1 ebal_test ebal_test  181 Sep 20  2019 authorized_keys

$ sudo ls -ld ~ebal_test/.ssh/
drwxr-x---. 2 ebal_test ebal_test 4096 Sep 20  2019 /home/ebal_test/.ssh/

bastion sshd config

Edit the SSH daemon configuration file to append the entries below:

cat /etc/ssh/sshd_config

AllowUsers ebal ebal_test

Match User ebal_test
   AllowAgentForwarding no
   AllowTcpForwarding yes
   X11Forwarding no
   PermitTunnel no
   GatewayPorts no
   ForceCommand echo 'This account can only be used for ProxyJump (ssh -J)'

Don’t forget to restart sshd

systemctl restart sshd
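A good habit is to validate the configuration first, so a typo does not lock you out of the machine:

sshd -t && systemctl restart sshd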

As you can see above, I now allow two (2) users to access the SSH daemon (AllowUsers). This can also work with AllowGroups.

Testing bastion

Let’s try to connect to this bastion VM

$ ssh bastion -l ebal_test uptime
This account can only be used for ProxyJump (ssh -J)
$ ssh bastion -l ebal_test
This account can only be used for ProxyJump (ssh -J)
Connection to 127.71.16.12 closed.

Interesting …

We cannot log in to this machine.

Let’s try with our personal user

$ ssh bastion -l ebal uptime
 18:49:14 up 3 days,  9:14,  0 users,  load average: 0.00, 0.00, 0.00

Perfect.

Let’s try from windows (mobaxterm)

mobaxterm is putty on steroids! There is also a portable version, so there is no need for installation: you can just download and extract it.

mobaxterm_proxyjump.png

Interesting…

Destination VM

Now it is time to test our access to the destination VM

$ ssh VM
ssh: connect to host 192.13.13.186 port 23456: Connection refused

bastion

$ ssh -J ebal_test@bastion ebal@vm uptime
 19:07:25 up 22:24,  2 users,  load average: 0.00, 0.01, 0.00
$ ssh -J ebal_test@bastion ebal@vm
Last login: Tue Jun 23 19:05:29 2020 from 94.242.59.170
ebal@vm:~$
ebal@vm:~$ exit
logout

Success !

Explain Command

Using this command

ssh -J ebal_test@bastion ebal@vm

  • we tell the ssh client to use the ProxyJump feature,
  • using the user ebal_test on the bastion machine, and
  • connecting as the user ebal on vm.

So we can have different users!
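On older OpenSSH versions (before 7.3, which introduced -J) the same jump can be expressed with ProxyCommand:

ssh -o ProxyCommand="ssh -W %h:%p ebal_test@bastion" ebal@vm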

ssh/config

Now it is time to put everything into our ~/.ssh/config file:

Host bastion
     Hostname 127.71.16.12
     Port 12345
     User ebal_test
     IdentityFile ~/.ssh/id_ed25519.vm

Host vm
     Hostname 192.13.13.186
     ProxyJump bastion
     User ebal
     Port 23456

and try again

$ ssh vm uptime
 19:17:19 up 22:33,  1 user,  load average: 0.22, 0.11, 0.03
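Everything that honours ~/.ssh/config now goes through the bastion transparently, for example (hypothetical file names):

scp ./backup.tar.gz vm:/tmp/
rsync -av ./mydir/ vm:/tmp/mydir/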

mobaxterm with bastion

mobaxterm_settings.png

mobaxterm_proxyjump_final.png

Monday, 15 June 2020

Keyboard cleaning 101

You wake up and the sun is shining. Everything is great and you sit down at your desk, ready to join the first video call of the day. And then – OOPS! You spill your breakfast smoothie over your keyboard. Does this sound familiar?

If you’ve ever encountered a situation such as the one described above and didn’t know how to act, after this post you will. We’ve written here easy to follow, step-by-step instructions on how to clean your keyboard.

Steps for salvaging the situation

  1. Turn off your device and place it upside down and wait for the liquid to drain.
  2. Turn the device around and clean away all visible traces of the liquid.
  3. Take a photo of your keyboard for future reference.
  4. Find a flat but sturdy tool that can be inserted under the keycap. A flat-headed screwdriver is an option, but it might scratch the device and its components. Use a plastic tool, if possible.
  5. Some keyboards require the keys to be removed from the top, some from the bottom. Insert the tool under the keycap and gently apply upward pressure by pressing the tool. Continue to do so until you hear a snap and the keycap comes off. Remove also the small plastic parts underneath. Notice: If there is unusual resistance when detaching a key, stop! The keys might require a different method for removal.
  6. Place all the plastic parts in a bowl of warm water with mild detergent, such as dishwasher soap. Let them soak for a while to allow the dirt to come off.
  7. Rinse and repeat until all the plastic parts look clean. If the parts are extra dirty, use fingertips to rub off the dirt. Notice: Be careful not to rinse the plastic parts down the drain.
  8. Place the plastic parts on a paper towel and let them dry.

Cleaning the bare keyboard

While the plastic parts are drying, clean the rest of the keyboard. You will need two clean cloths (preferably microfiber) and cotton swabs. The cloth used for cleaning should be damp. You can use either water or isopropyl alcohol. Alcohol evaporates quickly and is less likely to leave any moisture around. Cotton swabs are great for hard-to-reach areas. Finish the cleaning with a dry cloth.

Use a cotton swab to wipe the dirt away.

Reassembly

Once everything has dried up, you can start reassembling the keyboard. Now is a good time to look at that photo you took in the beginning. If for some reason you didn’t take a photo or you lost it, no worries – Youtube and Google are your friends.

When reassembling a key, do not use excessive force. It should easily snap in place and once so, test that it feels correct. If not, disassemble and reassemble until it does.

When done, the keyboard should look just like new. Issue fixed!

Done!

Maintenance in the future

Cleaning your keyboard every once in a while is a good practice to keep it working properly. Dust, grease and crumbs can easily get under the keycaps and potentially harm the device. A thorough clean-up takes about an hour, but it ensures that your keyboard will be in the best working condition. If you want to try and speed up the process, use compressed air to blow out any loose dirt.

Thursday, 11 June 2020

20.08 releases schedule finalized

It is available at the usual place https://community.kde.org/Schedules/release_service/20.08_Release_Schedule

Dependency freeze is in four weeks (July 9) and Feature Freeze a week after that, make sure you start finishing your stuff!

Wednesday, 10 June 2020

How to use cloud-init with Edis

It is a known fact that my favorite hosting provider is edis. I’ve seen them improving their services all these years, without forgetting their customers. Their support is great and I am really happy working with them.

That said, they don’t offer (yet) a public infrastructure API like hetzner, linode or digitalocean, but they do offer an Auto Installer option to configure your VPS via a post-install shell script, provide your ssh key and select your basic OS image.

edis_auto_installer.png

I have been experimenting with this option for the last few weeks, but I wanted to use my current cloud-init configuration file without making many changes. The goal is to produce a VPS image that, when finished, will be ready to accept my ansible roles without any additional change or even logging in to this VPS.

So here is my current solution on how to use the post-install option to provide my current cloud-init configuration!

cloud-init

cloud-init documentation

cloud-init.png
Josh Powers @ DebConf17

I will not get into cloud-init details in this blog post, but tl;dr: it has stages and modules, you provide your own user-data file (YAML), and it supports datasources. All these things tell cloud-init what to do, what to configure and when to configure it (in which step).

NoCloud Seed

I am going to use the NoCloud datasource for this experiment.

so I need to configure these two (2) files

/var/lib/cloud/seed/nocloud-net/meta-data
/var/lib/cloud/seed/nocloud-net/user-data

Install cloud-init

My first entry in the post-install shell script should be

apt-get update && apt-get -y install cloud-init

thus I can be sure of two (2) things: first, the VPS already has network access and I don’t need to configure it; second, the cloud-init software gets installed, just to be sure it is there.

Variables

I try to keep my user-data file very simple, but I would like to configure the hostname and the sshd port.

HOSTNAME="complimentary"
SSHDPORT=22422

Users

Add a single user, provide a public ssh key for authentication and enable sudo access to this user.

users:
  - name: ebal
    ssh_import_id:
      - gh:ebal
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL

Hardening SSH

  • Change sshd port
  • Disable Root
  • Disconnect Idle Sessions
  • Disable Password Auth
  • Disable X Forwarding
  • Allow only User (or group)
write_files:
  - path: /etc/ssh/sshd_config
    content: |
        Port $SSHDPORT
        PermitRootLogin no
        ChallengeResponseAuthentication no
        ClientAliveInterval 600
        ClientAliveCountMax 3
        UsePAM yes
        UseDNS no
        X11Forwarding no
        PrintMotd no
        AcceptEnv LANG LC_*
        Subsystem sftp  /usr/lib/openssh/sftp-server
        PasswordAuthentication no
        AllowUsers ebal

enable firewall

ufw allow $SSHDPORT/tcp && ufw enable

remove cloud-init

and last but not least, I need to remove cloud-init in the end

apt-get -y autoremove --purge cloud-init lxc lxd snapd

Post Install Shell script

let’s put everything together

#!/bin/bash

apt-get update && apt-get -y install cloud-init

HOSTNAME="complimentary"
SSHDPORT=22422

mkdir -p /var/lib/cloud/seed/nocloud-net

# Meta Data
cat > /var/lib/cloud/seed/nocloud-net/meta-data <<EOF
dsmode: local
EOF

# User Data
cat > /var/lib/cloud/seed/nocloud-net/user-data <<EOF
#cloud-config

disable_root: true
ssh_pwauth: no

users:
  - name: ebal
    ssh_import_id:
      - gh:ebal
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL

write_files:
  - path: /etc/ssh/sshd_config
    content: |
        Port $SSHDPORT
        PermitRootLogin no
        ChallengeResponseAuthentication no
        UsePAM yes
        UseDNS no
        X11Forwarding no
        PrintMotd no
        AcceptEnv LANG LC_*
        Subsystem sftp  /usr/lib/openssh/sftp-server
        PasswordAuthentication no
        AllowUsers ebal

# Set TimeZone
timezone: Europe/Athens

hostname: $HOSTNAME

# Install packages
packages:
  - figlet
  - mlocate
  - python3-apt
  - bash-completion

# Update/Upgrade & Reboot if necessary
package_update: true
package_upgrade: true
package_reboot_if_required: true

# PostInstall
runcmd:
  - figlet $HOSTNAME > /etc/motd
  - updatedb
  # Firewall
  - ufw allow $SSHDPORT/tcp && ufw enable
  # Remove cloud-init
  - apt-get -y autoremove --purge cloud-init lxc lxd snapd
  - apt-get -y --purge autoremove
  - apt -y autoclean
  - apt -y clean all

EOF

cloud-init clean --logs

cloud-init init --local

That’s it !

After a while (it needs a few reboots) our VPS is up & running and we can use ansible to configure it.
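A quick smoke test before running any roles could look like this (the host name is the one set in the script; substitute whatever name or IP resolves for you, and note the custom SSH port):

ansible all -i "complimentary," -u ebal -e ansible_port=22422 -m ping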

Tag(s): edis, cloud-init

Monday, 08 June 2020

Racism is bad - a Barcelona centered reflection

I hope most people reading this will not find "Racism is bad" to be controversial.

The problem is even if most (i'd hope to say all) of us think racism is bad, some of us are still consciously or unconsciously racist.

I am not going to speak about the Black Lives Matter movement, because it's mostly USA centered (which is kind of far for me) and there are much better people to listen to than me, so go and listen to them.

In this case I'm going to speak about the Romani/Roma/gypsies in Barcelona (and from what i can see in this Pew Research article, most of Europe).

Institutionalized hate against them is so deep that definition 2 of 6 in the Catalan dictionary for the Romani word (gitano) is "those that act egoistically and try to deceive others" and definition 5 of 8 in the Spanish dictionary is "those that with wits and lies try to cheat someone in a particular matter".

Here it is "common" to hear people say "don't be a gitano" meaning to say "don't cheat/play me" and nobody bats an eye when hearing that phrase.

It's a fact that this "community" tends to live in ghettos and has a higher crime rate. I've heard people that probably think of themselves as non-racist say "that's just the lifestyle they like".

SARCASM: Sure, people love living in unsafe neighbourhoods and crappy houses and risking going to jail just to be able to eat.

The thing is, when 50% of the population has an unfavourable view of you just because of who your parents are or what your surname is, it's hard to get a [nice] job, a place to live outside the ghetto, etc.

So please, in addition to saying that you're not a racist (which is a good first step), try to actually not be racist too.

Thursday, 28 May 2020

Send your talks for Akademy 2020 *now*

The Call for Participation is still open for two more weeks, but please do us a favour and send yours *now*.

This way we don't have to panic thinking if we are going to need to go chasing people or not, or if we're going to have too few or too many proposals.

Also if you ask the talks committee for review, we can review your talk early, give you feedback and improve it, so it's a win-win.

So head over to https://akademy.kde.org/2020/cfp, find the submit link in the middle of that wall of text and click it ;)

Sunday, 24 May 2020

chmk a simple CHM viewer

Okular can view CHM files; to do so it uses KHTML, which makes sense since CHM is basically HTML with images, all compressed into a single file.

This is somewhat problematic since KHTML is largely unmaintained and I doubt it'll get a Qt6 port.

The problem is that the only other Qt-based HTML rendering engine is QtWebEngine, and while great, it doesn't support the things we would need to use it in Okular, since Okular needs access to the rendered image of the page and also to the text, as it uses the same API for all formats, be it CHM, PDF, epub, whatever.

The easiest plan to move forward is probably to drop CHM support from Okular, but that means no more CHM viewing in KDE software, which would be a bit sad.

So I thought, OK, maybe I can do a quick CHM viewer based purely on QtWebEngine, without trying to fit it into the Okular backend that supports different formats.

And ChmK was born https://invent.kde.org/aacid/chmk.

It's still very simple, but the basics work: if you give it a file on the command line, it'll open it and you'll be able to browse it.



As you can see it doesn't have *any* UI yet, so Merge Requests more than welcome.

Saturday, 16 May 2020

Network Namespaces - Part Three

Previously on … Network Namespaces - Part Two we provided internet access to the namespace, enabled a different DNS than our system and ran a graphical application (xterm/firefox) from within.

The scope of this article is to run a VPN service from within this namespace. We will run a VPN client and try to enable firewall rules inside.

ip-netns07

dsvpn

My VPN of choice is dsvpn; you can read how to set it up in the blog post below.

dsvpn is a TCP, point-to-point VPN, using a symmetric key.

The instructions in this article will give you an understanding of how to run a different VPN service.

Find your external IP

Before running the VPN client, let’s see what our current external IP address is:

ip netns exec ebal curl ifconfig.co

62.103.103.103

The above IP is an example.

IP address and route of the namespace

ip netns exec ebal ip address show v-ebal

375: v-ebal@if376: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether c2:f3:a4:8a:41:47 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.10.20/24 scope global v-ebal
       valid_lft forever preferred_lft forever
    inet6 fe80::c0f3:a4ff:fe8a:4147/64 scope link
       valid_lft forever preferred_lft forever

ip netns exec ebal ip route show

default via 10.10.10.10 dev v-ebal
10.10.10.0/24 dev v-ebal proto kernel scope link src 10.10.10.20

Firefox

Open firefox (see part two) and visit ifconfig.co; we notice that the location of our IP is Athens, Greece.

ip netns exec ebal bash -c "XAUTHORITY=/root/.Xauthority firefox"

ip-netns-ifconfig-before.png

Run VPN client

We have the symmetric key dsvpn.key and we know the VPN server’s IP.

ip netns exec ebal dsvpn client dsvpn.key 93.184.216.34 443

Interface: [tun0]
Trying to reconnect
Connecting to 93.184.216.34:443...
net.ipv4.tcp_congestion_control = bbr
Connected

Host

We cannot see this VPN tunnel interface from our host machine:

# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 94:de:80:6a:de:0e brd ff:ff:ff:ff:ff:ff

376: v-eth0@if375: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1a:f7:c2:fb:60:ea brd ff:ff:ff:ff:ff:ff link-netns ebal

netns

but it exists inside the namespace; we can see the tun0 interface there:

ip netns exec ebal ip link

1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

3: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 500
    link/none

375: v-ebal@if376: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether c2:f3:a4:8a:41:47 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Find your external IP again

Checking your external internet IP from within the namespace

ip netns exec ebal curl ifconfig.co

93.184.216.34

Firefox netns

Running firefox again, we notice that the location of our IP is now Helsinki (the VPN server’s location).

ip netns exec ebal bash -c "XAUTHORITY=/root/.Xauthority firefox"

ip-netns-ifconfig-after.png

systemd

We can wrap the dsvpn client in a systemd service:

[Unit]
Description=Dead Simple VPN - Client

[Service]
ExecStart=ip netns exec ebal /usr/local/bin/dsvpn client /root/dsvpn.key 93.184.216.34 443
Restart=always
RestartSec=20

[Install]
WantedBy=network.target

Start systemd service

systemctl start dsvpn.service
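And, assuming the unit file was saved as /etc/systemd/system/dsvpn.service, enable it so it also starts at boot:

systemctl enable dsvpn.service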

Verify

ip -n ebal a

1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none
    inet 192.168.192.1 peer 192.168.192.254/32 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 64:ff9b::c0a8:c001 peer 64:ff9b::c0a8:c0fe/96 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::ee69:bdd8:3554:d81/64 scope link stable-privacy
       valid_lft forever preferred_lft forever

375: v-ebal@if376: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether c2:f3:a4:8a:41:47 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.10.20/24 scope global v-ebal
       valid_lft forever preferred_lft forever
    inet6 fe80::c0f3:a4ff:fe8a:4147/64 scope link
       valid_lft forever preferred_lft forever

ip -n ebal route

default via 10.10.10.10 dev v-ebal
10.10.10.0/24 dev v-ebal proto kernel scope link src 10.10.10.20
192.168.192.254 dev tun0 proto kernel scope link src 192.168.192.1

Firewall

We can also have different firewall policies for each namespace

outside

# iptables -nvL | wc -l
127

inside

ip netns exec ebal iptables -nvL

Chain INPUT (policy ACCEPT 9 packets, 2547 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain OUTPUT (policy ACCEPT 2 packets, 216 bytes)
 pkts bytes target     prot opt in     out     source        destination

So for the VPN service running inside the namespace, we can REJECT all network traffic, except the traffic towards our VPN server and, of course, the veth interface (point-to-point) to our host machine.

iptable rules

Enter the namespace

inside

ip netns exec ebal bash

Before

verify that iptables rules are clear

iptables -nvL

Chain INPUT (policy ACCEPT 25 packets, 7373 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain OUTPUT (policy ACCEPT 4 packets, 376 bytes)
 pkts bytes target     prot opt in     out     source        destination

Enable firewall

./iptables.netns.ebal.sh

The content of this file

## iptable rules

iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -m conntrack --ctstate INVALID -j DROP
iptables -A INPUT -p icmp --icmp-type 8 -m conntrack --ctstate NEW -j ACCEPT

## netns - incoming
iptables -A INPUT -i v-ebal -s 10.0.0.0/8 -j ACCEPT

## Reject incoming traffic
iptables -A INPUT -j REJECT

## DSVPN
iptables -A OUTPUT -p tcp -m tcp -o v-ebal -d 93.184.216.34 --dport 443 -j ACCEPT

## net-ns outgoing
iptables -A OUTPUT -o v-ebal -d 10.0.0.0/8 -j ACCEPT

## Allow tun
iptables -A OUTPUT -o tun+ -j ACCEPT

## Reject outgoing traffic
iptables -A OUTPUT -p tcp -j REJECT --reject-with tcp-reset
iptables -A OUTPUT -p udp -j REJECT --reject-with icmp-port-unreachable

After

iptables -nvL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination
    0     0 ACCEPT     all  --  lo     *       0.0.0.0/0     0.0.0.0/0
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0     0.0.0.0/0     ctstate RELATED,ESTABLISHED
    0     0 DROP       all  --  *      *       0.0.0.0/0     0.0.0.0/0     ctstate INVALID
    0     0 ACCEPT     icmp --  *      *       0.0.0.0/0     0.0.0.0/0     icmptype 8 ctstate NEW
    1   349 ACCEPT     all  --  v-ebal *       10.0.0.0/8    0.0.0.0/0
    0     0 REJECT     all  --  *      *       0.0.0.0/0     0.0.0.0/0     reject-with icmp-port-unreachable
    0     0 ACCEPT     all  --  lo     *       0.0.0.0/0     0.0.0.0/0
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0     0.0.0.0/0     ctstate RELATED,ESTABLISHED
    0     0 DROP       all  --  *      *       0.0.0.0/0     0.0.0.0/0     ctstate INVALID
    0     0 ACCEPT     icmp --  *      *       0.0.0.0/0     0.0.0.0/0     icmptype 8 ctstate NEW
    0     0 ACCEPT     all  --  v-ebal *       10.0.0.0/8    0.0.0.0/0
    0     0 REJECT     all  --  *      *       0.0.0.0/0     0.0.0.0/0     reject-with icmp-port-unreachable

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination
    0     0 ACCEPT     tcp  --  *      v-ebal  0.0.0.0/0     95.216.215.96 tcp dpt:8443
    0     0 ACCEPT     all  --  *      v-ebal  0.0.0.0/0     10.0.0.0/8
    0     0 ACCEPT     all  --  *      tun+    0.0.0.0/0     0.0.0.0/0
    0     0 REJECT     tcp  --  *      *       0.0.0.0/0     0.0.0.0/0     reject-with tcp-reset
    0     0 REJECT     udp  --  *      *       0.0.0.0/0     0.0.0.0/0     reject-with icmp-port-unreachable
    0     0 ACCEPT     tcp  --  *      v-ebal  0.0.0.0/0     95.216.215.96 tcp dpt:8443
    0     0 ACCEPT     all  --  *      v-ebal  0.0.0.0/0     10.0.0.0/8
    0     0 ACCEPT     all  --  *      tun+    0.0.0.0/0     0.0.0.0/0
    0     0 REJECT     tcp  --  *      *       0.0.0.0/0     0.0.0.0/0     reject-with tcp-reset
    0     0 REJECT     udp  --  *      *       0.0.0.0/0     0.0.0.0/0     reject-with icmp-port-unreachable

PS: We reject tcp/udp traffic (last 2 lines), but allow icmp (ping).

ip-netns08

End of part three.

Friday, 15 May 2020

Improve your security with two-factor authentication

Two-factor authentication (or simply 2FA) is a way of authentication where a user must provide additional verification after the username and password login. The form of verification can be a string of characters delivered via text message or generated with a TOTP client. Two-factor authentication improves security because a compromised username and password are not enough to breach the account.

This article will explain how to use TOTP clients for two-factor authentication and why TOTP is better than many other two-factor methods. As an example, I will show how to enable and set up the TOTP client Google Authenticator for Google’s services.

What is TOTP?

TOTP is a Time-based One-Time Password algorithm, and it can be used in two-factor authentication instead of SMS or paper-list verification codes. A TOTP client periodically generates a new code that can be used to verify the login.

TOTP is based on cryptographic functions. When enabling TOTP, the service gives you a shared secret that is entered into the client, either by scanning a QR code or by typing it in as text. Cryptographic functions use this shared secret and the current time to generate the codes that verify the login. The code generation also works without a connection to the internet.
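To demystify this a bit: any TOTP implementation fed the same shared secret produces the same codes. For example, with the oathtool utility from oath-toolkit (using a made-up Base32 secret):

# print the current 30-second TOTP code for a Base32-encoded shared secret
oathtool --totp -b "JBSWY3DPEHPK3PXP"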

The TOTP client on a smartphone is safer than many other two-factor authentication methods, like codes sent via SMS or email. Getting hold of someone’s SMS is somewhat troublesome but still possible. An email account can also be compromised – especially if the password is also used in other places.

It’s a good idea to back up all the shared secrets that are used to generate TOTP codes, either in text format or as a screenshot of the QR code. That way, you don’t get locked out of your accounts even if your phone gets lost. Backups of TOTP shared secrets should be stored with the same care as passwords, because anyone who obtains the shared secrets can generate the same codes. Not all services provide the shared secret in text format, which should be considered when choosing a backup method.

Google Authenticator is a popular and user-friendly TOTP client, but it doesn’t have functionality for backups. The user needs to take care of backing up the shared secrets somewhere else when adding them to the client. The client has import and export functionality, but it’s suitable only for transferring the data from an old phone to a new one. The old phone generates a QR code for the transfer, but if taking a screenshot is disabled, that can’t easily be utilized as a backup. Google Authenticator is available for Android and iOS.

There are also other TOTP clients, for example Authy, Duo Mobile, FreeOTP, and andOTP. Authy and Duo Mobile have a cloud backup feature. FreeOTP has no backup functionality, but andOTP allows exporting shared secrets in JSON format (plain-text or encrypted) and storing them in a place of your choice.

Preparing Google account for 2FA

Next, I will show you how to enable two-factor authentication in Google services. After that, we will install Google Authenticator and enable 2FA with Google account. In this guide, I will log in to a Google account with a desktop browser, which is very similar to how the process works for other services.

Log in to your Google Account and proceed in the menu to Security > Signing in to Google > 2-step verification.

If two-step verification is enabled on your Google account, you should already see an option for Google Authenticator on this page, and you can continue to the next part of this article (Installing Google Authenticator). Otherwise, continue with this part. Google now opens a window that introduces two-step verification. You can read it through and then click forward.

On the next page, you need to select the phone you would like to use. If you can’t see your preferred phone there, you need to add the Google account in question to that phone:

Android: Go to Settings > Accounts > Add account. Choose Google and log in.

iOS: Install the Google app and log in with your Google account.

Google doesn’t let you choose Google Authenticator right away. First, you need to enable some other way of two-step verification: Google Prompts (default), a security key, a text message, or a phone call. Google Prompts opens a window on your phone that lets you select yes or no. Keep the default, and move forward.

Select “yes” in the prompt when it appears on your phone. After that, you still need to set a backup option for getting verification codes. You can add a secondary phone number to get into your account even if your phone is lost. You can use your primary phone number here if you promise to back up the shared secret, which we will get to later.

Select the text message or a call and write the code you get into the window that opens after pressing “send.” Text messages can sometimes take a long time to arrive, and a phone call can be faster (in this alternative, a robot voice reads you the code).

After that, you are asked one more time whether you want to enable 2-step verification. Select yes and continue.

Installing Google Authenticator

Installing Google Authenticator to a smartphone is straightforward. It can be installed just like any other app from the app store. The app is free and will also work offline after the installation.

When opening the app for the first time, it will show you a few introduction slides. Browse through them and press forward. Then you will see the following view, where accounts (shared secrets) can be added to the app. Google Authenticator is now installed and ready to receive shared secrets. Leave your phone waiting in this view.

Your desktop browser should now be on your Google account’s 2-step verification page (if not, go to Security > Signing in to Google > 2-step verification). Scroll down until you see the Authenticator app and press “set up.” In the window that opens you can choose your phone’s OS:

After this, a window opens where you can scan the QR code. Choose the option “scan a barcode” on your phone and scan the code.

After scanning, you can press the link “Can’t scan it?” to get the shared secret in text format for backup purposes. After the scan, your Google Authenticator should look something like this:

On the right side of the code, you can see a diminishing pie icon, which indicates how long the current code is valid. A new code is generated every 30 seconds by default.

After this, proceed in the browser window and click next. The next page asks you to insert a verification code. Write the code you see in the app and hit verify before a new code is generated.

After this, Google Authenticator has been successfully enabled.

From now on, Google will ask for a verification code when you log in to Google. Just opening the app on your phone is enough to see the code. Other accounts can be added to the app from the plus sign button in the bottom right corner.

Please note that if you want to disable two-factor authentication, you must first disable it in Google’s settings (or whichever service is in question). Just removing Google Authenticator or the shared secret does not disable two-factor authentication; you might only end up locking yourself out of your account.

You have now significantly improved the security of your Google account. Next, you should check which other services that you use offer two-factor authentication.

Tuesday, 12 May 2020

How fun does my Ring Fit Adventure² look like after some time

So after a full “Ring Fit” month – i.e. 30 days played1 – let us see how this workout game held up for me.

Historic context: COVID-19 and Ring Fit Adventure

This might be less interesting for those reading now, but might be relevant information for anyone reading this in a few years.

The first part of the year 2020 has been marked by COVID-19. This disease has since grown into a pandemic, impacting the whole globe, with many countries going into a more or less full lockdown. This situation is slowly changing, as in May many countries in Europe are lifting their lockdowns.

As everyone who could had to stay at home, the demand for Ring Fit Adventure rose drastically. Not only is it almost impossible to get one, due to it being out of stock pretty much everywhere, there are even rumours that in China you can only get one on the black/grey market for 250 $ (i.e. more than 3x the MSRP).

30 workout days in 76 days

The main question is obviously how much did Ring Fit Adventure get me to work out.

First off, I have to admit that I did not train every work day, as I hoped, but still much more than usual.

One reason I skipped two full weeks was that I had to send the Joy-Cons in for repair, as their notorious drift started to show quite badly. This prevented me from working out, as without Joy-Cons I could not play it.

I took another two-week break from it before that, due to not feeling so well.

Now let us look at some data.

According to Ring Fit Adventure itself:

  • I reached world 9 of the story3
  • my character is now at level 75
  • and my difficulty setting is at level 20
  • the total time I exercised2 is 8h 18'
  • in which I burned 2670 kcal of energy
  • and ran a total of 28 km

Looking at my own log, this is how much I trained in the past few months:

yoga gymnastics rowing Ring Fit
2019-12 3
2020-01 1 1
2020-02 2 10
2020-03 12
2020-04 7
2020-05 3 1

This table needs some comment:

  • I got my Ring Fit Adventure in early February. But as you can see, my training regime was a bit lackluster in the two months before it. It was somewhat better the previous year, but I lost that piece of paper.
  • Rowing is something I did go to last year, but have not mustered the courage to go in several months … and then COVID-19 kicked in, so I am not even allowed to go (ha! another good excuse).
  • We are only half-way through May and I just got my joy-cons back from repair yesterday, so I was not able to play Ring Fit Adventure for the whole first half of the month. And the rest of the month is still ahead of me.

In my opinion, this clearly shows that Ring Fit Adventure did trick me into training more often. Including all the hiccups, it took me 76 days to get in 30 workout days (or 36, if you ask the game1), which is roughly every second day. As a comparison, to reach my goal of exercising every work day, I would have had to play it on 55 days in the same time frame.

In sum, I am happy with the results, but will try to train more in the future, with the hope to soon be able to go work out with others (e.g. go rowing).

Adventure content so far

As mentioned, in those 30 days I reached World 93 – in fact, I could have taken on the boss today and reached World 10, but decided to instead revisit some old levels and side quests.

What follows is a short summary of the new things Worlds 5 to 9 introduce to the game. I have to say that so far every world has brought something new and refreshing, so the gameplay never got stale. In fact, I have the small problem that I have almost too many fighting skills to choose from. Good thing that by now old fighting skills are getting levelled up, so they are again a reasonable choice to bring to battle.

What also proved as a great feature – especially after a longer break – is that when you start up Adventure mode, it summarises the story so far.

Update: Voices and languages

In the update, they also included more voice languages and the choice for Ring to have either a male or female voice.

I find that as a neat addition, and I am currently using a male French voice with English subtitles to brush up on my French while I am working out.

World 5 & 6

  • introduce days that target only specific skills/muscle groups, and do so in a very nice, fun and varied way
  • introduce skill points and a skill tree
  • gyms now also have skill sets – a great and easy way to get some targeted work out in Adventure mode as well

World 7

  • the plot twist is becoming apparent and the story is becoming more varied – while nothing award-winning it is actually enjoyable
  • unlocks All Sets as a choice for setting fighting skills – i.e. choosing to go to fights with workout skills that focus on a certain muscle group, posture, cardio workout, etc.
  • new enemy that can buff defense

World 8 & 9

  • expand the skill tree, including upgrades to previous fighting skills
  • adds a new movement skill that enables reaching alternative paths (also in previous levels), adding to the replayability of past levels; and if timed right, it doubles as battle evasion

Other modes

Since my last blog entry about Ring Fit Adventure, the game has received an update that introduces two new modes:

  • Rhythm game – in this mode you press and pull the RingCon and/or twist your torso to the beat of a song, while also occasionally having to squat or stand up. It seems like a pretty fun party game, especially if they introduce more songs. The only issue I have with it is that I have trouble keeping the RingCon in place when doing the fast twists.
  • Running mode – in this mode, you simply run through a level, without any mob encounters or similar. I have not played this mode much, but for a more lazy day, it seems fitting and relaxing.

Final general thoughts

To start with the negative: in the past weeks my RingCon did not recognise the transition from presses to pulls really well. But that was easily fixed with a simple recalibration from the options menu. I have not had those issues since.

So, after a full Ring Fit month, how do I feel about it?

While it did not make me lose much weight or gain any major muscles, my back has not ached since. Well, except maybe a little bit when I did not train for two weeks. But even then it took only one or two days of playing for it to stop aching again. I also do feel more fit in general.

In that regard I would say it is a great success.

Did it trick me into training more? Perhaps not as often as were my ambitious expectations, but it has undeniably improved my workout regime immensely.

I do think the story mode is a great way to pull me in, and apart from the fun story and overall atmosphere, I think this is in great part due to the RPG elements. I am not a big fan of the gamification of everything, and “adding RPG elements” is usually just a euphemism for introducing some levels, badges and collectibles to trick the player into investing more time. But in my opinion this game does it right.

That being said, I am continuing to work out using Ring Fit Adventure and will continue to bump up the difficulty to keep my workout to Moderate Workout.

hook out → very happy to be able to move and play again


  1. The game claims I have been playing for 36 days, but I suspect this includes non-adventure modes. 

  2. Ring Fit Adventure records only time actually spent exercising. For comparison, my Switch records that I played the game for 20 h. 

  3. Apparently there are 20 worlds in total (encompassing 100 levels all together) with a NG+ and NG++ as well. So with 30 full days, I am probably not even half way through. If the trend continues, the worlds are only going to get longer, not shorter. 

Network Namespaces - Part Two

Previously on… Network Namespaces - Part One we discussed how to create an isolated network namespace and use a veth interface pair to talk between the host system and the namespace.

In this article we continue our story and we will try to connect that namespace to the internet.

recap previous commands

ip netns add ebal
ip link add v-eth0 type veth peer name v-ebal
ip link set v-ebal netns ebal
ip addr add 10.10.10.10/24 dev v-eth0
ip netns exec ebal ip addr add 10.10.10.20/24 dev v-ebal
ip link set v-eth0 up
ip netns exec ebal ip link set v-ebal up

Access namespace

ip netns exec ebal bash

# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

3: v-ebal@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e2:07:60:da:d5:cf brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.10.20/24 scope global v-ebal
       valid_lft forever preferred_lft forever
    inet6 fe80::e007:60ff:feda:d5cf/64 scope link
       valid_lft forever preferred_lft forever

# ip r
10.10.10.0/24 dev v-ebal proto kernel scope link src 10.10.10.20

Ping Veth

It’s not a gateway, this is a point-to-point connection.

# ping -c3 10.10.10.10

PING 10.10.10.10 (10.10.10.10) 56(84) bytes of data.
64 bytes from 10.10.10.10: icmp_seq=1 ttl=64 time=0.415 ms
64 bytes from 10.10.10.10: icmp_seq=2 ttl=64 time=0.107 ms
64 bytes from 10.10.10.10: icmp_seq=3 ttl=64 time=0.126 ms

--- 10.10.10.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2008ms
rtt min/avg/max/mdev = 0.107/0.216/0.415/0.140 ms

ip-netns03

Ping internet

trying to access anything else …

ip netns exec ebal ping -c2 192.168.122.80
ip netns exec ebal ping -c2 192.168.122.1
ip netns exec ebal ping -c2 8.8.8.8
ip netns exec ebal ping -c2 google.com
root@ubuntu2004:~# ping 192.168.122.80
ping: connect: Network is unreachable

root@ubuntu2004:~# ping 8.8.8.8
ping: connect: Network is unreachable

root@ubuntu2004:~# ping google.com
ping: google.com: Temporary failure in name resolution

root@ubuntu2004:~# exit
exit

exit from namespace.

Gateway

We need to define a gateway route from within the namespace

ip netns exec ebal ip route add default via 10.10.10.10

root@ubuntu2004:~# ip netns exec ebal ip route list
default via 10.10.10.10 dev v-ebal
10.10.10.0/24 dev v-ebal proto kernel scope link src 10.10.10.20

test connectivity - system

We can reach the host system, but we cannot reach anything else:

# ip netns exec ebal ping -c1 192.168.122.80
PING 192.168.122.80 (192.168.122.80) 56(84) bytes of data.
64 bytes from 192.168.122.80: icmp_seq=1 ttl=64 time=0.075 ms

--- 192.168.122.80 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms

# ip netns exec ebal ping -c3 192.168.122.80
PING 192.168.122.80 (192.168.122.80) 56(84) bytes of data.
64 bytes from 192.168.122.80: icmp_seq=1 ttl=64 time=0.026 ms
64 bytes from 192.168.122.80: icmp_seq=2 ttl=64 time=0.128 ms
64 bytes from 192.168.122.80: icmp_seq=3 ttl=64 time=0.126 ms

--- 192.168.122.80 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2033ms
rtt min/avg/max/mdev = 0.026/0.093/0.128/0.047 ms

# ip netns exec ebal ping -c3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2044ms

root@ubuntu2004:~# ip netns exec ebal ping -c3 google.com
ping: google.com: Temporary failure in name resolution

[image: ip-netns05]

Forward

What is the issue here?

We added a default route to the network namespace. Traffic starts from v-ebal (the veth interface inside the namespace), reaches v-eth0 (the veth interface on our host) and then … nothing.

The host receives the network packets on v-eth0 but does not know what to do with them. We need to enable IP forwarding on the host, so that traffic coming from the namespace is forwarded towards the next hop.

echo 1 > /proc/sys/net/ipv4/ip_forward

or

sysctl -w net.ipv4.ip_forward=1

permanent forward

If we want the host to permanently forward traffic, we need to edit /etc/sysctl.conf and add the line below:

net.ipv4.ip_forward = 1

To enable this option without rebooting our system:

sysctl -p /etc/sysctl.conf

verify

root@ubuntu2004:~# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
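
Note: on some hosts, enabling ip_forward alone is not enough. If the firewall's FORWARD chain has a restrictive (DROP) policy, the forwarded packets must also be accepted explicitly. A minimal sketch, assuming v-eth0 is the host end of the veth pair:

# allow traffic forwarded to and from the namespace veth
iptables --append FORWARD --in-interface v-eth0 --jump ACCEPT
iptables --append FORWARD --out-interface v-eth0 --jump ACCEPT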

Masquerade

But if we test again, we will notice that nothing changed. Actually something did happen, just not what we expected. At this moment the host knows how to forward packets to the next hop (perhaps the next hop is the router or internet gateway), but the next hop will receive packets from an unknown network.

Remember that our internal address is 10.10.10.20, with a point-to-point connection to 10.10.10.10. So there is no way for the 192.168.122.0/24 network to know how to talk back to 10.0.0.0/8.

We have to masquerade all packets that come from 10.0.0.0/8, and the easiest way to do this is via iptables, using the POSTROUTING chain of the nat table. That means outgoing traffic with source 10.0.0.0/8 will be rewritten to appear to come from 192.168.122.80 (eth0) before going to the next hop (gateway).

# iptables --table nat --flush
iptables --table nat --append POSTROUTING --source 10.0.0.0/8 --jump MASQUERADE
iptables --table nat --list
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  10.0.0.0/8           anywhere
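
The rule above masquerades everything sourced from 10.0.0.0/8. A narrower sketch, assuming eth0 is the uplink interface, would rewrite only the namespace subnet and only on the outgoing interface:

iptables --table nat --append POSTROUTING --source 10.10.10.0/24 --out-interface eth0 --jump MASQUERADE

Keep in mind that iptables rules are not persistent across reboots.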

Test connectivity

Test the namespace connectivity again:

# ip netns exec ebal ping -c2 192.168.122.80
PING 192.168.122.80 (192.168.122.80) 56(84) bytes of data.
64 bytes from 192.168.122.80: icmp_seq=1 ttl=64 time=0.054 ms
64 bytes from 192.168.122.80: icmp_seq=2 ttl=64 time=0.139 ms

--- 192.168.122.80 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1017ms
rtt min/avg/max/mdev = 0.054/0.096/0.139/0.042 ms

# ip netns exec ebal ping -c2 192.168.122.1
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
64 bytes from 192.168.122.1: icmp_seq=1 ttl=63 time=0.242 ms
64 bytes from 192.168.122.1: icmp_seq=2 ttl=63 time=0.636 ms

--- 192.168.122.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1015ms
rtt min/avg/max/mdev = 0.242/0.439/0.636/0.197 ms

# ip netns exec ebal ping -c2 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=51 time=57.8 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=51 time=58.0 ms

--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 57.805/57.896/57.988/0.091 ms

# ip netns exec ebal ping -c2 google.com
ping: google.com: Temporary failure in name resolution

success

[image: ip-netns06]

DNS

Almost!

As you may have noticed above, pinging by IP works, but name resolution does not.

netns - resolv

Reading ip-netns manual

# man ip-netns | tbl | grep resolv

  resolv.conf for a network namespace used to isolate your vpn you would name it /etc/netns/myvpn/resolv.conf.

We can create a resolver configuration file at this location:
/etc/netns/<namespace>/resolv.conf

ip netns exec bind-mounts files under /etc/netns/<namespace>/ over their /etc counterparts, so processes inside the namespace will read this file as /etc/resolv.conf.

mkdir -pv /etc/netns/ebal/
echo nameserver 88.198.92.222 > /etc/netns/ebal/resolv.conf

I am using radicalDNS for this namespace.

Verify DNS

# ip netns exec ebal cat /etc/resolv.conf
nameserver 88.198.92.222
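
As an extra check, assuming dig from the dnsutils package is installed, we can resolve a name from within the namespace; dig will pick up the bind-mounted resolv.conf and query 88.198.92.222:

ip netns exec ebal dig +short ipv4.balaskas.gr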

Connect to the namespace

ip netns exec ebal bash

root@ubuntu2004:~# cat /etc/resolv.conf
nameserver 88.198.92.222

root@ubuntu2004:~# ping -c 5 ipv4.balaskas.gr
PING ipv4.balaskas.gr (158.255.214.14) 56(84) bytes of data.
64 bytes from ns14.balaskas.gr (158.255.214.14): icmp_seq=1 ttl=50 time=64.3 ms
64 bytes from ns14.balaskas.gr (158.255.214.14): icmp_seq=2 ttl=50 time=64.2 ms
64 bytes from ns14.balaskas.gr (158.255.214.14): icmp_seq=3 ttl=50 time=66.9 ms
64 bytes from ns14.balaskas.gr (158.255.214.14): icmp_seq=4 ttl=50 time=63.8 ms
64 bytes from ns14.balaskas.gr (158.255.214.14): icmp_seq=5 ttl=50 time=63.3 ms

--- ipv4.balaskas.gr ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 63.344/64.502/66.908/1.246 ms

root@ubuntu2004:~# ping -c3 google.com
PING google.com (172.217.22.46) 56(84) bytes of data.
64 bytes from fra15s16-in-f14.1e100.net (172.217.22.46): icmp_seq=1 ttl=51 time=57.4 ms
64 bytes from fra15s16-in-f14.1e100.net (172.217.22.46): icmp_seq=2 ttl=51 time=55.4 ms
64 bytes from fra15s16-in-f14.1e100.net (172.217.22.46): icmp_seq=3 ttl=51 time=55.2 ms

--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 55.209/55.984/57.380/0.988 ms

bonus - run firefox from within namespace

xterm

Start with something simple first, like xterm:

ip netns exec ebal xterm

or

ip netns exec ebal xterm -fa liberation -fs 11

[image: ipnetns_xterm]

test firefox

Trying to run firefox within this namespace will produce an error:

# ip netns exec ebal firefox
Running Firefox as root in a regular user's session is not supported.  ($XAUTHORITY is /home/ebal/.Xauthority which is owned by ebal.)

and xauth info will inform us that the current Xauthority file is owned by our local user:

# xauth info
Authority file:       /home/ebal/.Xauthority
File new:             no
File locked:          no
Number of entries:    4
Changes honored:      yes
Changes made:         no
Current input:        (argv):1

Okay, get inside this namespace:

ip netns exec ebal bash

and provide a new authority file for firefox

XAUTHORITY=/root/.Xauthority firefox

# XAUTHORITY=/root/.Xauthority firefox

No protocol specified
Unable to init server: Could not connect: Connection refused
Error: cannot open display: :0.0

xhost

xhost provides access control to the Xorg graphical server.
By default it should look like this:

$ xhost
access control enabled, only authorized clients can connect

We could disable access control altogether:

xhost +

but what we actually need is to disable access control only for local connections:

xhost +local:

firefox

And if we do all that:

ip netns exec ebal bash -c "XAUTHORITY=/root/.Xauthority firefox"

[image: ipnetns-firefox]
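
When you are done experimenting, everything can be torn down again. A minimal cleanup sketch; deleting the namespace also destroys the veth pair, because one of its ends lives inside the namespace:

# remove the per-namespace resolver, the NAT rule and the namespace
rm -rv /etc/netns/ebal/
iptables --table nat --delete POSTROUTING --source 10.0.0.0/8 --jump MASQUERADE
ip netns del ebal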

End of part two

Saturday, 09 May 2020

Network Namespaces - Part One

Have you ever wondered how containers work on the network level? How they isolate resources and network access? Linux namespaces are the magic behind all this, and in this blog post I will try to explain how to set up your own private, isolated network stack on your Linux box.

notes based on ubuntu:20.04, root access.

current setup

Our current setup is similar to this

[image: ip-netns00]

List ethernet cards

ip address list

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:ea:50:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.80/24 brd 192.168.122.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feea:5087/64 scope link
       valid_lft forever preferred_lft forever

List routing table

ip route list

default via 192.168.122.1 dev eth0 proto static
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.80

[image: ip-netns01]

Checking internet access and dns

ping -c 5 libreops.cc

PING libreops.cc (185.199.111.153) 56(84) bytes of data.
64 bytes from 185.199.111.153 (185.199.111.153): icmp_seq=1 ttl=54 time=121 ms
64 bytes from 185.199.111.153 (185.199.111.153): icmp_seq=2 ttl=54 time=124 ms
64 bytes from 185.199.111.153 (185.199.111.153): icmp_seq=3 ttl=54 time=182 ms
64 bytes from 185.199.111.153 (185.199.111.153): icmp_seq=4 ttl=54 time=162 ms
64 bytes from 185.199.111.153 (185.199.111.153): icmp_seq=5 ttl=54 time=168 ms

--- libreops.cc ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4004ms
rtt min/avg/max/mdev = 121.065/151.405/181.760/24.299 ms

linux network namespace management

In this article we will mainly use ip(8), the network configuration tool from the iproute2 package.

So, let us start working with network namespaces.

list

To view the network namespaces, we can type:

ip netns
ip netns list

This will return nothing, an empty list.

help

So let's quickly view the help of ip-netns:

# ip netns help

Usage:  ip netns list
  ip netns add NAME
  ip netns attach NAME PID
  ip netns set NAME NETNSID
  ip [-all] netns delete [NAME]
  ip netns identify [PID]
  ip netns pids NAME
  ip [-all] netns exec [NAME] cmd ...
  ip netns monitor
  ip netns list-id [target-nsid POSITIVE-INT] [nsid POSITIVE-INT]
NETNSID := auto | POSITIVE-INT
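
A couple of these subcommands are handy for exploring; a small sketch, using the ebal namespace we are about to create below:

# print the namespace a process belongs to (empty for the default one)
ip netns identify $$
# list the processes running inside a namespace
ip netns pids ebal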

monitor

To monitor any changes in real time, we can open a new terminal and type:

ip netns monitor

Add a new namespace

ip netns add ebal

List namespaces

ip netns list

root@ubuntu2004:~# ip netns add ebal
root@ubuntu2004:~# ip netns list
ebal

We now have one namespace.

Delete Namespace

ip netns del ebal

Full example

root@ubuntu2004:~# ip netns
root@ubuntu2004:~# ip netns list
root@ubuntu2004:~# ip netns add ebal
root@ubuntu2004:~# ip netns list
ebal
root@ubuntu2004:~# ip netns
ebal
root@ubuntu2004:~# ip netns del ebal
root@ubuntu2004:~#
root@ubuntu2004:~# ip netns
root@ubuntu2004:~# ip netns list
root@ubuntu2004:~#

monitor

root@ubuntu2004:~# ip netns monitor
add ebal
delete ebal

Directory

When we create a new network namespace, it creates an object under /var/run/netns/.

root@ubuntu2004:~# ls -l /var/run/netns/
total 0
-r--r--r-- 1 root root 0 May  9 16:44 ebal
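
This file is actually a mount point: ip netns bind-mounts the namespace's nsfs handle onto it, which is why the namespace survives even with no processes inside it. Assuming util-linux is installed, findmnt can show it:

# list nsfs mounts; the ebal namespace appears as /run/netns/ebal
findmnt -t nsfs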

exec

We can run commands inside a namespace, e.g.:

ip netns exec ebal ip a

root@ubuntu2004:~# ip netns exec ebal ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

bash

We can also open a shell inside the namespace and run commands through it, e.g.:

root@ubuntu2004:~# ip netns exec ebal bash

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

root@ubuntu2004:~# exit
exit

[image: ip-netns02]

As you can see, the namespace is isolated from our system. It has only one loopback interface and nothing else.

We can bring the loopback interface up:

root@ubuntu2004:~# ip link set lo up

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

root@ubuntu2004:~# ip r

veth

The veth devices are virtual Ethernet devices. They can act as tunnels between network namespaces to create a bridge to a physical network device in another namespace, but can also be used as standalone network devices.

Think of a veth pair as a physical cable that connects two different computers; each veth interface is one end of this cable.

So we need 2 virtual interfaces to connect our system and the new namespace.

ip link add v-eth0 type veth peer name v-ebal

[image: ip-netns03]

For example:

root@ubuntu2004:~# ip link add v-eth0 type veth peer name v-ebal

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:ea:50:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.80/24 brd 192.168.122.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feea:5087/64 scope link
       valid_lft forever preferred_lft forever

3: v-ebal@v-eth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d6:86:88:3f:eb:42 brd ff:ff:ff:ff:ff:ff

4: v-eth0@v-ebal: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 3e:85:9b:dd:c7:96 brd ff:ff:ff:ff:ff:ff

Attach v-ebal to namespace

Now we are going to move one virtual interface (one end of the cable) to the new network namespace:

ip link set v-ebal netns ebal

[image: ip-netns03]

We will see that the interface is no longer shown on our system:

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:ea:50:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.80/24 brd 192.168.122.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feea:5087/64 scope link
       valid_lft forever preferred_lft forever

4: v-eth0@if3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 3e:85:9b:dd:c7:96 brd ff:ff:ff:ff:ff:ff link-netns ebal

inside the namespace

root@ubuntu2004:~# ip netns exec ebal ip a 

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

3: v-ebal@if4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d6:86:88:3f:eb:42 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Connect the two virtual interfaces

outside

ip addr add 10.10.10.10/24 dev v-eth0

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:ea:50:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.80/24 brd 192.168.122.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feea:5087/64 scope link
       valid_lft forever preferred_lft forever

4: v-eth0@if3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 3e:85:9b:dd:c7:96 brd ff:ff:ff:ff:ff:ff link-netns ebal
    inet 10.10.10.10/24 scope global v-eth0
       valid_lft forever preferred_lft forever

inside

ip netns exec ebal ip addr add 10.10.10.20/24 dev v-ebal

root@ubuntu2004:~# ip netns exec ebal ip a 

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

3: v-ebal@if4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d6:86:88:3f:eb:42 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.10.20/24 scope global v-ebal
       valid_lft forever preferred_lft forever

Both Interfaces are down!

Both interfaces are still down, so now we need to bring them up:

outside

ip link set v-eth0 up

root@ubuntu2004:~# ip link set v-eth0 up 

root@ubuntu2004:~# ip link show v-eth0
4: v-eth0@if3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN mode DEFAULT group default qlen 1000
    link/ether 3e:85:9b:dd:c7:96 brd ff:ff:ff:ff:ff:ff link-netns ebal

inside

ip netns exec ebal ip link set v-ebal up

root@ubuntu2004:~# ip netns exec ebal ip link set v-ebal up

root@ubuntu2004:~# ip netns exec ebal ip link show v-ebal
3: v-ebal@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether d6:86:88:3f:eb:42 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Did it work?

Let’s first see our routing table:

outside

root@ubuntu2004:~# ip r
default via 192.168.122.1 dev eth0 proto static
10.10.10.0/24 dev v-eth0 proto kernel scope link src 10.10.10.10
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.80

inside

root@ubuntu2004:~# ip netns exec ebal ip r
10.10.10.0/24 dev v-ebal proto kernel scope link src 10.10.10.20

Ping!

outside

root@ubuntu2004:~# ping -c 5 10.10.10.20
PING 10.10.10.20 (10.10.10.20) 56(84) bytes of data.
64 bytes from 10.10.10.20: icmp_seq=1 ttl=64 time=0.028 ms
64 bytes from 10.10.10.20: icmp_seq=2 ttl=64 time=0.042 ms
64 bytes from 10.10.10.20: icmp_seq=3 ttl=64 time=0.052 ms
64 bytes from 10.10.10.20: icmp_seq=4 ttl=64 time=0.042 ms
64 bytes from 10.10.10.20: icmp_seq=5 ttl=64 time=0.071 ms

--- 10.10.10.20 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4098ms
rtt min/avg/max/mdev = 0.028/0.047/0.071/0.014 ms

inside

root@ubuntu2004:~# ip netns exec ebal bash
root@ubuntu2004:~#
root@ubuntu2004:~# ping -c 5 10.10.10.10
PING 10.10.10.10 (10.10.10.10) 56(84) bytes of data.
64 bytes from 10.10.10.10: icmp_seq=1 ttl=64 time=0.046 ms
64 bytes from 10.10.10.10: icmp_seq=2 ttl=64 time=0.042 ms
64 bytes from 10.10.10.10: icmp_seq=3 ttl=64 time=0.041 ms
64 bytes from 10.10.10.10: icmp_seq=4 ttl=64 time=0.044 ms
64 bytes from 10.10.10.10: icmp_seq=5 ttl=64 time=0.053 ms

--- 10.10.10.10 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4088ms
rtt min/avg/max/mdev = 0.041/0.045/0.053/0.004 ms
root@ubuntu2004:~# exit
exit

It worked!

[image: ip-netns03]

End of part one.

Thursday, 07 May 2020

HamBSD Development Log 2020-05-07

I worked on HamBSD today, still looking at improvements to aprsisd(8). My focus today was on writing unit tests for aprsisd.

I’ve added a few unit tests to test the generation of the TNC2 format packets from AX.25 packets to upload to APRS-IS. There are still some TODO entries there, as I’ve not made up packets for all the cases I wanted to check yet.

These are the first unit tests I’ve written for HamBSD and it’s definitely a different experience compared to writing Python unit tests for example. The framework for the tests uses bsd.regress.mk(5). The tests are C programs that include functions from aprsisd.

In order to do this I’ve had to split the function that converts AX.25 packets to TNC2 packets out into a separate file. This is the sort of thing that I’d be more comfortable doing if I had more unit test coverage. It seemed to go OK and hopefully the coverage will improve as I get more used to writing tests in this way.

I also corrected a bug from yesterday where AX.25 3rd-party packets would have their length over-reported, leaking stack memory to RF.

I’ve been reading up on the station capabilities packet and it seems a lot of fields have been added by various software over time. Successful APRS IGate Operation (WB2OSZ, 2017) has a list of some of the fields and where they came from under “IGate Status Beacon” which seems to be what this packet is used for, not necessarily advertising the capabilities of the whole station.

If this packet were to become quite long, there is the possibility of an amplification attack: someone with a low-power transmitter can send an IGate query and get the IGate to respond with a much longer packet at higher power. It’s not even clear from the reply why the packet is being sent, as the requestor is not mentioned in it.

I think this will be the first place where I implement some rate limiting and see how that works. Collecting some simple statistics like the number of packets uploaded/downloaded would also be useful for monitoring.

Next steps:

  • Keep track of number of packets uploaded and downloaded
  • Add those statistics to station capabilities packet

Wednesday, 06 May 2020

cloudflared as a doh client with libredns

Cloudflare has released an Argo Tunnel client named: cloudflared. It’s also a DNS over HTTPS (DoH) client and in this blog post, I will describe how to use cloudflared with LibreDNS, a public encrypted DNS service that people can use to maintain the secrecy of their DNS traffic, but also circumvent censorship.

Notes based on ubuntu 20.04, as root

[image: cloudflared]

Download and install latest stable version

curl -sLO https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-amd64.tgz

tar xf cloudflared-stable-linux-amd64.tgz

ls -l
total 61160
-rwxr-xr-x 1 root root 43782944 May  6 03:45 cloudflared
-rw-r--r-- 1 root root 18839814 May  6 19:42 cloudflared-stable-linux-amd64.tgz

mv cloudflared /usr/local/bin/

check version

# cloudflared --version
cloudflared version 2020.5.0 (built 2020-05-06-0335 UTC)

doh support

# cloudflared proxy-dns --help
NAME:
   cloudflared proxy-dns - Run a DNS over HTTPS proxy server.

USAGE:
   cloudflared proxy-dns [command options]

LibreDNS Endpoints

LibreDNS has two endpoints:

  • dns-query
  • ads

The latter also blocks trackers, ads, etc.

standalone

We can run cloudflared standalone for testing; here it is on a non-standard TCP port:

cloudflared proxy-dns --upstream https://doh.libredns.gr/ads --port 5454
INFO[0000] Adding DNS upstream                   url="https://doh.libredns.gr/ads"
INFO[0000] Starting DNS over HTTPS proxy server  addr="dns://localhost:5454"
INFO[0000] Starting metrics server               addr="127.0.0.1:41717/metrics"

Testing ads endpoint

$ dig @127.0.0.1 -p 5454 +short analytics.google.com
0.0.0.0
$ dig @127.0.0.1 -p 5454 +short google.com
216.58.210.14
$ dig @127.0.0.1 -p 5454 +short test.libredns.gr
116.202.176.26
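
Note in the log output above that cloudflared also starts a small metrics endpoint on a random local port. A quick sketch to peek at it, substituting the port printed in your own log:

curl -s 127.0.0.1:41717/metrics | head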

conf

We have verified that cloudflared works with libredns, so let us create a configuration file.

By default, cloudflared tries to find one of the files below (replace root with your user):

  • /root/.cloudflared/config.yaml
  • /root/.cloudflared/config.yml
  • /root/.cloudflare-warp/config.yaml
  • /root/cloudflare-warp/config.yaml
  • /root/.cloudflare-warp/config.yml
  • /root/cloudflare-warp/config.yml
  • /usr/local/etc/cloudflared/config.yml

The most promising file is:

  • /usr/local/etc/cloudflared/config.yml

Create the configuration file

mkdir -pv /usr/local/etc/cloudflared
cat > /usr/local/etc/cloudflared/config.yml << EOF
proxy-dns: true
proxy-dns-upstream:
 - https://doh.libredns.gr/dns-query
EOF

or for ads endpoint

mkdir -pv /usr/local/etc/cloudflared
cat > /usr/local/etc/cloudflared/config.yml << EOF
proxy-dns: true
proxy-dns-upstream:
 - https://doh.libredns.gr/ads
EOF

Testing

# cloudflared

INFO[0000] Version 2020.5.0
INFO[0000] GOOS: linux, GOVersion: go1.12.7, GoArch: amd64
INFO[0000] Flags                                         proxy-dns=true proxy-dns-upstream="https://doh.libredns.gr/ads"
INFO[0000] Adding DNS upstream                           url="https://doh.libredns.gr/ads"
INFO[0000] Starting DNS over HTTPS proxy server          addr="dns://localhost:53"
INFO[0000] Starting metrics server                       addr="127.0.0.1:33519/metrics"
INFO[0000] cloudflared will not automatically update when run from the shell. To enable auto-updates, run cloudflared as a service: https://developers.cloudflare.com/argo-tunnel/reference/service/
$ dig test.libredns.gr +short
116.202.176.26

Service

If you are a user of Argo Tunnel and you have a Cloudflare account, then you can log in, get your cert.pem key, and (only then) install cloudflared as a service:

 cloudflared service install

You can use /etc/cloudflared or /usr/local/etc/cloudflared/; the directory must contain two files:

  • cert.pem and
  • config.yml (the above file)
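
On a systemd-based distribution, the service install step registers a cloudflared unit; a quick sketch to verify it, assuming systemd:

systemctl status cloudflared
journalctl -u cloudflared --since today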

That’s it!

HamBSD Development Log 2020-05-06

I worked on HamBSD today, still looking at improvements to aprsisd(8). My focus today was on gating from the Internet to RF.

In the morning I cleaned up the mess I made yesterday with escaping the non-printable characters in packets before uploading them. I ran some test packets through and both Xastir and aprs.fi could decode them so that must be the correct way to do it.

I also added filtering of generic station queries (?APRS?) and IGate queries (?IGATE?). When an IGate query is seen, aprsisd will now respond with a station capabilities packet. The packet is not very exciting as it only contains the “IGATE” capability right now, but at least it does that.

Third-party packets are also identified and have their RF headers stripped, but they are then unconditionally thrown away. I need to add a check there to see if TCPIP is in the path, and gate the packet if it’s not, but I didn’t get to that today.

I added a new flag to aprsisd, -b, to allow the operator to indicate whether or not this station is a bi-directional IGate station. This currently only affects the q construct added to the path for uploaded packets (there wasn’t one before today) to indicate to APRS-IS whether or not this station can be used for two-way messaging with RF stations in range. Later I’ll make this also drop incoming messages if it’s set to receive-only instead of attempting to gate them anyway.

I noticed that when I connect to aprsc servers with TLS, they have actually been appending a qAS (message generated by server) instead of qAR (message is from RF) which I think is a bug, so I filed a GitHub issue.

The big thing today was generating third-party headers. Until now, aprsisd has tried to stuff the TNC2 header back into an AX.25 packet with a lot of truncated path entries. It’s now building the headers correctly(ish) and it’s possible to have bi-directional messaging through a HamBSD IGate. The path in the encapsulated header is currently entirely missing but it still works.

As this is a completely different way of handling these packets, it meant a rewrite of a good chunk of code. The skeleton is there now, just need to fill it in.

APRS message from MM0ROR: Hello from APRS-IS!

Next steps:

  • Generate proper paths for 3rd-party packets
  • Include a path for RF headers for station capabilities and 3rd-party packets
  • Add the new -b flag to the man page

Tuesday, 05 May 2020

HamBSD Development Log 2020-05-05

I worked on HamBSD today, still looking at improvements to aprsisd(8). My focus today was on converting AX.25 packets to the TNC2 format used by APRS-IS.

I fixed the path formatting to include the asterisks for used path entries. Before packets would always appear to APRS-IS to have been heard directly, which gave some impressive range statistics for packets that had in fact been through one or two digipeaters.

A little more filtering is now implemented for packets. The control field and PID are verified to ensure the packets are APRS packets.

The entire path, from the AX.25 packet read from the axtap(4) interface to the TNC2 formatted string going out the TCP/TLS connection, now has bounds checks, with almost all string functions replaced with their mem* equivalents.

It wasn’t clear if it’s necessary to escape the non-printable characters in packets before sending to APRS-IS, and it turns out that actually you’re not meant to do that. I’d implemented this with the following (based roughly on how the KISS output escaping works in kiss(4)):

icp = AX25_INFO_PTR(pkt_ax25, pi);      /* cursor into the info field */
iep = pkt_ax25 + ax25_len;              /* one past the end of the packet */
while (icp < iep) {
        ibp = icp;                      /* start of the current printable run */
        while (icp < iep) {
                if (!isprint(*icp++)) {
                        icp--;          /* step back onto the non-printable byte */
                        break;
                }
        }
        if (icp > ibp) {
                if (tp + (icp - ibp) > TNC2_MAXLINE)
                        /* Too big for a TNC2 format line */
                        return 0;
                /* copy the printable run verbatim */
                memcpy(&pkt_tnc2[tp], ibp, icp - ibp);
                tp += icp - ibp;
        }
        if (icp < iep) {
                if (tp + 6 > TNC2_MAXLINE)
                        /* Too big for a TNC2 format line */
                        return 0;
                /* emit the non-printable byte as a <0xNN> escape */
                pkt_tnc2[tp++] = '<';
                pkt_tnc2[tp++] = '0';
                pkt_tnc2[tp++] = 'x';
                pkt_tnc2[tp++] = hex_chars[(*icp >> 4) & 0xf];
                pkt_tnc2[tp++] = hex_chars[*icp & 0xf];
                pkt_tnc2[tp++] = '>';
                icp++;
        }
}

I can now probably replace this with just a single bounds check and memcpy, but then I need to worry about logging. There is a debug log for every packet, for which I’ll probably just call strvis(3).

This did throw up something interesting though, so maybe this wasn’t a complete waste of time. I noticed that a “<0x0d>” was getting appended to packets coming out of my Yaesu VX-8DE. It turns out that this wasn’t a bug in my code or in aprsc (the APRS-IS server software I was connected to); it’s actually a real byte that is tacked onto the end of every APRS packet generated by the radio’s firmware. I never saw it before because aprsc would interpret this byte (ASCII carriage return) as the end of a packet, so it would just be lost.

Next steps:

  • Removing the non-printable character escaping again
  • Filtering generic APRS queries (to avoid packet floods)
  • Filtering 3rd-party packets

On Biometric Authentication

Biometric systems such as fingerprint readers or facial recognition may be appealing, but they have drawbacks worth mentioning: their attack surface is larger than commonly believed, they make it hard to enforce good security practices, and they are not guaranteed to work all the time. These flaws should be considered when assessing the value of biometric authentication.

It opens the door to a lot of attacks

With a regular password-based method, which doesn't require things like scanning fingers or smiling at a camera, attackers don't have many opportunities to interfere with the login process: either the system validating the authentication must be compromised, or the shared secret must be guessed. Biometric authentication doesn't work like that. Because the secret is, by definition, tied to your body, an attacker can easily make a copy of it: they just have to capture the biometric data, and we leave plenty of it around. Think about the number of fingerprints we leave every day, the number of pictures of people online, the number of times our voices may be recorded, and more generally the number of biometric traces we leave. In 2014, hackers replicated fingerprints from high-resolution photos and showed how to fool fingerprint readers. One has to keep in mind that biometric data is not bullet-proof and has a wide attack surface. Avoiding biometric data leaks and protecting against replay attacks is complicated.

Incompatibility with security best practices

Password rotation is central to authentication systems. It reduces the impact of password leaks and makes it harder for attackers to bruteforce passwords. But how is one supposed to rotate passwords in the context of biometric-based authentication systems? Eyes and fingerprints, by design, last a lifetime. If an attacker has found a way to replicate your fingerprint, there is no way for you to change the authentication secret (well, you can use another finger, but that's unsustainable). For example, hackers stole personal data from the Office of Personnel Management, the agency responsible for security clearances in the United States. Murphy's law is always lurking: what can be hacked will be hacked. What will happen when ancestry.com gets breached? How can one defend herself against such data breaches? For now, it's hard to say. Attackers may have to adapt to improvements in biometric scanners, but unless the whole authentication system gets reworked, they can be sure the secret is most likely never changed.

It would be dangerous to use the same key for two different doors. If the key is stolen or replicated, then both doors are compromised. In digital authentication systems, reusing secrets is also considered a bad practice. But what does it mean in the context of biometric systems? If two services use voice for authentication, how is one supposed to use a different secret for each service? Biometric authentication amounts to built-in password reuse.

Success rates

Password authentication with the right password never fails. Comparing two hashes is a simple operation which works well in practice. Biometric authentication, on the other hand, has a success rate, meaning that it's not guaranteed to work properly. The right fingerprint may be considered invalid or, even worse, the wrong fingerprint may be considered valid. Who wants an imperfect authentication system? Even a hypothetical 99.9% success rate is not enough; we can't afford an average of 1 error every 1000 attempts. Even though it's a bit old, this essay gives an overview of fingerprint reader accuracy. It's still far from perfect.
