Free Software, Free Society!
Thoughts of the FSFE Community (English)

Sunday, 08 December 2019

Kubernetes as a Service with Rancher2 at Hetzner using Terraform and Helm

In this blog post you will find my personal notes on how to set up Kubernetes as a Service (KaaS). I will be using Terraform to create the infrastructure on Hetzner’s VMs, Rancher for KaaS and Helm to install the first application on the Kubernetes cluster.

rke_k8s.png

Many thanks to my dear friend adamo for his help.

Terraform

Let’s build our infrastructure!
We are going to use Terraform to build five VMs:

  • One (1) master
  • One (1) etcd
  • Two (2) workers
  • One (1) for the Web dashboard

I will not go into much detail about Terraform, but here is a basic idea:

Provider.tf

provider "hcloud" {
    token = var.hcloud_token
}

Hetzner.tf

data "template_file" "userdata" {
  template = "${file("user-data.yml")}"
  vars = {
    hostname = var.domain
    sshdport = var.ssh_port
  }
}

resource "hcloud_server" "node" {
  count       = 5
  name        = "rke-${count.index}"
  image       = "ubuntu-18.04"
  server_type = "cx11"
  user_data   = data.template_file.userdata.rendered
}

Output.tf

output "IPv4" {
  value = hcloud_server.node.*.ipv4_address
}

In my user-data (cloud-init) template, the most important lines are these

  - usermod -a -G docker deploy
  - ufw allow 6443/tcp
  - ufw allow 2379/tcp
  - ufw allow 2380/tcp
  - ufw allow 80/tcp
  - ufw allow 443/tcp
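
Once a node is up, you can quickly confirm that cloud-init applied these firewall rules by logging in and checking ufw on the node itself:

$ sudo ufw status verbose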

build infra

$ terraform init
$ terraform plan
$ terraform apply

output

IPv4 = [
  "78.47.6x.yyy",
  "78.47.1x.yyy",
  "78.46.2x.yyy",
  "78.47.7x.yyy",
  "78.47.4x.yyy",
]
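
If you need these addresses again later, terraform output can re-print them from the saved state at any time (run it in the same directory as the apply above):

$ terraform output IPv4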

In the end we will see something like this in the Hetzner Cloud console:

hetzner VMs

Rancher Kubernetes Engine

Take a look here for more details about what is required and important on using rke: Requirements.

We are going to use rke, the Rancher Kubernetes Engine, an extremely simple, lightning-fast Kubernetes installer that works everywhere.

download

Download the latest binary from GitHub:
Release v1.0.0

$ curl -sLO https://github.com/rancher/rke/releases/download/v1.0.0/rke_linux-amd64
$ chmod +x rke_linux-amd64
$ sudo mv rke_linux-amd64 /usr/local/bin/rke

version

$ rke --version

rke version v1.0.0

rke config

We are ready to configure our Kubernetes Infrastructure using the first 4 VMs.

$ rke config

master

[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]:
[+] Number of Hosts [1]: 4
[+] SSH Address of host (1) [none]: 78.47.6x.yyy
[+] SSH Port of host (1) [22]:
[+] SSH Private Key Path of host (78.47.6x.yyy) [none]:
[-] You have entered empty SSH key path, trying fetch from SSH key parameter
[+] SSH Private Key of host (78.47.6x.yyy) [none]:
[-] You have entered empty SSH key, defaulting to cluster level SSH key: ~/.ssh/id_rsa
[+] SSH User of host (78.47.6x.yyy) [ubuntu]:
[+] Is host (78.47.6x.yyy) a Control Plane host (y/n)? [y]:
[+] Is host (78.47.6x.yyy) a Worker host (y/n)? [n]: n
[+] Is host (78.47.6x.yyy) an etcd host (y/n)? [n]: n
[+] Override Hostname of host (78.47.6x.yyy) [none]: rke-master
[+] Internal IP of host (78.47.6x.yyy) [none]:
[+] Docker socket path on host (78.47.6x.yyy) [/var/run/docker.sock]: 

etcd

[+] SSH Address of host (2) [none]: 78.47.1x.yyy
[+] SSH Port of host (2) [22]:
[+] SSH Private Key Path of host (78.47.1x.yyy) [none]:
[-] You have entered empty SSH key path, trying fetch from SSH key parameter
[+] SSH Private Key of host (78.47.1x.yyy) [none]:
[-] You have entered empty SSH key, defaulting to cluster level SSH key: ~/.ssh/id_rsa
[+] SSH User of host (78.47.1x.yyy) [ubuntu]:
[+] Is host (78.47.1x.yyy) a Control Plane host (y/n)? [y]: n
[+] Is host (78.47.1x.yyy) a Worker host (y/n)? [n]: n
[+] Is host (78.47.1x.yyy) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (78.47.1x.yyy) [none]: rke-etcd
[+] Internal IP of host (78.47.1x.yyy) [none]:
[+] Docker socket path on host (78.47.1x.yyy) [/var/run/docker.sock]: 

workers

worker-01

[+] SSH Address of host (3) [none]: 78.46.2x.yyy
[+] SSH Port of host (3) [22]:
[+] SSH Private Key Path of host (78.46.2x.yyy) [none]:
[-] You have entered empty SSH key path, trying fetch from SSH key parameter
[+] SSH Private Key of host (78.46.2x.yyy) [none]:
[-] You have entered empty SSH key, defaulting to cluster level SSH key: ~/.ssh/id_rsa
[+] SSH User of host (78.46.2x.yyy) [ubuntu]:
[+] Is host (78.46.2x.yyy) a Control Plane host (y/n)? [y]: n
[+] Is host (78.46.2x.yyy) a Worker host (y/n)? [n]: y
[+] Is host (78.46.2x.yyy) an etcd host (y/n)? [n]: n
[+] Override Hostname of host (78.46.2x.yyy) [none]: rke-worker-01
[+] Internal IP of host (78.46.2x.yyy) [none]:
[+] Docker socket path on host (78.46.2x.yyy) [/var/run/docker.sock]: 

worker-02

[+] SSH Address of host (4) [none]: 78.47.4x.yyy
[+] SSH Port of host (4) [22]:
[+] SSH Private Key Path of host (78.47.4x.yyy) [none]:
[-] You have entered empty SSH key path, trying fetch from SSH key parameter
[+] SSH Private Key of host (78.47.4x.yyy) [none]:
[-] You have entered empty SSH key, defaulting to cluster level SSH key: ~/.ssh/id_rsa
[+] SSH User of host (78.47.4x.yyy) [ubuntu]:
[+] Is host (78.47.4x.yyy) a Control Plane host (y/n)? [y]: n
[+] Is host (78.47.4x.yyy) a Worker host (y/n)? [n]: y
[+] Is host (78.47.4x.yyy) an etcd host (y/n)? [n]: n
[+] Override Hostname of host (78.47.4x.yyy) [none]: rke-worker-02
[+] Internal IP of host (78.47.4x.yyy) [none]:
[+] Docker socket path on host (78.47.4x.yyy) [/var/run/docker.sock]: 

Network Plugin Type

[+] Network Plugin Type (flannel, calico, weave, canal) [canal]: 

rke_config

[+] Authentication Strategy [x509]:
[+] Authorization Mode (rbac, none) [rbac]: none
[+] Kubernetes Docker image [rancher/hyperkube:v1.16.3-rancher1]:
[+] Cluster domain [cluster.local]:
[+] Service Cluster IP Range [10.43.0.0/16]:
[+] Enable PodSecurityPolicy [n]:
[+] Cluster Network CIDR [10.42.0.0/16]:
[+] Cluster DNS Service IP [10.43.0.10]:
[+] Add addon manifest URLs or YAML files [no]: 

cluster.yml

The rke config command produces a cluster.yml file for us to review, or to edit in case of misconfiguration:

$ ls -l cluster.yml
-rw-r----- 1 ebal ebal 4720 Dec  7 20:57 cluster.yml

rke up

We are ready to set up our KaaS by running:

$ rke up
INFO[0000] Running RKE version: v1.0.0
INFO[0000] Initiating Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [78.47.6x.yyy]
INFO[0000] [dialer] Setup tunnel for host [78.47.1x.yyy]
INFO[0000] [dialer] Setup tunnel for host [78.46.2x.yyy]
INFO[0000] [dialer] Setup tunnel for host [78.47.7x.yyy]
...
INFO[0329] [dns] DNS provider coredns deployed successfully
INFO[0329] [addons] Setting up Metrics Server
INFO[0329] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0329] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0329] [addons] Executing deploy job rke-metrics-addon
INFO[0335] [addons] Metrics Server deployed successfully
INFO[0335] [ingress] Setting up nginx ingress controller
INFO[0335] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0335] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0335] [addons] Executing deploy job rke-ingress-controller
INFO[0341] [ingress] ingress controller nginx deployed successfully
INFO[0341] [addons] Setting up user addons
INFO[0341] [addons] no user addons defined
INFO[0341] Finished building Kubernetes cluster successfully 

Kubernetes

rke also produces a local kubeconfig file that we can use to connect to the Kubernetes cluster:

kube_config_cluster.yml

Let’s test our k8s !

$ kubectl --kubeconfig=kube_config_cluster.yml get nodes -A
NAME           STATUS   ROLES          AGE    VERSION
rke-etcd       Ready    etcd           2m5s   v1.16.3
rke-master     Ready    controlplane   2m6s   v1.16.3
rke-worker-1   Ready    worker         2m4s   v1.16.3
rke-worker-2   Ready    worker         2m2s   v1.16.3

$ kubectl --kubeconfig=kube_config_cluster.yml get pods -A
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-67cf578fc4-nlbb6     1/1     Running     0          96s
ingress-nginx   nginx-ingress-controller-7scft            1/1     Running     0          96s
ingress-nginx   nginx-ingress-controller-8bmmm            1/1     Running     0          96s
kube-system     canal-4x58t                               2/2     Running     0          114s
kube-system     canal-fbr2w                               2/2     Running     0          114s
kube-system     canal-lhz4x                               2/2     Running     1          114s
kube-system     canal-sffwm                               2/2     Running     0          114s
kube-system     coredns-57dc77df8f-9h648                  1/1     Running     0          24s
kube-system     coredns-57dc77df8f-pmtvk                  1/1     Running     0          107s
kube-system     coredns-autoscaler-7774bdbd85-qhs9g       1/1     Running     0          106s
kube-system     metrics-server-64f6dffb84-txglk           1/1     Running     0          101s
kube-system     rke-coredns-addon-deploy-job-9dhlx        0/1     Completed   0          110s
kube-system     rke-ingress-controller-deploy-job-jq679   0/1     Completed   0          98s
kube-system     rke-metrics-addon-deploy-job-nrpjm        0/1     Completed   0          104s
kube-system     rke-network-plugin-deploy-job-x7rt9       0/1     Completed   0          117s

$ kubectl --kubeconfig=kube_config_cluster.yml get componentstatus
NAME                 AGE
controller-manager   <unknown>
scheduler            <unknown>
etcd-0               <unknown>             <unknown>

$ kubectl --kubeconfig=kube_config_cluster.yml get deployments -A
NAMESPACE       NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
ingress-nginx   default-http-backend   1/1     1            1           2m58s
kube-system     coredns                2/2     2            2           3m9s
kube-system     coredns-autoscaler     1/1     1            1           3m8s
kube-system     metrics-server         1/1     1            1           3m4s

$ kubectl --kubeconfig=kube_config_cluster.yml get ns
NAME              STATUS   AGE
default           Active   4m28s
ingress-nginx     Active   3m24s
kube-node-lease   Active   4m29s
kube-public       Active   4m29s
kube-system       Active   4m29s
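
To avoid passing --kubeconfig on every invocation, you can export it for the current shell session (the path below assumes the file is in your working directory):

$ export KUBECONFIG=$PWD/kube_config_cluster.yml
$ kubectl get nodes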

Rancher2

Now log in to the 5th VM we have at Hetzner:

ssh "78.47.4x.yyy" -l ubuntu -p zzzz

and install the stable version of Rancher2

$ docker run -d \
    --restart=unless-stopped \
    -p 80:80 -p 443:443 \
    --name rancher2 \
    -v /opt/rancher:/var/lib/rancher \
    rancher/rancher:stable \
    --acme-domain k8s.ipname.me

Caveat: I have created a domain and assigned the IP of this last VM to that hostname!
Now I can use Let’s Encrypt with Rancher via --acme-domain.

verify

$ docker images -a
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
rancher/rancher     stable              5ebba94410d8        10 days ago         654MB

$ docker ps -a
CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS                                      NAMES
8f798fb8184c        rancher/rancher:stable   "entrypoint.sh --acm…"   17 seconds ago      Up 15 seconds       0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   rancher2
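
If something looks off, the Rancher logs are the first place to check (Ctrl+C stops following them):

$ docker logs -f rancher2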

Access

Before we continue, we need to give these VMs access to each other so they can communicate. In a cloud you would create a VPC with the correct security groups, but with plain VMs the easiest way is to do something like this on every node:

sudo ufw allow from "78.47.6x.yyy",
sudo ufw allow from "78.47.1x.yyy",
sudo ufw allow from "78.46.2x.yyy",
sudo ufw allow from "78.47.7x.yyy",
sudo ufw allow from "78.47.4x.yyy",

Dashboard

Open your browser and type the IP of your rancher2 VM:

https://78.47.4x.yyy

or (in my case):

https://k8s.ipname.me

and follow the instructions below

rke_02.png

rke_03.png

rke_04.png

rke_05.png

rke_06.png

rke_07.png

Connect cluster with Rancher2

Download the rancher2 yaml file to your local directory:

$ curl -sLo rancher2.yaml https://k8s.ipname.me/v3/import/nk6p4mg9tzggqscrhh8bzbqdt4447fsffwfm8lms5ghr8r498lngtp.yaml

And apply this yaml file to your kubernetes cluster:

$ kubectl --kubeconfig=kube_config_cluster.yml apply -f rancher2.yaml

clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver unchanged
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master unchanged
namespace/cattle-system unchanged
serviceaccount/cattle unchanged
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding unchanged
secret/cattle-credentials-2704c5f created
clusterrole.rbac.authorization.k8s.io/cattle-admin configured
deployment.apps/cattle-cluster-agent configured
daemonset.apps/cattle-node-agent configured
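
The manifest creates the cattle-system namespace, so you can watch the Rancher agents come up there before switching to the web UI:

$ kubectl --kubeconfig=kube_config_cluster.yml -n cattle-system get pods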

Web Gui

rke_08.png

rke_09.png

kubectl config

We can now use the Rancher kubectl config by downloading from here:

rke_09b.png

In this post, that file is saved as rancher2.config.yml.

helm

The final step is to use helm to install an application on our kubernetes cluster.

download and install

$ curl -sfL https://get.helm.sh/helm-v3.0.1-linux-amd64.tar.gz | tar -zxf -

$ chmod +x linux-amd64/helm
$ sudo mv linux-amd64/helm /usr/local/bin/
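
As with rke, a quick sanity check that the binary is installed and on the PATH:

$ helm version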

Add Repo

$ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
"stable" has been added to your repositories

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...
Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈ 

weave-scope

Install Weave Scope on the cluster, using the Rancher kubeconfig:

$ helm --kubeconfig rancher2.config.yml install stable/weave-scope --generate-name
NAME: weave-scope-1575800948
LAST DEPLOYED: Sun Dec  8 12:29:12 2019
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
You should now be able to access the Scope frontend in your web browser, by
using kubectl port-forward:

kubectl -n default port-forward $(kubectl -n default get endpoints
weave-scope-1575800948-weave-scope -o jsonpath='{.subsets[0].addresses[0].targetRef.name}') 8080:4040

then browsing to http://localhost:8080/.
For more details on using Weave Scope, see the Weave Scope documentation:

https://www.weave.works/docs/scope/latest/introducing/

Proxy

Last, we are going to use kubectl to create a port forwarder:

$ kubectl --kubeconfig=rancher2.config.yml -n default port-forward $(kubectl --kubeconfig=rancher2.config.yml -n default get endpoints weave-scope-1575800948-weave-scope -o jsonpath='{.subsets[0].addresses[0].targetRef.name}') 8080:4040
Forwarding from 127.0.0.1:8080 -> 4040
Forwarding from [::1]:8080 -> 4040

Open this URL in your browser:

  http://localhost:8080

rke_10.png

That’s it !

Saturday, 30 November 2019

A beginner's guide to C++ Ranges and Views.

C++ Ranges are one of the major new things in C++20 and “views” are a big part of ranges. This article is a short introduction for programmers that are new to C++ Ranges.

Preface

You don’t need to have any prior knowledge of C++ Ranges, but you should have basic knowledge of C++ iterators and you should have heard of C++ Concepts before. There are various resources on C++ Concepts, e.g. Good Concepts, Wikipedia (although both contain slightly outdated syntax).

This article is based on library documentation that I wrote for the SeqAn3 library. The original is available here. There is also beginner’s documentation on C++ Concepts over there.

Since none of the large standard libraries ship C++ Ranges right now, you need to use the range-v3 library if you want to try any of this. If you do, you need to replace the std::ranges:: prefixes with just ranges:: and any std::views:: prefixes with ranges::views::.

Motivation

Traditionally most generic algorithms in the C++ standard library, like std::sort, take a pair of iterators (e.g. the object returned by begin()). If you want to sort a std::vector v, you have to call std::sort(v.begin(), v.end()) and not std::sort(v). Why was this design with iterators chosen? It is more flexible, because it allows e.g.:

  • sorting only all elements after the fifth one:

    std::sort(v.begin() + 5, v.end());
    
  • using non-standard iterators like reverse iterators (sorts in reverse order):

    std::sort(v.rbegin(), v.rend());
    
  • combine both (sorts all elements except the last 5 in reverse order):

    std::sort(v.rbegin() + 5, v.rend());
    

But this interface is less intuitive than just calling std::sort on the entity that you wish to sort and it allows for more mistakes, e.g. mixing two incompatible iterators. C++20 introduces the notion of ranges and provides algorithms that accept such in the namespace std::ranges::, e.g. std::ranges::sort(v) now works if v is a range – and vectors are ranges!

What about the examples that suggest superiority of the iterator-based approach? In C++20 you can do the following:

  • sorting only all elements after the fifth one:

    std::ranges::sort(std::views::drop(v, 5));
    
  • sorting in reverse order:

    std::ranges::sort(std::views::reverse(v));
    
  • combine both:

    std::ranges::sort(std::views::drop(std::views::reverse(v), 5));
    

We will discuss later what std::views::reverse(v) does, for now it is enough to understand that it returns something that appears like a container and that std::ranges::sort can sort it. Later you will see that this approach offers even more flexibility than working with iterators.

Ranges

Ranges are an abstraction of “a collection of items”, or “something iterable”. The most basic definition requires only the existence of begin() and end() on the range.

Range concepts

There are different ways to classify ranges, the most important one is by the capabilities of its iterator.

Ranges are typically input ranges (they can be read from), output ranges (they can be written to) or both. E.g. a std::vector<int> is both, but a std::vector<int> const would only be an input range.

Input ranges have different strengths that are realised through more refined concepts (i.e. types that model a stronger concept always also model the weaker one):

Concept                              Description
std::ranges::input_range             can be iterated from beginning to end at least once
std::ranges::forward_range           can be iterated from beginning to end multiple times
std::ranges::bidirectional_range     iterator can also move backwards with --
std::ranges::random_access_range     you can jump to elements in constant-time []
std::ranges::contiguous_range        elements are always stored consecutively in memory

These concepts are derived directly from the respective concepts on the iterators, i.e. if the iterator of a range models std::forward_iterator, then the range is a std::ranges::forward_range.

For the well-known containers from the standard library this matrix shows which concepts they model:

                                     std::forward_list   std::list   std::deque   std::array   std::vector
std::ranges::input_range                     ✔               ✔           ✔            ✔            ✔
std::ranges::forward_range                   ✔               ✔           ✔            ✔            ✔
std::ranges::bidirectional_range                             ✔           ✔            ✔            ✔
std::ranges::random_access_range                                         ✔            ✔            ✔
std::ranges::contiguous_range                                                         ✔            ✔

There are also range concepts that are independent of input or output or one of the above concepts, e.g. std::ranges::sized_range which requires that the size of a range is retrievable by std::ranges::size() (in constant time).

Storage behaviour

Containers are the best-known ranges; they own their elements. The standard library already provides many containers, see above.

Views are ranges that are usually defined on another range and transform the underlying range via some algorithm or operation. Views do not own any data beyond their algorithm and the time it takes to construct, destruct or copy them should not depend on the number of elements they represent. The algorithm is required to be lazy-evaluated so it is feasible to combine multiple views. More on this below.

The storage behaviour is orthogonal to the range concepts defined by the iterators mentioned above, i.e. you can have a container that satisfies std::ranges::random_access_range (e.g. std::vector does, but std::list does not) and you can have views that do so or don’t.

Views

Lazy-evaluation

A key feature of views is that whatever transformation they apply, they do so at the moment you request an element, not when the view is created.

std::vector vec{1, 2, 3, 4, 5, 6};
auto v = std::views::reverse(vec);

Here v is a view; creating it neither changes vec, nor does v store any elements. The time it takes to construct v and its size in memory is independent of the size of vec.

std::vector vec{1, 2, 3, 4, 5, 6};
auto v = std::views::reverse(vec);
std::cout << *v.begin() << '\n';

This will print “6”, but the important thing is that resolving the first element of v to the last element of vec happens on-demand. This guarantees that views can be used as flexibly as iterators, but it also means that if the view performs an expensive transformation, it will have to do so repeatedly if the same element is requested multiple times.

Combinability

You may have wondered why I wrote

auto v = std::views::reverse(vec);

and not

std::views::reverse v{vec};

That’s because std::views::reverse is not the view itself, it’s an adaptor that takes the underlying range (in our case the vector) and returns a view object over the vector. The exact type of this view is hidden behind the auto statement. This has the advantage that we don’t need to worry about the template arguments of the view type, but more importantly the adaptor has an additional feature: it can be chained with other adaptors!

std::vector vec{1, 2, 3, 4, 5, 6};
auto v = vec | std::views::reverse | std::views::drop(2);

std::cout << *v.begin() << '\n';

What will this print?

Here is the solution: it will print “4”, because “4” is the 0-th element of the reversed vector after dropping the first two.

In the above example the vector is “piped” (similar to the unix command line) into the reverse adaptor and then into the drop adaptor and a combined view object is returned. The pipe is just a different notation that improves readability, i.e. vec | foo | bar(3) | baz(7) is equivalent to baz(bar(foo(vec), 3), 7). Note that accessing the 0th element of the view is still lazy, determining which element it maps to happens at the time of access.

Exercise

Create a view on std::vector vec{1, 2, 3, 4, 5, 6}; that filters out all uneven numbers and squares the remaining (even) values, i.e.

std::vector vec{1, 2, 3, 4, 5, 6};
auto v = vec | // ...?

std::cout << *v.begin() << '\n'; // should print 4

To solve this you can use std::views::transform and std::views::filter. Both take an invocable as argument, e.g. a lambda expression. std::views::transform applies the lambda to each element in the underlying range and std::views::filter “removes” those elements for which its lambda function evaluates to false.

Here is the solution

std::vector vec{1, 2, 3, 4, 5, 6};
auto v = vec
       | std::views::filter(   [] (auto const i) { return i % 2 == 0; })
       | std::views::transform([] (auto const i) { return i*i; });

std::cout << *v.begin() << '\n'; // prints 4

View concepts

Views are a specific kind of range that is formalised in the std::ranges::view concept. Every view returned by a view adaptor models this concept, but which other range concepts are modeled by a view?

It depends on the underlying range and also the view itself. With few exceptions, views don’t model more/stronger range concepts than their underlying range (except that they are always a std::ranges::view) and they try to preserve as much of the underlying range’s concepts as possible. For instance the view returned by std::views::reverse models std::ranges::random_access_range (and weaker concepts) iff the underlying range also models the respective concept. It never models std::ranges::contiguous_range, because the third element of the view is not located immediately after the second in memory (but instead before the second).

Perhaps surprising to some, many views also model std::ranges::output_range if the underlying range does, i.e. views are not read-only:

std::vector vec{1, 2, 3, 4, 5, 6};
auto v = vec | std::views::reverse | std::views::drop(2);

*v.begin() = 42; // now vec == {1, 2, 3, 42, 5, 6 } !!

Exercise

Have a look at the solution to the previous exercise (filter+transform). Which of the following concepts do you think v models?

Concept yes/no?
std::ranges::input_range
std::ranges::forward_range
std::ranges::bidirectional_range
std::ranges::random_access_range
std::ranges::contiguous_range
std::ranges::view
std::ranges::sized_range
std::ranges::output_range

Here is the solution:

Concept                              yes/no?
std::ranges::input_range             yes
std::ranges::forward_range           yes
std::ranges::bidirectional_range     yes
std::ranges::random_access_range     no
std::ranges::contiguous_range        no
std::ranges::view                    yes
std::ranges::sized_range             no
std::ranges::output_range            no

The filter does not preserve random-access and therefore not contiguity, because it doesn’t “know” which element of the underlying range is the i-th one in constant time. It cannot “jump” there, it needs to move through the underlying range element-by-element. This also means we don’t know the size.

The transform view would be able to jump, because it always performs the same operation on every element independently of each other; and it would also preserve sized-ness because the size remains the same. In any case, both properties are lost due to the filter. On the other hand the transform view produces a new element on every access (the result of the multiplication), therefore v is not an output range, you cannot assign values to its elements. This would have prevented modelling contiguous-range as well – if it hadn’t been already by the filter – because values are created on-demand and are not stored in memory at all.

Understanding which range concepts “survive” which particular view needs some practice. For the SeqAn3 library we try to document this in detail; I hope we will see something similar on cppreference.com.

Post scriptum

I am quite busy currently with my PhD thesis, but I plan to publish some smaller articles on ranges and views before the holiday season. Most will be based on text pieces I have already written but that never found their way to this blog (library documentation, WG21 papers, snippets from the thesis, …).

Thanks for reading, I hope this article was helpful! If you have any questions, please comment here or on twitter/mastodon.

Friday, 22 November 2019

Align user IDs inside and outside Docker with subuser mapping

  • Seravo
  • 16:34, Friday, 22 November 2019

While Docker is quite handy in many ways, one inconvenient aspect is that if one mounts some host machine directory inside Docker, and the Docker image does something to those files as a non-root user, one will run into problems if the UID of the host machine user and the Docker image user do not match.

Consider this scenario: my user ’otto’ is running on the host machine as UID 1001. Then I spin up a development image with Docker where I do development and store files in a locally mounted directory. Inside there is a temporary Docker user called ’vagrant’ with the UID 1000. Now whenever any files are updated (say via composer update, for example) it will fail to work, as the mounted directory is owned by UID 1001 and the Docker image user with UID 1000 will not be able to write anything.

Docker run --user does not help

If one has complete control of the Docker image, one could modify the UID of the Docker user and rebuild. Most of the time that is however not an option. Docker run has the option --user but that does not help if the Docker image already has created the user with a particular name and UID/GID settings.

Subuser and subgroup mapping to the rescue

To use subuser and subgroup mapping, first enable the feature in Docker by editing the /etc/docker/daemon.json config file to include:

{
  "userns-remap": "otto"
}

Naturally, replace ’otto’ with your own username. For changes to take effect, restart Docker with sudo systemctl restart docker and check with sudo systemctl status docker that it restarted without failures and is running.
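
To double-check that the remapping is actually active after the restart, docker info can help; the exact output varies between Docker versions, but "userns" should show up among the security options:

$ sudo systemctl restart docker
$ sudo systemctl status docker
$ docker info --format '{{.SecurityOptions}}'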

To define the mapping, edit the files /etc/subuid (for user IDs) and /etc/subgid (for group IDs). In the case below, I mapped container users 0-999 to the same UID/GID both inside and outside Docker, because I want root to have UID 0 all the time. From there on, however, the host user IDs should start from 1001 and not from 1000. This way the most common UID inside a Docker container, 1000, will match the UID 1001 of my own user on my laptop.

otto:0:1000
otto:1001:65536

Voila!
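
To sanity-check the mapping, you can create a file from inside a container and look at its owner on the host (image name and paths are just examples):

$ docker run --rm -u 1000 -v "$PWD":/data ubuntu:18.04 touch /data/testfile
$ ls -ln testfile    # the owner should now show up as host UID 1001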

Read more

Original idea by https://ilya-bystrov.github.io/posts/docker-daemon-remapping/

Details at http://man7.org/linux/man-pages/man5/subuid.5.html and https://echorand.me/posts/docker-user-namespacing-remap-system-user/

Sunday, 17 November 2019

New challenges for Free Software business models

This year the FSFE community meeting was combined with the “South Tyrol Free Software Conference” (SFScon) in Bolzano. For me this was a special event because the first international FSFE community meeting ever happened as well at the SFScon in 2006. Back then I met many people from FSFE in person for the first time. For me this was the starting point for getting more and more involved in the Free Software Foundation Europe.

At this year’s conference I gave a talk about the “New challenges for Free Software business models” at the FSFE track. A few weeks ago I published an article about this topic in the German Linux Magazine. As many of you may know, Free Software as such is not a business model but a license model, which can be combined with many different business and development models.

Distinction between business, license and development models

I’m convinced that working business models around Free Software are an important building block for Free Software to compete successfully with the proprietary software world. The questions of how to make money with Free Software and how to build sustainable and strong companies around Free Software have been important topics almost right from the beginning of the Free Software movement. Over time we have come up with various business models which worked quite well. But the changes in technology over the last few years have started to put some of the more successful business models at risk. The talk summarized the current challenges and invited the audience to think about possible solutions.


(This blog post contains some presentation slides, you can see them here.)

After the talk I had many interesting discussions. Most people agreed that this is a problem. One suggestion was to have a look at Mozilla for a successful business model. Another idea was that the big IaaS providers might buy some of the companies behind the software in the future and continue the development, which wouldn’t be a problem as long as they would stick to Free Software. Yet another interesting thought was that if you look at the software market as a whole you will realize that Free Software is still a small piece of the cake. As long as the cake as a whole and the Free Software part in particular grow fast, we don’t have to worry that much about how the Free Software part is split up; there will be enough space for everyone. The e.foundation was also mentioned as a possible example for a successful business model and many more ideas floated around.

I don’t want to comment on the individual ideas here but it shows that we had a lively discussion with many interesting ideas and thoughts.

Do you also have some thoughts around this topic? Feel free to share them in the comments!

Sunday, 10 November 2019

19.12 releases branches created

The branch naming has changed to accommodate the retiring of the "KDE Applications" brand; it's now called
release/19.12

Make sure you commit anything you want to end up in the 19.12 releases to them

We're already past the dependency freeze.

The Freeze and Beta is this Thursday, 14 November.

More interesting dates
November 28, 2019: KDE Applications 19.12 RC (19.11.90) Tagging and Release
December 5, 2019: KDE Applications 19.12 Tagging
December 12, 2019: KDE Applications 19.12 Release

https://community.kde.org/Schedules/Applications/19.12_Release_Schedule

Cheers,
Albert

P.S.: Yes, this release unfortunately falls in the middle of the debranding of "KDE Applications" and there are still a few things called "KDE Applications" here and there

[*] There's a small issue with kwave; we're working on figuring it out

Thursday, 07 November 2019

You can use a C++11 range for loop over a static array

void DeviceListing::populateListing(const show showStatus)
{
    const Solid::DeviceInterface::Type needHardware[] = {
        Solid::DeviceInterface::Processor,
        Solid::DeviceInterface::StorageDrive,
        Solid::DeviceInterface::Battery,
        Solid::DeviceInterface::PortableMediaPlayer,
        Solid::DeviceInterface::Camera
    };

    clear();

    for (unsigned int i = 0; i < (sizeof(needHardware)/sizeof(Solid::DeviceInterface::Type)); i++) {
        QTreeWidgetItem *tmpDevice = createListItems(needHardware[i]);
        deviceMap[needHardware[i]] = static_cast(tmpDevice);

        if ((tmpDevice->childCount() > 0) || (showStatus == ALL)) {
            addTopLevelItem(tmpDevice);
        }
    }
}

In C++11 you can rewrite it in the much easier to read form:

void DeviceListing::populateListing(const show showStatus)
{
    const Solid::DeviceInterface::Type needHardware[] = {
        Solid::DeviceInterface::Processor,
        Solid::DeviceInterface::StorageDrive,
        Solid::DeviceInterface::Battery,
        Solid::DeviceInterface::PortableMediaPlayer,
        Solid::DeviceInterface::Camera
    };

    clear();

    for (const Solid::DeviceInterface::Type nh : needHardware) {
        QTreeWidgetItem *tmpDevice = createListItems(nh);
        deviceMap[nh] = static_cast(tmpDevice);

        if ((tmpDevice->childCount() > 0) || (showStatus == ALL)) {
            addTopLevelItem(tmpDevice);
        }
    }
}

Sunday, 03 November 2019

On “Clean Architecture”

I recently did what I rarely do: buy and read an educational book. Shocking, I know, but I can assure you that I’m fine 😉

The book I ordered is Clean Architecture – A Craftsman’s Guide to Software Structure and Design by Robert C. Martin. As the title suggests it is about software architecture.

I’ve barely read half of the book, but I’ve already learned a ton! I find it curious that as a halfway decent programmer, I often more or less know what Martin is talking about when he is describing a certain architectural pattern, but I often didn’t know said pattern had a name, or what consequences using said pattern really implied. It is really refreshing to see the bigger picture and to have all the pros and cons of certain design decisions laid out in an overview.

One important point the book is trying to put across is how important it is to distinguish between important things like business rules, and not-so-important things like details. Let me try to give you an example.

Let’s say I want to build a reactive Android XMPP chat application using Smack (foreshadowing? 😉). Let’s identify the details. Surely Smack is a detail. Even though I’d be using Smack for some of the core functionalities of the app, I could just as well choose another XMPP library like babbler to get the job done. But there are even more details, Android for example.

In fact, when you strip out all the details, you are left with a reactive chat application. Even XMPP is a detail! A chat application doesn’t care what protocol you use to send and receive messages; heck, it doesn’t even care if it is run on Android or on any other device (that can run Java).

I’m still not quite sure if the keyword reactive is a detail, as I’d say it is more of a programming paradigm. Details are things that can easily be switched out and/or extended, and I don’t think you can easily replace a programming paradigm.

The book does a great job of identifying and describing simple rules that, when applied to a project, lead to a cleaner, more structured architecture. All in all it teaches how important software architecture in general is.

There is, however, one drawback with the book: it constantly makes you want to jump straight into your next big project with lots of features, so it is hard to keep reading while being that excited ;P

If you are a software developer – no matter whether you work on small hobby projects or big enterprise products, whether or not you aspire to become a Software Architect – I can only recommend reading this book!

One more thought: If you want to support a free software project, maybe donating books like this is a way to contribute?

Happy Hacking!

Wednesday, 30 October 2019

Akademy Guest Post: Some thoughts regarding discussion culture

First of all I want to say thanks for hosting me on your blog and letting me write this guest post!

I am Florian Müller from Germany, working as a sysadmin at a small tech collective. I have been a KDE user for years, but I am quite new to the KDE community. This year's Akademy was my first and I am looking forward to diving in deeper and contributing more to KDE.

I had a really good time at Akademy in Milano, got to know so many cool people and learned a lot about the KDE community and KDE in general. Thanks a lot to everyone organizing this unique event! I am sure it won't be the last time I'll attend Akademy.

I want to share some things I noticed about discussion culture. Please feel free to criticize/discuss/comment, so maybe we can work out some guidelines for that in the future.

First of all, some words about discussion culture in general. Because all people are different, we need techniques to allow everyone to participate in group discussions. When people are not that confident, they tend not to raise their voices in bigger groups, especially if their opinions run against the mainstream. So in order to have a greater pool of creativity and ideas, we need to create an atmosphere that makes it easier for everyone to participate.

One thing would be the setup of the room the discussion will be held in. Do you want to have a central speaker in the front leading the discussion, or do you want to set up the chairs and tables in a circle so everyone is on the same level? Sometimes mostly lecture halls are available, so the time to plan in advance and ask for a different room has to be taken into account. The advantage of a circle is that everybody is able to see everyone else. We can also think about forming smaller discussion groups when possible, so that a topic can be split up into smaller parts.

Another point to think about is how you organize turn keeping. One could appoint a person who only deals with turn keeping, or we could alternate the job of turn keeping from one discussion to the next. At the beginning of the discussion we should make it transparent how turn keeping will work. For example, we could have a rule that people who want to say something for the first time move to the top of the speakers list, to make sure that rare voices will be heard. We could also have a speakers list alternating by gender. There could be a maximum time for speakers so that single people don’t dominate the whole discussion by simply out-talking others.

There are many other aspects we can think about when we want to make group discussions more inclusive. With this post, I just wanted to get the discussion started, so we can improve on that in the future.

Here are some links to continue reading:

https://www.workingdigital.de/en/blog/post/respectful-culture-of-discussion
https://www.wikihow.com/Be-Good-at-Group-Discussion

You can reach me directly at mail@flo-mue.de.

Sunday, 27 October 2019

My participation in the FSFE 2019 GA meeting in Essen

View on the Ruhr at the venue. Thankful to be among the guests that the FSFE GA invited to join their meeting, I went on 11 October 2019 to the Linuxhotel “Villa Vogelsang” in Essen. Unfortunately it was not possible to buy a separate ticket for the S-bahn on the website of Deutsche Bahn, so […]

Saturday, 26 October 2019

LibreDNS has a new AdBlock endpoint

LibreDNS has a new endpoint

 https://doh.libredns.gr/ads

This new endpoint is unique because it blocks ads & trackers by default!

 

AdBlock

We are currently using Steven Black’s hosts file.

 

noticeable & mentionable

LibreDNS DOES NOT keep any logs and we are using OpenNIC as TLD Tier1 root NS

 

Here are my settings

 

ads doh

Tuesday, 22 October 2019

The 3rd FSFE System Hackers hackathon

On 10 and 11 October, the FSFE System Hackers met in person to tackle problems and new features regarding the servers and services the FSFE is running. The team consists of dedicated volunteers who ensure that the community and staff can work effectively. The recent meeting built on the great work of the past 2 years which have been shaped by large personal and technical changes.

The System Hackers are responsible for the maintenance and development of a large number of services. From the fsfe.org website’s deployment to the mail servers and blogs, from Git to internal services like DNS and monitoring, all these services, virtual machines and physical servers are handled by this friendly group that is always looking forward to welcoming new members.

Overview of the FSFE's services and servers


So in October, six of us met in Cologne. Fittingly, according to a saying in this region, if you do something for the third time, it’s already tradition. So we accomplished this after successful meetings in Berlin (April 2018) and Vienna (March 2019). And although it took place on workdays, it’s been the meeting with the highest participation so far!

Getting. Things. Done!

After the first and second meetings, which were mostly about getting an overview of historically grown and sparsely documented infrastructure and bringing it into a stable state, we were able to deal with a few more general topics this time. At the same time, we exchanged our knowledge with newly joined team members. Please find the areas we worked on below:

  • Florian migrated the FSFE Blogs to a new server and thereby also updated the underlying Wordpress to the latest version. This has been a major blocker for several other tasks and our largest security risk. There are still a few things left to do, e.g. creating a theme in line with the FSFE design and some announcement to the community. However, the most complicated part is done!
  • Altogether, we upgraded a lot of machines to Debian 10, just after we lifted most servers to Debian 9 in March. Some are still missing, but since the migration is rather painless, we can do that during the next months.
  • We confirmed that the new decentralised backup system setup by myself and based on Borg works fine. This gives us more confidence in our infrastructure.
  • Thanks to Florian and Albert, we finally got rid of the last 2 services that were not using Let’s Encrypt’s self-renewing certificates.
  • Vincent and Francesco took care of finishing the migration of all our Docker containers to use the Docker-in-Docker deployment instead of the hacky Ansible playbooks we used initially. This has a few security advantages and enables the next developments for a more resilient Docker infrastructure.
  • At the moment, all our Docker containers run on one single virtual machine. Although this runs on a Proxmox/Ceph cluster, it’s obviously a single point of failure. However, for a distribution on multiple servers we lack the hardware resources. Nonetheless, we already have concrete plans how to make the Docker setup more resilient as soon as we have more hardware available. Vincent documented this on a wiki page.
  • On the human side, we made sure that all of us know what’s on the plate for the next weeks and months. We have quite a few open issues collected in our Kanban board, and we quickly went through all of them to sketch the possible next steps and distribute responsibilities.

Started projects in the making

Two days are quite some time and we worked hard to use them as effectively as possible, so some tasks have been started but could not be completed – partly because we just did not have enough time, partly because they require more coordination and in-depth discussion:

  • As follow-up on a few unpleasant surprises with Mailman’s default values, we figured that it is important to have an automatic overview of the most sensible settings of the 127 (!) mailing lists we host. Vincent started to work on a way to extract this information in a human- and machine-readable format and merge/compare it with the more verbose documentation on the mailing lists we have internally.
  • Francesco tackled a different weak point we have: monitoring. We lack a tool that informs us immediately about problems in our infrastructure, e.g. defunct core services, full disk drives or expired certificates. Since this is not trivial at all, it requires some more time.
  • Thomas, maintainer of the FSFE wiki, researched on a way to better organise and distribute the SSH accesses in our team. Right now, we have no comfortable way to add or remove SSH keys on our more than 20 machines. His idea is to use an Ansible playbook to manage these, and thereby also create a shared Ansible inventory which can be used as a submodule for the other playbooks we use in the team so we don’t have to maintain all of them individually if a machine is added, changed or removed.
  • One of the most ancient physical machines we still run is hosting the SVN service which is only used by one service now: DNS. We started to work on migrating that over to Git and simultaneously improving the error-checking of the DNS configuration. Albert and I will continue with that gradually.
  • Not on the system hackers meeting itself but two days later, BjĂśrn, Albert and I worked on getting a Nextcloud instance running. Caused by our rather special LDAP setup, we had to debug a lot of strange behaviour but finally figured everything out. Now, the last missing blocker is some user/permission setting within our LDAP. As soon as this is finished, we can shut down one more historically grown, customised-hacked and user-unfriendly service.

Overall, the perspective for the System Hackers is better than ever. We are a growing team carried by motivated and skilled volunteers with a shared vision of how the systems should develop. At the same time, we have a lot of public and internal documentation available to make it easy for new people to join us.

I would like to thank Albert, Florian, Francesco, Thomas and Vincent for their participation in this meeting, and them and all other System Hackers for their dedication to keep the FSFE running!

Tuesday, 15 October 2019

Self-hosted DNS over HTTPS service

LibreOps & LibreDNS

LibreOps announced a new public service: LibreDNS, a free public DoH/DoT (DNS over HTTPS / DNS over TLS) service for people who want to bypass DNS restrictions and/or want to use TLS in their DNS queries. Firefox has already collaborated with Cloudflare for this, but I believe we can do better than using a centralized public service run by a for-profit company.

Personal Notes

So here are my personal notes for using LibreDNS in Firefox.

Firefox

Open Preferences/Options
firefox options

Enable DoH
firefox doh

TRR mode 2

Now the tricky part.

TRR mode is 2 when you enable DoH. What does this mean?

Mode 2 means Firefox tries to use DoH first, but if it fails (or times out) Firefox will fall back to asking your operating system’s DNS.

The DoH endpoint is a URL, so the first time Firefox needs to resolve doh.libredns.gr it will ask your operating system for that.

hosts file

There is a way to exclude doh.libredns.gr from DoH: add it to your /etc/hosts file so it resolves without your local DNS, and set TRR mode to 3, which means you will ONLY use the DoH service for DNS queries.

# grep doh.libredns.gr /etc/hosts
116.203.115.192 doh.libredns.gr

TRR mode 3

and in

about:config

about:config
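
For reference, the two about:config preferences behind these screenshots are network.trr.mode and network.trr.uri; the values below reflect this particular setup (the URI is simply whichever DoH endpoint you chose):

network.trr.mode    3
network.trr.uri     https://doh.libredns.gr/dns-query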

DNS Leak

Try a DNS leak test to verify that your local ISP is NOT your Firefox DNS resolver:

https://dnsleaktest.com/

Thunderbird

Thunderbird also supports DoH, and here are my settings:

about:config

PS: Do not forget, this is NOT a global change; only your Firefox will ask LibreDNS for DNS queries.

Saturday, 12 October 2019

Spotify is Defective by Design

I have never used Spotify, since it contains DRM. Instead I still buy DRM-free CDs. Most of my audio collection is stored in free formats such as FLAC and Ogg Vorbis, or Red Book audio in the case of CDs; everything can be played by free players such as VLC or mpd.

Spotify, which uses a central server, also spies on the listener. Every time you listen to a song, Spotify knows which song you listened to, and when and where. By contrast, free embedded operating systems such as Rockbox do not phone home.
CDs can be bought anonymously and ripped using free software; there is no need for an internet connection.

Defective by Design recommends the book “Spotify Teardown”, which I haven’t read yet. The book is an innovative investigation of the inner workings of Spotify that traces the transformation of audio files into streamed experience.

Tuesday, 08 October 2019

2 years since Catalan Independence Referendum, an update

Note 1: This is not KDE or Free Software related, if you're not interested, stop reading, no one is forcing you to read
Note 2: Yes, this is still going to Planet KDE, KDE friends and colleagues ask me about it almost every time we met, so there's definitely interest
Note 3: You're more than welcome to comment, but remember this blog is my house, so don't complain when i don't tolerate stuff i wouldn't tolerate at my home

You may remember Catalonia held an Independence referendum 2 years ago, lots of things have happened since then, I'm going to try to summarize, if you're interested in my initial reaction read my blog from that very same day.

On October 27 2017, following the referendum results, the Parliament of Catalonia declared Independence by a majority of 70 out of 135 MPs. That was mostly ignored by every single country in the world. A few hours later the Spanish government used bigger-army-diplomacy (AKA article 155 of Spanish Constitution) to decide that the Parliament of Catalonia would be suspended and new elections would happen in Catalonia on December 21.

On November 2nd 2017, a judge put most of the Catalan government in jail with the charges of "you've been terribly bad".

They still remain in jail awaiting the trial results (the trial finished a few months ago).

Notable exceptions of government officials not in jail are president Carles Puigdemont and Ministers Clara Ponsatí and Toni Comín, who exiled themselves to other European countries. Spain has tried several times to get European countries to extradite them to Spain because "they've been terribly bad", but that has failed every single time, so they ended up revoking the extradition requests.

Elections happened on December 21 2017, and to the shocking surprise of no one, virtually the same results happened if you count the pro-independence vs anti-independence blocks.

Since then the Catalan pro-independence government has been basically very low-key in their actions.

Meanwhile, Spain had its own elections in April this year. They did this nice thing of letting the jailed (but still not sentenced to anything, so innocent) Catalan politicians run, and several of them won Congress seats. Then they said "oh but you know, you're a very dangerous person, so we're not going to let you attend Congress sessions". Not that it matters now, since Spain is unable to govern itself and is having its 4th election in 4 years this November.

We also had elections in the European Union, and you know what? The same happened! They let the jailed Catalan politicians run, but then they decided they would not let them take the seats. Actually, this time it is even worse, since Carles Puigdemont and Toni Comín, who are living in Brussels without any extradition petition (i.e. they're basically free citizens of Europe), have also been rejected from taking their seats for some esoteric reason.

As a "fun fact", in late 2018 some Spanish regions had elections. Andalucia was one of them and the current government is a coalition of PP+C+VOX, i.e. right wing conservatives, right wing liberals and ultra right wing nut-jobs. One of their first decisions was to put away 100000 euros for grants to teach Spanish to Spanish born people (not for helping immigrants, they're right wing crazies after all) living in Catalonia that don't know how speak Spanish. I'm 99.99% sure the number of people that matches that description is very close to 0 people. You heard well, the poorest region of Spain decided to subsidize the 4th richest region for something that is virtually useless. Thanks!

Much less "fun fact", last week Monday, the Spanish police decided to detain 9 pro-independence people (later to be 7 since 2 were let go) with terrorism charges. The investigation is on-going and technically it should be secret, but we've seen pictures all over the news of what the cops say to be material to make bombs, and all i can see is a pressure cooking pot and some fireworks used typically for Ball de diables.

I don't want to 100% rule out these people having actual plans to do something nasty, but the Spanish police/judges/state history of just fabricating charges against people they don't like is so long (an anarchist recently spent *18* months in jail awaiting trial for tweeting stuff like "Goku lives, the fight continues", only to be found innocent after trial) that i would not be surprised either if this is just Spain doing bigger-army-diplomacy again.

TL;DR: Everything is fucked up and I can't really see a way out at this point.

Monday, 07 October 2019

Linux App Summit 2019 schedule is out!

We published the Linux App Summit 2019 schedule last week.

We have a bunch of interesting talks (sadly we had to leave out almost 40 almost-as-interesting talks; we had lots of awesome submissions!) ranging from Flatpak and Snaps to how product management is good for Free Software projects, with some thought-provoking talks thrown in to make us think about the future of our platform.

I am going and you should totally come too!

The attendance is free but please register at https://events.linuxappsummit.org/


Better Work Together

Do you want to work on stuff that matters? Since 2010 a group of people in Wellington, New Zealand have been organizing themselves to do just that. Now you can read how they did it!

Their organization is called Enspiral, and the stories about their amazing journey so far have been documented in the cleverly titled book “Better Work Together”. I have been following Enspiral for years and pretty much devoured the book. It was such a joy to read the personal stories and I am inspired to explore “reinventing Enspiral” (as they call it) here in The Netherlands.

Please go read the book! For free! It is shared under a Creative Commons license, so here is a link to a free and gratis download of the book Better Work Together. Of course, do give some of your money to support the authors if you like what you read.

However, you are probably super-busy and do not have time to read all the books in the world. So, to help you a bit, the rest of this post is “Better Work Together in Quotes”: a selection of the quotes that resonated a lot with me and that give you an idea of what Enspiral and the book are all about.

If these quotes resonate with you as well, I recommend you to pick up the book and invite you to contact me to see if we can exchange ideas or even join forces to jointly work (more) on stuff that truly matters to us!

Here come the quotes, boldface added to the ones that especially stand out to me:

“Solidarity is not the result of world-changingly good ideas, it is the cause.”

“The work itself – the process of collaboration – ends up as important as whatever product or service is being delivered.”

“If you are sincere in your desire to make the world a better place then your personal success is our number one priority.”

“Thank you for your attention”

– As a species we have never been so powerful.
– With this power we’re changing things.
– We are reaching limits we haven’t encountered before.
– We have the potential (and the urgent need) to change everything.
– What do we do?
– Where do we start?

“Our attention is our power. It is the invisible, constant force that ultimately determines our individual and collective potential.”

“I don’t care what you work on, whether it’s climate change, global poverty, self management, social enterprise, planting trees, gender equality, decolonisation, or steady state economics. If your primary mission is to make the world a better place, your personal success is the reason Enspiral exists.”

“There is a trickle of human energy going into the most important issues of our times. Enspiral exists to help turn that trickle into a river.”

“We organise differently: No one should lead all the time and everyone should lead some of the time.”

“Get paid well to do work you love, with people you love, while working on a systemic issue you care deeply about.”

“It isn’t easy, but it is possible.”

“The only way to understand Enspiral is to listen to many people, which is why many of us are contributing to this book.”

“Either they were working full time on something that wasn’t aligned with their purpose or they were struggling financially as they put all their energy into volunteering.”

“It occurred to me that maybe the best way I could contribute was to help people who wanted to change the world get highly paid contract work.”

“A culture of experimentation”

Managing the cost of failure:
– Respect the status quo
– Copy patterns that work
– Start small
– Let things settle
– Deliver value early

“Distributing leadership”

“We got by, but we never managed to achieve abundance.”

“The dream lives on, continually emerging.”

“We discover the path by walking it together.”

“We tend to follow the paths we can see.”

“Enspiral is agnostic about individual purpose, supporting anyone to do meaningful work – or ‘stuff that matters’ – no matter how they define it. There is no adherence to dogma about specific ways to change the world, other than to help one another.”

“We value the person over the product, the flowering and unfolding of the person over profit. And we’ve created a place where these are not mutually exclusive.”

“Capitalism bombards us with messages of unworthiness and pejorative definitions of success and advertising manipulates our will. Even if we are aware of this, it’s not easy to articulate an alternative story.”


Saturday, 28 September 2019

CentOS 8 NetInstall

A few days ago CentOS-8 (1905) was released; you can find details in the ReleaseNotes.

Below is a visual guide on how to net-install CentOS 8 (1905).

These are my notes from an installation on a qemu-kvm virtual machine.
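
If you want to follow along on a local qemu-kvm VM, something like the following would do (a sketch; the ISO filename is an assumption, use whatever boot ISO you downloaded):

$ qemu-img create -f qcow2 centos8.qcow2 20G
$ qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
    -drive file=centos8.qcow2,format=qcow2 \
    -cdrom CentOS-8-x86_64-1905-boot.iso \
    -boot d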

Boot

01centos81905.png

Select Language

02centos81905.png

Menu

I have marked the next screens. For a net-install you need to set up the network first.

03centos81905.png

Time

04centos81905.png

Network

05centos81905.png

Disable kdump

06centos81905.png

Add Repo

ftp.otenet.gr/linux/centos/8/BaseOS/x86_64/os/

07centos81905.png

Server Installation

08centos81905.png
Disk

09centos81905.png

Review

10centos81905.png

Begin Installation

11centos81905.png

Root

12centos81905.png

User

Make this user administrator

13centos81905.png

Installation

14centos81905.png
15centos81905.png

Reboot

16centos81905.png

Grub

17centos81905.png

Boot

18centos81905.png

CentOS-8 (1905)

19centos81905.png

Tag(s): centos8

Thursday, 26 September 2019

Using template file with terraform

When using tf, most of the time you need to reuse your Infrastructure as Code, so your code should be written in such a way. In my (very simple) use-case, I need to reuse user-data for cloud-init to set up different VMs, but I do not want to rewrite basic/common things every time. Luckily, we can use the template_file.

user-data.yml

In the below yaml file, you will see that we are using tf string-template to produce hostname with this variable:

"${hostname}"

here is the file:

#cloud-config

disable_root: true
ssh_pwauth: no

users:
  - name: ebal
    ssh_import_id:
      - gh:ebal
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL

# Set TimeZone
timezone: Europe/Athens

hostname: "${hostname}"

# Install packages
packages:
  - mlocate
  - figlet

# Update/Upgrade & Reboot if necessary
package_update: true
package_upgrade: true
package_reboot_if_required: true

# Remove cloud-init
runcmd:
  - figlet "${hostname}" > /etc/motd
  - updatedb

Variables

Let’s see our tf variables:

$ cat Variables.tf
variable "hcloud_token" {
    description = "Hetzner Access API token"
    default = ""
}
variable "gandi_api_token" {
    description = "Gandi API token"
    default = ""
}
variable "domain" {
    description = " The domain name "
    default = "example.org"
}

Terraform Template

So we need to use user-data.yml as a template and replace hostname with var.domain

$ cat example.tf

Two simple steps:

  • First we read user-data.yml as template and replace hostname with var.domain
  • Then we render the template result to user_data as string
provider "hcloud" {
  token = "${var.hcloud_token}"
}

data "template_file" "userdata" {
  template = "${file("user-data.yml")}"
  vars = {
    hostname  = "${var.domain}"
  }
}

resource "hcloud_server" "node1" {
  name = "node1"
  image = "ubuntu-18.04"
  server_type = "cx11"
  user_data = "${data.template_file.userdata.rendered}"
}
$ terraform version
Terraform v0.12.3

And that’s it !
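
As a quick sanity check after terraform apply, logging in to the new VM should show the figlet banner that runcmd wrote to /etc/motd (a sketch; <server-ip> is whatever address Hetzner assigned to the VM, and ebal is the user defined in user-data.yml above):

$ ssh ebal@<server-ip> cat /etc/motd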

Tag(s): terraform

Sunday, 22 September 2019

Akademy 2019




It's been 10 days since Akademy 2019 finished and I'm already missing it :/

Akademy is a week-long, action-packed event: conference talks, BoFs, a day trip, dinners with old and new friends. It's all a great combination and shows how amazing KDE (yes, the community, that's our name) is.

On the talks side I missed some that I wanted to attend because I had to extend my time at the registration booth helping fellow KDE people who had forgotten to register (yes, our setup could be a bit easier; it doesn't help that you have to register for talks, for travel support and for the actual conference in three different places), but I am not complaining: you get to interact with lots of people at the registration desk, and it's a good way to meet people you may not have met otherwise, so please make sure you volunteer next year ;)

One of the talks I want to highlight is Dan Vrátil's talk about C++. I agree with him that we could do much better in making our APIs more expressive using the power of "modern" C++ (when do we stop calling it modern?). It's a pity that the slides are not up, so you'll have to live with Kévin Ottens' sketch of it for now.



My talk was sadly not very well attended since I was sharing the slot with the more interesting talk by Marco and Bhushan about Plasma on embedded devices (I would have gone there myself if I hadn't had a talk), so if you're interested in fuzzing please read my slides and give me a shout if you want to volunteer to help us fuzz all the things!

On the BoFs side, one of the hardest but most interesting ones was about KDE Applications (the N things we release monthly in one go) vs KDE applications (all applications made by us), and I think we may be on the right track there: there's a plan, it needs finishing out, but I'm confident it may actually work :)

One of the things that shows how amazing this conference is, and how many interesting things are happening, is that I made a small list of bugs I wanted to work on if I ever got bored of the talks or the BoFs; I don't think I even started on any of them ^_^

Akademy 2020

Akademy is a core event for KDE and we need to find people to help us organising it every year. If you think you can help, please have a look at the call for hosts document.

Thanks

I would like to thank the UnixMiB friends for hosting us; I know it's lots of work and I hope you know we all very much appreciate the effort you put in.

I would like to thank the Akademy-team on KDE's side too, you are amazing and pull out great work year after year, keep it up!

I would like to thank the KDE e.V. for partially sponsoring my attendance to Akademy, please donate to KDE if you think the work done at Akademy is important.

Friday, 20 September 2019

Partition MisAlignment

this article also has an alternative title:

How I Learned to Stop Worrying and Loved my Team

This is a story of troubleshooting cloud disk volumes (long post).

Cloud Disk Volume

Working with data disk volumes in the cloud has a few benefits. One of them is that when the volume runs out of space, you can just increase it! No need to replace the disk, no need to buy a new one, no need to transfer 1TB of data from one disk to another. It is a very simple matter.

Partitions Vs Disks

My personal opinion is not to use partitions. Cloud data disks on EVS (elastic volume service), or cloud volumes for short, do not need a partition table. You can use the entire disk for data.

Use: /dev/vdb instead of /dev/vdb1
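
For example, a minimal sketch on a fresh, empty volume attached as /dev/vdb (assumption: there is nothing on the disk yet):

# mkfs.ext4 /dev/vdb
# mount /dev/vdb /mnt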

Filesystem

You have to choose your filesystem carefully. You can use XFS, which supports online resizing via xfs_growfs, but XFS filesystems cannot be shrunk. I understand that most of us are used to working with the extended filesystem ext4, and to be honest I also feel more comfortable with ext4.

You can read the extensive Wikipedia article Comparison of file systems for more info, and you can search online regarding performance between xfs and ext4. They are really close to each other nowadays.

Increase Disk

Today, while working on a simple operational task (increasing a cloud disk volume), I followed the official documentation. This is something that I have done, like, a million times in the past. To provide proper documentation I will use Red Hat's examples:

In a nutshell

  • Umount data disk
  • Increase disk volume within the cloud dashboard
  • Extend (change) the geometry
  • Check filesystem
  • Resize ext4 filesystem
  • Mount data disk

Commands

Let’s present the commands for reference:

# umount /dev/vdb1

[increase cloud disk volume]

# partprobe

# fdisk /dev/vdb
[delete partition]
[create partition]

# partprobe

# e2fsck /dev/vdb1
# e2fsck -f /dev/vdb1
# resize2fs /dev/vdb1
# mount /dev/vdb1

And here is fdisk in more detail:

Fdisk

# fdisk /dev/vdb

Welcome to fdisk (util-linux 2.27.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/vdb: 1.4 TiB, 1503238553600 bytes, 2936012800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0004e2c8

Device     Boot Start        End    Sectors  Size Id Type
/dev/vdb1           1 2097151999 2097151999 1000G 83 Linux

Delete


Command (m for help): d
Selected partition 1
Partition 1 has been deleted.

Create

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-2936012799, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-2936012799, default 2936012799):

Created a new partition 1 of type 'Linux' and of size 1.4 TiB.

Print

Command (m for help): p
Disk /dev/vdb: 1.4 TiB, 1503238553600 bytes, 2936012800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0004e2c8

Device     Boot Start        End    Sectors  Size Id Type
/dev/vdb1        2048 2936012799 2936010752  1.4T 83 Linux

Write

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

File system consistency check

An interesting error occurred, something that I had never seen before when using e2fsck:

# e2fsck /dev/vdb1
e2fsck 1.42.13 (17-May-2015)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/vdb1

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

Superblock invalid, trying backup blocks

Panic

I think I lost 1 TB of files!

At that point, I informed my team to raise awareness.

partition_panic.png

Yes I know, I was a bit sad at the moment. I’ve done this work a million times before, also the Impostor Syndrome kicked in!

Snapshot

I was lucky enough to be able to create a snapshot, detach the disk from the VM, create a new disk from the snapshot and work on the new (test) disk to try recovering 1TB of lost files!

Make File System

mke2fs has a dry-run option that will show us the superblocks:

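Something like this (the -n flag tells mke2fs to only display what it would do, without touching the disk):

# mke2fs -n /dev/vdb1
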
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 367001344 4k blocks and 91750400 inodes
Filesystem UUID: f130f422-2ad7-4f36-a6cb-6984da34ead1
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Testing super blocks

So I created a small script to test every superblock against /dev/vdb1:

e2fsck  -b  32768      /dev/vdb1
e2fsck  -b  98304      /dev/vdb1
e2fsck  -b  163840     /dev/vdb1
e2fsck  -b  229376     /dev/vdb1
e2fsck  -b  294912     /dev/vdb1
e2fsck  -b  819200     /dev/vdb1
e2fsck  -b  884736     /dev/vdb1
e2fsck  -b  1605632    /dev/vdb1
e2fsck  -b  2654208    /dev/vdb1
e2fsck  -b  4096000    /dev/vdb1
e2fsck  -b  7962624    /dev/vdb1
e2fsck  -b  11239424   /dev/vdb1
e2fsck  -b  20480000   /dev/vdb1
e2fsck  -b  23887872   /dev/vdb1
e2fsck  -b  71663616   /dev/vdb1
e2fsck  -b  78675968   /dev/vdb1
e2fsck  -b  102400000  /dev/vdb1
e2fsck  -b  214990848  /dev/vdb1

Unfortunately none of the above commands worked!

last-ditch recovery method

There is a nuclear option, but DO NOT DO IT:

mke2fs -S /dev/vdb1

Write superblock and group descriptors only. This is useful if all of the superblock and backup superblocks are corrupted, and a last-ditch recovery method is desired. It causes mke2fs to reinitialize the superblock and group descriptors, while not touching the inode table and the block and inode bitmaps.

Then e2fsck -y -f /dev/vdb1 moved 1TB of files under lost+found with their inode as the name of every file.

I cannot stress this enough: DO NOT DO IT !

Misalignment

So what is the issue?

See the difference in the fdisk output between 1TB and 1.4TB:

Device     Boot Start        End    Sectors  Size Id Type
/dev/vdb1           1 2097151999 2097151999 1000G 83 Linux

Device     Boot Start        End    Sectors  Size Id Type
/dev/vdb1        2048 2936012799 2936010752  1.4T 83 Linux

The First sector is now at 2048 instead of 1.

Okay: delete the disk, create a new one from the snapshot and try again.

Fdisk Part Two

Now it is time to manually put the first sector on 1.

# fdisk /dev/vdb

Welcome to fdisk (util-linux 2.27.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p

Disk /dev/vdb: 1.4 TiB, 1503238553600 bytes, 2936012800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0004e2c8

Device     Boot Start        End    Sectors  Size Id Type
/dev/vdb1        2048 2936012799 2936010752  1.4T 83 Linux

Command (m for help): d
Selected partition 1
Partition 1 has been deleted.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-2936012799, default 2048): 1
Value out of range.

Value out of range.

damn it!

sfdisk

In our SRE team we use something like a Bat-Signal to ask for all hands on a problem, and that is what we were doing. A colleague made the point that fdisk is not the best tool for the job and that we should use sfdisk instead. I actually use sfdisk to create backups of and restore partition tables, but I was trying not to deviate from the documentation, and I was not sure that everybody knew how to use sfdisk.

So another colleague suggested to use a similar 1TB disk from another VM.
I could hear the gears in my mind working…

sfdisk export partition table

sfdisk -d /dev/vdb > vdb.out

# fdisk -l /dev/vdb
Disk /dev/vdb: 1000 GiB, 1073741824000 bytes, 2097152000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0009e732

Device     Boot Start        End    Sectors  Size Id Type
/dev/vdb1           1 2097151999 2097151999 1000G 83 Linux

# sfdisk -d /dev/vdb > vdb.out

# cat vdb.out
label: dos
label-id: 0x0009e732
device: /dev/vdb
unit: sectors

/dev/vdb1 : start=           1, size=  2097151999, type=83

Okay, we have something to work with here: the start sector is 1 and the geometry is 1TB for an ext filesystem, identical to the initial partition table (before using fdisk).

sfdisk restore partition table

sfdisk /dev/vdb < vdb.out

# sfdisk /dev/vdb < vdb.out

Checking that no-one is using this disk right now ... OK

Disk /dev/vdb: 1.4 TiB, 1503238553600 bytes, 2936012800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0004e2c8

Old situation:

Device     Boot Start        End    Sectors  Size Id Type
/dev/vdb1        2048 2936012799 2936010752  1.4T 83 Linux

>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Created a new DOS disklabel with disk identifier 0x0009e732.
Created a new partition 1 of type 'Linux' and of size 1000 GiB.
/dev/vdb2:
New situation:

Device     Boot Start        End    Sectors  Size Id Type
/dev/vdb1           1 2097151999 2097151999 1000G 83 Linux

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

# fdisk -l /dev/vdb
Disk /dev/vdb: 1.4 TiB, 1503238553600 bytes, 2936012800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0009e732

Device     Boot Start        End    Sectors  Size Id Type
/dev/vdb1           1 2097151999 2097151999 1000G 83 Linux

Filesystem Check ?

# e2fsck -f /dev/vdb1
e2fsck 1.42.13 (17-May-2015)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
SATADISK: 766227/65536000 files (1.9% non-contiguous), 200102796/262143999 blocks

f#ck YES

Mount ?

# mount /dev/vdb1 /mnt

# df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb1       985G  748G  187G  81% /mnt

f3ck Yeah !!

Extend geometry

It is time to extend the partition geometry to 1.4TB with sfdisk.
If you remember from the fdisk output

Device     Boot Start        End    Sectors  Size Id Type
/dev/vdb1           1 2097151999 2097151999 1000G 83 Linux
/dev/vdb1        2048 2936012799 2936010752  1.4T 83 Linux

We have 2936010752 sectors in total.
The End sector of 1.4T is 2936012799
Simple math problem: End Sector - Sectors = 2936012799 - 2936010752 = 2047

The previous fdisk command had the Start Sector at 2048,
so 2048 - 2047 = 1 is the preferable Start Sector!

New sfdisk

By editing the text vdb.out file to represent our new situation:

# diff vdb.out vdb.out.14
6c6
< /dev/vdb1 : start=           1, size=  2097151999, type=83
---
> /dev/vdb1 : start=           1, size=  2936010752, type=83

1.4TB

Let’s put everything together

# sfdisk /dev/vdb < vdb.out.14
Checking that no-one is using this disk right now ... OK

Disk /dev/vdb: 1.4 TiB, 1503238553600 bytes, 2936012800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0009e732

Old situation:

Device     Boot Start        End    Sectors  Size Id Type
/dev/vdb1           1 2097151999 2097151999 1000G 83 Linux

>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Created a new DOS disklabel with disk identifier 0x0009e732.
Created a new partition 1 of type 'Linux' and of size 1.4 TiB.
/dev/vdb2:
New situation:

Device     Boot Start        End    Sectors  Size Id Type
/dev/vdb1           1 2936010752 2936010752  1.4T 83 Linux

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

# e2fsck /dev/vdb1
e2fsck 1.42.13 (17-May-2015)
SATADISK: clean, 766227/65536000 files, 200102796/262143999 blocks

# e2fsck -f /dev/vdb1
e2fsck 1.42.13 (17-May-2015)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
SATADISK: 766227/65536000 files (1.9% non-contiguous), 200102796/262143999 blocks

# resize2fs /dev/vdb1
resize2fs 1.42.13 (17-May-2015)
Resizing the filesystem on /dev/vdb1 to 367001344 (4k) blocks.
The filesystem on /dev/vdb1 is now 367001344 (4k) blocks long.

# mount /dev/vdb1 /mnt

# df -h  /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb1       1.4T  748G  561G  58%  /mnt

Finally!!

Partition Alignment

By the way, you can read this amazing article to fully understand why this happened:

Partition Alignment

Tuesday, 17 September 2019

Thoughts on GNU and Richard Stallman

  • Rekado
  • 12:05, Tuesday, 17 September 2019

Richard Stallman has resigned as president and from the board of directors of the Free Software Foundation. I welcome this decision.

As a co-maintainer of GNU packages (including Guix, the Guix Workflow Language, the Guile Picture Language, etc), and as a contributor to various other GNU software, I would like to state that while I'm grateful for Richard Stallman's founding of the GNU project and his past contributions to GNU, it would be wrong to continue to remain silent on the negative effects his behaviour and words have had over the past years. His actions have hurt people and alienated them from the free software movement.

When I joined GNU I used to think of Richard as just a bit of a quirky person with odd habits, with a passion for nitpicking and clear language, but also with a vision of freeing people from oppression at the hands of a boring dystopia mediated by computers. Good intentions, however, aren't enough. Richard's actions over the past years sadly have been detrimental to achieving the vision that he outlined in the GNU Manifesto, to benefit all computer users.

GNU's not Unix, but Richard ain't GNU either (RAGE?). GNU is bigger than any one person, even its founder. I'm still convinced that GNU has an important role to play towards providing a harmonized, trustworthy, freedom-respecting operating system environment that benefits all computer users. I call upon other maintainers of GNU software to embrace the responsibilities that working on a social project such as GNU brings. The GNU Manifesto states that "GNU serves as an example to inspire and a banner to rally others to join us in sharing". Let us do that by welcoming people of all backgrounds into GNU and by working hard to provide a healthy environment for fruitful collaboration.

Monday, 09 September 2019

Spoofing commits to repositories on GitHub

The following has already been reported to GitHub via HackerOne. Someone from GitHub has closed the report as “informative” but told me that it’s a known low-risk issue. As such, while they haven’t explicitly said so, I figure they don’t mind me blogging about it.

Check out this commit in torvalds’ linux.git on GitHub. In case this is fixed, here’s a screenshot of what I see when I look at this link:

GitHub page showing a commit in torvalds/linux with the commit message add super evil code

How did this get past review? It didn’t. You can spoof commits in any repo on GitHub due to the way they handle forks of repositories internally. Instead of copying repositories when forks occur, the objects in the git repository are shared and only the refs are stored per-repository. (GitHub tell me that private repositories are handled differently to avoid private objects leaking out this way. I didn’t verify this but I have no reason to suspect it is not true.)

To reproduce this:

  1. Fork a repository
  2. Push a commit to your fork
  3. Put your commit ref on the end of:
https://github.com/[parent]/[repo]/commit/

That’s all there is to it. You can also add .diff or .patch to the end of the URL and those URLs work too, in the namespace of the parent.

The situation that worries me relates to distribution packaging. Debian has a policy that deltas to packages in the stable repository should be as small as possible, targeting fixes by backporting patches from newer releases.

If you get a bug report on your Debian package with a link to a commit on GitHub, you had better double check that this commit really did come from the upstream author and hasn’t been spoofed in this way. Even if it shows it was authored by the upstream’s GitHub account or email address, this still isn’t proof because this is easily spoofed in git too.

The best defence against being caught out by this is probably signed commits, but if the upstream is not doing that, you can clone the repository from GitHub and check to see that the commit is on a branch that exists in the upstream repository. If the commit is in another fork, the upstream repo won’t have a ref for a branch that contains that commit.
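
A minimal sketch of that check (the [parent]/[repo] placeholders and <commit-sha> stand for whatever you are trying to verify):

git clone https://github.com/[parent]/[repo].git
cd [repo]
git branch -r --contains <commit-sha>

If the last command prints nothing, no upstream branch contains the commit and the link was most likely spoofed.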

Tuesday, 27 August 2019

FSFE booth on Veganmania Donauinsel 2019

Veganmania Donauinsel 2019
FSFE Information stall on Veganmania Donauinsel 2019

Once more, free software activists from Vienna used the opportunity of the local vegan summer festival to inform people about the possibility of increasing their independence on computers and mobile devices. It was the second such event in Vienna this year. But unlike the first, which was directly in the city center with loads of passers-by, this street festival took place in Vienna's big recreation area on the island in the Danube river. It is also rather close to the city center, and therefore many local people visit it in their spare time. The organisers estimated 9000 visitors per day.

The FSFE booth was manned the whole time, on Saturday between 12:00 and 21:00 and on Sunday from 10:00 to 19:00. It had a great spot, far enough away from the stage with live music to allow undisturbed conversations and still close enough to the other 90 stalls with drinks, food, merchandise and a variety of stalls on other subjects like animal welfare, veganism, sustainability, shelters and environmental protection.

Since it was an outdoor event on a meadow, and because we don't own a tent, we couldn't hang up our posters. We just used our umbrella to avoid being exposed directly to the strong summer sun. And we had huge luck with the weather: shortly after the festival closed down on Saturday, heavy rain started, and it lasted until shortly before the event started again the next day.

Over the years we have collected a few regulars who normally drop by our information stalls, but once again it was mostly totally new people who frequented our FSFE information desk. Many of them had no prior knowledge of what free software is about. Most of the time we were engaged in conversations with interested people, and many explicitly thanked us for being there. We frequently explained why we man an FSFE information stall at a vegan summer festival: if you apply the same ethical considerations that lead people to adopt a vegan lifestyle to information technology, you end up with free software.

A researcher explicitly came to the city from another county because he wanted to visit our FSFE stall and talk to us about the social implications of free software.

This weekend was another very successful FSFE stall, and we look forward to the next opportunity to man our information desk. We might even try to have stalls at other public events in the future which feature NGO information desks, at least if the fees are not unreasonably high.

Monday, 26 August 2019

Open Source is more than licenses

A few weeks ago I was honored to deliver the keynote of the Open Source Awards in Edinburgh. I decided to talk about a subject that I wanted to talk about for quite some time but never found the right opportunity for. There is no video recording of my talk but several people asked me for a summary. So I decided to use some spare time in a plane to summarize it in a blog post.

I started to use computers and write software in the early 80s when I was 10 years old. This was also the time when Richard Stallman wrote the 4 freedoms, started the GNU project, founded the FSF and created the GPL. His idea was that users and developers should be in control of the computer they own which requires Free Software. At the time the computing experience was only the personal computer in front of you and the hopefully Free and Open Source software running on it.

The equation was (Personal Hardware) + (Free Software) = (Digital Freedom)

In the meantime the IT world has changed and evolved a lot. Now we have ubiquitous internet access, computer in cars, TVs, watches and other IoT devices. We have the full mobile revolution. We have cloud computing where the data storage and compute are distributed over different data centers owned and controlled by different people and organizations all over the world. We have strong software patents, DRM, code signing and other crypto, software as a service, more closed hardware, social networking and the power of the network effect.

Overall the world has changed a lot since the 80s. Most of the Open Source and Free Software community still focuses mainly on software licenses. I’m asking myself if we are not missing the bigger picture by limiting the Free Software and Open Source movement to licensing questions only.

Richard Stallman wanted to be in control of his computer. Let’s go through some of the current big questions regarding control in IT and let’s see how we are doing:

Facebook

Facebook is lately under a lot of attack for countless violations of user privacy, being involved in election meddling, triggering a genocide in Myanmar, threatening democracy and many other things. Let’s see if Free Software would solve this problem:

If Facebook released all their code tomorrow as Free and Open Source software, our community would be super happy. WE would have won. But would it really solve any problems? I can't run Facebook on my own computer because I don't have a Facebook server cluster. And even if I could, it would be very lonely there because I would be the only user. So Free Software is important and great, but it actually doesn't give users any freedom or control in the Facebook case. More is needed than Free Software licenses.

Microsoft

I hear from a lot of people in the Free and Open Source community that Microsoft is good now. They changed under the latest CEO and are no longer the evil empire. They now ship a Linux kernel in Windows 10 and provide a lot of Free and Open Source tools in their Linux containers in the Azure Cloud. I think it's definitely a nice step in the right direction, but their cloud solutions still have the strongest vendor lock-in, and Windows 10 is neither free in price nor gives you freedom. In fact they don't have an Open Source business model anywhere. They just USE Linux and Open Source. So the fact that more software in the Microsoft ecosystem is now available under Free Software licenses doesn't give any more freedom to the users.

Machine Learning

Machine Learning is an important new technology that can be used for many things, from picture recognition to voice recognition to self-driving cars. The interesting thing is that the hardware and the software alone are useless. What is also needed for a working machine learning system is the data to train the neural network. This training data is often the secret ingredient and is super valuable. So if Tesla released all their software tomorrow as Free Software and you bought a Tesla to have access to the hardware, you would still be unable to study, build and improve the self-driving car functionality. You would need the millions of hours of video recordings and driver data to make your neural network useful. So Free Software alone is not enough to give users control.

5G

There is a lot of discussion in the western world about whether 5G infrastructure can be trusted. Do we know if there are back doors in cell towers if they are bought from Huawei or other Chinese companies? The Free and Open Source community answers that the software should be licensed under a Free Software license and then all is good. But can we actually check that the software running on the infrastructure is the same as the source code we have? For that we would need reproducible builds, access to all the code signing and encryption keys, and the infrastructure would have to fetch new software updates from our update server and not the one provided by the manufacturer. So the software license is important, but it doesn't give you full control and freedom.

Android

Android is a very popular mobile OS in the Free Software community. The reason is that it's released under a Free Software license. I know a lot of Free Software activists who run a custom build of Android on their phone and only install Free Software from app stores like F-Droid. Unfortunately 99% of normal users out there don't get these freedoms, because their phones can't be unlocked, or they lack the technical knowledge of how to do it, or they rely on software that is only available in the Google PlayStore. Users are trapped in the classic vendor lock-in. So the fact that the Android core is Free Software actually doesn't give much freedom to 99% of all its users.

So what is the conclusion?

I think the Open Source and Free Software community who cares about the 4 freedoms of Stallman and being in control of their digital lives and user freedom has to expand their scope. Free Software licenses are needed but are by far not enough anymore to fight for user freedom and to guarantee users are in control of their digital life. The formula (Personal Hardware) + (Free Software) = (Digital Freedom) is not valid anymore. There are more ingredients needed. I hope that the Free Software community can and will reform itself to focus on more topics than licenses alone. The world needs people who fight for digital rights and user freedoms now more than ever.

Saturday, 24 August 2019

Walkthrough Installation of WackoWiki v5.5.12

WackoWiki is the wiki of my choice and one of the first open source projects I ever contributed to, and I still use wackowiki for personal use.

A few days ago, wackowiki released version 5.5.12. In this blog post I will try to share my experience of installing wackowiki on a fresh Ubuntu 18.04 LTS.

Ansible Role

I’ve created an example ansible role for wackowiki that covers the Requirements section: WackoWiki Ansible Role

Requirements

Ubuntu 18.04.3 LTS

apt -y install
       php
       php-common
       php-bcmath
       php-ctype
       php-gd
       php-iconv
       php-json
       php-mbstring
       php-mysql
       apache2
       libapache2-mod-php
       mariadb-server
       unzip

Apache2

We need to enable mod_rewrite in apache2 and also add the appropriate configuration to the default VirtualHost conf:

# a2enmod rewrite

# vim /etc/apache2/sites-available/000-default.conf

<VirtualHost *:80>
...
    # enable .htaccess
    <Directory /var/www/html/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Require all granted
    </Directory>
...
</VirtualHost>
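
After enabling the module and editing the VirtualHost, restart Apache so the changes take effect:

# systemctl restart apache2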

MySQL

wacko.sql

CREATE DATABASE IF NOT EXISTS wacko;
CREATE USER 'wacko'@'localhost' IDENTIFIED BY 'YOURNEWPASSWORD';
GRANT  ALL PRIVILEGES ON wacko.* TO 'wacko'@'localhost';
FLUSH  PRIVILEGES;

# mysql < wacko.sql
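
As a quick sanity check that the database exists (assuming the root account can log in over the local socket):

# mysql -e "SHOW DATABASES;" | grep wacko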

WackoWiki

curl -sLO https://downloads.sourceforge.net/wackowiki/wacko.r5.5.12.zip
unzip wacko.r5.5.12.zip
mv wacko.r5.5.12/wacko /var/www/html/wacko/
chown -R www-data:www-data /var/www/html/wacko/

Web Installation

01_wackowiki_install_5512.png

02_wackowiki_install_5512.png

03_wackowiki_install_5512.png

04_wackowiki_install_5512.png

05_wackowiki_install_5512.png

06_wackowiki_install_5512.png

07_wackowiki_install_5512.png

08_wackowiki_install_5512.png

09_wackowiki_install_5512.png

10_wackowiki_install_5512.png

Post Install

Last, we need to remove write permission from the wackowiki configuration file and move the setup folder out of the way:

root@ubuntu:~# chmod -w /var/www/html/wacko/config/config.php
root@ubuntu:~# mv /var/www/html/wacko/setup/ /var/www/html/._setup

11_wackowiki_install_5512.png

WackoWiki

12_wackowiki_install_5512.png

13_wackowiki_install_5512.png

14_wackowiki_install_5512.png

Tag(s): wacko, wiki

Monday, 19 August 2019

Blocking untrusted USB devices

badusb

For fun and security (and a bit of paranoia), I thought I should whitelist my trusted USB devices and block everything else.

USBGuard

We have a couple of tools that can help us with that. USBGuard is the one I found to be the most configurable and well documented.

NOTICE: All commands here require certain privileges. To make commands easier to read, I omitted adding sudo in the beginning. But you probably need to.

Installation

USBGuard should already be packaged for your favorite Linux distribution.

One important thing to consider, though, is that on Debian (and derivatives) installing a package that ships a systemd service file ends up with that service started and enabled by default. That means that if your input devices are USB-connected, you will find yourself locked out of your system. This may even include devices that are not physically plugged into a USB port (eg. your laptop's built-in keyboard).

The upstream developer actually has a relevant warning:

WARNING: before you start using usbguard be sure to configure it first unless you know exactly what you are doing (all USB devices will get blocked).

But that didn't stop the Debian developers who maintain that package from allowing the USBGuard daemon to start with zero configuration 🤷

Systemd

You can find more detailed guides on how to prevent this "feature", but for the scope of this post here is what I did.

Systemd comes with a mask feature, that will prevent a certain service from being started. So for instance, if you try this:

sudo systemctl mask nginx.service
sudo systemctl start nginx.service

You'll get this error:

Failed to start nginx.service: Unit nginx.service is masked.

In our case, we can't use the mask command because USBGuard is not installed yet. But what mask actually does is just create a symlink. So all we have to do is create it manually:

sudo ln -s /dev/null /etc/systemd/system/usbguard.service

And now we can safely install USBGuard:

sudo apt install usbguard

Configuration

The first thing to do is create an initial policy that whitelists all of our USB devices. Now is a good time to plug in the devices that you tend to use often and already trust. You can of course whitelist devices at any point.

usbguard generate-policy

The above command will display the list of your currently plugged-in devices with an allow keyword at the beginning. Let's save that to USBGuard's configuration file:

usbguard generate-policy > /etc/usbguard/rules.conf

Now it's safe to unmask, start and enable USBGuard daemon:

systemctl unmask usbguard.service
systemctl start usbguard.service
systemctl enable usbguard.service

Testing

To test that this actually works, try to plug in a new device that is not whitelisted yet, say a simple USB stick. Hopefully it will be blocked. To confirm that:

usbguard list-devices

This lists all your detected devices. The new device you just plugged in should have a block keyword at the beginning. For a more filtered output:

usbguard list-devices | grep block

You should see something like this:

13: block id 0xxx:0xxx serial <...>

Allowing devices

Now let's say you actually want to unblock this device because it came from a friend you trust. The command we ran above also printed an ID number, the first thing on that line. We can use that to allow the device:

usbguard allow-device 13

Whitelisting devices

Using allow-device doesn't whitelist the device forever. So let's say you bought a new external disk and you want to whitelist it. USBGuard has an append-rule command. You just need to paste the whole device line, starting with an allow keyword.

Plug the device and see the USBGuard output:

usbguard list-devices | grep block

You should see something like this:

21: block id 0xxx:0xxx serial <...>

Copy the whole line starting from id and then use it but prefix it with an allow keyword (mind the single quotes used to wrap the entire rule):

usbguard append-rule 'allow id 0xxx:0xxx serial <...>'

Editing rules

At any point you can see the whitelisted devices:

usbguard list-rules

And you use the id number at the beginning of each line in order to interact with that specific rule. For example, to remove a device:

usbguard remove-rule <id>

And remember, there is no such thing as absolute security. It all comes down to your Threat model.


Comments and reactions on Mastodon, Diaspora, Twitter and Lobsters.

Saturday, 17 August 2019

Building Archlinux Packages in Gitlab

GitLab is my favorite online git hosting provider, and I really love the CI feature (which most of the online project hosting providers are now also starting to support).

Archlinux uses git and you can find everything here: Arch Linux git repositories

There are almost 2500 packages there! There are 6500 packages in core/extra/community (the primary repos) and almost 55k packages in AUR, the Archlinux User Repository.

We are going to use git to retrieve our PKGBUILD from the Archlinux AUR as an example.
The same can be done with one of the core packages by using the git repositories above.

So here is a very simple .gitlab-ci.yml file that we can use to build an archlinux package in gitlab

image: archlinux/base:latest

before_script:
    - export PKGNAME=tallow

run-build:
  stage: build
  artifacts:
    paths:
    - "*.pkg.tar.xz"
    expire_in: 1 week
  script:
      # Create "Bob the Builder" !
    - groupadd bob && useradd -m -c "Bob the Builder" -g bob bob
      # Update archlinux and install git
    - pacman -Syy && pacman -Su --noconfirm --needed git base-devel
      # Git Clone package repository
    - git clone https://aur.archlinux.org/$PKGNAME.git
    - chown -R bob:bob $PKGNAME/
      # Read PKGBUILD
    - source $PKGNAME/PKGBUILD
      # Install Dependencies
    - pacman -Syu --noconfirm --needed --asdeps "${makedepends[@]}" "${depends[@]}"
      # Let Bob the Builder, build package
    - su - bob -s /bin/sh -c "cd $(pwd)/$PKGNAME/ && makepkg"
      # Get artifact
    - mv $PKGNAME/*.pkg.tar.xz ./

You can use this link to verify the above example: tallow at gitlab

But let me explain the steps:

  • First we create a user, Bob the Builder, as in archlinux we cannot use root to build a package, for security reasons.
  • Then we update our container and install git and base-devel group. This group contains all relevant archlinux packages for building a new one.
  • After that, we git clone the package repo
  • Install any dependencies. This is a neat trick I found in the archlinux forum: using the source command to create shell variables (arrays).
  • Now it is time for Bob to build the package !
  • and finally, we move the artifact to our local folder (see below for how to install it)
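
If the pipeline succeeds, the built package can be downloaded from the job's artifacts and installed locally. A sketch (the exact filename depends on the pkgver/pkgrel in the PKGBUILD):

sudo pacman -U tallow-*.pkg.tar.xz
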
Tag(s): archlinux, gitlab

Thursday, 15 August 2019

MinIO Intro Notes

MinIO is a high performance object storage server compatible with Amazon S3 APIs

In a previous article, I mentioned minio as an S3 gateway between my system and backblaze b2. I was impressed by minio. So in this blog post, I would like to investigate the primary use of minio as an S3 storage provider!

Install Minio

Minio is also software written in Go. That means we can simply use the static binary executable on our machine.

Download

The latest release of minio is here:

curl -sLO https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio

Version

./minio version

$ ./minio version

Version: 2019-08-01T22:18:54Z
Release-Tag: RELEASE.2019-08-01T22-18-54Z
Commit-ID: c5ac901e8dac48d45079095a6bab04674872b28b

Operating System

Although we can use the static binary from minio’s site, I would propose installing minio through your distribution’s package manager; on Arch Linux that is:

$ sudo pacman -S minio

this method will also provide you with a simple systemd service unit and a configuration file.

/etc/minio/minio.conf

# Local export path.
MINIO_VOLUMES="/srv/minio/data/"
# Access Key of the server.
# MINIO_ACCESS_KEY=Server-Access-Key
# Secret key of the server.
# MINIO_SECRET_KEY=Server-Secret-Key
# Use if you want to run Minio on a custom port.
# MINIO_OPTS="--address :9199"
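
With the package installed, the server can then be managed through systemd (assuming the unit shipped by the package is named minio.service):

$ sudo systemctl enable --now minio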

Docker

Or if you like docker, you can use docker!

docker pull minio/minio
docker run -p 9000:9000 minio/minio server /data

Standalone

We can run minio standalone:

$ minio server /data

Create a test directory to use as storage:

$ mkdir -pv minio_data/
mkdir: created directory 'minio_data/'

$ /usr/bin/minio server ./minio_data/

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ You are running an older version of MinIO released 1 week ago ┃
┃ Update: Run `minio update`                                    ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

Endpoint:  http://192.168.1.3:9000  http://192.168.42.1:9000  http://172.17.0.1:9000  http://172.18.0.1:9000  http://172.19.0.1:9000  http://192.168.122.1:9000  http://127.0.0.1:9000
AccessKey: KYAS2LSSPXRZFH9P6RHS
SecretKey: qPZnIBJDe6GTRrUWcfdtKk7GPL4fGyqANDzJxkur 

Browser Access:
   http://192.168.1.3:9000  http://192.168.42.1:9000  http://172.17.0.1:9000  http://172.18.0.1:9000  http://172.19.0.1:9000  http://192.168.122.1:9000  http://127.0.0.1:9000        

Command-line Access: https://docs.min.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.1.3:9000 KYAS2LSSPXRZFH9P6RHS qPZnIBJDe6GTRrUWcfdtKk7GPL4fGyqANDzJxkur

Object API (Amazon S3 compatible):
   Go:         https://docs.min.io/docs/golang-client-quickstart-guide
   Java:       https://docs.min.io/docs/java-client-quickstart-guide
   Python:     https://docs.min.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide

Update Minio

Okay, our package is from one week ago, but that’s fine. We can overwrite the binary installed by the package (although this is not recommended) with:

$ sudo curl -sLo /usr/bin/minio https://dl.min.io/server/minio/release/linux-amd64/minio

again, NOT recommended.

Check version

minio version

Version: 2019-08-01T22:18:54Z
Release-Tag: RELEASE.2019-08-01T22-18-54Z
Commit-ID: c5ac901e8dac48d45079095a6bab04674872b28b

minio update

An alternative way, is to use the built-in update method:

$ sudo minio update

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ You are running an older version of MinIO released 5 days ago    ┃
┃ Update: https://dl.min.io/server/minio/release/linux-amd64/minio ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

Update to RELEASE.2019-08-07T01-59-21Z ? [y/n]: y
MinIO updated to version RELEASE.2019-08-07T01-59-21Z successfully.

minio version

Version: 2019-08-07T01:59:21Z
Release-Tag: RELEASE.2019-08-07T01-59-21Z
Commit-ID: 930943f058f01f37cfbc2265d5f80ea7026ec55d

Run minio

run minio standalone, bound to localhost (not exposing our system to the outside):

minio server --address 127.0.0.1:9000 ~/./minio_data/

output

$ minio server --address 127.0.0.1:9000 ~/./minio_data/

Endpoint:  http://127.0.0.1:9000
AccessKey: KYAS2LSSPXRZFH9P6RHS
SecretKey: qPZnIBJDe6GTRrUWcfdtKk7GPL4fGyqANDzJxkur 

Browser Access:
   http://127.0.0.1:9000

Command-line Access: https://docs.min.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://127.0.0.1:9000 KYAS2LSSPXRZFH9P6RHS qPZnIBJDe6GTRrUWcfdtKk7GPL4fGyqANDzJxkur

Object API (Amazon S3 compatible):
   Go:         https://docs.min.io/docs/golang-client-quickstart-guide
   Java:       https://docs.min.io/docs/java-client-quickstart-guide
   Python:     https://docs.min.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide

Web Dashboard

minio comes with its own web dashboard!

minio_localhost.png

minio_dashboard.png

New Bucket

Let’s create a new bucket for testing purposes:

minio_create_new_bucket.png

minio_new_bucket.png

minio_new_bucket_name.png

minio_bucket0001.png

Minio Client

minio comes with its own client, mc

Install minio client

Binary Download

curl -sLO https://dl.min.io/client/mc/release/linux-amd64/mc

or better through your package manager:

sudo pacman -S minio-client

Access key / Secret Key

Now let’s export our AK/SK in our environment:

export -p MINIO_ACCESS_KEY=KYAS2LSSPXRZFH9P6RHS
export -p MINIO_SECRET_KEY=qPZnIBJDe6GTRrUWcfdtKk7GPL4fGyqANDzJxkur

minio host

or you can configure the minio server as a host:

./mc config host add myminio http://127.0.0.1:9000 KYAS2LSSPXRZFH9P6RHS qPZnIBJDe6GTRrUWcfdtKk7GPL4fGyqANDzJxkur

I prefer this way, because I don’t have to export the keys every time.

List buckets

$ mc ls myminio
[2019-08-05 20:44:42 EEST]      0B bucket0001/

$ mc ls myminio/bucket0001
(empty)

List Policy

mc admin policy list myminio

$ mc admin policy list myminio
readonly
readwrite
writeonly

Credentials

If we do not want to get random Credentials every time, we can define them in our environment:

export MINIO_ACCESS_KEY=admin
export MINIO_SECRET_KEY=password
minio server --address 127.0.0.1:9000 .minio_data{1...10}

with minio client:

$ mc config host add myminio http://127.0.0.1:9000 admin password

mc: Configuration written to `/home/ebal/.mc/config.json`. Please update your access credentials.
mc: Successfully created `/home/ebal/.mc/share`.
mc: Initialized share uploads `/home/ebal/.mc/share/uploads.json` file.
mc: Initialized share downloads `/home/ebal/.mc/share/downloads.json` file.
Added `myminio` successfully.

mc admin config get myminio/ | jq .credential

$ mc admin config get myminio/ | jq .credential
{
  "accessKey": "8RMC49VEC1IHYS8FY29Q",
  "expiration": "1970-01-01T00:00:00Z",
  "secretKey": "AY+IjQZomX6ZClIBJrjgxRJ6ugu+Mpcx6rD+kr13",
  "status": "enabled"
}

s3cmd

Let’s configure s3cmd to use our minio data server:

$ sudo pacman -S s3cmd

Configure s3cmd

s3cmd --configure

$ s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: KYAS2LSSPXRZFH9P6RHS
Secret Key: qPZnIBJDe6GTRrUWcfdtKk7GPL4fGyqANDzJxkur
Default Region [US]: 

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: http://127.0.0.1:9000
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]: 
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: n
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name: 
New settings:
  Access Key: KYAS2LSSPXRZFH9P6RHS
  Secret Key: qPZnIBJDe6GTRrUWcfdtKk7GPL4fGyqANDzJxkur
  Default Region: US
  S3 Endpoint: http://127.0.0.1:9000
  DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.s3.amazonaws.com
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
ERROR: Test failed: [Errno -2] Name or service not known

Retry configuration? [Y/n] n

Save settings? [y/N] y
Configuration saved to '/home/ebal/.s3cfg'

Test it

$ s3cmd ls
2019-08-05 17:44  s3://bucket0001
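
We can also push and pull objects with s3cmd; a quick sketch, assuming a local file named test.txt:

$ s3cmd put test.txt s3://bucket0001/
$ s3cmd ls s3://bucket0001/
$ s3cmd get s3://bucket0001/test.txt test-copy.txt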

Distributed

Let’s make a more complex example and test the distributed capabilities of minio

Create folders

mkdir -pv .minio_data{1..10}

$ mkdir -pv .minio_data{1..10}

mkdir: created directory '.minio_data1'
mkdir: created directory '.minio_data2'
mkdir: created directory '.minio_data3'
mkdir: created directory '.minio_data4'
mkdir: created directory '.minio_data5'
mkdir: created directory '.minio_data6'
mkdir: created directory '.minio_data7'
mkdir: created directory '.minio_data8'
mkdir: created directory '.minio_data9'
mkdir: created directory '.minio_data10'

Start Server

Be aware that you have to use 3 dots (…) to enable erasure-code distribution (see below).

and start minio server like this:

minio server --address 127.0.0.1:9000 .minio_data{1...10}

$ minio server --address 127.0.0.1:9000 .minio_data{1...10}

Waiting for all other servers to be online to format the disks.

Status:         10 Online, 0 Offline.
Endpoint:  http://127.0.0.1:9000
AccessKey: CDSBN216JQR5B3F3VG71
SecretKey: CE+ti7XuLBrV3uasxSjRyhAKX8oxtZYnnEwRU9ik 

Browser Access:
   http://127.0.0.1:9000

Command-line Access: https://docs.min.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://127.0.0.1:9000 CDSBN216JQR5B3F3VG71 CE+ti7XuLBrV3uasxSjRyhAKX8oxtZYnnEwRU9ik

Object API (Amazon S3 compatible):
   Go:         https://docs.min.io/docs/golang-client-quickstart-guide
   Java:       https://docs.min.io/docs/java-client-quickstart-guide
   Python:     https://docs.min.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide

configure mc

$ ./mc config host add myminio http://127.0.0.1:9000 WWFUTUKB110NS1V70R27 73ecITehtG2rOF6F08rfRmbF+iqXjNr6qmgAvdb2
Added `myminio` successfully.

admin info

mc admin info myminio

$ mc admin info myminio
●  127.0.0.1:9000
   Uptime: 3 minutes
  Version: 2019-08-07T01:59:21Z
  Storage: Used 25 KiB
   Drives: 10/10 OK

minio_admin_info_drive_okay.png

Create files

Creating random files

for i in $(seq 10000) ;do echo $RANDOM > file$i ; done

and by the way, we can use mc to list our local files also!

$ mc ls file* | head

[2019-08-05 21:27:01 EEST]      6B file1
[2019-08-05 21:27:01 EEST]      5B file10
[2019-08-05 21:27:01 EEST]      5B file100
[2019-08-05 21:27:01 EEST]      6B file11
[2019-08-05 21:27:01 EEST]      6B file12
[2019-08-05 21:27:01 EEST]      6B file13
[2019-08-05 21:27:01 EEST]      6B file14
[2019-08-05 21:27:01 EEST]      5B file15
[2019-08-05 21:27:01 EEST]      5B file16

Create bucket

mc ls myminio

$ mc mb myminio/bucket0002
Bucket created successfully `myminio/bucket0002`.

$ mc ls myminio
[2019-08-05 21:41:35 EEST]      0B bucket0002/

Copy files

mc cp file* myminio/bucket0002/

minio_copy_files.png

be patient, even on a local filesystem it will take a long time.

minio_copy_files_finish.png

Erasure Code

copying from MinIO docs

you may lose up to half (N/2) of the total drives
MinIO shards the objects across N/2 data and N/2 parity drives

Here is the disk usage of the data directories:

$ du -sh .minio_data*

79M    .minio_data1
79M    .minio_data10
79M    .minio_data2
79M    .minio_data3
79M    .minio_data4
79M    .minio_data5
79M    .minio_data6
79M    .minio_data7
79M    .minio_data8
79M    .minio_data9

but what size did our files have?

$ du -sh files/
40M     files

Very interesting.

$ tree .minio_data*

Here is a shorter list, to get an idea of how the objects are structured: minio_data_tree.txt

$ mc ls myminio/bucket0002 | wc -l
10000

minio_dashboard_tree.txt

Delete a folder

Let’s see how minio handles corrupted disks, but before that let’s keep a hash of our files:

md5sum file* > /tmp/files.before

now remove:

$ rm -rf .minio_data10 

$ ls -la
total 0
drwxr-x---  1 ebal ebal    226 Aug 15 20:25 .
drwx--x---+ 1 ebal ebal   3532 Aug 15 19:13 ..
drwxr-x---  1 ebal ebal     40 Aug 15 20:25 .minio_data1
drwxr-x---  1 ebal ebal     40 Aug 15 20:25 .minio_data2
drwxr-x---  1 ebal ebal     40 Aug 15 20:25 .minio_data3
drwxr-x---  1 ebal ebal     40 Aug 15 20:25 .minio_data4
drwxr-x---  1 ebal ebal     40 Aug 15 20:25 .minio_data5
drwxr-x---  1 ebal ebal     40 Aug 15 20:25 .minio_data6
drwxr-x---  1 ebal ebal     40 Aug 15 20:25 .minio_data7
drwxr-x---  1 ebal ebal     40 Aug 15 20:25 .minio_data8
drwxr-x---  1 ebal ebal     40 Aug 15 20:25 .minio_data9

Notice that the folder .minio_data10 is not there.

mc admin info myminio/

$ mc admin info myminio/
●  127.0.0.1:9000
   Uptime: 6 days
  Version: 2019-08-14T20:37:41Z
  Storage: Used 57 MiB
   Drives: 9/10 OK

minio_admin_info_drive.png

This is the message in the minio server console:

API: SYSTEM()
Time: 20:23:50 EEST 08/15/2019
DeploymentID: 7852c1e1-146a-4ce9-8a05-50ad7b925fef
Error: unformatted disk found
       endpoint=.minio_data10
       3: cmd/prepare-storage.go:40:cmd.glob..func15.1()
       2: cmd/xl-sets.go:212:cmd.(*xlSets).connectDisks()
       1: cmd/xl-sets.go:243:cmd.(*xlSets).monitorAndConnectEndpoints()

Error: unformatted disk found

We can see that minio re-creates the missing disk/volume/folder on our system (still empty for now):

$ du -sh .minio_data*
79M    .minio_data1
0       .minio_data10
79M    .minio_data2
79M    .minio_data3
79M    .minio_data4
79M    .minio_data5
79M    .minio_data6
79M    .minio_data7
79M    .minio_data8
79M    .minio_data9

Heal

Minio comes with a healing ability:

$ mc admin heal --recursive myminio/

minio_heal.png

$ du -sh .minio_data*

79M     .minio_data1
79M     .minio_data10
79M     .minio_data2
79M     .minio_data3
79M     .minio_data4
79M     .minio_data5
79M     .minio_data6
79M     .minio_data7
79M     .minio_data8
79M     .minio_data9
$ mc admin heal --recursive myminio/
 ◐  bucket0002/file9999
    10,000/10,000 objects; 55 KiB in 58m21s
    ┌────────┬────────┬─────────────────────┐
    │ Green  │ 10,004 │ 100.0% ████████████ │
    │ Yellow │      0 │   0.0%              │
    │ Red    │      0 │   0.0%              │
    │ Grey   │      0 │   0.0%              │
    └────────┴────────┴─────────────────────┘
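
Since we kept the checksums in /tmp/files.before, a hedged way to verify the healed objects (not part of the original run; the restored/ directory name is just an example) is to copy everything back out and compare:

$ mkdir restored
$ mc cp --recursive myminio/bucket0002/ restored/
$ cd restored && md5sum -c /tmp/files.before

Depending on the mc version, the objects may land under restored/bucket0002/ instead, in which case cd there before running md5sum.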
Tag(s): minio, s3

Friday, 09 August 2019

Order your Akademy t-shirt *NOW*

If you want an Akademy 2019 t-shirt you have until Monday 12th Aug at 11:00 CEST (i.e. in 2 days and a bit) to order it.

Head over to https://akademy.kde.org/2019/akademy-2019-t-shirt and get yourself one of the exclusive t-shirts with Jen's awesome design :)

Sunday, 28 July 2019

My KDE Onboarding Sprint 2019 report

This week I took part in the KDE Onboarding Sprint 2019 (part of what's been known as the Nuremberg Megasprint, i.e. KDEConnect + KWin + Onboarding) in, you guessed it, Nuremberg.

The goal of the sprint was "how do we make it easier for people to start contributing". We mostly focused on the "start contributing *code*" side, though we briefly touched artists and translators too.

This is *my* summary; a more official one will appear somewhere else, so don't get annoyed at me if this post is a bit opinionated (though I'll try to keep that to a minimum).

The main issues we've identified when trying to contribute to KDE software are:
* Getting dependencies is [sometimes] hard
* Actually running the software is [sometimes] hard

Dependencies are hard

Say you want to build dolphin from the git master branch. For that (at the time of writing) you need KDE Frameworks 5.57, this means that if you run the latest Ubuntu or the latest OpenSUSE you can't build it because they ship older versions.

Our current answer for that is kdesrc-build, but it's not the easiest script to use, and sometimes you may end up building QtWebEngine or QtWebKit, which as a newbie is something you most likely don't want to do.

Running is hard

Running the software you have just built (once you've passed the dependencies problem) is not trivial either.

Most of our software can't be run uninstalled (KDE Frameworks are a notable exception here, but newbies rarely start developing KDE Frameworks).

This means you may try to run make install. If you didn't pass -DCMAKE_INSTALL_PREFIX pointing somewhere in your home directory, you'll probably have to run make install as root, since it defaults to /usr/local (this will be fixed in the next extra-cmake-modules release to point to a somewhat better prefix). That isn't very useful either, since none of your software is looking for stuff in /usr/local. Newbies may be tempted to use -DCMAKE_INSTALL_PREFIX=/usr, but that's *VERY* dangerous since it can easily mess up your own system.

For applications, our typical answer is to use -DCMAKE_INSTALL_PREFIX=/home/something/else at the cmake stage, run make install, and then set the environment variables to pick up things from /home/something/else. A newbie will probably ask "which variables?" at this stage (and not only newbies; I don't think I remember them all). To help with that we generate a prefix.sh in the build dir, and after the next extra-cmake-modules release we will tell users that they need to run it for things to work. A minimal sketch of that workflow is shown below.
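
A minimal sketch of that workflow (the prefix path and the application are just examples):

$ cmake -DCMAKE_INSTALL_PREFIX=$HOME/kde/usr ..
$ make
$ make install
$ source prefix.sh   # generated in the build dir, sets the needed environment variables
$ dolphin            # run the freshly built application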

But still, that's quite convoluted, and I know from experience answering people on IRC that lots of people get stuck there. It's also very IDE-unfriendly, since IDEs don't usually have the "install" concept; for them it's build & run.

Solutions

We ended up focusing on two possible solutions:

* Conan: Conan, "the C/C++ Package Manager for Developers" (or so they say), is something like pip in the Python world, but for C/C++. The idea is that by using Conan to get the dependencies we will solve most of the problems in that area. Whether it can help with the running side is still unclear, but some of our people involved in the Conan effort think they may either be able to come up with a solution or get the Conan devs to help us with it. Note that Conan is not my speciality by far, so this may not be totally correct.

* Flatpak: Flatpak is "a next-generation technology for building and distributing desktop applications on Linux" (or so they say). The benefits of using Flatpak are multiple, but focusing on onboarding: "getting dependencies" is solved, since dependencies are either part of the Flatpak SDK (so you already have them) or the Flatpak manifest for the application says how to get and build them, and that will automagically work for you as it works for everyone else using the same manifest. "Running" is solved too, because when you build a flatpak it gets built into a self-contained artifact, so running it is just running it; no installing or environment-variable fiddling is needed (see the sketch after this list). We also have [preliminary] support in KDevelop (or you can use GNOME Builder if you want a more flatpak-centric experience for now). The main problem we have with Flatpak at this point is that most of our apps are not totally flatpak-ready (e.g. Okular can't print), but that's something we need to fix anyway, so it shouldn't be counted as a problem (IMHO).
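
As a rough illustration of that workflow (the manifest name below is hypothetical, not one of KDE's real manifests):

$ flatpak-builder --user --install build-dir org.kde.someapp.json
$ flatpak run org.kde.someapp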

Summary

*Personally* I think Flatpak is the way to go here, but that means that collectively we need to say "let's do it": it's something we all have to take into account, and thus we have to streamline the manifest handling/updating, focus on fixing the Flatpak-related issues that our software may have, etc.

Thanks

I would like to thank SUSE for hosting us in their offices and the KDE e.V. for sponsoring my attendance to the sprint, please donate to KDE if you think the work done at sprints is important.
