Thoughts of the FSFE Community

Tuesday, 22 January 2019

Some Ideas for 2019

Paul Boddie's Free Software-related blog » English | 23:47, Tuesday, 22 January 2019

Well, after my last article moaning about having wishes and goals while ignoring the preconditions for, and contributing factors in, the realisation of such wishes and goals, I thought I might as well be constructive and post some ideas I could imagine working on this year. It would be a bonus to get paid to work on such things, but I don’t hold out too much hope in that regard.

In a way, this is to make up for not writing an article summarising what I managed to look at in 2018. But then again, it can be a bit wearing to have to read through people’s catalogues of work even if I do try and make my own more approachable and not just list tons of work items, which is what one tends to see on a monthly basis in other channels.

In any case, 2018 saw a fair amount of personal focus on the L4Re ecosystem, as one can tell from looking at my article history. Having dabbled with L4Re and Fiasco.OC a bit in 2017 with the MIPS Creator CI20, I finally confronted certain aspects of the software and got it working on various devices, which had been something of an ambition for at least a couple of years. I also got back into looking at PIC32 hardware and software experiments, tidying up and building on earlier work, and I keep nudging along my Python-like language and toolchain, Lichen.

Anyway, here are a few ideas I have been having for supporting a general strategy of building flexible, sustainable and secure computing environments that respect the end-user. Such respect is not limited to software freedom: it also extends to things like privacy, affordability and longevity, which are often disregarded in the narrow focus on only one set of end-user rights.

Building a General-Purpose System with L4Re

Apart from writing unfinished articles about supporting hardware devices on the Ben NanoNote and Letux 400, I also spent some time last year considering the mechanisms supporting filesystems in L4Re. For an outsider like myself, the picture isn’t particularly clear: the mechanisms don’t seem particularly well documented, and I am not convinced that the filesystem support is what people might expect from a microkernel-based system.

Like L4Re’s device support, the way filesystems are made available to tasks appears to use libraries extensively, whereas I would expect more use of individual programs, with interprocess messages and shared memory being employed to move the data around. To evaluate my expectations, I have been writing programs that operate in such a way, employing a “toy” filesystem to test the viability of such an approach. The plan is to make use of libext2fs to expose ext2/3/4 filesystems to L4Re components, then to try and replace the existing file access mechanisms with ones that access these file-serving components.

It is unfortunate that systems like these no longer seem to get much attention from the wider Free Software community. There was once a project to port GNU Hurd to L4-family microkernels, but with the state of the art having seemingly been regarded as insufficient or inappropriate, the focus drifted off onto other things, and now there doesn’t seem to be much appetite to resume such work. Even the existing Hurd implementation doesn’t get that much interest these days, either. And yet there are plenty of technical, social and practical reasons for not just settling for using the Linux kernel and systems based on it for every last application and device.

Extending Hardware Support within L4Re

It is all very well developing filesystem support, but there also has to be support for the things that provide the storage on which those filesystems reside. This is something I didn’t bother to look at when getting L4Re working on various devices because the priority was to have something to show: the display had to work, along with other well-understood peripherals that could be tested and demonstrated, and the keyboard or keypad could be supported with relative ease and then used to demonstrate other aspects of the solution.

It seems perverse that one must implement support for SD or microSD card storage all over again when the software being run is already being loaded from such storage, but this is rather like the way that “live CD” versions of GNU/Linux would load an environment directly from a CD, yet an installed version of such an environment might not have the capability to access the CD drive. Still, this is an unavoidable step along the path to make a practical system.

It might also be nice to make the CI20 support a bit better. Although a device notionally supported by L4Re, various missing pieces of hardware support needed to be added, and the HDMI output capability remains unavailable. Here, the mystery hardware left undocumented by the datasheet happens to be used in other chipsets and has been supported in the Linux kernel for many of them for quite some time. Hopefully, the exercise will not be too frustrating.

Another device that might be a good candidate for L4Re is the Efika MX Smartbook. Although it has a modest specification by today’s bloated-Web and pointless-eye-candy standards, it has a nice keyboard (with a more sensible layout than the irritating HP Elitebook, as I recently discovered) and is several times more powerful than the Letux 400. My brother, David, has already investigated certain aspects of the hardware, and it might make the basis of a nice portable system. And since support in Linux and other more commonly-used technologies has been left to rot, why not look into developing a more lasting alternative?

Reviving Mail-Based Communication

It is tiresome to see the continuing neglect of the health of e-mail, despite it being used as the bedrock of the Internet’s activities, while proprietary and predatory social media platforms enjoy undeserved attention and promotion in mass media and in society at large. Governmental and corporate negligence mean that the average person is subjected to an array of undesirable, disturbing and generally unwanted communications from chancers and criminals through their e-mail accounts which, if it had ever happened to the same degree with postal mail, would have seen people routinely arrested and convicted for their activities.

It is tempting to think that “they” do not want “us” to have a more reliable form of mail-based communication because that would involve things like encryption and digital signatures. Even when these things are deemed necessary, they always seem to be rolled out as part of a special service that hosts “our” encryption and signing keys for us, through which we must go to access our messages. It is, I suppose, yet another example of the abandonment of common infrastructure: that when something better is needed, effort and attention is siphoned off from the “commons” and ploughed into something separate that might make someone a bit of money.

There are certainly challenges involved in making e-mail better, with any discussion of managing identities, vouching for and recognising correspondents, and the management of keys most likely to lead to dispute about the “best” way of doing things. But in the end, we probably find ourselves pursuing perfect solutions that do everything whilst proprietary competitors just focus on doing a handful of things effectively. So I envisage turning this around and evaluating whether a more limited form of mail-based communication can be done in the way that most people would need.

I did look fairly recently at a selection of different projects seeking to advise and support people on providing their own e-mail infrastructure. That is perhaps worth an article in its own right. And on the subject of mail-based communication, I hope to look a bit more at imip-agent again after neglecting it for so long.

Sustaining a Python Alternative

One motivation for developing my Python-like language and toolchain, Lichen, was to explore ways in which Python might have been developed to better suit my own needs and preferences. Although I still use Python extensively, I remain mindful of the need to write conservative, uncomplicated code without the kind of needless tricks and flourishes that Python’s expanding feature set can tempt the developer with, and thus I almost always consider the possibility of being able to use the Lichen toolchain with my projects one day.

Lichen may still be a proof of concept, but there has been work done on gradually pushing it towards being genuinely usable. I spent some time considering the way floating point numbers might be represented, and this raised some interesting issues around how they might be stored within instances. As with the tuple performance optimisations, I hope to introduce floating point support into the established feature set of Lichen and to offer decent-enough performance, with the latter aspect being yet another path of investigation this year.

Documenting and Publishing

A continuing worry I have is whether I should still be running MoinMoin sites or even sites derived from MoinMoin “export dumps” for published information that is merely static. I have spent some time developing a parsing and formatting tool to generate static content from Moin content, thus avoiding running Moin altogether and also avoiding having to run a script acting as a URL-preserving front-end to exported Moin content (which is unfortunately required because of how the “export dump” seems to work).

Currently, this tool supports HTML and Moin output, the latter to test the parsing activity, with Graphviz content rendered as inline SVG or in other supported formats (although inline SVG is really what you want). Some macros are supported, but I need to add support for others, like the search macros which are useful for generating page listings. Themes are also supported, but I need to make sure that things like tables of contents – already supported with a macro – can be styled appropriately.

Already, I can generate the bulk of my existing project documentation, and the aim here is to be able to focus on improving that documentation, particularly for things like Lichen that really need explanations to be written before I need to start reviewing the code from scratch as if I were a total newcomer to the work. I have also considered using this tool as the basis for a decentralised wiki solution, but that can probably wait for a while given how many other things I have said I want to do!

Anything More?

There are probably other things that are worth looking at, but these are perhaps the ones I feel are most pressing. It could be said that pursuing all these at once would spread me and my efforts very thin, but I tend to cycle through projects periodically and hope that I can catch up with what I was previously doing, hence the mention above of documenting my work.

I wonder how much other people think about the year ahead, whether it is a productive and ultimately rewarding exercise to state aspirations and goals in this kind of way. New Year’s resolutions are a familiar practice, of course, but here I make no promises!

Nevertheless, a belated Happy New Year to anyone still reading! I hope we can all pursue our objectives enthusiastically over the year ahead and make a real and positive difference to computing, its users and to our societies.

Monday, 21 January 2019

Using Terraform and cloud-init on Hetzner

Evaggelos Balaskas - System Engineer | 20:09, Monday, 21 January 2019

Using Terraform by HashiCorp and cloud-init on Hetzner cloud provider.

Nowadays, with the help of modern tools, we can treat our infrastructure as code. This approach is very useful because we can have an immutable design for our infrastructure by declaring the state we would like it to be in. It also provides flexibility and a more generic way of handling our infrastructure as lego bricks, especially when scaling.

UPDATE: 2019.01.22

 

Hetzner

We need to create an API access token within a new project under the Hetzner Cloud console.

(screenshot: hetzner_token.png)

Copy this token; with that in place we can continue with terraform.
For the purposes of this article, I am going to use 01234567890 as the API token.
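Terraform can also pick the token up from the environment: any variable named foo can be supplied through an environment variable called TF_VAR_foo. A small sketch, assuming the variable is called hcloud_token as in the terraform files later in this post:

$ export TF_VAR_hcloud_token="01234567890"

With that exported, the later terraform plan and terraform apply runs will not prompt for the token.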

 

Install Terraform

The latest terraform version at the time of writing this blog post is v0.11.11.

$ curl -sL https://releases.hashicorp.com/terraform/0.11.11/terraform_0.11.11_linux_amd64.zip |
   bsdtar -xf- && chmod +x terraform
$ sudo mv terraform /usr/local/bin/

and verify it

$ terraform version
Terraform v0.11.11

 

Terraform Provider for Hetzner Cloud

To use the Hetzner cloud via terraform, we need the terraform-provider-hcloud plugin.

hcloud is part of the terraform providers repository, so the first time we initialize our project, terraform will download this plugin locally.

Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "hcloud" (1.7.0)...
...
* provider.hcloud: version = "~> 1.7"

 

Compile hcloud

If you like, you can always build hcloud from the source code.
There are notes on how to build the plugin here: Terraform Hetzner Cloud provider.
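Roughly, the build follows the usual Go workflow. This is only a sketch and makes assumptions: a working Go toolchain, GOPATH set, and the standard Makefile layout of the terraform-providers repositories; check the provider’s own README for the authoritative steps.

$ go get -d github.com/terraform-providers/terraform-provider-hcloud
$ cd $GOPATH/src/github.com/terraform-providers/terraform-provider-hcloud
$ make build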

GitLab CI

or you can even download the artifact from my gitlab-ci repo.

Plugin directory

You will find the terraform hcloud plugin under your current directory:

./.terraform/plugins/linux_amd64/terraform-provider-hcloud_v1.7.0_x4

I prefer to keep the terraform plugins centralised under my home directory:

$ mkdir -pv ~/.terraform.d/plugins/linux_amd64/
$ mv ./.terraform/plugins/linux_amd64/terraform-provider-hcloud_v1.7.0_x4 ~/.terraform.d/plugins/linux_amd64/terraform-provider-hcloud

or if you choose the artifact from gitlab:

$ curl -sL -o ~/.terraform.d/plugins/linux_amd64/terraform-provider-hcloud "https://gitlab.com/ebal/terraform-provider-hcloud-ci/-/jobs/artifacts/master/raw/bin/terraform-provider-hcloud?job=run-build"

That said, when working with multiple terraform projects you may find that you need different versions of the same provider plugin. In that case it is better to keep them under the current working directory/project instead of your home directory. Perhaps one project needs v1.2.3 and another v4.5.6 of the same plugin.

 

Hetzner Cloud API

Here are a few examples of how to use the Hetzner Cloud API:

$ export API_TOKEN="01234567890"

$ curl -sH "Authorization: Bearer $API_TOKEN" https://api.hetzner.cloud/v1/datacenters | jq -r .datacenters[].name
fsn1-dc8
nbg1-dc3
hel1-dc2
fsn1-dc14
$ curl -sH "Authorization: Bearer $API_TOKEN" https://api.hetzner.cloud/v1/locations | jq -r .locations[].name
fsn1
nbg1
hel1
$ curl -sH "Authorization: Bearer $API_TOKEN" https://api.hetzner.cloud/v1/images | jq -r .images[].name
ubuntu-16.04
debian-9
centos-7
fedora-27
ubuntu-18.04
fedora-28
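In the same spirit, and assuming the server_types endpoint behaves like the ones above, you can also list the available server types, which is handy when choosing a value for server_type in the terraform files below:

$ curl -sH "Authorization: Bearer $API_TOKEN" https://api.hetzner.cloud/v1/server_types | jq -r .server_types[].name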

 

hetzner.tf

At this point, we are ready to write our terraform file.
It can be as simple as this (CentOS 7):

# Set the variable value in *.tfvars file
# or using -var="hcloud_token=..." CLI option
variable "hcloud_token" {}

# Configure the Hetzner Cloud Provider
provider "hcloud" {
  token = "${var.hcloud_token}"
}

# Create a new server running centos
resource "hcloud_server" "node1" {
  name = "node1"
  image = "centos-7"
  server_type = "cx11"
}
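As the comment at the top of the file hints, the hcloud_token variable can be supplied through a *.tfvars file or on the command line (or through the TF_VAR_hcloud_token environment variable mentioned earlier). A minimal sketch, reusing the example token:

$ echo 'hcloud_token = "01234567890"' > terraform.tfvars

# or pass it directly:
$ terraform plan -var="hcloud_token=01234567890"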

 

Project_Ebal

or a more complex config: Ubuntu 18.04 LTS

# Project_Ebal
variable "hcloud_token" {}

# Configure the Hetzner Cloud Provider
provider "hcloud" {
  token = "${var.hcloud_token}"
}

# Create a new server running Ubuntu 18.04
resource "hcloud_server" "ebal_project" {
  name = "ebal_project"
  image = "ubuntu-18.04"
  server_type = "cx11"
  location = "nbg1"
}

 

Repository Structure

Although in this blog post we have a small and simple example of using the Hetzner cloud with terraform, on larger projects it is usually best to have separate terraform files for variables, code and output. For more info, you can take a look here: VCS Repository Structure - Workspaces

  ├── variables.tf
  ├── main.tf
  └── outputs.tf
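Terraform loads every *.tf file in the working directory, so splitting the configuration is just a matter of moving blocks into separate files: for this post that would mean the variable block in variables.tf, the provider and resource blocks in main.tf, and the output declaration (shown later) in outputs.tf. A quick sketch of checking the split, assuming those file names:

$ ls *.tf
main.tf  outputs.tf  variables.tf
$ cat variables.tf
variable "hcloud_token" {}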

 

Cloud-init

Using cloud-init with Hetzner is very simple.
We just need to add the declaration user_data = "${file("user-data.yml")}" to our terraform file.
So our previous tf file now looks like this:

# Project_Ebal
variable "hcloud_token" {}

# Configure the Hetzner Cloud Provider
provider "hcloud" {
  token = "${var.hcloud_token}"
}

# Create a new server running Ubuntu 18.04
resource "hcloud_server" "ebal_project" {
  name = "ebal_project"
  image = "ubuntu-18.04"
  server_type = "cx11"
  location = "nbg1"
  user_data = "${file("user-data.yml")}"
}

To get the IP address of the virtual machine, I would also like to have an output declaration:

output "ipv4_address" {
  value = "${hcloud_server.ebal_project.ipv4_address}"
}
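After a successful apply, this output can be read back at any time with terraform output, which also makes it easy to use from scripts. A small sketch, assuming the ubuntu user that the cloud-init file below creates:

$ terraform output ipv4_address
1.2.3.4
$ ssh -l ubuntu $(terraform output ipv4_address)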

 

Cloud-init

You will find more notes on cloud-init on a previous blog post: Cloud-init with CentOS 7.

Below is an example of user-data.yml:

#cloud-config

disable_root: true
ssh_pwauth: no

users:
  - name: ubuntu
    ssh_import_id:
     - gh:ebal
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL

# Set TimeZone
timezone: Europe/Athens

# Install packages
packages:
  - mlocate
  - vim
  - figlet

# Update/Upgrade & Reboot if necessary
package_update: true
package_upgrade: true
package_reboot_if_required: true

# Commands to run once at first boot
runcmd:
  - figlet Project_Ebal > /etc/motd
  - updatedb

 

Terraform

The first thing to do with terraform is to initialize our environment.

Init

$ terraform init

Initializing provider plugins...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

 

Plan

Of course it is not necessary to run plan first and then run plan again with -out.
You can skip the first step; it exists here only for documentation purposes.

$ terraform plan


Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + hcloud_server.ebal_project
      id:            <computed>
      backup_window: <computed>
      backups:       "false"
      datacenter:    <computed>
      image:         "ubuntu-18.04"
      ipv4_address:  <computed>
      ipv6_address:  <computed>
      ipv6_network:  <computed>
      keep_disk:     "false"
      location:      "nbg1"
      name:          "ebal_project"
      server_type:   "cx11"
      status:        <computed>
      user_data:     "sk6134s+ys+wVdGITc+zWhbONYw="

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

 

Out

$ terraform plan -out terraform.tfplan


Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + hcloud_server.ebal_project
      id:            <computed>
      backup_window: <computed>
      backups:       "false"
      datacenter:    <computed>
      image:         "ubuntu-18.04"
      ipv4_address:  <computed>
      ipv6_address:  <computed>
      ipv6_network:  <computed>
      keep_disk:     "false"
      location:      "nbg1"
      name:          "ebal_project"
      server_type:   "cx11"
      status:        <computed>
      user_data:     "sk6134s+ys+wVdGITc+zWhbONYw="

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

This plan was saved to: terraform.tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "terraform.tfplan"

 

Apply

$ terraform apply "terraform.tfplan"

hcloud_server.ebal_project: Creating...
  backup_window: "" => "<computed>"
  backups:       "" => "false"
  datacenter:    "" => "<computed>"
  image:         "" => "ubuntu-18.04"
  ipv4_address:  "" => "<computed>"
  ipv6_address:  "" => "<computed>"
  ipv6_network:  "" => "<computed>"
  keep_disk:     "" => "false"
  location:      "" => "nbg1"
  name:          "" => "ebal_project"
  server_type:   "" => "cx11"
  status:        "" => "<computed>"
  user_data:     "" => "sk6134s+ys+wVdGITc+zWhbONYw="
hcloud_server.ebal_project: Still creating... (10s elapsed)
hcloud_server.ebal_project: Still creating... (20s elapsed)
hcloud_server.ebal_project: Creation complete after 23s (ID: 1676988)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

ipv4_address = 1.2.3.4

 

SSH and verify cloud-init

$ ssh 1.2.3.4 -l ubuntu

Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-43-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Fri Jan 18 12:17:14 EET 2019

  System load:  0.41              Processes:           89
  Usage of /:   9.7% of 18.72GB   Users logged in:     0
  Memory usage: 8%                IP address for eth0: 1.2.3.4
  Swap usage:   0%

0 packages can be updated.
0 updates are security updates.

project_ebal
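Beyond the motd, cloud-init itself can be queried on the new server to confirm that the configuration ran. A quick sketch, assuming the Ubuntu 18.04 image, whose cloud-init ships the status subcommand:

$ cloud-init status --wait
$ sudo tail -n 20 /var/log/cloud-init-output.log    # package installs and runcmd output end up here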

 

Destroy

Be careful: without narrowing the operation down, terraform destroy will destroy every resource managed within your working directory/project. So it is always good practice to explicitly destroy a specific resource.

A plain $ terraform destroy is better written as:

$ terraform destroy -target=hcloud_server.ebal_project

hcloud_server.ebal_project: Refreshing state... (ID: 1676988)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  - hcloud_server.ebal_project

Plan: 0 to add, 0 to change, 1 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

hcloud_server.ebal_project: Destroying... (ID: 1676988)
hcloud_server.ebal_project: Destruction complete after 1s

Destroy complete! Resources: 1 destroyed.
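As an optional final check, the Hetzner Cloud API from earlier can confirm that no servers are left in the project; this reuses the API_TOKEN exported above and should print nothing once the server is gone:

$ curl -sH "Authorization: Bearer $API_TOKEN" https://api.hetzner.cloud/v1/servers | jq -r .servers[].name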

 

That’s it!

 

Sunday, 20 January 2019

Kubernetes Certification CKA – readings and advices about how to obtain it

english – Davide Giunchi | 22:00, Sunday, 20 January 2019

(image: CKA certificate)

I’ve recently obtained the official “Certified Kubernetes Administrator” certification. Some people write to me on LinkedIn asking for opinions about how to pass the exam, so I’ve decided to share my experience and write down some advice and useful resources. Every CKA candidate must sign an NDA, so I can’t spread specific details.

If you don’t have specific knowledge about container technologies and Docker, read the O’Reilly book “Docker: Up & Running“. The exam doesn’t require specific knowledge about Docker but, since Kubernetes is an orchestrator for containers, this is a necessary foundation.

IMHO the best book about k8s is “Kubernetes in Action”. It covers a lot of aspects of this software and is very detailed. You don’t need to read it from beginning to end; some chapters of “part 3” are not required by the exam.

In parallel with your reading, you need to practise, so open an account with a cloud provider that offers managed Kubernetes clusters. By now there are a lot of providers that support it; the one generally considered the best for k8s is Google Cloud Platform. Sign up for an account, enter your credit card and you will get $300 of free credit to be used within 1 year.

First, practise with clusters already created and managed by the provider with GKE. You need to understand well how to work with Pod, Deployment, Service, ConfigMap etc., and you need to get very comfortable with the kubectl command line, since this is the principal Kubernetes CLI.
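As a rough illustration (not exam material, just the kind of day-to-day commands worth internalising):

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=ClusterIP
$ kubectl get pods,services -o wide
$ kubectl describe deployment nginx
$ kubectl create configmap app-config --from-literal=mode=test
$ kubectl delete deployment nginx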

After that period, learn how to install a cluster yourself: create 3 VMs (you can use g1-small instances with the preemptible feature so you consume little credit), connect via SSH and create a cluster formed by 1 master and 2 workers with kubeadm.
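The kubeadm flow looks roughly like this. It is only a sketch and assumes that a container runtime plus the kubeadm, kubelet and kubectl packages are already installed on all three VMs, and that a pod network add-on such as flannel will be installed afterwards (hence the CIDR); the token and hash placeholders come from the output of kubeadm init:

# on the master:
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
$ mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

# on each worker:
$ sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>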

Connect to the servers, check the k8s systemd units and where the logs are located and… break it! Part of the exam will be about debugging a broken cluster. After you have fixed the cluster, destroy the virtual machines, create another 3 machines and break it in a different way! 🙂
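When poking at a deliberately broken cluster, these are the kind of places to look first; a sketch, assuming a kubeadm-based installation:

$ kubectl get nodes
$ kubectl -n kube-system get pods
$ sudo systemctl status kubelet
$ sudo journalctl -u kubelet --since "10 min ago"
$ sudo ls /etc/kubernetes/manifests/    # static pod manifests for the control plane components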

During the exam you will be allowed to use only the official documentation, so during your tests get accustomed to using https://kubernetes.io/docs and avoid StackOverflow & co.

The cka-practice-environment is a good test environment; you can easily run it with docker-compose. The advice in this document is useful, and this “commented curriculum” is a useful recap before the exam. At the end, go through “Kubernetes the Hard Way” to learn how to configure a cluster by hand and to understand the details of every component.

When you are ready, schedule the exam. It is a practical exam lasting 3 hours at most and, like every Linux Foundation/CNCF exam, it is taken remotely: you need a computer with Chrome and a particular extension installed, and this extension will share your webcam, desktop and microphone. When the exam starts, you get a Linux console with kubectl already configured for 6 clusters; at the beginning of each of the questions you are told which cluster to act on via kubectl config use-context.

The exam consists of 24 questions, and a minimum score of 74% is required to pass. The three hours are enough to complete all the exercises; the exam is rigorous but can be passed by anyone who has prepared with commitment.

Good luck!

Wednesday, 16 January 2019

An Absence of Strategy?

Paul Boddie's Free Software-related blog » English | 13:10, Wednesday, 16 January 2019

I keep starting articles but not finishing them. However, after responding to some correspondence recently, where I got into a minor rant about a particular topic, I thought about starting this article and more or less airing the rant for a wider audience. I don’t intend to be negative here, so even if this sounds like me having a moan about how things are, I really do want to see positive and constructive things happen to remedy what I see as deficiencies in the way people go about promoting and supporting Free Software.

The original topic of the correspondence was my brother’s article about submitting “apps” to F-Droid, the Free Software application repository for Android, which somehow got misattributed to me in the FSFE newsletter. As anyone who knows both of us can imagine, it is not particularly unusual that people mix us up, but it does still surprise me how people can be fluid about other people’s names and assume that two people with the same family name are the same person.

Eventually, the correction was made, for which I am grateful, and it must be said that I do also appreciate the effort that goes into writing the newsletter. Having previously had the task of doing some of the Fellowship interviews, I know that such things require more work than people might think, largely go either unnoticed or unremarked, and as a participant in the process it can be easy to wonder afterwards if it was worth the bother. I do actually follow the FSFE Planet and the discussion mailing list, so I’d like to think that I keep up with what other people do, but the newsletter must have some value to those who don’t want to follow a range of channels.

A Rant about Free Software on Mobile

Well, it wasn’t as much of a rant as it was a moan about how there doesn’t seem to be a coherent strategy about Free Software on mobile devices. The FSFE has had some kind of campaign about Android for quite some time. What it amounts to is promoting Free Software applications and Free Software distributions on phones.

This probably isn’t significantly different from the activities promoted by the FSF, whose Defective By Design campaign features a gift guide promoting phones that run Free Software. The FSF also promotes and funds the Replicant project, more of which below.

For all I know, the situation about getting Free Software applications onto a phone probably isn’t all that dire, assuming that Google and phone vendors don’t try and prevent users from installing software that isn’t delivered via Google Play or other officially-sanctioned channels. Or as the Android documentation puts it:

<aside>Note: Some network providers don’t allow users to install applications from unknown sources.</aside>

Of course, this is rather reminiscent of the “bad old days” where some people could copy things on and off their phone using Bluetooth (or for those with particularly long memories, infrared communication) whereas others had those features disabled by their provider. So, while some people get to enjoy the benefits of Free Software, others are denied them: another case of divide-and-rule in action, I suppose.

But it is the situation about Free Software distributions, more specifically having a Free Software operating system with Free Software device drivers and Free Software firmware, that is most worrying. The FSFE campaign points people to the two enduring initiatives for putting a different kind of Android distribution on phones: Replicant and LineageOS (previously CyanogenMod).

While LineageOS seems to try and support as many devices as possible, it relies on non-free software to support device hardware. Meanwhile, Replicant employs only Free Software and is therefore limited in which devices it may support and to what extent those devices’ features will function.

Although I can’t really muster much enthusiasm for Android and its derivatives, I don’t think there is anything wrong with trying to provide a completely Free Software distribution of that software. Certainly, there will always be challenges with the evolution of the upstream code, being steered this way and that by its corporate parent for maximum corporate benefit, but this isn’t really much different to clinging on to the pace of change (and churn) in an openly-developed project like the Linux kernel.

But ultimately, these initiatives will always be reacting to what other people, specifically those working for large companies, have decided to do. It will always be about chasing the latest release of the upstream software and making it acceptable for a Free Software audience. And it will always be about seeing whether the latest devices can be supported or not and then trying to figure out how. And this is where most people start to wonder why things always have to be like this, spurring my rant.

For the Long Term

To be fair to the FSFE’s Android-related campaign, the advice given does give people some concrete activities to consider: it isn’t simply the “go out and write code” battle cry that sometimes drifts through the air after an acrimonious episode where nobody can agree with each other. Helping F-Droid get more applications published, writing more Free Software applications, helping the operating system projects with their efforts: these are all worthwhile things for people to do.

But we only need look at the FSF’s ethical gift guide to see the severe challenges being faced over the longer term. For yet another year, the only offerings are older, refurbished Samsung smartphones, the most recent being the Galaxy S3 and Galaxy Note 2 from 2012. Now, there is nothing wrong with older hardware or refurbished devices. After all, I have written about older and much less powerful hardware than that which I believe still should have a place in modern life.

Nor should people regard the price of such refurbished units as particularly expensive. Of course you can buy a new phone with better specifications for the same or even less, but that new phone won’t be running only Free Software. Yes, there is always the Fairphone whose creators’ grip on Free Software, software longevity and other matters that weren’t confronted in the beginning is supposedly now rather better, although the hardware drivers remain non-free, so it isn’t really comparable, either.

Here, it is illustrative to consider community-originating efforts to develop smartphones, particularly since there is a perception that such efforts eventually end up pitching “expensive” and “underpowered” devices that “aren’t competitive”. There are obviously a collection of such initiatives ongoing at any given point, but ignoring random crowdfunding campaigns and corporate publicity stunts, we might usefully focus on some more familiar projects: the GTA04 successor to the Openmoko FreeRunner and the Neo900 successor to the Nokia N900.

Both of these projects are in a not-easily-explained state. The GTA04 device was made in a number of incrementally refined versions and sold primarily to people who already had a FreeRunner into which they could install the new hardware. However, difficulties with the last hardware revision meant that it was no longer viable as a product, with the cost of overcoming production problems being rather likely to exceed any money otherwise made on the units.

Meanwhile, the Neo900 project is effectively stalled having experienced several setbacks, notably the freezing of funds by PayPal for no really good reason, and difficulties in finding and retaining qualified people to do things like board layout. Although there are aspirations to get to completion, perhaps with some modification of the original goals, the path to completion remains obscure and uncertain. It is certainly hard to sustain my initial enthusiasm for the project, even if I do sympathise deeply with the struggles and difficulties of those trying to deliver something that they want to see succeed perhaps more than anyone else.

The future is not necessarily entirely bleak for these projects, though. Experience from the GTA04 effort may have been beneficial to the development of the Pyra handheld computer, whose own genesis has been troubled at times and yet forthrightly communicated in an honourably transparent fashion by its initiator, and the CPU board for that device may end up as the basis for a new product known as the GTA15. Given the common architectural heritage of the GTA04, N900 and Neo900, it would not be completely inconceivable that if some kind of way forward could be found for the Neo900, GTA15 might be some kind of contributing element to that.

What these projects should illustrate, however, is that the foundations of a Free Software mobile device are difficult to prepare, subject to sudden and potentially catastrophic setbacks or outright failure, and they require persistence and plenty of resources, not least of the financial kind but also the dedication of people with the right competence and experience. Sadly, these projects never get the attention, the recognition, or the generosity they deserve.

Seeing the Bigger Picture

If we care about being able to support all the different elements of our phones with Free Software, instead of crossing out items in the specification list, sacrificing functionality because nobody knows how it works without signing a non-disclosure agreement and then only being allowed to release a binary blob for loading into specific Linux kernel versions, then we need to be there at the start, when the phone gets designed, and play a role in making sure that everything can be supported by Free Software. Spending time and effort on figuring out someone else’s hardware is not time and effort well spent.

Indeed, from the moment a proprietary product gets into the hands of developers, the clock is ticking. Already, given the pace of product development and shipment, the product is heading for obsolescence and its manufacturer will be tooling up to make its successor. Even if downstream developers work quickly and get as much of the hardware supported as they can, there will only be a certain period before the product becomes difficult to obtain. And then the process of catch-up starts all over again with the next product.

Of course, product variations always used to happen with desktop and laptop computers. One day you’d get a laptop with one chipset and the next day the “same” laptop would contain something else. The only thing that eased the pain involved was broad hardware support for these kinds of devices, and even then there would be chipsets notorious for their lack of support in things like the Linux kernel.

Such pitfalls cultivated demands for products that could run Free Software and be supplied with such software instead of the usual proprietary products bundled as a consequence of Microsoft’s anticompetitive and coercive business practices. It was no longer enough to accept that we might buy a computer with bundled software and “install over it”, that this might emphasise our Free Software credentials. Credible advocates of Free Software have sought to identify vendors offering systems that are either already supplied with Free Software or that come without any preinstalled software at all, in both cases being fully capable of supporting a Free Software distribution.

But we now find ourselves in the absurd situation with mobile devices where remedial measures comparable to “install over it” are almost the only things people can suggest, that there really aren’t any mobile device vendors who can offer a bare, supportable device or one that is, say, already running Replicant and offering access to all of the device’s features. And although refurbished devices are sold that may run Replicant well enough, we lack another essential guarantee that may not have been so important in the past, one that community-originating hardware projects might be able to help with.

In being involved with the design of these devices, we can seek to dictate how long they remain viable. Instead of having a large corporation decide that now is the time for you to buy their next device and that the product you bought and liked is now deliberately unavailable, we can seek to keep making devices as long as they have a role and have people wanting to keep using them.

If something runs a Free Software distribution well enough, and if that device can still be made, it becomes a safe choice and something we can recommend to others. At last, we get some kind of certainty in a world whose stream of continual change is often fad-driven, exploitative and needless.

The Strategic Vacuum

So it seems obvious to me that if people want Free Software on phones, they need to cultivate the efforts to make that Free Software viable, which means cultivating sustainable hardware design and actually promoting and supporting the projects pursuing it. Otherwise, it is like trying to plant an orchard without paying attention to the soil, cursing that the trees will not grow whilst being oblivious to the fact that the ground is concrete.

And this goes beyond this particular domain. Free Software advocacy is all well and good, but there needs to be practical action that goes beyond pitching in and nudging things towards success. It is wonderful that collective effort and collaboration can take small endeavours and grow them into solutions that are useful for many, but it can be too much to expect everything to just coalesce as if by magic, that people and projects will just discover each other, work together, strengthen each other and multiply the benefits.

There needs to be a strategy, for people to identify real-world problems that are not being addressed by Free Software, to identify the projects that might be applied to those problems, and to propose actual solutions. In the absence of Free Software, proprietary and exploitative solutions are able to stake out their territory, entrench themselves and to thrive.

Here, I struggle to see which Free Software organisations have both the breadth of scope and the depth of focus to make a difference. Developer-driven organisations like Debian and KDE have a lot going on, and they deliver non-trivial software systems, and yet sustaining something like a viable mobile platform (encompassing hardware and software) is seemingly beyond their reach. Neither has a coherent answer to other significant challenges of our time, such as the dominance of proprietary (and destructive) communications platforms.

Meanwhile, advocacy-driven organisations like the FSFE merely give advice to people about how they might help Free Software. The FSF arguably combines this with actual development through the GNU project, and there are those lists of urgent activities, but one cannot help but have the impression that the urgency is reactive and not the product of strategic vision (beyond the goal of the proliferation of Free Software, of course). I would like to think that the Software Freedom Conservancy might combine things more effectively, but it perhaps remains more of an umbrella organisation with a similar emphasis on broad Free Software adoption.

I would like to think that the step of getting involved in Free Software is but the first step towards fairer, kinder, more transparent, more productive and more sustainable societies. Traces of such visions can be seen in the communications of various organisations and yet they largely hold back from suggesting what the next steps might be. And yet, I think, we will in future really need to take those steps in order to protect ourselves, our societies, and the things we care about.

In general, we notice the absence of strategy; in specific cases, we notice it keenly. Which organisations are willing and able to fill this strategic void coherently and decisively, to offer concrete solutions as opposed to suggestions, to have something definite to work towards, and to direct effort and resources into actually realising such goals? Surely, in this age of hoping for the best, those organisations will be the ones that deserve our support the most.

Tuesday, 15 January 2019

Open Call for Humanitarian Design Challenges

Blog – Think. Innovation. | 07:54, Tuesday, 15 January 2019

Would you like your product-related Humanitarian Design Challenge solved for free?

Do you have a brilliant idea for an innovative product for the beneficiaries of your mission-driven organization or for your colleagues in the field? Or do they use an existing product that needs serious improvement?

Then here is your chance: have a team of Industrial Product Design students create that for you!

What you get

We set up a collaboration with Rotterdam University of Applied Sciences to work on practical challenges in Humanitarian Development, Aid and Disaster Relief:

• Duration: February 2019 until May 2019
• Value: a team of students works 8 full-time weeks on your idea
• Result: a working prototype of the solution

What you bring in

• An idea for a physical product (may have a digital component, but not digital-only)
• A clear, practical and concise challenge description (format and examples available)
• 3 (online) meetings between February and March with users, stakeholders and/or challenge owner to provide context information and answer practical questions
• If the challenge owner or a knowledgeable field worker is in the Netherlands: a physical meeting in Rotterdam with the team at the start would be helpful

All designs and documentation of the solution will be freely published online as Open Source, to the benefit of you, users and other stakeholders, future (student) teams and anyone interested.

Previous programs resulted in, among others, solutions for Field Ready: a dust mask created from plastic waste for use in the air-polluted city of Kathmandu, an ear/nose syringe, bricks made from PET bottles, a hydroponics system, a 3D-printable sharps bottle box and a blood warmer.

Interested?

Contact me for more information and to receive the challenge description format and examples.

This description is also available in PDF.

– Diderik

Thursday, 10 January 2019

Okular: PDF Signature + Certificate support has landed

TSDgeos' blog | 23:12, Thursday, 10 January 2019

As of a few minutes ago, I merged the code from Chinmoy Ranjan Pradhan's GSOC project to support showing PDF signatures and certificates in Okular.



Signature handling is a big step for us, but it's also very complex, so I expect it to have bugs and things that can be improved, so testers are more than welcome.

Compiling is a bit "hard" since it requires poppler 0.73, which was released a few days ago.

But thanks to flatpak, there's no need to compile it, you can run the KDE Okular Nightly on your system to try it

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak remote-add --if-not-exists kdeapps --from https://distribute.kde.org/kdeapps.flatpakrepo
flatpak install kdeapps org.kde.okular

Note: if you have okular installed from another flatpak repo (for example flathub) this will switch you to the KDE Nightlies, you may want to switch back after testing.
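If you later want to switch back, one way (assuming your flatpak version supports the --reinstall option) is to reinstall the flathub build:

flatpak install --reinstall flathub org.kde.okular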

And then you can try the adobe sample pdf
flatpak run --share=network org.kde.okular https://blogs.adobe.com/security/SampleSignedPDFDocument.pdf

And you should get stuff like this

Saturday, 05 January 2019

sjfonts 2.1 released

TSDgeos' blog | 17:49, Saturday, 05 January 2019

More than 11 years after sjfonts 2.0.2 was released, today I'm announcing sjfonts 2.1.

It contains two enhancements contributed by Yuri Chornoivan:
* Delphine font now has the Euro sign
* Steve font now has "basic" Cyrillic characters

If by any chance your distribution is packaging them, update!


https://sourceforge.net/projects/sjfonts/files/sjfonts/sjfonts-2.1/

Yes, it's on sourceforge ;)

Wednesday, 02 January 2019

2018 and 2019

Free Software – Frank Karlitschek_ | 17:20, Wednesday, 02 January 2019

2018 is over and 2019 starts. This is a great opportunity to look back, reflect and to try to look into the future. I predict that 2019 will be a very good year for privacy, open source and decentralized cloud software. Maybe even the mainstream breakthrough of federated and decentralized internet services!

Let me explain why:

The mainstream opinion about centralized services started to change in 2018 and I think this trend will continue in 2019. More and more people see the issue with large, centralized data silos that control more and more of our private lives, democratic processes and society as a whole. Some examples from 2018 where bad news hit the press include:

  • The never ending list of Facebook scandals: Wired
  • Twitter election meddling: BostonGlobe
  • Amazon Alexa is listening to private conversations and is leaking the data: Heise and  BusinessInsider
  • Dropbox is leaking private data: TechTarget
  • Google Plus is insecure and will shut down: CNBC

This year, Europe introduced the GDPR to regulate the collection of private data. I believe it is a good start, and I think we ultimately need rules as described in the User Data Manifesto.
I expected that people in the US and Asia wouldn’t take the GDPR seriously and would make fun of Europeans’ tendency to ‘over-regulate’. So I was surprised to see that the GDPR was widely praised as a step in the right direction. People in Asia and the US are already asking for similar regulations in their markets, and California has already announced its own variant of the GDPR with the California Consumer Privacy Act.

This clearly shows that the world is changing. People realize more and more that extensive centralized data collection is a problem. This is an opportunity for open source and decentralized and federated alternatives to enter the mainstream.

At Nextcloud we have become widely recognized as one of the major alternatives. And this year was big for us, with three big releases introducing new technologies the world needs going forward. Let me name just a few:

  • End-to-end Encryption. In 2018 Nextcloud launched support for fully end-to-end encrypted file sync and share.
  • Nextcloud Talk. At the beginning of 2018 we launched Nextcloud Talk as a fully integrated, self-hosted, open source and decentralized chat and audio/video call solution.
  • Just a few weeks ago we launched Social with ActivityPub support to integrate with Mastodon and other projects of the Fediverse.
  • Simple Signup. In summer we launched the Simple Signup feature to make it possible for new users to sign up at one of the Nextcloud providers directly from the Mobile and Desktop apps.
  • We launched our unique Video Verification feature to become the most secure file share platform.
  • In summer we announced the initiative to ship Nextcloud preinstalled on millions of NEC routers, something that will take off in 2019; you might have seen the prototype devices on social media.
  • This fall we launched the Nextcloud Include program with funding from the Reinhard von König Preis for innovation. I’m happy we run this project together with my old friends from KDE.

In 2018 I traveled to more events and countries than ever before. It’s great to see how the Nextcloud community is growing all over the globe. On the company and business side we also have good news. The Nextcloud company is growing nicely in all areas. There will be separate news about this soon.

Of course it’s the mission of Nextcloud to not do everything alone. This is why we launched a lot of integration projects in 2018. For example with Rocket.Chat, Moodle, StorJ, Mastodon and others. I’m really happy to see that other open source and decentralization projects do as well as Nextcloud.

I think 2019 could be the year where open source, federated and self-hosted technology hits mainstream, taking on the proprietary, centralized data silos keeping people’s personal information hostage. Society becoming more critical about data collection will fuel this development.

If you want to make a difference then join Nextcloud or one of the other projects that develop open source, decentralized and federated solutions. I think 2019 is the year when we can win the internet back!

Tuesday, 01 January 2019

Cultural Techniques

English on Björn Schießle - I came for the code but stayed for the freedom | 22:00, Tuesday, 01 January 2019

(Image: CC BY-SA 3.0, via wikimedia/Paulis)

These days I looked up the German word “Kulturtechnik” on Wikipedia, which translates to “cultural techniques” in English. Surprisingly there is no English Wikipedia article for it, so I have to quote the German one (translated here). This section attracted my attention the most:

[Cultural techniques] require one or more prerequisites: mastery of reading, writing and arithmetic, the ability of pictorial representation, analytical skills, the application of cultural-historical knowledge, or the interlinking of different methods.

The development of cultural techniques is not the achievement of individuals but a group achievement that arises in a socio-cultural context. All of the prerequisites mentioned therefore always require social interaction and participation in society.

According to this description, cultural techniques are not capabilities of individuals but achievements of a collective, made in a socio-cultural context; they always require social interaction and participation. This means that reading, writing or mathematics are not cultural techniques by themselves, but collaborative writing would qualify as such, for example with a wiki or any other collaboration platform.

Free Software as a cultural technique

When talking about Free Software I have argued in the past that software is a new cultural technique. My argument typically was along the lines that software is everywhere and shapes our world. Software changed the way we live, learn, work, communicate, participate in society and share our culture. I think that’s still true, but this Wikipedia article added an important aspect for me. With the distinction between the tools and what we achieve collectively with them, I think we can argue that software alone is not a cultural technique, but Free Software is.

By definition, Free Software is a licensing model for software. A software license that gives the users the freedom to use, study, share and improve the software makes it Free Software. These days Free Software influences all areas of our lives. Cars, airplanes, cash terminals, payment machines, the internet, televisions, smart phones: I could continue the list indefinitely; nothing would be possible without Free Software. The freedom given by the license and the influence it has on all areas of our lives changed the way we develop software. A new development model was established and is used for most Free Software these days. The most successful Free Software is developed by open communities with a strong focus on collaboration and participation. This model is embraced by individuals and large organizations. According to the 2017 Linux Kernel Report, 4300 developers from over 500 companies have contributed to the kernel, with an impressive list of large companies among them. Everyone works on the same code, often in parallel. People discuss the changes proposed by each other and improve them together until they are ready to be released. The community is not limited to the code: in a similarly collaborative way the corresponding design, documentation, artwork and translations are created. People exchange ideas in online forums and real-time chats, and meet at conferences. All this happens in a transparent and socio-cultural context, open for everyone to join.

But it is not only the way Free Software is built; the usage of the software also fits the definition of cultural techniques. From a user perspective, Free Software fosters collaboration and participation in many ways. It can be shared freely, so it encourages collaboration in its area of use. For example, pupils can exchange the software they use to do their homework or to prepare their presentations. This teaches a culture of collaboration and makes sure that everyone has the same possibilities to participate. Different departments in organizations can exchange software and give it to as many employees as needed, without worrying that the maximum number of users allowed by the license has already been exceeded.

In a world defined by software, access to software decides who has access to our culture and our communication tools, and determines our possibilities in education and at work. Free Software makes sure that everyone has the same possibilities to participate in today’s society. It fosters collaboration and participation, in contrast to proprietary software, which divides people and makes sure that everyone is on their own.

All this proves to me that Free Software is the latest cultural technique. As such it requires special attention by policy makers and society. I think it is in all our interest to protect and foster this new cultural technique.

Friday, 28 December 2018

Ubuntu/Gnome on a tablet

Seravo | 10:48, Friday, 28 December 2018

Back in 2012 we blogged about Meego/Mer/Nemo, Android for x86 and other Linux operating systems for touchscreens and tablets. Back in the day we also compared Ubuntu Unity vs. Gnome 3 and which would work better as a touch operated system, and Gnome 3 was clearly the more mature system.

Since then Ubuntu has dropped Unity and migrated to Gnome 3, and many other parts of the software ecosystem have matured as well, so we decided to take Ubuntu on a tablet for another spin.

Ubuntu 18.04 LTS on the ExoPC (WeTab) – easy to install!

The ExoPC is a tablet device from Intel with a regular Intel Atom x86 processor, 2 GB RAM, a 64 GB SSD, 2 USB ports, a touch screen etc. Not a high-end tablet by today’s specs, but it was pretty nice back in 2013 when it came out. It is still a decent piece of hardware, not least because all of the device drivers for it can be found in the mainline Linux kernel, and therefore Ubuntu 18.04 installed on it works fully out-of-the-box as expected.

Installation is easy. Just boot the tablet with a USB installation drive inserted, press the “BBS” icon during boot and start the installation; the rest is familiar to anybody who has installed Ubuntu before. An external keyboard connected via USB is helpful during the process, though.

All of the hardware is detected as one would expect. WLAN works, camera works, sound, microphone and everything else we tested works. Even the rotation sensor works and the screen turns around as the tablet physically rotates. Since the tablet is very old, the experience is a bit sluggish though.

Touch screen functionality

The basic Gnome 3 desktop that Ubuntu 18.04 comes with works fairly well on a touch device. Naturally, many of the apps don’t work that well, since they have been designed for desktop/laptop use, but you can manage.

For example, connecting to a wifi network is easy: just tap the system settings in the top right corner, select the wireless network, enter the password using the on-screen keyboard and there you go. To play Aisleriot, just tap the apps icon in the lower left corner, scroll to the game icon and tap it to launch. Playing cards with your finger works well, but playing Mines, for example, is impossible since there is no secondary mouse button gesture for marking a mine. Some parts of the UI are adapted to react to a long tap by popping up a menu, just like a right-click would, but long tap is not a built-in feature in all of Ubuntu/Gnome.

Why not just use Android?

Nowadays Android tablets are very capable and dirt cheap, so why waste time exploring other options? Most Android tablets are consumer devices, which means that the seller makes their money purely on the hardware as a one-off deal. The manufacturer is unlikely to provide software updates for more than 1-2 years, and even for that time only minimally. In a professional setting one would need to be able to buy the same line of devices for multiple years, with security updates ensured, plus backups, management, multi-user capabilities and so on. With a real Linux distribution (e.g. Debian, Ubuntu, RedHat, SUSE) you get all of these “corporate” features.

Using Ubuntu 18.04 you are guaranteed to have security updates until 2023. With a real Linux system you can run rsync+rdiff-backup on the /home directory to make backups via SSH automatically to a secure location, and if your device breaks down it is easy to re-install a clean replacement. With Android, each version has its vendor-specific quirks, and as you mostly don’t get root access to the devices at all, managing them is often cumbersome and unreliable. Being able to control the operating system and the contents of the computer can be very important in a corporate setting. Say you build a fleet management system and have a tablet installed in 500 trucks: having a stable and dependable device and software strategy for 5–10 years will be much more achievable using real Linux systems instead of Android.
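
As a rough illustration of the rsync+rdiff-backup idea mentioned above (the hostname, user and paths are invented for the example, not taken from any real setup), a nightly job could look something like this:

# Mirror /home to a backup server over SSH, keeping reverse increments
# so that older versions of files can still be restored later.
rdiff-backup /home/ backupuser@backup.example.com::/backups/tablet/home

# Optionally prune increments older than three months.
rdiff-backup --remove-older-than 3M backupuser@backup.example.com::/backups/tablet/home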

As a consumer or hobbyist you might also want to have control. With a real Linux distribution on your tablet you can manage it just like you manage your laptops and workstations.

Where to buy an Ubuntu tablet?

Unfortunately we were unable to find any online stores that sell pre-installed Ubuntu tablets. There are nowadays plenty of sellers of pre-installed Ubuntu laptops, but none for tablets. Hopefully more people will realize the benefits of having more control over their digital systems, and pre-installed Linux tablets (other than just Android) will become more common in the future!

Tuesday, 18 December 2018

Dropbox To End Sync Support For All Filesystems Except Ext4 on Linux

Evaggelos Balaskas - System Engineer | 15:26, Tuesday, 18 December 2018

I have been using only btrfs for the last few years, without any problem. Dropbox’s decision is based on supporting extended file attributes, and even though btrfs supports extended attributes, it seems you will get this error:

dropbox_error

I have the benefit of using encrypted disks via LUKS, so in this blog post I will only present a way to have a virtual disk with ext4 for your Dropbox folder, on top of your btrfs!

 

Allocating disk space

Let’s say that you have 2G of Dropbox space; allocate a 2G file:

fallocate -l 2G Dropbox.img

you can verify the disk image by:

qemu-img info Dropbox.img

image: Dropbox.img
file format: raw
virtual size: 2.0G (2147483648 bytes)
disk size: 2.0G

 

Map Virtual Disk to Loop Device

Then map your virtual disk to a loop device:

sudo losetup `sudo losetup -f` Dropbox.img

verify it:

losetup -a

/dev/loop0: []: (/home/ebal/Dropbox.img)

then loop0 it is.

 

Create an ext4 filesystem

sudo mkfs.ext4 -L Dropbox /dev/loop0

Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: 09c7eb41-b5f3-4c58-8965-5e366ddf4d97
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

 

Move Dropbox folder

You have to move your previous Dropbox folder to another location. Don’t forget to close the dropbox-client and any other application that is using files from this folder.

mv Dropbox Dropbox.bak

renamed 'Dropbox' -> 'Dropbox.bak'

and then create a new Dropbox folder:

mkdir -pv Dropbox/

mkdir: created directory 'Dropbox/'

 

Mount the ext4 filesystem

sudo mount /dev/loop0 ~/Dropbox/

verify it:

df -h ~/Dropbox/

Filesystem      Size  Used Avail Use% Mounted on
/dev/loop0      2.0G  6.0M  1.8G   1% /home/ebal/Dropbox

and give the right perms:

sudo chown -R ebal:ebal ~/Dropbox/
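
If you want the ext4 image to be mounted automatically after a reboot, an /etc/fstab entry along these lines should do it (a sketch assuming the paths used above; the loop option lets mount set up the loop device for you):

/home/ebal/Dropbox.img  /home/ebal/Dropbox  ext4  loop,defaults  0  0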

 

Copy files

Now, copy the files from the old Dropbox folder to the new:

rsync -ravx Dropbox.bak/ Dropbox/

sent 464,823,778 bytes  received 13,563 bytes  309,891,560.67 bytes/sec
total size is 464,651,441  speedup is 1.00

and start dropbox-client

dropbox_running

Tag(s): dropbox

Saturday, 15 December 2018

Setup VLC Remote

Evaggelos Balaskas - System Engineer | 19:11, Saturday, 15 December 2018

Four Step Process

vlc_01.png
vlc_02.png
vlc_03.png

$ sudo iptables -nvL | grep 8765

0    0    ACCEPT    tcp  --  *    *    192.168.0.0/24    0.0.0.0/0    tcp dpt:8765
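
For reference, a rule like the one below would produce the entry shown above, allowing hosts on the local 192.168.0.0/24 network to reach the remote interface on TCP port 8765 (a sketch; adjust the subnet and port to your own setup):

$ sudo iptables -I INPUT -p tcp -s 192.168.0.0/24 --dport 8765 -j ACCEPT
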
Tag(s): vlc

Saturday, 08 December 2018

Open Innovation & Open Design

Blog – Think. Innovation. | 16:39, Saturday, 08 December 2018

This article originally appeared on thespindle.org about my presentation and workshop on Technology Trends, Open Source and Open Design for Humanitarian NGOs and other organizations:

On November 27th 2018, The Spindle, in collaboration with HumanityX, organised a Future Session about global technological developments and open innovation & open design. Participants in this meeting came from various organisations, sectors and backgrounds, which provided fruitful input and discussions, especially during the workshop part of the session.

The session was led by expert Diderik van Wingerden, who is an Open Source Innovation expert and pragmatic idealist. To find out more about Diderik and his work, visit www.think-innovation.com. You can find Diderik’s presentation and the material that he used for this session here.

Introduction to Technological Trends
Diderik started off by presenting a great selection of today’s technological trends and developments, among them virtual reality, big (open) data, artificial intelligence (AI), blockchain, 3D printing, open source & open design, the internet of things, and robots & drones.

New Dimensions
Today, new dimensions are emerging thanks to many open initiatives. Diderik briefly introduced us to the technological trends mentioned above and argued that the world is going “Open Source”, which basically comes down to freedom: the freedom to study, use, change, repair, distribute, sell, improve and so on, for free. It became clear that many big companies, such as Microsoft, Facebook, Google and IBM, do Open Source because they can potentially make more profit out of it. This is because they do not share their data, only their production methods; the core component of their business is gathering, using and selling data.

Dream Collaborator Workshop
After the presentation, the interactive part of the session started. Participants were asked to think, from the perspective of their own organisation, of a ‘dream collaborator’ that could solve everything they strive for. The goals of the organisations and individuals present on this day reflect many SDGs, such as fighting poverty, advocating sexual & reproductive health rights, combating child exploitation and supporting equality. As stressed by Diderik, we cannot solve single SDGs with single solutions and should therefore strive to solve them all at once; they are very much interlinked and dependent on each other. So, participants were invited to think about how technology, now and in the future, could help solve the problems we are trying to combat. Diderik provided everyone with a so-called ‘Tech-Deck’, which consisted of 8 cards, each being one of the technological developments mentioned above.

The group was divided into three subgroups, each based on a case that was suggested during the session. Focusing on cases from Oxfam, Terre Des Hommes and CHOICE, the subgroups explored and imagined how the technological developments could contribute to solving the different challenges. Eventually, each group presented its case, and it became clear that all groups had really used their imagination and come up with inventive and innovative technological solutions that could potentially solve a great variety of issues in the development sector. A few examples are:

  • Using a 3D printer to print the birth control pill and condoms, or to print spare parts for machines, or prosthetics.
  • Using drones to quickly and effectively transport (first) aid kits, blood and medical equipment.
  • Using AI, Big Data and satellite images to monitor which areas are suitable for urban farming.
  • Using virtual reality to teach and change certain behaviours, or work on stress reduction through virtual experience.
  • Using blockchain to monitor and supervise production processes, or chains.
  • Using open design and internet of things to build your own greenhouse and contribute to food production.

Expert of the session, Diderik van Wingerden, surprised all participants by offering all organisations 1 hour of consultancy to bring ideas further! For all of you who became curious, have questions or want more information, mail to diderik [at] think-innovation.com.

In 2019, we will continue with new Future Sessions. Stay tuned since new information will follow soon. Hope to see you in 2019!

Sunday, 18 November 2018

Apple iOS Vs your Linux Mail, Contact and Calendar Server

Evaggelos Balaskas - System Engineer | 20:51, Sunday, 18 November 2018

The purpose of this blog post is to act as a visual guide/tutorial on how to setup an iOS device (iPad or iPhone) using the native apps against a custom Linux Mail, Calendar & Contact server.

Disclaimer: I wrote this blog post after 36 hours with an Apple device. I have never had any previous engagement with an Apple product. Huge culture change & learning curve. Be aware that the notes below may not apply to your setup.

Original creation date: Friday 12 Oct 2018
Last Update: Sunday 18 Nov 2018

 

Linux Mail Server

Notes are based on the below setup:

  • CentOS 6.10
  • Dovecot IMAP server with STARTTLS (TCP Port: 143) with Encrypted Password Authentication.
  • Postfix SMTP with STARTTLS (TCP Port: 587) with Encrypted Password Authentication.
  • Baïkal as Calendar & Contact server.
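
Before configuring any client, it can help to verify that both services really offer STARTTLS; the hostname below is only a placeholder for your own mail server:

# Check the IMAP service on port 143
openssl s_client -connect mail.example.org:143 -starttls imap

# Check the SMTP submission service on port 587
openssl s_client -connect mail.example.org:587 -starttls smtp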

 

Thunderbird

Thunderbird settings for imap / smtp over STARTTLS and encrypted authentication

mail settings

 

Baikal

Dashboard

baikal dashboard

 

CardDAV

contact URI for user Username

https://baikal.baikal.example.org/html/card.php/addressbooks/Username/default

CalDAV

calendar URI for user Username

https://baikal.example.org/html/cal.php/calendars/Username/default

 

iOS

There is a lot of online documentation, but none of it in one place; just random Stack Overflow articles and posts around the internet. It took me almost an entire day (and night) to figure things out. In the end, I enabled debug mode on my dovecot/postfix and apache web servers. After that, through trial and error, I managed to set up both the iPhone and the iPad using only native apps.

 

Mail

Open Password & Accounts & click on New Account

iPad_iOS_mail_01

Choose Other

iPad_iOS_mail_02

iPad_iOS_mail_03

iPad_iOS_mail_04

 

Now for the tricky part: you have to click Next and fill in the imap & smtp settings.

 

iPad_iOS_mail_05

iPad_iOS_mail_06

iPad_iOS_mail_07

 

Now we have to go back and change the settings, to enable STARTTLS and encrypted password authentication.

 

iPad_iOS_mail_08

iPad_iOS_mail_09

 

STARTTLS with Encrypted Passwords for Authentication

 

iPad_iOS_mail_10

iPad_iOS_mail_11

iPad_iOS_mail_12

iPad_iOS_mail_13

iPad_iOS_mail_14

iPad_iOS_mail_15

iPad_iOS_mail_16

 

On the home screen of the iPad/iPhone we will see that the Mail notifications have already fetched some headers.

 

iPad_iOS_mail_17

and finally, open the native mail app:

iPad_iOS_mail_18

 

Contact Server

Now we are ready to set up the contact account:

https://baikal.baikal.example.org/html/card.php/addressbooks/Username/default

iPad_iOS_mail_19

iPad_iOS_mail_20

iPad_iOS_mail_21

iPad_iOS_mail_22

iPad_iOS_mail_23

 

Opening Contact App:

 

iPad_iOS_mail_24

 

Calendar Server

https://baikal.example.org/html/cal.php/calendars/Username/default

iPad_iOS_mail_25

iPad_iOS_mail_26

iPad_iOS_mail_27

iPad_iOS_mail_28

iPad_iOS_mail_29

iPad_iOS_mail_30

iPad_iOS_mail_31

 

Cloud-init with CentOS 7

Evaggelos Balaskas - System Engineer | 14:04, Sunday, 18 November 2018

Cloud-init is the de facto multi-distribution package that handles early initialization of a cloud instance.

This article is a mini-HowTo on using cloud-init with CentOS 7 in your own libvirt qemu/kvm lab, instead of using a public cloud provider.

 

How Cloud-init works

cloud-init.png

Josh Powers @ DebConf17

How does it really work?

Cloud-init has Boot Stages

  • Generator
  • Local
  • Network
  • Config
  • Final

and supports modules to extend its configuration and capabilities.

Here is a brief list of modules (sorted by name):

  • bootcmd
  • final-message
  • growpart
  • keys-to-console
  • locale
  • migrator
  • mounts
  • package-update-upgrade-install
  • phone-home
  • power-state-change
  • puppet
  • resizefs
  • rsyslog
  • runcmd
  • scripts-per-boot
  • scripts-per-instance
  • scripts-per-once
  • scripts-user
  • set_hostname
  • set-passwords
  • ssh
  • ssh-authkey-fingerprints
  • timezone
  • update_etc_hosts
  • update_hostname
  • users-groups
  • write-files
  • yum-add-repo

 

Gist

Cloud-init example using a Generic Cloud CentOS-7 on a libvirtd qemu/kvm lab · GitHub

 

Generic Cloud CentOS 7

You can find a plethora of CentOS 7 cloud images at http://cloud.centos.org/centos/7/images/

Download the latest version

$ curl -LO http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2.xz

Uncompress file

$ xz -v --keep -d CentOS-7-x86_64-GenericCloud.qcow2.xz

Check cloud image

$ qemu-img info CentOS-7-x86_64-GenericCloud.qcow2

image: CentOS-7-x86_64-GenericCloud.qcow2
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 863M
cluster_size: 65536
Format specific information:
    compat: 0.10
    refcount bits: 16

The default image is 8G.
If you need to resize it, check below in this article.

 

Create metadata file

Meta-data is data that comes from the cloud provider itself. In this example, I will use a static network configuration.

cat > meta-data <<EOF
instance-id: testingcentos7
local-hostname: testingcentos7

network-interfaces: |
  iface eth0 inet static
  address 192.168.122.228
  network 192.168.122.0
  netmask 255.255.255.0
  broadcast 192.168.122.255
  gateway 192.168.122.1

# vim:syntax=yaml
EOF

 

Create cloud-init (user-data) file

User-data is data that comes from you, aka the user.

cat > user-data <<EOF
#cloud-config

# Set default user and their public ssh key
# eg. https://github.com/ebal.keys
users:
  - name: ebal
    ssh-authorized-keys:
      - `curl -s -L https://github.com/ebal.keys`
    sudo: ALL=(ALL) NOPASSWD:ALL

# Enable cloud-init modules
cloud_config_modules:
  - resolv_conf
  - runcmd
  - timezone
  - package-update-upgrade-install

# Set TimeZone
timezone: Europe/Athens

# Set DNS
manage_resolv_conf: true
resolv_conf:
  nameservers: ['9.9.9.9']

# Install packages
packages:
  - mlocate
  - vim
  - epel-release

# Update/Upgrade & Reboot if necessary
package_update: true
package_upgrade: true
package_reboot_if_required: true

# Remove cloud-init
runcmd:
  - yum -y remove cloud-init
  - updatedb

# Configure where output will go
output:
  all: ">> /var/log/cloud-init.log"

# vim:syntax=yaml
EOF

 

Create the cloud-init ISO

When using libvirt with qemu/kvm, the most common way to pass the meta-data/user-data to cloud-init is through an ISO (cdrom).

$ genisoimage -output cloud-init.iso -volid cidata -joliet -rock user-data meta-data

or

$ mkisofs -o cloud-init.iso -V cidata -J -r user-data meta-data
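
You can also verify that the ISO carries the volume label that cloud-init’s NoCloud data source looks for (it must be named cidata); the isoinfo tool ships alongside genisoimage:

$ isoinfo -d -i cloud-init.iso | grep -i "volume id"

Volume id: cidata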

 

Provision new virtual machine

Finally run this as root:

# virt-install \
    --name centos7_test \
    --memory 2048 \
    --vcpus 1 \
    --metadata description="My centos7 cloud-init test" \
    --import \
    --disk CentOS-7-x86_64-GenericCloud.qcow2,format=qcow2,bus=virtio \
    --disk cloud-init.iso,device=cdrom \
    --network bridge=virbr0,model=virtio \
    --os-type=linux \
    --os-variant=centos7.0 \
    --noautoconsole
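
Once virt-install returns, you can check that the domain is up and attach to its console to watch cloud-init run (press Ctrl+] to detach again):

# virsh list --all

# virsh console centos7_test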

 

The List of Os Variants

There is an interesting command to find out all the os variants that are being supported by libvirt in your lab:

eg. CentOS

$ osinfo-query os | grep CentOS

centos6.0  |  CentOS  6.0  |  6.0  |  http://centos.org/centos/6.0
centos6.1  |  CentOS  6.1  |  6.1  |  http://centos.org/centos/6.1
centos6.2  |  CentOS  6.2  |  6.2  |  http://centos.org/centos/6.2
centos6.3  |  CentOS  6.3  |  6.3  |  http://centos.org/centos/6.3
centos6.4  |  CentOS  6.4  |  6.4  |  http://centos.org/centos/6.4
centos6.5  |  CentOS  6.5  |  6.5  |  http://centos.org/centos/6.5
centos6.6  |  CentOS  6.6  |  6.6  |  http://centos.org/centos/6.6
centos6.7  |  CentOS  6.7  |  6.7  |  http://centos.org/centos/6.7
centos6.8  |  CentOS  6.8  |  6.8  |  http://centos.org/centos/6.8
centos6.9  |  CentOS  6.9  |  6.9  |  http://centos.org/centos/6.9
centos7.0  |  CentOS  7.0  |  7.0  |  http://centos.org/centos/7.0

 

DHCP

If you are not using a static network configuration scheme, then to identify the IP of your cloud instance, type:

$ virsh net-dhcp-leases default

 Expiry Time           MAC address         Protocol   IP address           Hostname   Client ID or DUID
---------------------------------------------------------------------------------------------------------
 2018-11-17 15:40:31   52:54:00:57:79:3e   ipv4       192.168.122.144/24   -          -                  
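
With the address in hand, you can log in using the SSH key injected via user-data (the IP below is the one from the example lease above):

$ ssh ebal@192.168.122.144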

 

Resize

The easiest way to grow/resize your virtual machine’s disk image is via the qemu-img command:

$ qemu-img resize CentOS-7-x86_64-GenericCloud.qcow2 20G

Image resized.

$ qemu-img info CentOS-7-x86_64-GenericCloud.qcow2

image: CentOS-7-x86_64-GenericCloud.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 870M
cluster_size: 65536
Format specific information:
    compat: 0.10
    refcount bits: 16

You can add the lines below to your user-data file:

growpart:
  mode: auto
  devices: ['/']
  ignore_growroot_disabled: false

The result:

[root@testingcentos7 ebal]# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        20G  870M   20G   5% /

 

Default cloud-init.cfg

For reference, this is the default centos7 cloud-init configuration file.

# /etc/cloud/cloud.cfg 
users:
 - default

disable_root: 1
ssh_pwauth:   0

mount_default_fields: [~, ~, 'auto', 'defaults,nofail', '0', '2']
resize_rootfs_tmp: /dev
ssh_deletekeys:   0
ssh_genkeytypes:  ~
syslog_fix_perms: ~

cloud_init_modules:
 - migrator
 - bootcmd
 - write-files
 - growpart
 - resizefs
 - set_hostname
 - update_hostname
 - update_etc_hosts
 - rsyslog
 - users-groups
 - ssh

cloud_config_modules:
 - mounts
 - locale
 - set-passwords
 - rh_subscription
 - yum-add-repo
 - package-update-upgrade-install
 - timezone
 - puppet
 - chef
 - salt-minion
 - mcollective
 - disable-ec2-metadata
 - runcmd

cloud_final_modules:
 - rightscale_userdata
 - scripts-per-once
 - scripts-per-boot
 - scripts-per-instance
 - scripts-user
 - ssh-authkey-fingerprints
 - keys-to-console
 - phone-home
 - final-message
 - power-state-change

system_info:
  default_user:
    name: centos
    lock_passwd: true
    gecos: Cloud User
    groups: [wheel, adm, systemd-journal]
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
    shell: /bin/bash
  distro: rhel
  paths:
    cloud_dir: /var/lib/cloud
    templates_dir: /etc/cloud/templates
  ssh_svcname: sshd

# vim:syntax=yaml

Sunday, 11 November 2018

Publishing Applications through F-Droid

David Boddie - Updates (Full Articles) | 17:16, Sunday, 11 November 2018

In 2016 I started working on a set of Python modules for reading and writing bytecode for the Dalvik virtual machine so that I could experiment with creating applications on Android without having to write them in Java. In the time since then, on and off, I have written a few small applications to learn about Android and explore the capabilities of the devices I own. Some of these were examples, demos and tests for the compiler framework, but others were intended to be useful to me. I could have just downloaded applications to perform the tasks I wanted to do, but I wanted minimal applications that I could understand and trust, so I wrote my own simple applications and was happy to install them locally on my phone.

In September I had the need to back up some data from a phone I no longer use, so I wrote a few small applications to dump data to the phone's storage area, allowing me to retrieve it using the adb tool on my desktop computer. I wondered if other people might find applications like these useful and asked on the FSFE's Android mailing list. In the discussion that followed it was suggested that I try to publish my applications via F-Droid.

Preparations

The starting point for someone who wants to publish applications via F-Droid is F-Droid's Contribute page. The documentation contains a set of links to relevant guides for various kinds of developers, including those making applications and those deploying the F-Droid server software on their own servers.

I found the Contributing to F-Droid document to be a useful shortcut for finding out the fundamental steps needed to submit an application for publication. This document is part of the F-Droid Data repository which holds information about all the applications provided in the catalogue. However, I was a bit uncertain about how to write the metadata for my applications. The problem being that the F-Droid infrastructure builds all the applications from source code and most of these are built using the standard Android SDK, but my applications are not.

Getting Help

I joined the F-Droid forum to ask for assistance with this problem under the topic of Non-standard development tools and was quickly offered help in getting my toolchain integrated into the F-Droid build process. I had expected it was going to take a lot of effort on my part but F-Droid contributor Pierre Rudloff came up with a sample build recipe for my application, showing how simple it could be. To include my application in the F-Droid catalogue, I just needed to know how to describe it properly.

The way to describe an application, its build process and available versions is contained in the Build Metadata Reference document. It can be a bit intimidating to read, which is why it was so useful to be given an example to start with. Having an example meant that I didn't really have to look too hard at this document, but I would be returning to it before too long.

To get the metadata into the build process for the applications hosted by F-Droid, you need to add it to the F-Droid Data repository. This is done using the cycle of cloning, committing, testing, pushing and making merge requests that many of us are familiar with. The Data repository provides a useful Quickstart guide that tells you how to set up a local repository for testing. One thing I did before cloning any repositories was to fork the Data repository so that I could add my metadata to that. So, what I did was this:

git clone https://gitlab.com/fdroid/fdroidserver.git
export PATH="$PATH:$PWD/fdroidserver"

# Clone my fork of the Data repository:
git clone https://gitlab.com/dboddie/fdroiddata.git

I added a file in the metadata directory of the repository called uk.org.boddie.android.weatherforecast.txt containing a description of my application and information about its version, dependencies, build process, and so on. Then I ran some checks using the fdroid tool from the Server repository, checking that my metadata was valid and in a suitable format for the Data repository, before verifying that a package could be built:
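
To give a rough idea of what such a file looks like, here is a sketch of the plain-text metadata format with purely invented field values (this is not the actual entry for the application; the Build Metadata Reference mentioned above is the authoritative source for the fields):

Categories:Internet
License:GPL-3.0
Source Code:https://example.org/weatherforecast
Summary:Simple weather forecast viewer

Description:
A longer description of the application goes here.
.

Repo Type:git
Repo:https://example.org/weatherforecast.git

Build:1.0,1
    commit=v1.0
    build=./build-fdroid.sh

Current Version:1.0
Current Version Code:1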

fdroid lint uk.org.boddie.android.weatherforecast
fdroid rewritemeta uk.org.boddie.android.weatherforecast
fdroid build -v -l uk.org.boddie.android.weatherforecast

The third of these commands requires the ANDROID_HOME environment variable to be set. I have a very old Android SDK installation that I had to refer to in order for this to work, even though my application doesn't use it. For reference, the ANDROID_HOME variable refers to the directory in the SDK or equivalent that contains the build-tools directory.
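
For example, something like this before running the build (the path is just a placeholder for wherever your SDK, or equivalent, is installed):

export ANDROID_HOME=$HOME/Android/Sdk
fdroid build -v -l uk.org.boddie.android.weatherforecast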

Build, Test, Repeat

At this point it was time to commit my new file to my fork of the Data repository and push it back up to GitLab. While logged into the site, verifying that my commit was there, I was offered the option to create a merge request for the official F-Droid Data repository, and this led to my first merge request for F-Droid. Things were moving forward.

Unfortunately, I had not managed to specify all the dependencies I needed to build my application using my own tools. In particular, I rely on PyCrypto to create digests of files, and this was not available by default in the build environment. This led to another change to the metadata and another merge request. Still, it was not plain sailing at this point because, despite producing an application package, the launcher icon was generated incorrectly. This required a change to the application itself to ensure that the SVG used for the icon was in a form that could be converted to PNG files by the tools available in the build environment. Yet another merge request updated the current version number so that the fixed application could be built and distributed. My application was finally available in the form I originally intended!

Still, not everything was perfect. A couple of users quickly came forward with suggestions for improvements and a bug report concerning the way I handled place names – or rather failed to handle them. The application was fixed and yet another merge request was created and handled smoothly by the F-Droid maintainers.

Finishing Touches

There are a few things I've not done when developing and releasing this application; there's at least one fundamental thing and at least one cosmetic enhancement that could be done to improve the experience around the installation process.

The basic thing is to write a proper changelog. Since it was really just going to be the proof of concept that showed applications written with DUCK could be published via F-Droid, the application evolved from its original form very informally. I should go back and at least document the bug fixes I made after its release.

The cosmetic improvement would be to improve its F-Droid catalogue entry to include a screenshot or two, then at least prospective users could see what they were getting.

Another meta-feature that would be useful for me to use is F-Droid's auto-update facility. I'm sure I'll need to update the application again in the future so, while this document will remind me how to do that, it would be easier for everyone if F-Droid could just pick up new versions as they appear. There seem to be quite a few ways to do that, so I think that I'll have something to work with.

In summary, it was easier than I had imagined to publish an application in the F-Droid catalogue. The process was smooth and people were friendly and happy to help. If you write your own Free Software applications for Android, I encourage you to publish them via F-Droid and to submit your own metadata for them to make publication as quick and easy as possible.

Categories: Serpentine, F-Droid, Android, Python, Free Software

Friday, 09 November 2018

KDE Applications 18.12 branches created

TSDgeos' blog | 22:31, Friday, 09 November 2018

Make sure you commit anything you want to end up in the 18.12 release to them

We're already past the dependency freeze.

The Freeze and Beta is this Thursday, 15 November.

More interesting dates
November 29: KDE Applications 18.12 RC (18.11.90) Tagging and Release
December 6: KDE Applications 18.12 Tagging
December 13: KDE Applications 18.12 Release

https://community.kde.org/Schedules/Applications/18.12_Release_Schedule

Thursday, 08 November 2018

Another Look at VGA Signal Generation with a PIC32 Microcontroller

Paul Boddie's Free Software-related blog » English | 19:15, Thursday, 08 November 2018

Maybe some people like to see others attempting unusual challenges, things that wouldn’t normally be seen as productive, sensible or a good way to spend time and energy, things that shouldn’t be possible or viable but were nevertheless made to work somehow. And maybe they like the idea of indulging their own curiosity in such things, knowing that for potential future experiments of their own there is a route already mapped out to some kind of success. That might explain why my project to produce a VGA-compatible analogue video signal from a PIC32 microcontroller seems to attract more feedback than some of my other, arguably more useful or deserving, projects.

Nevertheless, I was recently contacted by different people inquiring about my earlier experiments. One was admittedly only interested in using Free Software tools to port his own software to the MIPS-based PIC32 products, and I tried to give some advice about navigating the documentation and to describe some of the issues I had encountered. Another was more concerned with the actual signal generation aspect of the earlier project and the usability of the end result. Previously, I had also had a conversation with someone looking to use such techniques for his project, and although he ended up choosing a different approach, the cumulative effect of my discussions with him and these more recent correspondents persuaded me to take another look at my earlier work and to see if I couldn’t answer some of the questions I had received more definitively.

Picking Over the Pieces

I was already rather aware of some of the demonstrated and potential limitations of my earlier work, these being concerned with generating a decent picture, and although I had attempted to improve the work on previous occasions, I think I just ran out of energy and patience to properly investigate other techniques. The following things were bothersome or a source of concern:

  • The unevenly sized pixels
  • The process of experimentation with the existing code
  • Whether the microcontroller could really do other things while displaying the picture

Although one of my correspondents was very complimentary about the form of my assembly language code, I rather felt that it was holding me back, making me focus on details that should be abstracted away. It should be said that MIPS assembly language is fairly pleasant to write, at least in comparison to certain other architectures.

(I was brought up on 6502 assembly language, where there is an “accumulator” that is the only thing even approaching a general-purpose register in function, and where instructions need to combine this accumulator with other, more limited, registers to do things like accessing “zero page”: an area of memory that supports certain kinds of operations by providing the contents of locations as inputs. Everything needs to be meticulously planned, and despite odd claims that “zero page” is really one big register file and that 6502 is therefore “RISC-like”, the existence of virtual machines such as SWEET16 say rather a lot about how RISC-like the 6502 actually is. Later, I learned ARM assembly language and found it rather liberating with its general-purpose registers and uncomplicated, rather easier to use, memory access instructions. Certain things are even simpler in MIPS assembly language, whereas other conveniences from ARM are absent and demand a bit more effort from the programmer.)

Anyway, I had previously introduced functionality in C in my earlier work, mostly because I didn’t want the challenge of writing graphics routines in assembly language. So with the need to more easily experiment with different peripheral configurations, I decided to migrate the remaining functionality to C, leaving only the lowest-level routines concerned with booting and exception/interrupt handling in assembly language. This effort took me to some frustrating places, making me deal with things like linker scripts and the kind of memory initialisation that one’s compiler usually does for you but which is absent when targeting a “bare metal” environment. I shall spare you those details in this article.

I therefore spent a certain amount of effort in developing some C library functionality for dealing with the hardware. It could be said that I might have used existing libraries instead, but ignoring Microchip’s libraries that will either be proprietary or the subject of numerous complaints by those unwilling to leave that company’s “ecosystem”, I rather felt that the exercise in library design would be useful in getting reacquainted and providing me with something I would actually want to use. An important goal was minimalism, whereas my impression of libraries such as those provided by the Pinguino effort is that they try to bridge the different PIC hardware platforms and consequently accumulate features and details that do not really interest me.

The Wide Pixel Problem

One thing that had bothered me when demonstrating a VGA signal was that the displayed images featured “wide” pixels. These are not always noticeable: one of my correspondents told me that he couldn’t see them in one of my example pictures, but they are almost certainly present because they are a feature of the mechanism used to generate the signal. Here is a crop from the example in question:

Picture detail from VGAPIC32 output

Picture detail from VGAPIC32 output

And here is the same crop with the wide pixels highlighted:

Picture detail with wide pixels highlighted

Picture detail with wide pixels highlighted

I have left the identification of all wide pixel columns to the reader! Nevertheless, it can be stated that these pixels occur in every fourth column and are especially noticeable with things like text, where at such low resolutions, the doubling of pixel widths becomes rather obvious and annoying.

Quite why this increase in pixel width was occurring became a matter I wanted to investigate. As you may recall, the technique I used to output pixels involved getting the direct memory access (DMA) controller in the PIC32 chip to “copy” the contents of memory to a hardware register corresponding to output pins. The signals from these pins were sent along the cable to the monitor. And the DMA controller was transferring data as fast as it could and thus producing pixel colours as fast as it could.

Pixel Output Using DMA Transfer

An overview of the architecture for pixel output using DMA transfer

One of my correspondents looked into the matter and confirmed that we were not imagining this problem, even employing an oscilloscope to check what was happening with the signals from the output pins. The DMA controller would, after starting each fourth pixel, somehow not be able to produce the next pixel in a timely fashion, leaving the current pixel colour unchanged as the monitor traced the picture across the screen. This would cause these pixels to “stretch” until the first pixel from the next group could be emitted.

Initially, I had thought that interrupts were occurring and the CPU, in responding to interrupt conditions and needing to read instructions, was gaining priority over the DMA controller and forcing pixel transfers to wait. Although I had specified a “cell size” of 160 bytes, corresponding to 160 pixels, I was aware that the architecture of the system would be dividing data up into four-byte “words”, and it would be natural at certain points for accesses to memory to be broken up and scheduled in terms of such units. I had therefore wanted to accommodate both the CPU and DMA using an approach where the DMA would not try and transfer everything at once, but without the energy to get this to work, I had left the matter to rest.

A Steady Rhythm

The documentation for these microcontrollers distinguishes between block and cell transfers when describing DMA. Previously, I had noted that these terms could be better described, and I think there are people who are under the impression that cells must be very limited in size and that you need to drive repeated cell transfers using various interrupt conditions to transfer larger amounts. We have seen that this is not the case: a single, large cell transfer is entirely possible, even though the characteristics of the transfer have been less than desirable. (Nevertheless, the documentation focuses on things like copying from one UART peripheral to another, arguably failing to explore the range of possible applications for DMA and to thereby elucidate the mechanisms involved.)

However, given the wide pixel effect, it becomes more interesting to introduce a steady rhythm by using smaller cell sizes and having an external event coordinate each cell’s transfer. With a single, large transfer, only one initiation event needs to occur: that produced by the timer whose period corresponds to that of a single horizontal “scanline”. The DMA channel producing pixels then runs to completion and triggers another channel to turn off the pixel output. In this scheme, the initiating condition for the cell transfer is the timer.

VGA Display Line Structure

The structure of each visible display line in the VGA signal

When using multiple cells to transfer the pixel data, however, it is no longer possible to use the timer in this way. Doing so would cause the initiation of the first cell, but then subsequent cells would only be transferred on subsequent timer events. And since these events only occur once per scanline, this would see a single line’s pixel data being transferred over many scanlines instead (or, since the DMA channel would be updated regularly, we would see only the first pixel on each line being emitted, stretched across the entire line). Since the DMA mechanism apparently does not permit one kind of interrupt condition to enable a channel and another to initiate each cell transfer, we must be slightly more creative.

Fortunately, the solution is to chain two channels, just as we already do with the pixel-producing channel and the one that resets the output. A channel is dedicated to waiting for the line timer event, and it transfers a single black pixel to the screen before handing over to the pixel-producing channel. This channel, now enabled, has its cell transfers regulated by another interrupt condition and proceeds as fast as such a condition may occur. Finally, the reset channel takes over and turns off the output as before.

Pixel Output Using Timed DMA Transfers

An overview of the architecture for pixel output using timed DMA transfers

The nature of the cell transfer interrupt can take various forms, but it is arguably most intuitive to use another timer for this purpose. We may set the limit of such a timer to 1, indicating that it may “wrap around” and thus produce an event almost continuously. And by configuring it to run as quickly as possible, at the frequency of the peripheral clock, it may drive cell transfers at a rate that is quick enough to hopefully produce enough pixels whilst also allowing other activities to occur between each transfer.

VGA Pixel Output (Using Transfer Timer)

Using a timer to initiate pixel transfers for the VGA signal

One thing is worth mentioning here just to be explicit about the mechanisms involved. When configuring interrupts that are used for DMA transfers, it is the actual condition that matters, not the interrupt nor the delivery of the interrupt request to the CPU. So, when using timer events for transfers, it appears to be sufficient to merely configure the timer; it will produce the interrupt condition upon its counter “wrapping around” regardless of whether the interrupt itself is enabled.

With a cell size of a single byte, and with a peripheral clock running at half the speed of the system clock, this approach is sufficient all by itself to yield pixels with consistent widths, with the only significant disadvantage being how few of them can be produced per line: I could only manage in the neighbourhood of 80 pixels! Making the peripheral clock run as fast as the system clock doesn’t help in theory: we actually want the CPU running faster than the transfer rate just to have a chance of doing other things. Nor does it help in practice: picture stability rather suffers.

A picture of the display output from timed DMA transfers

A picture of the display output from timed DMA transfers

Using larger cell sizes, we encounter the wide pixel problem, meaning that the end of a four-byte group is encountered and the transfer hangs on for longer than it should. However, larger cell sizes also introduce byte transfers at a different rate from cell transfers (at the system clock rate) and therefore risk making the last pixel produced by a cell longer than the others, anyway.

Uncovering DMA Transfers

I rather suspect that interruptions are not really responsible for the wide pixels at all, and that it is the DMA controller that causes them. Some discussion with another correspondent explored how the controller might be operating, with transfers perhaps looking something like this:

DMA read from memory
DMA write to display (byte #1)
DMA write to display (byte #2)
DMA write to display (byte #3)
DMA write to display (byte #4)
DMA read from memory
...

This would, by itself, cause a transfer pattern like this:

R____R____R____R____R____R ...
_WWWW_WWWW_WWWW_WWWW_WWWW_ ...

And thus pixel output as follows:

41234412344123441234412344 ...
=***==***==***==***==***== ... (narrow pixels as * and wide pixel components as =)

Even without any extra operations or interruptions, we would get a gap between the write operations that would cause a wider pixel. This would only get worse if the DMA controller had to update the address of the pixel data after every four-byte read and write, not being able to do so concurrently with those operations. And if the CPU were really able to interrupt longer transfers, even to obtain a single instruction to execute, it might then compete with the DMA controller in accessing memory, making the read operations even later every time.

Assuming, then, that wide pixels are the fault of the way the DMA controller works, we might consider how we might want it to work instead:

                     | ...
DMA read from memory | DMA write to display (byte #4)
                 \-> | DMA write to display (byte #1)
                     | DMA write to display (byte #2)
                     | DMA write to display (byte #3)
DMA read from memory | DMA write to display (byte #4)
                 \-> | ...

If only pixel data could be read from memory and written to the output register (and thus the display) concurrently, we might have a continuous stream of evenly-sized pixels. Such things do not seem possible with the microcontroller I happen to be using. Either concurrent reading from memory and writing to a peripheral is not possible or the DMA controller is not able to take advantage of this concurrency. But these observations did give me another idea.

Dual Channel Transfers

If the DMA controller cannot get a single channel to read ahead and get the next four bytes, could it be persuaded to do so using two channels? The intention would be something like the following:

Channel #1:                     Channel #2:
                                ...
DMA read from memory            DMA write to display (byte #4)
DMA write to display (byte #1)
DMA write to display (byte #2)
DMA write to display (byte #3)
DMA write to display (byte #4)  DMA read from memory
                                DMA write to display (byte #1)
...                             ...

This is really nothing different from the above, functionally, but the required operations end up being assigned to different channels explicitly. We would then want these channels to cooperate, interleaving their data so that the result is the combined sequence of pixels for a line:

Channel #1: 1234    1234     ...
Channel #2:     5678    5678 ...
  Combined: 1234567812345678 ...

It would seem that channels might even cooperate within cell transfers, meaning that we can apparently schedule two long transfer cells and have the DMA controller switch between the two channels after every four bytes. Here, I wrote a short test program employing text strings and the UART peripheral to see if the microcontroller would “zip up” the strings, with the following being used for single-byte cells:

Channel #1: "Adoc gi,hlo\r"
Channel #2: "n neaan el!\n"
  Combined: "And once again, hello\r\n"

Then, seeing that it did, I decided that this had a chance of also working with pixel data. Here, every other pixel on a line needs to be presented to each channel, with the first channel being responsible for the pixels in odd-numbered positions, and the second channel being responsible for the even-numbered pixels. Since the DMA controller is unable to step through the data at address increments other than one (which may be a feature of other DMA implementations), this causes us to rearrange each line of pixel data as follows:

 Displayed pixels: 123456......7890
Rearranged pixels: 135...79246...80
                   *       *

Here, the asterisks mark the start of each channel’s data, with each channel only needing to transfer half the usual amount.
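
A sketch of that rearrangement in C (the buffer names and line width are illustrative rather than taken from the project’s code): each line is split so that the odd-numbered pixel positions fill the first half of the buffer and the even-numbered positions the second half, with channel #1 pointed at the first half and channel #2 at the second.

#include <stdint.h>

#define LINE_WIDTH 160  /* pixels per line; an illustrative value */

/* Rearrange one line of pixels so that two DMA channels, taking turns after
   every transferred pixel, reproduce the original left-to-right order.
   src holds the pixels as displayed; dest receives the rearranged layout
   with positions 0, 2, 4, ... in the first half and 1, 3, 5, ... in the
   second half. */
void rearrange_line(const uint8_t *src, uint8_t *dest)
{
    unsigned int i;

    for (i = 0; i < LINE_WIDTH / 2; i++)
    {
        dest[i] = src[2 * i];                       /* 1st, 3rd, 5th, ... pixels */
        dest[LINE_WIDTH / 2 + i] = src[2 * i + 1];  /* 2nd, 4th, 6th, ... pixels */
    }
}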

Pixel Output Using Timed Dual-Channel Transfers

The architecture involved in employing two pixel data channels with timed transfers

The documentation does, in fact, mention that where multiple channels are active with the same priority, each one is given control in turn with the controller cycling through them repeatedly. The matter of which order they are chosen, which is important for us, seems to be dependent on various factors, only some of which I can claim to understand. For instance, I suspect that if the second channel refers to data that appears before the first channel’s data in memory, it gets scheduled first when both channels are activated. Although this is not a significant concern when just trying to produce a stable picture, it does limit more advanced operations such as horizontal scrolling.

A picture of the display output from timed, dual-channel DMA transfers

A picture of the display output from timed, dual-channel DMA transfers

As you can see, trying this technique out with timed transfers actually made a difference. Instead of only managing something approaching 80 pixels across the screen, more than 90 can be accommodated. Meanwhile, experiments with transfers going as fast as possible seemed to make no real difference, and the fourth pixel in each group was still wider than the others. Still, making the timed transfer mode more usable is a small victory worth having, I suppose.

Parallel Mode Revisited

At the start of my interest in this project, I had it in my mind that I would couple DMA transfers with the parallel mode (or Parallel Master Port) functionality in order to generate a VGA signal. Certain aspects of this, particularly gaps between pixels, made me less than enthusiastic about the approach. However, in considering what might be done to the output signal in other situations, I had contemplated the use of a flip-flop to hold output stable according to a regular tempo, rather like what I managed to achieve, almost inadvertently, when introducing a transfer timer. Until recently, I had failed to apply this idea to where it made most sense: in regulating the parallel mode signal.

Since parallel mode is really intended for driving memory devices and display controllers, various control signals are exposed via pins that can tell these external devices that data is available for their consumption. For our purposes, a flip-flop is just like a memory device: it retains the input values sampled by its input pins, and then exposes these values on its output pins when the inputs are “clocked” into memory using a “clock pulse” signal. The parallel mode peripheral in the microcontroller offers various different signals for such clock and selection pulse purposes.

VGA Output Circuit (Parallel Mode)

The parallel mode circuit showing connections relevant to VGA output (generic connections are not shown)

Employing the PMWR (parallel mode write) signal as the clock pulse, directing the display signals to the flip-flop’s inputs, and routing the flip-flop’s outputs to the VGA circuit solved the pixel gap problem at a stroke. Unfortunately, it merely reminded us that the wide pixel problem also affects parallel mode output, too. Although the clock pulse is able to tell an external component about the availability of a new pixel value, it is up to the external component to regulate the appearance of each pixel. A memory device does not care about the timing of incoming data as long as it knows when such data has arrived, and so such regulation is beyond the capabilities of a flip-flop.

It was observed, however, that since each group of pixels is generated at a regular frequency, the PMWR signalling frequency might be reduced by being scaled by a constant factor. This might allow some pixel data to linger slightly longer in the flip-flop and be slightly stretched. By the time the fourth pixel in a group arrives, the time allocated to that pixel would be the same as those preceding it, thus producing consistently-sized pixels. I imagine that a factor of 8/9 might do the trick, but I haven’t considered what modification to the circuit might be needed or whether it would all be too complicated.

Recognising the Obvious

When people normally experiment with video signals from microcontrollers, one tends to see people writing code to run as efficiently as is absolutely possible – using assembly language if necessary – to generate the video signal. It may only be those of us using microcontrollers with DMA peripherals who want to try and get the DMA hardware to do the heavy lifting. Indeed, those of us with exposure to display peripherals provided by system-on-a-chip solutions feel almost obliged to do things this way.

But recent discussions with one of my correspondents made me reconsider whether an adequate solution might be achieved by just getting the CPU to do the work of transferring pixel data to the display. Previously, another correspondent had indicated that this was potentially tricky, and that getting the timings right was more difficult than letting the hardware synchronise the different mechanisms – timer and DMA – all by itself. By involving the CPU and making it run code, the display line timer would need to generate an interrupt that would be handled, causing the CPU to start running a loop to copy data from the framebuffer to the output port.

Pixel Output Using CPU-Driven Transfers

An overview of the architecture with the CPU driving transfers of pixel data

This approach puts us at the mercy of how the CPU handles and dispatches interrupts. Being somewhat conservative about the mechanisms more generally available on various MIPS-based products, I tend to choose a single interrupt vector and then test for the different conditions. Since we need as little variation as possible in the latency between a timer event occurring and the pixel data being generated, I test for that particular event before even considering anything else. Then, a loop of the following form is performed:

    for (current = line_data; current < end; current++)
        *output_port = *current;

Here, the line data is copied byte by byte to the output port. Some adornments are necessary to persuade the compiler to generate code that writes the data efficiently and in order, but there is nothing particularly exotic required and GCC does a decent job of doing what we want. After the loop, a black/reset pixel is generated to set the appropriate output level.
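
To illustrate what such adornments might look like (a sketch only: the register address and names here are placeholders, not the project’s actual definitions), declaring the output port as a pointer to a volatile location obliges the compiler to perform every store, in order, instead of optimising the loop away:

#include <stdint.h>

/* Placeholder address standing in for the real output port register, which
   on an actual PIC32 would come from the datasheet or a device header. */
#define OUTPUT_PORT_ADDR 0x10000000u  /* hypothetical, for illustration only */

static volatile uint8_t *const output_port = (volatile uint8_t *) OUTPUT_PORT_ADDR;

/* Copy one line of pixel data to the output port, then emit a black/reset
   pixel. The volatile qualifier ensures that each byte really is written,
   one after another, rather than being merged or reordered. */
static void emit_line(const uint8_t *line_data, const uint8_t *end)
{
    const uint8_t *current;

    for (current = line_data; current < end; current++)
        *output_port = *current;

    *output_port = 0;  /* black/reset level after the visible pixels */
}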

One concern that one might have about using the CPU for such long transfers in an interrupt handler is that it ties up the CPU, preventing it from doing other things, and it also prevents other interrupt requests from being serviced. In a system performing a limited range of activities, this can be acceptable: there may be little else going on apart from updating the display and running programs that access the display; even when other activities are incorporated, they may accommodate being relegated to a secondary status, or they may instead take priority over the display in a way that may produce picture distortion but only very occasionally.

Many of these considerations applied to systems of an earlier era. Thinking back to computers like the Acorn Electron – a 6502-based system that formed the basis of my first sustained experiences with computing – it employs a display controller that demands access to the computer’s RAM for a certain amount of the time dedicated to each video frame. The CPU is often slowed down or even paused during periods of this display controller’s activity, making programs slower than they otherwise would be, and making some kinds of input and output slightly less reliable under certain circumstances. Nevertheless, with certain kinds of additional hardware, the possibility is present for such hardware to interrupt the CPU and to override the display controller that would then produce “snow” or noise on the screen as a consequence of this high-priority interruption.

Such issues cause us to consider the role of the DMA controller in our modern experiment. We might well worry about loading the CPU with lots of work, preventing it from doing other things, but what if the DMA controller dominates the system in such a way that it effectively prevents the CPU from doing anything productive anyway? This would be rather similar to what happens with the Electron and its display controller.

So, evaluating a CPU-driven solution seems to be worthwhile just to see whether it produces an acceptable picture and whether it causes unacceptable performance degradation. My recent correspondence also brought up the assertion that the RAM and flash memory provided by PIC32 microcontrollers can be accessed concurrently. This would actually mitigate contention between DMA and any programs running from flash memory, at least until the point that accesses to RAM needed to be performed by those programs, meaning that we might expect some loss of program performance by shifting the transfer burden to the CPU.

(Again, I am reminded of the Electron whose ROM could be accessed at full speed but whose RAM could only be accessed at half speed by the CPU but at full speed by the display controller. This might have been exploited by software running from ROM, or by a special kind of RAM installed and made available at the right place in memory, but the 6502 favours those zero-page instructions mentioned earlier, forcing RAM access and thus contention with the display controller. There were upgrades to mitigate this by providing some dedicated memory for zero page, but all of this is really another story for another time.)

Ultimately, having accepted that the compiler would produce good-enough code and that I didn’t need to try more exotic things with assembly language, I managed to produce a stable picture.

A picture of the display output from CPU-driven pixel data transfers

A picture of the display output from CPU-driven pixel data transfers

Maybe I should have taken this obvious path from the very beginning. However, the presence of DMA support would have eventually caused me to investigate its viability for this application, anyway. And it should be said that the performance differences between the CPU-based approach and the DMA-based approaches might be significant enough to argue in favour of the use of DMA for some purposes.

Observations and Conclusions

What started out as a quick review of my earlier work turned out to be a more thorough study of different techniques and approaches. I managed to get timed transfers functioning, revisited parallel mode and made it work in a fairly acceptable way, and I discovered some optimisations that help to make certain combinations of techniques more usable. But what ultimately matters is which approaches can actually be used to produce a picture on a screen while programs are being run at the same time.

To give the CPU more to do, I decided to implement some graphical operations, mostly copying data to a framebuffer for its eventual transfer as pixels to the display. The idea was to engage the CPU in actual work whilst also exercising access to RAM. If significant contention between the CPU and DMA controller were to occur, the effects would presumably be visible on the screen, potentially making the chosen configuration unusable.
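
To illustrate the kind of operation involved – a sketch under assumed names and dimensions, not the project’s own library code – copying a rectangular block of pixels into a byte-per-pixel framebuffer is little more than a row-by-row copy, but it keeps the CPU making exactly the kind of RAM accesses that could collide with the display transfers:

#include <stdint.h>
#include <string.h>

#define FB_WIDTH  160   /* illustrative resolution, as in the text demo */
#define FB_HEIGHT 128

static uint8_t framebuffer[FB_WIDTH * FB_HEIGHT];

/* Copy a w-by-h block of pixel data into the framebuffer at (x, y). */
static void blit(const uint8_t *src, unsigned int x, unsigned int y,
                 unsigned int w, unsigned int h)
{
    for (unsigned int row = 0; row < h; row++)
        memcpy(&framebuffer[(y + row) * FB_WIDTH + x], &src[row * w], w);
}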

Although some approaches seem promising on paper, and they may even produce promising results when the CPU is doing little more than looping and decrementing a register to introduce a delay, these approaches may produce less than promising results under load. The picture may start to ripple and stretch, and under “real world” load, the picture may seem noisy and like a badly-tuned television (for those who remember the old days of analogue broadcast signals).

Two approaches seem to remain robust, however: the use of timed DMA transfers, and the use of the CPU to do all the transfer work. The former is limited in terms of resolution and introduces complexity around the construction of images in the framebuffer, at least if optimised as much as possible, but it seems to allow more work to occur alongside the update of the display, and the reduction in resolution also frees up RAM for other purposes for those applications that need it. Meanwhile, the latter provides the resolution we originally sought and offers a straightforward framebuffer arrangement, but it demands more effort from the CPU, slowing things down to the extent that animation practically demands double buffering and thus the allocation of even more RAM for display purposes.

But both of these seemingly viable approaches produce consistent pixel widths, which is something of a happy outcome given all the effort to try and fix that particular problem. One can envisage accommodating them both within a system given that various fundamental system properties (how fast the system and peripheral clocks are running, for example) are shared between the two approaches. Again, this is reminiscent of microcomputers where a range of display modes allowed developers and users to choose the trade-off appropriate for them.

A demonstration of text plotting at a resolution of 160x128

Having investigated techniques like hardware scrolling and sprite plotting, it is tempting to keep developing software to demonstrate the techniques described in this article. I am even tempted to design a printed circuit board to tidy up my rather cumbersome breadboard arrangement. And perhaps the library code I have written can be used as the basis for other projects.

It is remarkable that a home-made microcontroller-based solution can be versatile enough to demonstrate aspects of simple computer systems, possibly even making it relevant for those wishing to teach or to learn about such things, particularly since all the components can be connected together relatively easily, with only some magic happening in the microcontroller itself. And with such potential, maybe this seemingly pointless project might have some meaning and value after all!

Update

Although I can’t embed video files of any size here, I have made a “standard definition” video available to demonstrate scrolling and sprites. I hope it is entertaining and also somewhat illustrative of the kind of thing these techniques can achieve.

Thursday, 01 November 2018

Keep yourself organized

English on Björn Schießle - I came for the code but stayed for the freedom | 17:00, Thursday, 01 November 2018

The Zim Desktop Wiki

Over the years I have tried various tools to organize my daily work and manage my ToDo list, including Emacs org-mode, ToDo.txt, Kanban boards and simple plain text files. These are all great tools, but I was never completely happy with them: over time they always became too unstructured and crowded, or I didn’t manage to integrate them into my daily workflow.

Another tool I used through all this time was Zim, a wiki-like desktop app with many great features and extensions. I used it for notes and to plan and organize larger projects. At some point I had the idea of also using it for my task management and to organize my daily work. Zim’s great strength, its flexibility, can also be a weak spot, because you have to find your own way to organize and structure your stuff. After reading various articles like “Getting Things Done” with Zim and trying different approaches, I came up with a setup which works nicely for me.

Basic setup

Zim allows you to have multiple, completely separate “notebooks”. But I soon realized that this adds another level of complexity and makes it less likely that I will use it regularly. Therefore I decided to have just one single notebook, with the three top-level pages “FSFE”, “Nextcloud” and “Personal”. Each of these top-level pages has four sub-pages: “Archive”, “Notes”, “Ideas” and “Projects”. Every finished project, and any notes which are no longer needed, are moved to the archive.

Additionally I created a top-level category called “Journal”. I use this to organize my week and to prepare my weekly work report. Zim has quite powerful journal functionality with an integrated calendar. You can have daily, weekly, monthly or yearly journals. I decided to use weekly journals because at Nextcloud we also write weekly work reports, so this fits quite well. Whenever I click on a date in the calendar, the journal of the corresponding week opens. I use a template so that a newly created journal looks like the screenshot at the top.

Daily usage

As you can see in the picture at the top, the template directly adds the recurring tasks like “check support tickets” or “write work report”. When one of the tasks is completed, I tick the checkbox. During a day, tasks typically come in which I can’t handle immediately. In that case I quickly decide whether to add them to the current day or the next day, depending on how urgent the task is and whether I plan to handle it on the same day or not. Sometimes, if I already know that the task has a deadline later in the week, I add it directly to a later day, in rare situations even to the next week. But in general I try not to plan the work too far into the future.

In the past I added everything to my ToDo list, including stuff for which the deadline was a few months away or stuff I wanted to do once I had some time left. My experience is that this grows the list so quickly that it becomes highly unproductive. That’s why I decided to add stuff like this to the “Ideas” category instead. I review this category regularly and see whether some of it could become a real project or task, or whether it is no longer relevant. If it is something which already has a deadline, but only in a few months, I create a calendar entry with a reminder and add it to the task list later, once it becomes relevant again. This keeps the ToDo list manageable.

At the bottom of the screenshot you see the beginning of my weekly work report, so I can update it directly during the week and don’t have to remember at the end of the week what I have actually done. At the end of each day I review the tasks for the day, mark finished stuff as done and re-evaluate the rest: some tasks might no longer be relevant, so I can remove them completely; otherwise I move them forward to the next day. At the end of the week the list looks something like this:

Zim Journal on Wednesday

This is the first time where I have started to feel really comfortable with my workflow. The first thing I do in the morning is open Zim and move it to my second screen, and there it stays open until the end of the day. I think the reason it works so well is that it provides a clear structure, and it integrates well with all the other stuff like notes, project handling, writing my work report and so on.

What I’m missing

Of course nothing is perfect. This is also true for Zim. The one thing I really miss is a good mobile app to add, update or remove tasks while I’m away from my desktop. At the moment I sync the Zim notebook with Nextcloud and use the Nextcloud Android app to browse the notebook and open the files with a normal text editor. This works, but a dedicated Zim app for Android would make it perfect.

Sunday, 28 October 2018

Linux Software RAID mismatch Warning

Evaggelos Balaskas - System Engineer | 16:18, Sunday, 28 October 2018

I have been using Linux Software RAID for years now. It is reliable and stable (as long as your hard disks are reliable), with very few problems. One recent issue – which the daily cron raid-check was reporting – was this:

 

WARNING: mismatch_cnt is not 0 on /dev/md0

 

Raid Environment

A few details on this specific raid setup:

RAID 5 with 4 Drives

with 4 x 1TB hard disks; according to the online RAID calculator:

RAID Calculator

raid5-4disks

that means this setup is fault tolerant and cheap but not fast.

 

Raid Details

# /sbin/mdadm --detail /dev/md0

raid configuration is valid

/dev/md0:
        Version : 1.2
  Creation Time : Wed Feb 26 21:00:17 2014
     Raid Level : raid5
     Array Size : 2929893888 (2794.16 GiB 3000.21 GB)
  Used Dev Size : 976631296 (931.39 GiB 1000.07 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sat Oct 27 04:38:04 2018
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : ServerTwo:0  (local to host ServerTwo)
           UUID : ef5da4df:3e53572e:c3fe1191:925b24cf
         Events : 60352

    Number   Major   Minor   RaidDevice State
       4       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       6       8       48        2      active sync   /dev/sdd
       5       8        0        3      active sync   /dev/sda

 

Examine Verbose Scan

with a more detailed output:

# mdadm -Evvvvs

There are a few Bad Blocks, although it is perfectly normal for two (2) year old disks to have some. smartctl is a tool you need to use from time to time.

/dev/sdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : ef5da4df:3e53572e:c3fe1191:925b24cf
           Name : ServerTwo:0  (local to host ServerTwo)
  Creation Time : Wed Feb 26 21:00:17 2014
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 1953266096 (931.39 GiB 1000.07 GB)
     Array Size : 2929893888 (2794.16 GiB 3000.21 GB)
  Used Dev Size : 1953262592 (931.39 GiB 1000.07 GB)
    Data Offset : 259072 sectors
   Super Offset : 8 sectors
   Unused Space : before=258984 sectors, after=3504 sectors
          State : clean
    Device UUID : bdd41067:b5b243c6:a9b523c4:bc4d4a80

    Update Time : Sun Oct 28 09:04:01 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 6baa02c9 - correct
         Events : 60355

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

 

/dev/sde:
   MBR Magic : aa55
Partition[0] :      8388608 sectors at         2048 (type 82)
Partition[1] :    226050048 sectors at      8390656 (type 83)
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : ef5da4df:3e53572e:c3fe1191:925b24cf
           Name : ServerTwo:0  (local to host ServerTwo)
  Creation Time : Wed Feb 26 21:00:17 2014
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
     Array Size : 2929893888 (2794.16 GiB 3000.21 GB)
  Used Dev Size : 1953262592 (931.39 GiB 1000.07 GB)
    Data Offset : 259072 sectors
   Super Offset : 8 sectors
   Unused Space : before=258992 sectors, after=3504 sectors
          State : clean
    Device UUID : a90e317e:43848f30:0de1ee77:f8912610

    Update Time : Sun Oct 28 09:04:01 2018
       Checksum : 30b57195 - correct
         Events : 60355

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

 

/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : ef5da4df:3e53572e:c3fe1191:925b24cf
           Name : ServerTwo:0  (local to host ServerTwo)
  Creation Time : Wed Feb 26 21:00:17 2014
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
     Array Size : 2929893888 (2794.16 GiB 3000.21 GB)
  Used Dev Size : 1953262592 (931.39 GiB 1000.07 GB)
    Data Offset : 259072 sectors
   Super Offset : 8 sectors
   Unused Space : before=258984 sectors, after=3504 sectors
          State : clean
    Device UUID : ad7315e5:56cebd8c:75c50a72:893a63db

    Update Time : Sun Oct 28 09:04:01 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : b928adf1 - correct
         Events : 60355

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

 

/dev/sda:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : ef5da4df:3e53572e:c3fe1191:925b24cf
           Name : ServerTwo:0  (local to host ServerTwo)
  Creation Time : Wed Feb 26 21:00:17 2014
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
     Array Size : 2929893888 (2794.16 GiB 3000.21 GB)
  Used Dev Size : 1953262592 (931.39 GiB 1000.07 GB)
    Data Offset : 259072 sectors
   Super Offset : 8 sectors
   Unused Space : before=258984 sectors, after=3504 sectors
          State : clean
    Device UUID : f4e1da17:e4ff74f0:b1cf6ec8:6eca3df1

    Update Time : Sun Oct 28 09:04:01 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : bbe3e7e8 - correct
         Events : 60355

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

 

MisMatch Warning

WARNING: mismatch_cnt is not 0 on /dev/md0

So this is not a critical error; rather, it tells us that there are a few blocks that are “Not Synced Yet” across all disks.
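
The raw counter behind the warning can also be read directly from sysfs:

# cat /sys/block/md0/md/mismatch_cnt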

 

Status

Checking the Multiple Device (md) driver status:

# cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc[1] sda[5] sdd[6] sdb[4]
      2929893888 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

We verify that no sync job is currently running on the RAID.

 

Repair

We can run a manual repair job:

# echo repair >/sys/block/md0/md/sync_action

Now the status looks like this:

# cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc[1] sda[5] sdd[6] sdb[4]
      2929893888 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      [=========>...........]  resync = 45.6% (445779112/976631296) finish=54.0min speed=163543K/sec

unused devices: <none>

Progress

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc[1] sda[5] sdd[6] sdb[4]
      2929893888 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      [============>........]  resync = 63.4% (619673060/976631296) finish=38.2min speed=155300K/sec

unused devices: <none>
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc[1] sda[5] sdd[6] sdb[4]
      2929893888 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      [================>....]  resync = 81.9% (800492148/976631296) finish=21.6min speed=135627K/sec

unused devices: <none>

Finally

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc[1] sda[5] sdd[6] sdb[4]
      2929893888 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

 

Check

After the repair, it is useful to check the status of our software RAID again:

# echo check >/sys/block/md0/md/sync_action

# cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc[1] sda[5] sdd[6] sdb[4]
      2929893888 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      [=>...................]  check =  9.5% (92965776/976631296) finish=91.0min speed=161680K/sec

unused devices: <none>

and finally

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc[1] sda[5] sdd[6] sdb[4]
      2929893888 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>
Tag(s): md0, mdadm, linux, raid

Tuesday, 23 October 2018

KDE Applications 18.12 Schedule finalized

TSDgeos' blog | 22:59, Tuesday, 23 October 2018

It is available at the usual place https://community.kde.org/Schedules/Applications/18.12_Release_Schedule

Dependency freeze is in 2 weeks and Feature Freeze in 3 weeks, so hurry up!

Monday, 22 October 2018

Reinventing Startups

Blog – Think. Innovation. | 13:08, Monday, 22 October 2018

Once we have come to realize that the Silicon Valley model of startup-based innovation is not for the betterment of people and planet, but is a hyper-accelerated version of the existing growth-based capitalist Operating System increasing inequality, destroying nature and benefiting the 1%, the question arises: is there an alternative?

One group of people that have come to this realization is searching for an alternative model. They call their companies Zebras, opposing the Silicon Valley $ 1 Billion+ valued startups called Unicorns. Their movement is called Zebras Unite and is definitely worth checking out. I recently wrote a ‘Fairy Tale’ story to give Zebras Unite a warning as I believe they run the risk of leaving the root cause of the current faulty Operating System untouched. This warning is based on the brilliant work of Douglas Rushkoff.

This article proposes an alternative entrepreneurial-based innovation model that I believe has the potential to be good for people and planet by design. This model goes beyond the lip service of the hyped terms ‘collaboration’, ‘openness’, ‘sharing’ and ‘transparency’ and puts these terms in practice in a structural and practical way. The model is based on the following premises, while at the same time leaving opportunity for profitable business practices:

  • Freely sharing all created knowledge and technology under open licenses: if we are going to solve the tremendous ‘wicked’ problems of humanity, we need to realize that no single company can solve these on its own. We need to realize that getting to know and building upon each other’s innovations via coincidental meetings, liking and contract agreements is too slow and leaves too much to chance. Instead, we need to realize that there is no time to waste and that all heads, hands and inventiveness must be shared immediately and openly to achieve what I call Permissionless Innovation.
  • Being growth agnostic: if we are creating organizations and companies, these should not depend on growth for their survival. Instead they should thrive no matter what the size, because they provide a sustained source of value to humanity.
  • Having interests aligned with people and planet: for many corporations, especially VC-backed and publicly traded companies, doing good for owners/shareholders and doing good for people and the planet is a trade-off. It should not be that way. Providing value for people and planet should not conflict with providing value for owners/shareholders.

Starting from these premises, I propose the following structural changes for customer-focused organizations and companies:

Intellectual Property

We should stop pretending that knowledge is scarce, when it is in fact abundant. Putting open collaboration into practice means that sharing is the default: publish technology/designs/knowledge under Open Licenses. This results in what I call ‘Permissionless Innovation’ and favors inclusiveness, as innovation is no longer only for the privileged. Also, no single company can pretend to have the knowledge to solve ‘wicked problems’; we need a free flow of information for that. And IP protection also creates huge amounts of ‘competitive waste’.

Sustainable Product Development

Based on abundant knowledge we can co-design products that are made to be used, studied, repaired, customized and innovated upon. We get rid of the linear ‘producer => product => consumer’ and get to a ‘prosumer’ or ‘peer to peer’ dynamic model. This goes beyond the practice of Design Thinking / Service Design and crowdsourcing and puts the power of design and production outside of the company as well as inside it (open boundaries).

Company Ownership

Worker and/or customer owned cooperatives as the standard, instead of rent-seeking hands-off investor shareholder-owners. With cooperative ownership the interests of the company are automatically aligned with those of the owners. I do not have sufficient knowledge about how this could work with start-up founders who are also owners, but at some point the founders should transition their ownership.

Funding

Companies that need outside funding should be aware of the ‘growth trap’ as described above. Investment should be limited in time (capped ROI) and loans paid back. To be sustainable and ‘circular’ the company in the long run should not depend on loans or investments. Ideally in the future even the use of fiat money should be abandoned, in favor of alternative currency that does not require continuous endless growth.

Scaling

Getting a valuable solution into the hands of those who need it should not depend on a centralized organization having control (the supply-chain model). Putting ‘distributive’ into practice means an open, dynamic ecosystem model supporting local initiative and entrepreneurship, while the company has its unique place in the ecosystem. This is similar to the franchise model, but mainly permissionless except for brands/trademarks. This dynamic model is much more resilient and can scale faster, which is especially relevant for non-digital products.

– Diderik

The text in this article is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Photo from Stocksnap.io.

This article first appeared on reinvent.cc.

Saturday, 20 October 2018

sharing keyboard and mouse with synergy

Evaggelos Balaskas - System Engineer | 21:34, Saturday, 20 October 2018

Synergy

Mouse and Keyboard Sharing

aka Virtual-KVM

 

Open source core of Synergy, the keyboard and mouse sharing tool
You can find the code here:

https://github.com/symless/synergy-core

or you can use the alternative, Barrier:

https://github.com/debauchee/barrier

 

Setup

My setup looks like this:

synergy setup

I bought a docking station for the company’s laptop. I want to use a single monitor, keyboard & mouse with both my desktop PC & laptop when I am at home.

My desktop PC runs Arch Linux and the company’s laptop runs Windows 10.

The keyboard and mouse are connected to the Linux machine.

Both machines are connected to the same LAN (via cables to a switch).

Host

/etc/hosts

192.168.0.11   myhomepc.localdomain  myhomepc
192.168.0.12 worklaptop.localdomain  worklaptop

 

Archlinux

The desktop PC will be my virtual KVM server, so I need to run Synergy as a server.

Configuration

If no configuration file pathname is provided then the first of the
following to load successfully sets the configuration:

${HOME}/.synergy.conf
/etc/synergy.conf

 

vim ${HOME}/.synergy.conf
section: screens
    # two hosts named: myhomepc and worklaptop
      myhomepc:
      worklaptop:
end

section: links
    myhomepc:
        left = worklaptop
end

 

Testing

run in the foreground

$ synergys --no-daemon

example output:

[2018-10-20T20:34:44] NOTE: started server, waiting for clients
[2018-10-20T20:34:44] NOTE: accepted client connection
[2018-10-20T20:34:44] NOTE: client "worklaptop" has connected
[2018-10-20T20:35:03] INFO: switch from "myhomepc" to "worklaptop" at 1919,423
[2018-10-20T20:35:03] INFO: leaving screen
[2018-10-20T20:35:03] INFO: screen "myhomepc" updated clipboard 0
[2018-10-20T20:35:04] INFO: screen "myhomepc" updated clipboard 1
[2018-10-20T20:35:10] NOTE: client "worklaptop" has disconnected
[2018-10-20T20:35:10] INFO: jump from "worklaptop" to "myhomepc" at 960,540
[2018-10-20T20:35:10] INFO: entering screen
[2018-10-20T20:35:14] NOTE: accepted client connection
[2018-10-20T20:35:14] NOTE: client "worklaptop" has connected
[2018-10-20T20:35:16] INFO: switch from "myhomepc" to "worklaptop" at 1919,207
[2018-10-20T20:35:16] INFO: leaving screen
[2018-10-20T20:43:13] NOTE: client "worklaptop" has disconnected
[2018-10-20T20:43:13] INFO: jump from "worklaptop" to "myhomepc" at 960,540
[2018-10-20T20:43:13] INFO: entering screen
[2018-10-20T20:43:16] NOTE: accepted client connection
[2018-10-20T20:43:16] NOTE: client "worklaptop" has connected
[2018-10-20T20:43:40] NOTE: client "worklaptop" has disconnected

 

Systemd

To use Synergy as a systemd service, you need to copy your configuration file into the /etc directory:

sudo cp ${HOME}/.synergy.conf /etc/synergy.conf

Beware: Your user should have read access to the above configuration file.

and then:

$ systemctl start  --user synergys
$ systemctl enable --user synergys
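
If the service does not come up as expected, the user journal is usually the quickest place to look (a generic systemd check, not something specific to Synergy):

$ journalctl --user -u synergys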

 

Verify

$ ss -lntp '( sport = :24800 )'
State                   Recv-Q                   Send-Q                                      Local Address:Port                                      Peer Address:Port
LISTEN                  0                        3                                                 0.0.0.0:24800                                          0.0.0.0:*                      users:(("synergys",pid=10723,fd=6))

 

Win10

On Windows 10 (the Synergy client) you just need to connect to the Synergy server!

And of course create a startup shortcut:

win10 synergy

and that’s it!

 

A more detailed example:

section: screens
        worklaptop:
                halfDuplexCapsLock = false
                halfDuplexNumLock = false
                halfDuplexScrollLock = false
                xtestIsXineramaUnaware = false
                switchCorners = none
                switchCornerSize = 0
        myhomepc:
                halfDuplexCapsLock = false
                halfDuplexNumLock = false
                halfDuplexScrollLock = false
                xtestIsXineramaUnaware = false
                switchCorners = none +top-left +top-right +bottom-left +bottom-right
                switchCornerSize = 0
end

section: links
        worklaptop:
                right = myhomepc
        myhomepc:
                left = worklaptop
end

section: options
        relativeMouseMoves = false
        screenSaverSync = true
        win32KeepForeground = false
        disableLockToScreen = false
        clipboardSharing = true
        clipboardSharingSize = 3072
        switchCorners = none +top-left +top-right +bottom-left +bottom-right
        switchCornerSize = 0
end

Thursday, 18 October 2018

No activity

Thomas Løcke Being Incoherent | 11:32, Thursday, 18 October 2018

This blog is currently dead… Catch me on Twitter.

Friday, 28 September 2018

Technoshamanism in Barcelona on October 4

agger's Free Software blog | 13:00, Friday, 28 September 2018

Technoshamanism event in Barcelona, October 4.

TECHNOSHAMANS, THE HACKERS OF THE AMAZON.

TALK AND DIY RITUAL BY THE INTERNATIONAL TECHNOSHAMANISM NETWORK AT CSOA LA FUSTERIA.

On Thursday 4 October we will hold a talk at CSOA La Fusteria with members of the Tecnoxamanisme network, an international collective for the production of imaginaries made up of artists, biohackers, thinkers, activists, indigenous people and indigenists who try to recover ideas of the future lost in the ancestral past. The talk will be led by the writer Francisco Jota-Pérez, and afterwards there will be a DIY ritual performance in which everyone can take part.

What does the hacker movement have in common with the struggles of indigenous peoples threatened by so-called “progress”?

Technoshamanism emerged in 2014 from the confluence of several networks born around the free software and free culture movement, to promote exchanges of technologies, rituals, synergies and sensibilities with indigenous communities. They organise meetings and events that go beyond DIY rituals, electronic music, permaculture and immersive processes, mixing cosmovisions and encouraging the decolonisation of thought.

According to the members of the technoshamanism network: “We still enjoy temporary autonomous zones, the invention of forms of life, of art/life; we try to think about and collaborate in the reforestation of the Earth with an ancestrofuturist imaginary. Our main exercise is to create networks of unconsciouses, strengthening the desire for community, as well as proposing alternatives to the ‘productive’ thinking of science and technology.”

After two international festivals held in the south of Bahia, Brazil (produced together with the indigenous association of the Pataxó people of Aldeia Velha and Aldeia Pará, near Porto Seguro, where the first Portuguese ships arrived during colonisation), they are organising the III Festival of Technoshamanism on 5, 6 and 7 October in France. And we are lucky that they are visiting Barcelona to share their projects and philosophy with us. They will talk to us about spiral (non-linear) time, about collective fictions, about how monotheism and later capitalism hijacked technology, and about what we can learn from indigenous communities in terms of both survival and resistance.

We look forward to seeing you all at 20:30 at La Fusteria.
C/ J. Benlluire, 212 (El Cabanyal)

In collaboration with Láudano Magazine.
www.laudanomag.com

Tuesday, 25 September 2018

Libre Application Summit 2018

TSDgeos' blog | 22:28, Tuesday, 25 September 2018

Earlier this month I attended Libre Application Summit 2018 in Denver.


Libre Application Summit wants to be a place for all people involved in making Free Software applications to meet and share ideas, though, being largely organized by GNOME, it had some skew towards GNOME/Flatpak. There was a good presence of KDE, but personally I felt we would have needed more people from at least LibreOffice and Firefox, and someone from the Ubuntu/Canonical/Snap field (don't be annoyed if I failed to mention your group).

The Summit was kicked off by a motivational talk on how to make sure we ride the wave of "Open Source has won but people don't know it". I felt the content of the talk was nice, but the speaker was hampered by some issues (not being able to have the laptop in front of her due to the venue's somewhat awkward layout) that sadly made her talk a bit stumbly.

Then we had a bunch of Flatpak-related talks, ranging from the new freedesktop-sdk runtime to very technical stuff about how ostree works, and also including a talk by our own Aleix Pol on how KDE is planning to approach the release of Flatpaks. Some of us ended the day having a BBQ at the house where the Codethink people were staying. Thanks for the good time!


I kicked off the next day talking about how we (lately mostly Christoph) are doing the KDE Applications releases. We got appreciation for the good work we're doing and some interesting follow-up questions, so I think people were engaged by it.

The morning continued with talks about how to engage the "non typical" free software people, designers, students, professors, etc.


After lunch we had a few talks by the Elementary people and another talk with Aleix focused on which apps will run on your Plasma devices (hint: all of them).

The day finished with a quiz sponsored by System76; it was fun!


The last day of talks started again with me speaking, this time about how amazing Qt is and why you should use it to build your apps. There I had some questions from people worrying whether QtWidgets was going to die; I told them not to worry, but it's clear The Qt Company needs to improve its messaging in that regard.


After that we had a talk about a Fedora project to create a distro based exclusively on Flatpaks, which sounds really interesting. The last talks of LAS 2018 were about how to fight vandalism in crowdsourced data, the status of the Librem 5 (it's still a bit far away) and a very interesting one about the status of Free Software in research.

All in all I think the conference was interesting, still a bit small and too GNOME-controlled to appeal to the general crowd, but it's the second time it has been organized, so it will improve.

I want to thank KDE e.V. for sponsoring my flight and accommodation to attend this conference. Please Donate, so we can continue attending conferences like this and spreading the good work of KDE :)

Tuesday, 18 September 2018

No Netflix on my Smart TV

tobias_platen's blog | 19:50, Tuesday, 18 September 2018

When I went to the Conrad store in Altona, I saw that new Sony Smart TVs come with a Netflix button on the remote. Since I oppose DRM, I would never buy such a thing. I would only buy a Smart TV that Respects My Freedom, but such a thing does not exist. Instead I use a ThinkPad T400 as an external TV tuner and hard disk recorder, since my old TV set does not support DVB-C. As a DVB-C tuner I use the FRITZ!WLAN Repeater DVB‑C, which works well with the free VLC player. Since it lacks a CI+ slot, it cannot decode DRM-encumbered streams.

When Netflix was founded in 1998, it initially offered only rental DVDs. Today most DVDs can be played on GNU/Linux using libdvdcss.
Even if most DVDs that Netflix offers do not contain strong DRM, Netflix is still a surveillance system that requires proprietary JavaScript.
When I buy DVDs, I go to a store where I can pay using cash.

The ThinkPad T400 has no HDMI port, and the “management engine” back door has been removed by installing Libreboot. Most modern Intel systems come with an HDMI port. HDMI comes with a kind of DRM called HDCP, which was developed by Intel. On newer hardware the “management engine” is used to implement video DRM. Netflix in 4K only works on Kaby Lake processors, which implement the latest version of Intel's hardware DRM.

Wednesday, 12 September 2018

Return to Limbo

David Boddie - Updates (Full Articles) | 15:57, Wednesday, 12 September 2018

After an unsatisfactory period of employment in March this year I took some time to reflect on the technologies I use, trying to learn about more systems and languages that I had only superficially explored. In the period immediately after leaving employment I wanted to try and get back into technologies like Inferno with the idea, amongst others, of porting it to an old netbook-style device I had acquired at the end of 2017. The problem was that I was in a different country to my development system, so any serious work would have to wait until I could access it again. In the meantime I thought about my brief exposure to the world of Qt in yet another work environment – specialisation can be an advantage when finding work but can also limit your opportunities to expand your horizons – and I wondered about other technologies I had missed out on over the years. I had previously explored Android development in an attempt to see what was interesting about it, but it was time to try something different.

Ready, Steady

Being very comfortable with Python as a development language during the late 1990s and 2000s had probably made me a bit lazy when it came to learning other programming languages. However, having experimented with my own flavour of Python, and being unimpressed by the evolution of the Python language, I considered making a serious attempt to learn Go with the idea that it might be a desirable skill to have. Many years ago I was asked to be a proofreader for Mark Summerfield's Programming in Go textbook but found it difficult to find the time outside work to do that, so I didn't manage to learn much of the language. This time, I worked through A Tour of Go, which I found quite rewarding even if I wasn't quite sure how to express what I wanted in some of the exercises — I often felt that I was guessing rather than figuring out exactly how a type should be defined, for example.

When the time came to pack up and return to Norway I considered whether I wanted to continue writing small examples in Go and porting some of my Python modules. It didn't feel all that comfortable or intuitive to write in Go, though I realise that it simply takes practice to gain familiarity. Despite this, it was worth taking some time to get an overview of the basics of Go for reasons that I'll get to later.

An Interlude

As mentioned earlier, I was interested in setting up Inferno on an old netbook – an Efika MX Smartbook – and had already experimented with running the system in its hosted form on top of Debian GNU/Linux. Running hosted Inferno is a nice way to get some experience using the system and seems to be the main way it is used these days. Running the system natively requires porting it to the specific hardware in use, and I knew that I could use the existing code for U-Boot, FreeBSD and Linux as a reference at the very least. So, the task would be to take hardware-specific code for the i.MX51 platform and adapt it to use the conventions of the Inferno porting layer. Building from the ground up, there are a few ports of Inferno to other ARM devices that could be used as foundations for a new port.

One of the things that made it possible for me to consider starting a port of Inferno to the Smartbook was the existing work that had gone into porting FreeBSD to the device. This included a port of U-Boot that enabled the LCD screen to be used to show the boot console instead of requiring a debug cable that is no longer available. This made it much easier to test “bare metal” programs and gain experience with modifying U-Boot, as well as using its API to access the keyboard and screen. Slowly, I built up a set of programs to work out how I might boot into Inferno while using the U-Boot API to allow the operating system to access the framebuffer console.

As you might expect, booting Inferno involves a lot of C code and some assembly language. It also involves modules written in Limbo, a language from the C programming language family that is an ancestor of Go. At a certain point in the boot process – see lecture 10 of this course for details – Inferno runs a Limbo module to perform some system-specific initialisation, and it's useful to know how to write Limbo code beyond simply copying and pasting lines of code from other ports.

Visiting Limbo

At this point the time spent learning the basics of Go proved to be useful, though maybe only in the sense that aspects of Limbo seemed more familiar to me than they might have done if I had never looked at Go. I wrote a few lines of code to help set up devices for the booting system and check that some simple features worked. By this time I had started looking at Limbo for its own sake, not just as something I had to learn to get Inferno working.

There are a lot of existing programs written in Limbo, though it's not always obvious where to find them. The standard introductions, A Descent into Limbo by Brian Kernighan and The Limbo Programming Language by Dennis Ritchie contain example programs, but many practical programs can be found in the Inferno repository itself inside the appl directory, which is where the sources reside for Limbo applications and libraries. I linked to some other resources in my first article about Inferno.

While there are a few resources already available, I wanted to experiment a bit on my own and get a feel for the language. As a result I started to write a collection of small example programs that tested my intuitions about how certain features of the language worked. Over time it became more natural to write programs in Limbo, especially if they used features like threads and channels to delegate work and coordinate how it is performed. Since Limbo was inspired by communicating sequential processes and designed with threading in mind, threads are a built-in feature of the language, so using them is fairly painless compared to their counterparts in languages like Python and C. Using channels to communicate between threads is fairly intuitive, though it can take time to become accustomed to the syntax and control structures.

Outside the Inferno

While Limbo's natural environment is Inferno, the ability to run Inferno as a hosted environment in another operating system makes it possible to experiment with the language fairly conveniently. However, if you want to edit programs in Inferno's graphical environment then you might find it takes some effort to adapt to it, especially if you already have a favourite editor or integrated development environment. As a result, it might be desirable to edit programs in the host operating system and copy them into the hosted Inferno environment. To enable more rapid prototyping I created a tool to automate the process of transferring and compiling a Limbo file to a standalone application, but the way it works is a bit more complicated than that sounds.

Inspired by Chris Double's article on Bundling Inferno Applications and the resources he drew from, the tool I wrote automates the process of building a custom hosted Inferno installation. This seems excessive for the purpose of building a standalone application, though it is important to realise that Limbo programs are relying on features of the Inferno platform, such as the Dis virtual machine, even when they are running on another operating system. By controlling exactly what is built we can ensure that we only include features that a standalone application will need — we can also run strip on the executable afterwards to reduce its size even further.

With a tool to make standalone executables that can run on the host system, it becomes interesting to think about how we might combine the interesting features of Limbo (and its own packaged Inferno) with the libraries and services provided by the host operating system. However, that's a topic for another article.

Categories: Limbo, Inferno, Free Software

Tuesday, 11 September 2018

Fab Lab-enabled Humanitarian Aid in India

Blog – Think. Innovation. | 08:36, Tuesday, 11 September 2018

Since June 2018 the state of Kerala in India has endured massive flooding, as you may have read in the news. This article contains a brief summary of what the international Fab Lab Community has been doing up to now (early September 2018) to help people recover.

Note 1: I am writing this from my point of view, from what I remember happening. In case you have any additions or corrections, contact me.

Note 2: Unfortunately the conversations are taking place in a closed Telegram group, making it difficult for other people to read back on what happened, study it and learn from it. An attempt has been made to move the conversation to the Fab Lab Forum, but that was not successful.

It started with my search for non-chemical plywood in The Netherlands, for which I contacted many of the Fab Labs in the Benelux. In response, someone gave me an invite link to a Telegram group consisting of Fabbers from all over the world. I joined the group and started following the conversation. Soon people from Fab Labs in India reported on the flooding and asked the international Fab Lab Community for ideas and help. A separate Telegram group, “Fab for Kerala”, was created and several dozen people joined.

It was interesting to follow the progress of the conversation. At first the ideas were all over the place, as if ‘we’ were the only people who were going to provide assistance to the affected people in Kerala. Luckily that soon changed towards recognizing the unique strengths and position of local Fab Labs, and understanding that we were operating not only in a very complex physical situation on the flood-affected ground, but also in a complex landscape of various organizations providing help. The need to get into contact with these organizations was understood and steps were taken.

There was also a call to ‘go out there and know what is going on locally’ and to ‘get an overview of the data’ first, before anything could be done. Local Fabbers responded that it was very difficult to find out first what was going on, and that we should just get started providing for basic needs. These two views came together, with people putting forward ideas of what practical things could be done and others putting forward what they had heard that people needed.

The ideas that came forward were amongst others:

  • Building DIY gravity lamps so people who do not have electricity can have light at night.
  • Building temporary houses for people who lost their house altogether.
  • Various water filtration systems so people can make their own drinking water.

The needs that came forward were amongst others:

  • A way to detect snakes in the houses that were flooded: as the water subsided, the snakes hid in moist, dark places and people got bitten.
  • Innovative ways to quickly clean a house of the mud that was left after the water subsided.
  • A quick way to give people an alternative form of shelter, as they were moving from the centralized government-issued shelters back to their own neighbourhoods.

The idea for temporary housing and need for alternative shelter came together and there was a brainstorm on how to proceed. With the combined brain resources in the group the conclusion was quickly drawn that building family-size domes was the best way to go.

Various Fab Labs in India then made practical arrangements for where to source materials, how to transport them and where to build 5 domes. A volunteer from Fab Lab Kerala got the government interested and secured an agreement that, once the domes had proven successful, the government would finance building more of them.

Various volunteers from these local Fab Labs then set out to get funding for building the 5 domes. This proved to be a challenging task, as it appeared to be very difficult and costly to move money into India. I do not really understand the specifics of the difficulties, but I understood that the government of India actively prevents money from coming into the country. A pragmatic solution was found, which relies a lot on trust and a ‘social contract’ between the members of the Fab Labs.

At this moment, volunteers from Vigyan Ashram Fab Lab (in India, but outside of Kerala) have constructed ‘dome kits’ for the skeletons. These kits are being transported to Fab Lab Kerala, so construction can begin. In the meantime, practical challenges are being faced, such as which material to use for the ‘walls’ of the domes.

One of the volunteers involved with building the domes has indicated that he will send out regular updates. So, hopefully I can soon link to those from here.

I have been following this group in awe of the amazing willingness and organizing capacity of the international Fab Lab Community.

Hopefully this initiative will continue and people in need will get help. The shift in direction that I have seen, from “being all things to everyone” towards a focus on the Fab Labs’ unique position and core competences, has, I feel, been vital. As is the intent to communicate and coordinate with other organizations active in the area.

Maybe this could be the dawn of a new category of humanitarian aid and humanitarian development, with the Fab Lab and Maker values at its core. Not to ‘disrupt and replace’, but to provide a different perspective, bring new value and fresh ideas and solutions.

Diderik

P.S.:

Pieter van der Hijden has made an effort to gather information resources relevant to this initiative. For the complete overview and a list of the resources, see the international Fab Lab forum.

The text in this article is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Photo from article “For Kerala’s flood disaster, we have ourselves to blame” under fair use.

The Fairy Tale of the Unicorn and the Zebras

Blog – Think. Innovation. | 06:58, Tuesday, 11 September 2018

Once upon a time hippies were roaming the world. They were a happy bunch, living in freedom and spiritual abundance. A small group of them lived in a place called California, or to be more specific, an area that had the appearance of a lush valley. In that valley life was good. A new kind of technology called computers made entrepreneurial life flourish and gave people steady jobs. But the computers were big, clunky and expensive. They were exclusive to the big corporations, governments and universities which had the space for them, both physically and in their budgets.

But then the hippies saw this as unfair and were convinced that such a powerful technology should be in the hands of the people. One person in particular had this vision of democratizing technology. His name was Steve Jobs, who later became an iconic figure, maybe more famous than Robin Hood. Steve found a group of geeky hobbyists who were building their own computers from electronics available at the local store. These people were also called ‘nerds’ and Steve befriended them. One nerd called Steve Wozniak was particularly bright, although not able to talk freely, just answering specific questions that someone asked him.

Jobs saw the genius of Wozniak and the possibility to make his vision a reality. Together they created a company to put these ‘personal’ computers in the hands of the many. This company became hugely successful, changing the way we perceive and use computers for the better, forever. An entire new phenomenon was created of personal computing devices with countless possibilities and functions, that took over many tasks and created new ones in every business and home on the planet. And that made Jobs and Wozniak rich men, being worshiped all over the world.

Over the years this new kind of vision and entrepreneurship transformed into ‘start-up culture’ and the lush valley got the name Silicon Valley. And everybody in the world wanted to create Silicon Valley in their backyard, copying the start-up culture to breed heroic entrepreneurs who would save the world and make a lot of money while doing so.

And everybody lived happily ever after. Or so was the idea…

In the years that followed something unthinkable happened, however. Slowly but surely the hippies’ vision of a better world through technology was taken over by a dark force led by the Venture Capital Overlords. These VC Overlords were mingling among the hippies and the nerds, slowly influencing them in such a way that they did not even notice. The VC Overlords were sent by Wall Street Empire to extract capital for the already extraordinary wealthy 1%. Smartly talking about ‘radical innovation’, ‘disruption’ and the ‘free market’ the VC Overlords made everyone believe in a story that resulted in extraction and destruction. They convinced start-up founders of the necessary pursuit of the mythical Unicorn, where competition was for losers and creating a monopoly the only prize. Start-ups were meant to become Unicorns, or die trying. And start-up founders believed the myth and did everything they could to make it to the Unicorn.

But not everybody. A small group of good people saw through the false promise of the Unicorn and understood that something terrible had happened. The VC Overlords did more harm than good and the 99% of people were not better off. So they decided to do something about it. They studied the castle the Overlords created and pinpointed its shortcomings. They decided to do away with the Unicorn and proposed a different kind of animal for the start-up founders to pursue: the Zebra. This animal was not the mythical and rare creature that the Unicorn was, and not everything had to be sacrificed to become the Zebra. In fact, every start-up could be a Zebra, as Zebras are social animals that live in groups and benefit from each other’s presence. Zebras balance doing good and making a profit. The story of the Zebra was a much-needed alternative to that of the VC Overlords, and the Zebra movement, called Zebras Unite, quickly gained traction.

However, something went unnoticed by the good people of Zebras Unite. In their study of the VC Overlords’ Castle they overlooked something fundamental. They did not go into the dungeons of the castle to understand its foundations, a centuries-old heritage from what later became the Wall Street Empire. These foundations are debt-based fiat money and the investment-based corporate charter, resulting in the endless growth trap. The new farm of Zebras Unite was being built on the old foundations of the Wall Street Empire and the VC Overlords. Fundamentally, the system did not change. Despite all the good people and good intentions, the Zebras could sooner or later grow a horn and start to look more and more like that damned creature the Unicorn…

How will this story end?

Will the good people of Zebras Unite find out and correct their mistake in time, so the people can live happily ever after?

– Diderik

The text in this article is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Photo by Awesome Content.
