Thoughts of the FSFE Community

Wednesday, 20 June 2018

Qt Contributor Summit 2018

TSDgeos' blog | 22:56, Wednesday, 20 June 2018

About two weeks ago I attended Qt Contributor Summit 2018. I did so wearing my KDAB hat, but given that KDE software is based heavily on Qt, I think I'll give a quick summary of the most important topic that was handled at the Summit: Qt 6

  • Qt 6 is planned for a November 2020 release
  • Qt 5 releases will continue with the current cadence for now, with 5.15 being the last release (and also an LTS)
  • The work branch for Qt 6 will be branched soon after Qt 5.12
  • Qt 6 has to be easy to migrate from Qt 5
  • Qt 6 will use C++17
  • Everything to be removed in Qt 6 should be marked as deprecated in 5.15 (ideally sooner)
  • What can be done in Qt 5 should be done in Qt 5
  • Qt 6 should be a "boring" release user-feature-wise, mostly cleanup and preparation for the future
  • Qt 6 should change things that break at compile time, since those are easy to fix; silent runtime changes are scarier
  • Qt 6 will not use qmake as build system
  • The build system for Qt 6 is still not decided, but there are people working on a qbs build and no one working on any other alternative

On a community-related note, Tero Kojo, the Community Manager for The Qt Company, is leaving, and it doesn't seem a replacement is in sight

Of course, note that these are all plans, and as such they may already be outdated after the last 10 days :D

Tuesday, 19 June 2018

Summer of Code: The demotivating week

vanitasvitae's blog » englisch | 18:47, Tuesday, 19 June 2018

I guess in anybody's project there is one week that stands out from the others by being way less productive than the rest. I just had that week.

I had to take one day off on Friday due to circulation problems after a visit to the doctor (syringes suck!), so I had the joy of an extended weekend. On top of that, I was not at home at the time, so I didn’t write any code during these days.

At least I got some coding done last week. Yesterday I spent the whole day scratching my head about an error that I got when decrypting a message in Smack. Strangely, that error did not happen in my pgpainless tests. Today I finally found the cause of the issue and a way to work around it. Turns out that somewhere between key generation and loading the keys from persistent storage, something goes wrong. If I run my test with fresh keys, everything works fine, while if I run it after loading the keys from disk, I get an error. It will be fun working out what exactly is going wrong. My breakpoint-debugging skills are getting better, although I still often seem to skip over important code points during debugging.

My ongoing efforts of porting the Smack OX code over from using bouncy-gpg to pgpainless are still progressing slowly but steadily. Today I sent and received a message successfully, although the bug I mentioned earlier is still present. As I said, it’s just a matter of time until I find it.

Apart from that, I created another very small pull request against the Bouncycastle repository. The patch just fixes a log message which irritated me. The message stated that some data could not be encrypted, while in fact data is being decrypted. Another patch I created earlier has been merged \o/.

There is some really good news:
Smack 4.4.0-alpha1 has been released! This version contains my updated OMEMO API, which I have been working on for at least half a year.

This week I will continue to integrate pgpainless into Smack. There is also still a significant lack of JUnit tests in both projects. One issue I have is that during my project I often have to deal with objects that bundle information together. Those data structures are needed in smack-openpgp, smack-openpgp-bouncycastle, as well as in pgpainless. Since smack-openpgp and pgpainless do not depend on one another, I need to write duplicate code to provide all modules with classes that offer the needed functionality. This is a real bummer and creates a lot of ugly boilerplate code.

I could theoretically create another module which bundles those structures together, but that is probably overkill.

On the bright side of things, I passed the first evaluation phase, so I got a ton of motivation for the coming days :)

Happy Hacking!

Friday, 15 June 2018

The questions you really want FSFE to answer

DanielPocock.com - fsfe | 07:28, Friday, 15 June 2018

As the last man standing as a fellowship representative in FSFE, I propose to give a report at the community meeting at RMLL.

I'm keen to get feedback from the wider community as well, including former fellows, volunteers and anybody else who has come into contact with FSFE.

It is important for me to understand the topics you want me to cover as so many things have happened in free software and in FSFE in recent times.

last man standing

Some of the things people already asked me about:

  • the status of the fellowship and the membership status of fellows
  • use of non-free software and cloud services in FSFE, deviating from the philosophy that people associate with the FSF / FSFE family
  • measuring both the impact and cost of campaigns, to see if we get value for money (a high level view of expenditure is here)

What are the issues you would like me to address? Please feel free to email me privately or publicly. If I don't have answers immediately I would seek to get them for you as I prepare my report. Without your support and feedback, I don't have a mandate to pursue these issues on your behalf so if you have any concerns, please reply.

Your fellowship representative

Wednesday, 13 June 2018

Terraform Gandi

Evaggelos Balaskas - System Engineer | 16:27, Wednesday, 13 June 2018

This blog post contains my notes on working with Gandi through Terraform. I’ve replaced my domain name with example.com, but pretty much everything should work as advertised.

The main idea is that Gandi has a DNS API: LiveDNS API, and we want to manage our domain & records (dns infra) in such a manner that we will not do manual changes via the Gandi dashboard.

 

Terraform

Although this is partially a terraform blog post, I will not go into much detail on terraform. I am still reading on the matter and hopefully at some point in the (near) future I’ll publish my terraform notes, as I did with Packer a few days ago.

 

Installation

Download the latest terraform static 64bit binary and install it on our system:

$ curl -sLO https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_linux_amd64.zip
$ unzip terraform_0.11.7_linux_amd64.zip
$ sudo mv terraform /usr/local/bin/

 

Version

Verify terraform by checking the version

$ terraform version
Terraform v0.11.7

 

Terraform Gandi Provider

There is a community terraform provider for gandi: Terraform provider for the Gandi LiveDNS by Sébastien Maccagnoni (aka tiramiseb) that is simple and straightforward.

 

Build

To build the provider, follow the notes in the README.

You can build the gandi provider on any distro and just copy the binary to your primary machine/server or build box.
Below are my personal (docker) notes:

$  mkdir -pv /root/go/src/
$  cd /root/go/src/

$  git clone https://github.com/tiramiseb/terraform-provider-gandi.git 

Cloning into 'terraform-provider-gandi'...
remote: Counting objects: 23, done.
remote: Total 23 (delta 0), reused 0 (delta 0), pack-reused 23
Unpacking objects: 100% (23/23), done.

$  cd terraform-provider-gandi/

$  go get
$  go build -o terraform-provider-gandi

$  ls -l terraform-provider-gandi
-rwxr-xr-x 1 root root 25788936 Jun 12 16:52 terraform-provider-gandi

Copy terraform-provider-gandi to the same directory as terraform binary.
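Alternatively, recent terraform versions (0.10+) also pick up third-party providers from the user plugins directory, so the binary does not have to live next to terraform itself:

$ mkdir -pv ~/.terraform.d/plugins/
$ cp -v terraform-provider-gandi ~/.terraform.d/plugins/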

 

Gandi API Token

Login into your gandi account, go through security

Gandi Security

and retrieve your API token

Gandi Token

The Token should be a long alphanumeric string.

 

Repo Structure

Let’s create a simple repo structure. Terraform will read all files in our directory that end with .tf

$ tree
.
├── main.tf
└── vars.tf
  • main.tf will hold our dns infra
  • vars.tf will have our variables

 

Files

vars.tf

variable "gandi_api_token" {
    description = "A Gandi API token"
}

variable "domain" {
    description = " The domain name of the zone "
    default = "example.com"
}

variable "TTL" {
    description = " The default TTL of zone & records "
    default = "3600"
}

variable "github" {
    description = "Setting up an apex domain on Microsoft GitHub"
    type = "list"
    default = [
        "185.199.108.153",
        "185.199.109.153",
        "185.199.110.153",
        "185.199.111.153"
    ]
}

 

main.tf

# Gandi
provider "gandi" {
  key = "${var.gandi_api_token}"
}

# Zone
resource "gandi_zone" "domain_tld" {
    name = "${var.domain} Zone"
}

# Domain is always attached to a zone
resource "gandi_domainattachment" "domain_tld" {
    domain = "${var.domain}"
    zone = "${gandi_zone.domain_tld.id}"
}

# DNS Records

resource "gandi_zonerecord" "mx" {
  zone = "${gandi_zone.domain_tld.id}"
  name = "@"
  type = "MX"
  ttl = "${var.TTL}"
  values = [ "10 example.com."]
}

resource "gandi_zonerecord" "web" {
  zone = "${gandi_zone.domain_tld.id}"
  name = "web"
  type = "CNAME"
  ttl = "${var.TTL}"
  values = [ "test.example.com." ]
}

resource "gandi_zonerecord" "www" {
  zone = "${gandi_zone.domain_tld.id}"
  name = "www"
  type = "CNAME"
  ttl = "${var.TTL}"
  values = [ "${var.domain}." ]
}

resource "gandi_zonerecord" "origin" {
  zone = "${gandi_zone.domain_tld.id}"
  name = "@"
  type = "A"
  ttl = "${var.TTL}"
  values = [ "${var.github}" ]
}

 

Variables

By declaring these variables, in vars.tf, we can use them in main.tf.

  • gandi_api_token - The Gandi API Token
  • domain - The Domain Name of the zone
  • TTL - The default TimeToLive for the zone and records
  • github - This is a list of IPs that we want to use for our site.

 

Main

Our zone should have four DNS record types. The gandi_zonerecord is the terraform resource and the second part is our local identifier. Although it is not obvious at this point, the last record, named “origin”, will contain all four IPs from github.

  • "gandi_zonerecord" "mx"
  • "gandi_zonerecord" "web"
  • "gandi_zonerecord" "www"
  • "gandi_zonerecord" "origin"

 

Zone

In other (dns) words, the state of our zone should be:

example.com.        3600    IN    MX       10 example.com.
web.example.com.    3600    IN    CNAME    test.example.com.
www.example.com.    3600    IN    CNAME    example.com.
example.com.        3600    IN    A        185.199.108.153
example.com.        3600    IN    A        185.199.109.153
example.com.        3600    IN    A        185.199.110.153
example.com.        3600    IN    A        185.199.111.153

 

Environment

We haven’t yet declared the gandi api token anywhere in our files. This is by design: it is not safe to write the token in the files (let’s assume that these files are on a public git repository).

So instead, we can either type it on the command line as we run terraform to create, change or delete our dns infra, or we can pass it through an environment variable.

export TF_VAR_gandi_api_token="XXXXXXXX"
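Alternatively, for a one-off run, the token can be passed directly on the command line (-var is a standard terraform flag):

$ terraform plan -var 'gandi_api_token=XXXXXXXX'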

 

Verbose Logging

I prefer to have debug on, and appending all messages to a log file:

export TF_LOG="DEBUG"
export TF_LOG_PATH=./terraform.log

 

Initialize

Ready to start with our setup. First things first, let’s initialize our repo.

terraform init

the output should be:

Initializing provider plugins...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

 

Planning

Next, we have to plan!

terraform plan

First line is:

Refreshing Terraform state in-memory prior to plan...

the rest should be:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + gandi_domainattachment.domain_tld
      id:                <computed>
      domain:            "example.com"
      zone:              "${gandi_zone.domain_tld.id}"

  + gandi_zone.domain_tld
      id:                <computed>
      name:              "example.com Zone"

  + gandi_zonerecord.mx
      id:                <computed>
      name:              "@"
      ttl:               "3600"
      type:              "MX"
      values.#:          "1"
      values.3522983148: "10 example.com."
      zone:              "${gandi_zone.domain_tld.id}"

  + gandi_zonerecord.origin
      id:                <computed>
      name:              "@"
      ttl:               "3600"
      type:              "A"
      values.#:          "4"
      values.1201759686: "185.199.109.153"
      values.226880543:  "185.199.111.153"
      values.2365437539: "185.199.108.153"
      values.3336126394: "185.199.110.153"
      zone:              "${gandi_zone.domain_tld.id}"

  + gandi_zonerecord.web
      id:                <computed>
      name:              "web"
      ttl:               "3600"
      type:              "CNAME"
      values.#:          "1"
      values.921960212:  "test.example.com."
      zone:              "${gandi_zone.domain_tld.id}"

  + gandi_zonerecord.www
      id:                <computed>
      name:              "www"
      ttl:               "3600"
      type:              "CNAME"
      values.#:          "1"
      values.3477242478: "example.com."
      zone:              "${gandi_zone.domain_tld.id}"

Plan: 6 to add, 0 to change, 0 to destroy.

so the plan is Plan: 6 to add!

 

State

Let’s get back to this message:

Refreshing Terraform state in-memory prior to plan...

Terraform is telling us that it is refreshing the state.
What does this mean?

Terraform is Declarative.

That means that terraform is only interested in implementing our plan, but it needs to know the previous state of our infrastructure. It will then only create new records, update (if needed) existing records, or even delete deprecated records. In any case, it needs to know the current state of our dns infra (zone/records).

Terraforming (as the definition of the word) is the process of deliberately modifying the current state of our infrastructure.

 

Import

So we need to import the current state into a local state file and re-plan our terraformation.

$ terraform import gandi_domainattachment.domain_tld example.com
gandi_domainattachment.domain_tld: Importing from ID "example.com"...
gandi_domainattachment.domain_tld: Import complete!
  Imported gandi_domainattachment (ID: example.com)
gandi_domainattachment.domain_tld: Refreshing state... (ID: example.com)

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
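We can verify what terraform now tracks by listing the resources in the local state (terraform state list is a standard subcommand):

$ terraform state list
gandi_domainattachment.domain_tld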

How does import work?

The current state of our domain (zone & records) has specific identifiers. We need to map our local IDs to the remote ones, and all that info will update the terraform state.

So the previous import command has three parts:

Gandi Resource        .Local ID    Remote ID
gandi_domainattachment.domain_tld  example.com

Terraform State

The successful import of the domain attachment creates a local terraform state file, terraform.tfstate:

$ cat terraform.tfstate 
{
    "version": 3,
    "terraform_version": "0.11.7",
    "serial": 1,
    "lineage": "dee62659-8920-73d7-03f5-779e7a477011",
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {
                "gandi_domainattachment.domain_tld": {
                    "type": "gandi_domainattachment",
                    "depends_on": [],
                    "primary": {
                        "id": "example.com",
                        "attributes": {
                            "domain": "example.com",
                            "id": "example.com",
                            "zone": "XXXXXXXX-6bd2-11e8-XXXX-00163ee24379"
                        },
                        "meta": {},
                        "tainted": false
                    },
                    "deposed": [],
                    "provider": "provider.gandi"
                }
            },
            "depends_on": []
        }
    ]
}

 

Import All Resources

Reading through the state file, we see that our zone also has an ID:

"zone": "XXXXXXXX-6bd2-11e8-XXXX-00163ee24379"

We should use this ID to import all resources.

 

Zone Resource

Import the gandi zone resource:

terraform import gandi_zone.domain_tld XXXXXXXX-6bd2-11e8-XXXX-00163ee24379

 

DNS Records

As we saw above in the DNS section, we have four (4) dns records, and when importing resources we need to append their path after the ID.

eg.

for MX it is /@/MX
for web it is /web/CNAME
etc.

terraform import gandi_zonerecord.mx     XXXXXXXX-6bd2-11e8-XXXX-00163ee24379/@/MX
terraform import gandi_zonerecord.web    XXXXXXXX-6bd2-11e8-XXXX-00163ee24379/web/CNAME
terraform import gandi_zonerecord.www    XXXXXXXX-6bd2-11e8-XXXX-00163ee24379/www/CNAME
terraform import gandi_zonerecord.origin XXXXXXXX-6bd2-11e8-XXXX-00163ee24379/@/A

 

Re-Planning

Okay, we have imported our dns infra state to a local file.
Time to plan once more:

$ terraform plan

Plan: 2 to add, 1 to change, 0 to destroy.

 

Save Planning

We can save our plan:

$ terraform plan -out terraform.tfplan

 

Apply aka run our plan

We can now apply our plan to our dns infra through the gandi provider.

$ terraform apply
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: 

To continue, we need to type: yes

 

Non Interactive

or we can use our already saved plan to run without asking:

$ terraform apply "terraform.tfplan"
gandi_zone.domain_tld: Modifying... (ID: XXXXXXXX-6bd2-11e8-XXXX-00163ee24379)
  name: "example.com zone" => "example.com Zone"
gandi_zone.domain_tld: Modifications complete after 2s (ID: XXXXXXXX-6bd2-11e8-XXXX-00163ee24379)
gandi_domainattachment.domain_tld: Creating...
  domain: "" => "example.com"
  zone:   "" => "XXXXXXXX-6bd2-11e8-XXXX-00163ee24379"
gandi_zonerecord.www: Creating...
  name:              "" => "www"
  ttl:               "" => "3600"
  type:              "" => "CNAME"
  values.#:          "" => "1"
  values.3477242478: "" => "example.com."
  zone:              "" => "XXXXXXXX-6bd2-11e8-XXXX-00163ee24379"
gandi_domainattachment.domain_tld: Creation complete after 0s (ID: example.com)
gandi_zonerecord.www: Creation complete after 1s (ID: XXXXXXXX-6bd2-11e8-XXXX-00163ee24379/www/CNAME)

Apply complete! Resources: 2 added, 1 changed, 0 destroyed.

 

Tag(s): terraform, gandi

Monday, 11 June 2018

Summer of Code: Evaluation and Key Lengths

vanitasvitae's blog » englisch | 20:41, Monday, 11 June 2018

The week of the first evaluation phase is here. This is the fourth week of GSoC – wow, time flew by quite fast this year :)

This week I plan to switch my OX implementation over to PGPainless in order to have a working prototype which can differentiate between sign, crypt and signcrypt elements. This should be pretty straightforward. In case everything goes wrong, I’ll keep the current implementation as a working backup solution, so we should be good to go :)

OpenPGP Key Type Considerations

I spent some time testing my OpenPGP library PGPainless, and during testing I noticed that messages encrypted and signed using keys from the family of elliptic curve cryptography were substantially smaller than messages encrypted with common RSA keys. I already knew that one benefit of elliptic curve cryptography is that the keys can be much smaller while providing the same security as RSA keys. But what was new to me is that this also applies to the length of the resulting message. I did some testing and came to interesting results:

In order to measure the lengths of the produced cipher text, I created some code that generates two sets of keys and then encrypts messages of varying lengths. Because OpenPGP for XMPP: Instant Messaging only uses messages that are encrypted and signed, all messages created for my tests are encrypted to, and signed with, one key. The size of the plaintext messages ranges from 20 bytes all the way up to 2000 bytes (1000 chars).

Diagram comparing the lengths of ciphertext of different crypto systems

Comparison of Cipher Text Length

The resulting diagram shows how quickly the size of OpenPGP encrypted messages explodes. Let’s assume we want to send the smallest possible OX message to a contact. That message would have a body of less than 20 bytes (less than 10 chars). The body would be encapsulated in a signcrypt-element as specified in XEP-0373. I calculated that the length of that element would be around 250 chars, which makes 500 bytes. 500 bytes encrypted and signed using 4096 bit RSA keys makes 1652 bytes of ciphertext. That ciphertext is then base64 encoded for transport (a rule of thumb for calculating base64 size is ceil(bytes/3) * 4), which results in 2204 bytes. Those bytes are then encapsulated in an openpgp-element (which adds another 94 bytes) that can be appended to a message. All in all the openpgp-element takes up 2298 bytes, compared to a normal body, which would only take up around 46 bytes.

So how do elliptic curves come to the rescue? Let’s assume we send the same message again using 256 bit ECC keys on the curve P-256. Again, the length of the signcrypt-element would be 250 chars or 500 bytes in the beginning. OpenPGP encrypting those bytes leads to 804 bytes of ciphertext. Applying base64 encoding results in 1072 bytes, which finally makes 1166 bytes of openpgp-element. That is around half the size of the RSA encrypted message.
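The arithmetic above is easy to reproduce. Here is a minimal Java sketch (the ciphertext sizes are the measured values quoted in the text):

public class StanzaSizeEstimate {

    // Rule of thumb for base64 output size: ceil(bytes / 3) * 4
    static int base64Size(int bytes) {
        return ((bytes + 2) / 3) * 4;
    }

    public static void main(String[] args) {
        final int openpgpElementOverhead = 94; // bytes added by the openpgp-element

        int rsaCiphertext = 1652; // 500 byte signcrypt-element, 4096 bit RSA
        int eccCiphertext = 804;  // 500 byte signcrypt-element, 256 bit ECC (P-256)

        // prints 2298 and 1166, matching the numbers above
        System.out.println("RSA 4096:  " + (base64Size(rsaCiphertext) + openpgpElementOverhead));
        System.out.println("ECC P-256: " + (base64Size(eccCiphertext) + openpgpElementOverhead));
    }
}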

For comparison: I estimated a typical XMPP chat message body to be around 70 characters or 140 bytes based on a database dump of my chat client.

We must not forget, however, that the stanza size follows a linear function of the form y = m*x + b, so as the plaintext size grows, the difference between RSA and ECC becomes less and less significant.
Looking at the data, I noticed that applying OpenPGP encryption always adds a constant number of bytes to the size of the plaintext. Using 256 bit ECC keys only adds around 300 bytes, encrypting a message using 2048 bit RSA keys adds ~500 bytes, while RSA with 4096 bits adds 1140 bytes. The formula for my setup would therefore be y = x + b, where x and y are the sizes of the message before and after applying encryption and b is the overhead added. This formula doesn’t take base64 encoding into consideration. Also, if multiple participants (and therefore multiple keys) are involved, the formula is expected to underestimate, as the overhead will grow further.

One could argue that using smaller RSA keys would reduce the stanza size as well, although not as much; but remember that RSA keys have to be big to be secure. A 3072 bit RSA key provides the same security as a 256 bit ECC key. Quoting Wikipedia:

The NIST recommends 2048-bit keys for RSA. An RSA key length of 3072 bits should be used if security is required beyond 2030.

As a conclusion, I propose to add a paragraph to XEP-0373 suggesting the use of ECC keys to keep the stanza size low.

Friday, 08 June 2018

Packer by HashiCorp

Evaggelos Balaskas - System Engineer | 18:06, Friday, 08 June 2018

 

Packer is an open source tool for creating identical machine images for multiple platforms from a single source configuration

 

Installation

In archlinux, the package name is packer-io:

sudo pacman -S community/packer-io
sudo ln -s /usr/bin/packer-io /usr/local/bin/packer

on any generic 64bit linux:

$ curl -sLO https://releases.hashicorp.com/packer/1.2.4/packer_1.2.4_linux_amd64.zip

$ unzip packer_1.2.4_linux_amd64.zip
$ chmod +x packer
$ sudo mv packer /usr/local/bin/packer

 

Version

$ packer -v
1.2.4

or

$ packer --version
1.2.4

or

$ packer version
Packer v1.2.4

or

$ packer -machine-readable version
1528019302,,version,1.2.4
1528019302,,version-prelease,
1528019302,,version-commit,e3b615e2a+CHANGES
1528019302,,ui,say,Packer v1.2.4

 

Help

$ packer --help
Usage: packer [--version] [--help] <command> [<args>]

Available commands are:
    build       build image(s) from template
    fix         fixes templates from old versions of packer
    inspect     see components of a template
    push        push a template and supporting files to a Packer build service
    validate    check that a template is valid
    version     Prints the Packer version

 

Help Validate

$ packer --help validate
Usage: packer validate [options] TEMPLATE

  Checks the template is valid by parsing the template and also
  checking the configuration with the various builders, provisioners, etc.

  If it is not valid, the errors will be shown and the command will exit
  with a non-zero exit status. If it is valid, it will exit with a zero
  exit status.

Options:

  -syntax-only           Only check syntax. Do not verify config of the template.
  -except=foo,bar,baz    Validate all builds other than these
  -only=foo,bar,baz      Validate only these builds
  -var 'key=value'       Variable for templates, can be used multiple times.
  -var-file=path         JSON file containing user variables.

 

Help Inspect

Usage: packer inspect TEMPLATE

  Inspects a template, parsing and outputting the components a template
  defines. This does not validate the contents of a template (other than
  basic syntax by necessity).

Options:

  -machine-readable  Machine-readable output

 

Help Build

$ packer --help build

Usage: packer build [options] TEMPLATE

  Will execute multiple builds in parallel as defined in the template.
  The various artifacts created by the template will be outputted.

Options:

  -color=false               Disable color output (on by default)
  -debug                     Debug mode enabled for builds
  -except=foo,bar,baz        Build all builds other than these
  -only=foo,bar,baz          Build only the specified builds
  -force                     Force a build to continue if artifacts exist, deletes existing artifacts
  -machine-readable          Machine-readable output
  -on-error=[cleanup|abort|ask] If the build fails do: clean up (default), abort, or ask
  -parallel=false            Disable parallelization (on by default)
  -var 'key=value'           Variable for templates, can be used multiple times.
  -var-file=path             JSON file containing user variables.

 

Autocompletion

To enable autocompletion

$ packer -autocomplete-install

 

Workflow

.. and terminology.

Packer uses Templates, which are json files that carry the configuration for the various tasks. The core task is the Build. In this stage, Packer uses the Builders to create a machine image for a single platform, eg. the Qemu Builder to create a kvm/xen virtual machine image. The next stage is provisioning. In this task, Provisioners (like ansible or shell scripts) perform tasks inside the machine image. When finished, Post-processors handle the final tasks, such as compressing the virtual image or importing it into a specific provider.

packer
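To make the workflow concrete, here is a bare-bones template skeleton showing how the three stages fit together (a sketch only; every stage needs more options before it can actually build anything):

{
  "_comment": "skeleton: builders create the image, provisioners configure it, post-processors package it",

  "builders": [
    { "type": "qemu" }
  ],

  "provisioners": [
    { "type": "shell", "inline": [ "echo provisioned" ] }
  ],

  "post-processors": [
    { "type": "compress" }
  ]
}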

 

Template

a json template file contains:

  • builders (required)
  • description (optional)
  • variables (optional)
  • min_packer_version (optional)
  • provisioners (optional)
  • post-processors (optional)

Also note that comments are supported only as root-level keys.

eg.

{
  "_comment": "This is a comment",

  "builders": [
    {}
  ]
}

 

Template Example

eg. Qemu Builder

qemu_example.json

{
  "_comment": "This is a qemu builder example",

  "builders": [
    {
        "type": "qemu"
    }
  ]
}

 

Validate

Syntax Only

$ packer validate -syntax-only  qemu_example.json 
Syntax-only check passed. Everything looks okay.

 

Validate Template

$ packer validate qemu_example.json
Template validation failed. Errors are shown below.

Errors validating build 'qemu'. 2 error(s) occurred:

* One of iso_url or iso_urls must be specified.
* An ssh_username must be specified
  Note: some builders used to default ssh_username to "root".

Template validation failed. Errors are shown below.

Errors validating build 'qemu'. 2 error(s) occurred:

* One of iso_url or iso_urls must be specified.
* An ssh_username must be specified
  Note: some builders used to default ssh_username to "root".

 

Debugging

To enable Verbose logging on the console type:

$ export PACKER_LOG=1

 

Variables

user variables

It is really simple to use variables inside the packer template:

  "variables": {
    "centos_version":  "7.5",
  }    

and use the variable as:

"{{user `centos_version`}}",

 

Description

We can add a description declaration at the top of our template:

eg.

  "description": "tMinimal CentOS 7 Qemu Imagen__________________________________________",

and verify it when inspecting the template.

 

QEMU Builder

The full documentation on QEMU Builder, can be found here

Qemu template example

Try to keep things simple. Here is an example setup for building a CentOS 7.5 image with packer via qemu.

$ cat qemu_example.json
{

  "_comment": "This is a CentOS 7.5 Qemu Builder example",

  "description": "tMinimal CentOS 7 Qemu Imagen__________________________________________",

  "variables": {
    "7.5":      "1804",
    "checksum": "714acc0aefb32b7d51b515e25546835e55a90da9fb00417fbee2d03a62801efd"
  },

  "builders": [
    {
        "type": "qemu",

        "iso_url": "http://ftp.otenet.gr/linux/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-{{user `7.5`}}.iso",
        "iso_checksum": "{{user `checksum`}}",
        "iso_checksum_type": "sha256",

        "communicator": "none"
    }
  ]

}

 

Communicator

There are three basic communicators:

  • none
  • Secure Shell (SSH)
  • WinRM

that are configured within the builder section.

Communicators are used in the provisioning section for uploading files or executing scripts. If no provisioning is used, choosing none instead of the default ssh disables that feature.

"communicator": "none"

 

iso_url

can be an http url or a path to a local file. When starting to work with packer, it is useful to have the ISO file locally, so packer doesn't try to download it from the internet on every trial-and-error step.

eg.

"iso_url": "/home/ebal/Downloads/CentOS-7-x86_64-Minimal-{{user `7.5`}}.iso"

 

Inspect Template

$ packer inspect qemu_example.json
Description:

    Minimal CentOS 7 Qemu Image
__________________________________________

Optional variables and their defaults:

  7.5      = 1804
  checksum = 714acc0aefb32b7d51b515e25546835e55a90da9fb00417fbee2d03a62801efd

Builders:

  qemu

Provisioners:

  <No provisioners>

Note: If your build names contain user variables or template
functions such as 'timestamp', these are processed at build time,
and therefore only show in their raw form here.

Validate Syntax Only

$ packer validate -syntax-only qemu_example.json
Syntax-only check passed. Everything looks okay.

Validate

$ packer validate qemu_example.json
Template validated successfully.

 

Build

Initial Build

$ packer build qemu_example.json

 

packer build

 

Build output

the first packer output should be like this:

qemu output will be in this color.

==> qemu: Downloading or copying ISO
    qemu: Downloading or copying: file:///home/ebal/Downloads/CentOS-7-x86_64-Minimal-1804.iso
==> qemu: Creating hard drive...
==> qemu: Looking for available port between 5900 and 6000 on 127.0.0.1
==> qemu: Starting VM, booting from CD-ROM
==> qemu: Waiting 10s for boot...
==> qemu: Connecting to VM via VNC
==> qemu: Typing the boot command over VNC...
==> qemu: Waiting for shutdown...
==> qemu: Converting hard drive...
Build 'qemu' finished.

Use ctrl+c to break and exit the packer build.

 

Automated Installation

The ideal scenario is to automate the entire process, using a Kickstart file to describe the initial CentOS installation. The kickstart reference guide can be found here.

In this example, this ks file CentOS7-ks.cfg can be used.

In the json template file, add the below configuration:

  "boot_command":[
    "<tab> text ",
    "ks=https://raw.githubusercontent.com/ebal/confs/master/Kickstart/CentOS7-ks.cfg ",
     "nameserver=9.9.9.9 ",
     "<enter><wait> "
],
  "boot_wait": "0s"

That tells packer not to wait for user input and instead to use the specified ks file.

 

packer build with ks

 

http_directory

It is possible to retrieve the kickstart file from an internal HTTP server that packer can create, when building an image in an environment without internet access. Enable this feature by declaring a directory path: http_directory

Path to a directory to serve using an HTTP server. The files in this directory will be available over HTTP that will be requestable from the virtual machine

eg.

  "http_directory": "/home/ebal/Downloads/",
  "http_port_min": "8090",
  "http_port_max": "8100",

with that, the previous boot command should be written as:

"boot_command":[
    "<tab> text ",
    "ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/CentOS7-ks.cfg ",
    "<enter><wait>"
],
    "boot_wait": "0s"

 

packer build with httpdir

 

Timeout

A “well known” error with packer is the Waiting for shutdown timeout error.

eg.

==> qemu: Waiting for shutdown...
==> qemu: Failed to shutdown
==> qemu: Deleting output directory...
Build 'qemu' errored: Failed to shutdown

==> Some builds didn't complete successfully and had errors:
--> qemu: Failed to shutdown

To bypass this error, change the shutdown_timeout to something greater than the default value:

By default, the timeout is 5m (five minutes).

eg.

"shutdown_timeout": "30m"

ssh

Sometimes the timeout error occurs on the ssh attempts. If you are using ssh as the communicator, also change the below value:

"ssh_timeout": "30m",

 

qemu_example.json

This is a working template file:


{

  "_comment": "This is a CentOS 7.5 Qemu Builder example",

  "description": "tMinimal CentOS 7 Qemu Imagen__________________________________________",

  "variables": {
    "7.5":      "1804",
    "checksum": "714acc0aefb32b7d51b515e25546835e55a90da9fb00417fbee2d03a62801efd"
  },

  "builders": [
    {
        "type": "qemu",

        "iso_url": "/home/ebal/Downloads/CentOS-7-x86_64-Minimal-{{user `7.5`}}.iso",
        "iso_checksum": "{{user `checksum`}}",
        "iso_checksum_type": "sha256",

        "communicator": "none",

        "boot_command":[
          "<tab> text ",
          "ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/CentOS7-ks.cfg ",
          "nameserver=9.9.9.9 ",
          "<enter><wait> "
        ],
        "boot_wait": "0s",

        "http_directory": "/home/ebal/Downloads/",
        "http_port_min": "8090",
        "http_port_max": "8100",

        "shutdown_timeout": "20m"

    }
  ]

}

 

build

packer build qemu_example.json

 

Verify

and when the installation is finished, check the output folder & image:

$ ls
output-qemu  packer_cache  qemu_example.json

$ ls output-qemu/
packer-qemu

$ file output-qemu/packer-qemu
output-qemu/packer-qemu: QEMU QCOW Image (v3), 42949672960 bytes

$ du -sh output-qemu/packer-qemu
1.7G    output-qemu/packer-qemu

$ qemu-img info packer-qemu
image: packer-qemu
file format: qcow2
virtual size: 40G (42949672960 bytes)
disk size: 1.7G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

 

KVM

The default qemu/kvm builder will run something like this:

/usr/bin/qemu-system-x86_64
  -cdrom /home/ebal/Downloads/CentOS-7-x86_64-Minimal-1804.iso
  -name packer-qemu -display sdl
  -netdev user,id=user.0
  -vnc 127.0.0.1:32
  -machine type=pc,accel=kvm
  -device virtio-net,netdev=user.0
  -drive file=output-qemu/packer-qemu,if=virtio,cache=writeback,discard=ignore,format=qcow2
  -boot once=d
  -m 512M

In the builder section those qemu/kvm settings can be changed.

Using variables:

eg.

   "virtual_name": "centos7min.qcow2",
   "virtual_dir":  "centos7",
   "virtual_size": "20480",
   "virtual_mem":  "4096M"

In Qemu Builder:

  "accelerator": "kvm",
  "disk_size":   "{{ user `virtual_size` }}",
  "format":      "qcow2",
  "qemuargs":[
    [  "-m",  "{{ user `virtual_mem` }}" ]
  ],

  "vm_name":          "{{ user `virtual_name` }}",
  "output_directory": "{{ user `virtual_dir` }}"

 

Headless

There is no need for packer to use a display. This is really useful when running packer on a remote machine. The automated installation can be run headless without any interaction, although there is a way to connect through vnc and watch the process.

To enable a headless setup:

"headless": true

Serial

Working with a headless installation, perhaps through a command line interface on a remote machine, means that vnc may not actually be useful. Instead, there is a way to use the serial output of qemu. To do that, we must pass some extra qemu arguments:

eg.

  "qemuargs":[
      [ "-m",      "{{ user `virtual_mem` }}" ],
      [ "-serial", "file:serial.out" ]
    ],

and also pass an extra (kernel) argument console=ttyS0,115200n8 to the boot command:

  "boot_command":[
    "<tab> text ",
    "console=ttyS0,115200n8 ",
    "ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/CentOS7-ks.cfg ",
    "nameserver=9.9.9.9 ",
    "<enter><wait> "
  ],
  "boot_wait": "0s",

To see the serial output:

$ tail -f serial.out

packer build with serial output

 

Post-Processors

When finished with the machine image, Packer can run tasks such as compressing the image or importing it to a cloud provider, etc.

The simplest way to get familiar with post-processors is to use compress:

  "post-processors":[
      {
          "type":   "compress",
          "format": "lz4",
          "output": "{{.BuildName}}.lz4"
      }
  ]

 

output

So here is the output:

$ packer build qemu_example.json 
qemu output will be in this color.

==> qemu: Downloading or copying ISO
    qemu: Downloading or copying: file:///home/ebal/Downloads/CentOS-7-x86_64-Minimal-1804.iso
==> qemu: Creating hard drive...
==> qemu: Starting HTTP server on port 8099
==> qemu: Looking for available port between 5900 and 6000 on 127.0.0.1
==> qemu: Starting VM, booting from CD-ROM
    qemu: The VM will be run headless, without a GUI. If you want to
    qemu: view the screen of the VM, connect via VNC without a password to
    qemu: vnc://127.0.0.1:5982
==> qemu: Overriding defaults Qemu arguments with QemuArgs...
==> qemu: Connecting to VM via VNC
==> qemu: Typing the boot command over VNC...
==> qemu: Waiting for shutdown...
==> qemu: Converting hard drive...
==> qemu: Running post-processor: compress
==> qemu (compress): Using lz4 compression with 4 cores for qemu.lz4
==> qemu (compress): Archiving centos7/centos7min.qcow2 with lz4
==> qemu (compress): Archive qemu.lz4 completed
Build 'qemu' finished.

==> Builds finished. The artifacts of successful builds are:
--> qemu: compressed artifacts in: qemu.lz4

 

info

After archiving the centos7min image, the output_directory and the original qemu image are deleted.

$ qemu-img info ./centos7/centos7min.qcow2

image: ./centos7/centos7min.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 1.5G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

$ du -h qemu.lz4
992M    qemu.lz4

 

Provisioners

Last but (surely) not least, packer supports Provisioners.
Provisioners are commonly used for:

  • installing packages
  • patching the kernel
  • creating users
  • downloading application code

and can be local shell scripts or more advanced tools like Ansible, puppet, chef or even powershell.
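For instance, a minimal local shell provisioner could look like this (the inline commands are placeholders):

  "provisioners":[
      {
          "type": "shell",
          "inline": [
              "yum -y install vim",
              "echo provisioning done"
          ]
      }
  ]

As with ansible below, a shell provisioner needs a working communicator (ssh) to reach into the machine.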

 

Ansible

So here is an ansible example:

$ tree testrole
testrole
├── defaults
│   └── main.yml
├── files
│   └── main.yml
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── tasks
│   └── main.yml
├── templates
│   └── main.yml
└── vars
    └── main.yml

7 directories, 7 files
$ cat testrole/tasks/main.yml 
---
  - name: Debug that our ansible role is working
    debug:
      msg: "It Works !"

  - name: Install the Extra Packages for Enterprise Linux repository
    yum:
      name: epel-release
      state: present

  - name: upgrade all packages
    yum:
      name: '*'
      state: latest

So this ansible role will install the epel repository and upgrade all packages in our image.

template


    "variables":{
        "playbook_name": "testrole.yml"
    },

...

    "provisioners":[
        {
            "type":          "ansible",
            "playbook_file": "{{ user `playbook_name` }}"
        }
    ],

Communicator

Ansible needs to ssh into this machine to provision it. It is time to change the communicator from none to ssh.

  "communicator": "ssh",

We need to add the ssh username/password to the template file:

      "ssh_username": "root",
      "ssh_password": "password",
      "ssh_timeout":  "3600s",

 

output

$ packer build qemu_example.json
qemu output will be in this color.

==> qemu: Downloading or copying ISO
    qemu: Downloading or copying: file:///home/ebal/Downloads/CentOS-7-x86_64-Minimal-1804.iso
==> qemu: Creating hard drive...
==> qemu: Starting HTTP server on port 8100
==> qemu: Found port for communicator (SSH, WinRM, etc): 4105.
==> qemu: Looking for available port between 5900 and 6000 on 127.0.0.1
==> qemu: Starting VM, booting from CD-ROM
    qemu: The VM will be run headless, without a GUI. If you want to
    qemu: view the screen of the VM, connect via VNC without a password to
    qemu: vnc://127.0.0.1:5990
==> qemu: Overriding defaults Qemu arguments with QemuArgs...
==> qemu: Connecting to VM via VNC
==> qemu: Typing the boot command over VNC...
==> qemu: Waiting for SSH to become available...
==> qemu: Connected to SSH!
==> qemu: Provisioning with Ansible...
==> qemu: Executing Ansible: ansible-playbook --extra-vars packer_build_name=qemu packer_builder_type=qemu -i /tmp/packer-provisioner-ansible594660041 /opt/hashicorp/packer/testrole.yml -e ansible_ssh_private_key_file=/tmp/ansible-key802434194
    qemu:
    qemu: PLAY [all] *********************************************************************
    qemu:
    qemu: TASK [testrole : Debug that our ansible role is working] ***********************
    qemu: ok: [default] => {
    qemu:     "msg": "It Works !"
    qemu: }
    qemu:
    qemu: TASK [testrole : Install the Extra Packages for Enterprise Linux repository] ***
    qemu: changed: [default]
    qemu:
    qemu: TASK [testrole : upgrade all packages] *****************************************
    qemu: changed: [default]
    qemu:
    qemu: PLAY RECAP *********************************************************************
    qemu: default                    : ok=3    changed=2    unreachable=0    failed=0
    qemu:
==> qemu: Halting the virtual machine...
==> qemu: Converting hard drive...
==> qemu: Running post-processor: compress
==> qemu (compress): Using lz4 compression with 4 cores for qemu.lz4
==> qemu (compress): Archiving centos7/centos7min.qcow2 with lz4
==> qemu (compress): Archive qemu.lz4 completed
Build 'qemu' finished.

==> Builds finished. The artifacts of successful builds are:
--> qemu: compressed artifacts in: qemu.lz4

 

Appendix

here is the entire qemu template file:

qemu_example.json

{

  "_comment": "This is a CentOS 7.5 Qemu Builder example",

  "description": "tMinimal CentOS 7 Qemu Imagen__________________________________________",

  "variables": {
    "7.5":      "1804",
    "checksum": "714acc0aefb32b7d51b515e25546835e55a90da9fb00417fbee2d03a62801efd",

     "virtual_name": "centos7min.qcow2",
     "virtual_dir":  "centos7",
     "virtual_size": "20480",
     "virtual_mem":  "4096M",

     "Password": "password",

     "ansible_playbook": "testrole.yml"
  },

  "builders": [
    {
        "type": "qemu",

        "headless": true,

        "iso_url": "/home/ebal/Downloads/CentOS-7-x86_64-Minimal-{{user `7.5`}}.iso",
        "iso_checksum": "{{user `checksum`}}",
        "iso_checksum_type": "sha256",

        "communicator": "ssh",

        "ssh_username": "root",
        "ssh_password": "{{user `Password`}}",
        "ssh_timeout":  "3600s",

        "boot_command":[
          "<tab> text ",
          "console=ttyS0,115200n8 ",
          "ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/CentOS7-ks.cfg ",
          "nameserver=9.9.9.9 ",
          "<enter><wait> "
        ],
        "boot_wait": "0s",

        "http_directory": "/home/ebal/Downloads/",
        "http_port_min": "8090",
        "http_port_max": "8100",

        "shutdown_timeout": "30m",

    "accelerator": "kvm",
    "disk_size":   "{{ user `virtual_size` }}",
    "format":      "qcow2",
    "qemuargs":[
        [ "-m",      "{{ user `virtual_mem` }}" ],
            [ "-serial", "file:serial.out" ]
    ],

        "vm_name":          "{{ user `virtual_name` }}",
        "output_directory": "{{ user `virtual_dir` }}"
    }
  ],

  "provisioners":[
    {
      "type":          "ansible",
      "playbook_file": "{{ user `ansible_playbook` }}"
    }
  ],

  "post-processors":[
      {
          "type":   "compress",
          "format": "lz4",
          "output": "{{.BuildName}}.lz4"
      }
  ]
}

 

Tag(s): packer, ansible, qemu

Wednesday, 06 June 2018

Call for distros: Patch cups for better internationalization

TSDgeos' blog | 21:25, Wednesday, 06 June 2018

If you're reading this and use cups to print (almost certainly you do if you're on Linux), you may want to contact your distribution and ask them to add this patch.

It adds translation support for a few keywords found in some printers' PPD files. The CUPS upstream project has rejected it with not much reason other than "PPD is old", without really taking into account that it's really the only way you can get access to some advanced printer features (see comments in the same thread).

Anyhow, they're free not to want that code upstream, but I really think all distros should add it, since it's very simple and improves the usability for some users.

Summer of Code: PGPainless 2.0

vanitasvitae's blog » englisch | 17:21, Wednesday, 06 June 2018

In previous posts, I mentioned that I forked Bouncy-GPG to create PGPainless, which will be my simple-to-use OX/OpenPGP API. I have some news regarding that, since I made a radical decision.

I’m not going to fork Bouncy-GPG anymore; instead I will write my own OpenPGP library based on BouncyCastle. The new PGPainless will be more suitable for the OX use case. The main reason I did this was that Bouncy-GPG followed a pattern where the user would have to know whether an incoming message was encrypted or signed or both. This pattern does not apply to OX very well, since you don’t know what content an incoming message has. This was a deliberate decision made by the OX authors to circumvent certain attacks.

Ironically, another reason why I decided to write my own library is Bouncy-GPG’s many JUnit tests. I tried to make some changes, which resulted in breaking tests all the time. This might of course be a bad sign, indicating that my changes are bad, but in my case I’m pretty sure that the tests are just a little bit oversensitive :) For me it would be less work/more fun to create my own library than trying to fix Bouncy-GPG’s JUnit tests.

The new PGPainless is already capable of generating various OpenPGP keys, encrypting and signing data, as well as decrypting messages. I noticed that using elliptic curve encryption keys, I was able to reduce the size of (short) messages by a factor of two. So recommending EC keys to implementors might be worth a thought. There is still a little bug in my code which causes signature verification to fail, but I’ll find it – and I’ll kill it.

Today I spent nearly 3 hours debugging a small bug in the decryption code. It turns out that this code works as I intended,

PGPObjectFactory objectFactory = new PGPObjectFactory(encryptedBytes, fingerprintCalculator);
Object o = objectFactory.nextObject();

while this code does not:

PGPObjectFactory objectFactory = new PGPObjectFactory(encryptedBytes, fingerprintCalculator);
Object o = objectFactory.iterator().next();

The difference is subtle, but apparently deadly.

You can find the new PGPainless on my Gitea instance :)

Tuesday, 05 June 2018

Public Money Public Code: a good policy for FSFE and other non-profits?

DanielPocock.com - fsfe | 20:40, Tuesday, 05 June 2018

FSFE has been running the Public Money Public Code (PMPC) campaign for some time now, requesting that software produced with public money be licensed for public use under a free software license. You can request a free box of stickers and posters here (donation optional).

Many non-profits and charitable organizations receive public money directly from public grants and indirectly from the tax deductions given to their supporters. If the PMPC argument is valid for other forms of government expenditure, should it also apply to the expenditures of these organizations?

Where do we start?

A good place to start could be FSFE itself. Donations to FSFE are tax deductible in Germany, the Netherlands and Switzerland. Therefore, the organization is partially supported by public money.

Personally, I feel that for an organization like FSFE to be true to its principles and its affiliation with the FSF, it should be run without any non-free software or cloud services.

However, in my role as one of FSFE's fellowship representatives, I proposed a compromise: rather than my preferred option, an immediate and outright ban on non-free software in FSFE, I simply asked the organization to keep a register of dependencies on non-free software and services, by way of a motion at the 2017 general assembly:

The GA recognizes the wide range of opinions in the discussion about non-free software and services. As a first step to resolve this, FSFE will maintain a public inventory on the wiki listing the non-free software and services in use, including details of which people/teams are using them, the extent to which FSFE depends on them, a list of any perceived obstacles within FSFE for replacing/abolishing each of them, and for each of them a link to a community-maintained page or discussion with more details and alternatives. FSFE also asks the community for ideas about how to be more pro-active in spotting any other non-free software or services creeping into our organization in future, such as a bounty program or browser plugins that volunteers and staff can use to monitor their own exposure.

Unfortunately, it failed to receive enough votes (minutes: item 24, votes: 0 for, 21 against, 2 abstentions).

In a blog post on the topic of using proprietary software to promote freedom, FSFE's Executive Director Jonas Öberg used the metaphor of taking a journey. Isn't a journey more likely to succeed if you know your starting point? Wouldn't it be even better having a map that shows which roads are a dead end?

In any IT project, it is vital to understand your starting point before changes can be made. A register like this would also serve as a good model for other organizations hoping to secure their own freedoms.

For a community organization like FSFE, there is significant goodwill from volunteers and other free software communities. A register of exposure to proprietary software would allow FSFE to crowdsource solutions from the community.

Back in 2018

I'll be proposing the same motion again for the 2018 general assembly meeting in October.

If you can see something wrong with the text of the motion, please help me improve it so it may be more likely to be accepted.

Offering a reward for best practice

I've observed several discussions recently where people have questioned the impact of FSFE's campaigns. How can we measure whether the campaigns are having an impact?

One idea may be to offer an annual award for other non-profit organizations, outside the IT domain, who demonstrate exemplary use of free software in their own organization. An award could also be offered for some of the individuals who have championed free software solutions in the non-profit sector.

An award program like this would help to showcase best practice and provide proof that organizations can run successfully using free software. Seeing compelling examples of success makes it easier for other organizations to believe freedom is not just a pipe dream.

Therefore, I hope to propose an additional motion at the FSFE general assembly this year, calling for an award program to commence in 2019 as a new phase of the PMPC campaign.

Please share your feedback

Any feedback on this topic is welcome through the FSFE discussion list. You don't have to be a member to share your thoughts.

Monday, 04 June 2018

Free software, GSoC and ham radio in Kosovo

DanielPocock.com - fsfe | 08:06, Monday, 04 June 2018

After the excitement of OSCAL in Tirana, I travelled up to Prishtina, Kosovo, with some of Debian's new GSoC students. We don't always have so many students participating in the same location. Being able to meet with all of them for a coffee each morning gave some interesting insights into the challenges people face in these projects and things that communities can do to help new contributors.

On the evening of 23 May, I attended a meeting at the Prishtina hackerspace where a wide range of topics, including future events, were discussed. There are many people who would like to repeat the successful Mini DebConf and Fedora Women's Day events from 2017. A wiki page has been created for planning but no date has been confirmed yet.

On the following evening, 24 May, we had a joint meeting with SHRAK, the ham radio society of Kosovo, at the hackerspace. Acting director Vjollca Caka gave an introduction to the state of ham radio in the country and then we set up a joint demonstration using the equipment I brought for OSCAL.

On my final night in Prishtina, we had a small gathering for dinner: Debian's three GSoC students, Elena, Enkelena and Diellza; Renata Gegaj, who completed Outreachy with the GNOME community; and Qendresa Hoti, one of the organizers of last year's very successful hackathon for women in Prizren.

Promoting free software at Doku:tech, Prishtina, 9-10 June 2018

One of the largest technology events in Kosovo, Doku:tech, will take place on 9-10 June. It is not too late for people from other free software communities to get involved, please contact the FLOSSK or Open Labs communities in the region if you have questions about how you can participate. A number of budget airlines, including WizzAir and Easyjet, now have regular flights to Kosovo and many larger free software organizations will consider requests for a travel grant.

Friday, 01 June 2018

Summer of Code: Command Line OX Client!

vanitasvitae's blog » englisch | 15:02, Friday, 01 June 2018

As I stated earlier, I am working on a small XMPP command line test client, which is capable of sending and receiving OpenPGP encrypted messages. I just published a first version :)

Creating command line clients with Smack is super easy. You basically just create a connection, instantiate the manager classes of the features you want to use, and create some kind of read-execute-print loop, as sketched below.
Last year I demonstrated how to create an OMEMO-capable client in 200 lines of code. The new client follows pretty much the same scheme.
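For illustration, the skeleton of such a client could look like the following sketch (the jids and password are placeholders; error handling is omitted):

import java.util.Scanner;

import org.jivesoftware.smack.AbstractXMPPConnection;
import org.jivesoftware.smack.chat2.Chat;
import org.jivesoftware.smack.chat2.ChatManager;
import org.jivesoftware.smack.tcp.XMPPTCPConnection;
import org.jxmpp.jid.EntityBareJid;
import org.jxmpp.jid.impl.JidCreate;

public class MiniClient {
    public static void main(String[] args) throws Exception {
        // Create a connection and log in.
        AbstractXMPPConnection connection = new XMPPTCPConnection("alice@example.org", "secret");
        connection.connect().login();

        // Instantiate the manager classes of the features we want to use.
        ChatManager chatManager = ChatManager.getInstanceFor(connection);
        chatManager.addIncomingListener((from, message, chat) ->
                System.out.println(from + ": " + message.getBody()));

        // A tiny read-execute-print loop: every typed line is sent to one contact.
        EntityBareJid contact = JidCreate.entityBareFrom("bob@example.org");
        Chat chat = chatManager.chatWith(contact);
        Scanner scanner = new Scanner(System.in);
        while (scanner.hasNextLine()) {
            chat.send(scanner.nextLine());
        }
        connection.disconnect();
    }
}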

The client offers some basic features like adding contacts to the roster, as well as obviously OX-related features like displaying fingerprints; generation, restoration and backup of key pairs; and of course encryption and decryption of messages. Note that up to this point I haven’t implemented any form of trust management. For now, my implementation considers all keys whose fingerprints are published in the metadata node as trusted.

You can find the client here. Feel free to try it out, instructions on how to build it are also found in the repository.

Happy Hacking!

Thursday, 31 May 2018

Summer of Code: Polishing the API

vanitasvitae's blog » englisch | 15:16, Thursday, 31 May 2018

The third week of coding is nearing its end and I’m quite happy with how my project has turned out so far.

The last two days I was ill, so I haven’t got anything done during that period, but since I started my work ahead of time during the bonding period, I think I can compensate for that :)
Anyway, this week I created a second Manager class as another entry point to the API. This one is specifically targeted at the Instant Messaging use-case of XEP-0374. It provides methods to easily start encrypted chats with contacts and register listeners for incoming chat messages.

I’m still not 100% pleased with how I’m handling exceptions. PGPainless so far only throws a single type of exception, which might make it hard to determine what exactly went wrong. This is something I have to change in the future.

Another thing that bothers me about PGPainless is the fact that I have to know how an OpenPGP message is constructed in order to process it. I have to know that a message is encrypted and signed in order to decrypt and verify it.
XEP-0373 does not specify any marker that says “the following message is encrypted and signed”, a design decision made in order to counter certain types of attacks. So I have to modify PGPainless to provide a method that can process arbitrary OpenPGP messages and tells me afterwards whether the message was signed, and so on.
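
What I have in mind is a single entry point that hides the message structure from the caller and reports it back after processing. A purely hypothetical sketch (none of these names exist in PGPainless yet):

// Hypothetical API: process an arbitrary OpenPGP message and learn afterwards
// how it was constructed. All names here are made up for illustration.
OpenPgpMessageResult result = PGPainless.processMessage(encryptedIn, secretKeys, trustedKeys);

String body = result.getPlaintext();
if (result.wasEncrypted() && result.wasSignedByTrustedKey()) {
    // safe to treat as a confidential, authenticated message
}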

Compared to last year's project, I spent way more time documenting my code this time. Nearly every public method has a beautiful green block of javadoc above its signature, documenting what it does and how it should be used.
What I could do better, though, are tests. Last year my focus was on creating good JUnit and integration tests, while this time I only have the bare minimum. I'll try to go through my API together with Florian next week to find rough edges and afterwards create some more tests.

Happy Hacking!

Friday, 25 May 2018

Summer of Code: Advancing the prototype

vanitasvitae's blog » englisch | 14:57, Friday, 25 May 2018

It has been a week since my last blog post, so it is time for an update.

I successfully tested my OX client against an experimental Gajim plugin written by Philip Hörist. Big thanks for his help during the testing :)

My implementation can now back up the user's secret key in a private PubSub node, as well as restore it from there. This was hugely useful during testing, as I don't have a persistent store implementation yet.
My next steps will be to implement a solution for persisting keys, as well as some kind of trust management. Florian suggested implementing the TOFU (trust on first use) trust model, sketched below.
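
The TOFU idea is simple enough to sketch in a few lines. Here is a toy version using plain string fingerprints and an in-memory map (real code would persist the map and surface a fingerprint mismatch to the user as a conflict):

import java.util.HashMap;
import java.util.Map;

import org.jxmpp.jid.BareJid;

public class TofuTrustStore {
    // maps a contact to the fingerprint we saw on first use
    private final Map<BareJid, String> knownFingerprints = new HashMap<>();

    public boolean isTrusted(BareJid contact, String fingerprint) {
        // trust on first use: remember the first fingerprint we encounter
        String known = knownFingerprints.putIfAbsent(contact, fingerprint);
        // later keys for the same contact must match the remembered one
        return known == null || known.equals(fingerprint);
    }
}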

PGPainless has a key selection strategy which selects keys based on the UID. I will have to change this to use key fingerprints instead, as I noticed that a user mallory@malware.sys could publish a key carrying her own uid as well as the uid of juliet@capulet.lit. In that case my implementation would encrypt the message to Mallory's key too, as it also carries Juliet's uid. Going with fingerprints instead makes the system more secure.
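
A fingerprint-based selection could look roughly like this (a sketch against the Bouncycastle API; exception handling omitted):

import java.util.Arrays;
import java.util.Iterator;

import org.bouncycastle.openpgp.PGPPublicKey;
import org.bouncycastle.openpgp.PGPPublicKeyRing;
import org.bouncycastle.openpgp.PGPPublicKeyRingCollection;

public static PGPPublicKey selectByFingerprint(PGPPublicKeyRingCollection collection,
                                               byte[] expectedFingerprint) {
    Iterator<PGPPublicKeyRing> rings = collection.getKeyRings();
    while (rings.hasNext()) {
        Iterator<PGPPublicKey> keys = rings.next().getPublicKeys();
        while (keys.hasNext()) {
            PGPPublicKey key = keys.next();
            // fingerprints are unique, while UIDs can be claimed by anyone
            if (Arrays.equals(key.getFingerprint(), expectedFingerprint)) {
                return key;
            }
        }
    }
    return null; // no key with that fingerprint known
}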

XEP-0373 had some typos and was missing some examples, for which I submitted fixes. One change I made is a breaking change, so we will have to see whether it gets merged in the coming days or is delayed to be merged together with later breaking modifications.

That’s it for now :)

Happy Hacking!

Wednesday, 23 May 2018

CentOS Bootstrap

Evaggelos Balaskas - System Engineer | 20:28, Wednesday, 23 May 2018

CentOS 6

This method has been suggested for building a container image from your current CentOS system.

 

In my case, I need to remotely upgrade a running CentOS 6 system to a clean CentOS 7 on a test VPS, without opening the VNC console, attaching a new ISO, etc.

I am rather lucky, as I have a clean extra partition on this VPS, so I will follow the process below to remotely install a clean CentOS 7 onto that partition, then add a new grub entry and boot into it.

 

Current OS

# cat /etc/redhat-release
CentOS release 6.9 (Final)

 

Format partition

format & mount the partition:

 mkfs.ext4 -L rootfs /dev/vda5
 mount /dev/vda5 /mnt/

 

InstallRoot

Type:

# yum -y groupinstall "Base" --releasever 7 --installroot /mnt/ --nogpgcheck

 

Test

When finished, test it:

mount --bind /dev/  /mnt/dev/
mount --bind /sys/  /mnt/sys/
mount --bind /proc/ /mnt/proc/

chroot /mnt/

bash-4.2#  cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)

It works!

 

Root Password

inside the chroot environment:

bash-4.2# passwd
Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

bash-4.2# exit

 

Grub

Add the new grub entry for CentOS 7:

title CentOS 7
        root (hd0,4)
        kernel /boot/vmlinuz-3.10.0-862.2.3.el7.x86_64 root=/dev/vda5 ro rhgb LANG=en_US.UTF-8
        initrd /boot/initramfs-3.10.0-862.2.3.el7.x86_64.img

By changing the default boot entry from 0 to 1:

default=0

to

default=1

our system will boot into CentOS 7 on the next reboot!

 

Tuesday, 22 May 2018

Restrict email addresses for sending emails

Evaggelos Balaskas - System Engineer | 17:12, Tuesday, 22 May 2018

Prologue

 

Maintaining a (public) service can sometimes be troublesome. In the case of an email service, you often need to suspend or restrict users for reasons like SPAM, SCAM or phishing. You have to deal with inactive or even compromised accounts. Protecting your infrastructure means protecting your active users and the service itself. In this article I'll propose a way to restrict specific accounts from sending email, so that they get a bounce message explaining why their email was not sent.

 

Reading Material

The reference documentation for using a Directory Service (LDAP) as our user backend with Postfix:

 

ldap

LDAP

In this post, we will not get into openldap internals but as reference I’ll show an example user account (this is from my working test lab).

 

dn: uid=testuser2,ou=People,dc=example,dc=org
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: posixAccount
mail: testuser2@example.org
smtpd_sender_restrictions: true
cn: Evaggelos Balaskas
sn: Balaskas
givenName: Evaggelos
uidNumber: 99
gidNumber: 12
uid: testuser2
homeDirectory: /storage/vhome/%d/%n
userPassword: XXXXXXXXXX

As you can see, we have a custom LDAP attribute:

smtpd_sender_restrictions: true

Keep that in mind for now.

 

Postfix

The default value of smtpd_sender_restrictions is empty; that means by default the mail server has no sender restrictions. Depending on the policy, we can either whitelist or blacklist in Postfix restrictions. For the purpose of this blog post, we will only restrict (blacklist) specific user accounts.

 

ldap_smtpd_sender_restrictions

To do that, let’s create a new file that will talk to our openldap and ask for that specific ldap attribute.

ldap_smtpd_sender_restrictions.cf

server_host = ldap://localhost
server_port = 389
search_base = ou=People,dc=example,dc=org
query_filter = (&(smtpd_sender_restrictions=true)(mail=%s))
result_attribute = uid
result_filter = uid
result_format = REJECT This account is not allowed to send emails, plz talk to abuse@example.org
version = 3
timeout = 5

This is an anonymous bind, as we do not search for any special attribute like password.

 

Status Codes

The default status code will be: 554 5.7.1
Take a look here for more info: RFC 3463 - Enhanced Mail System Status Codes

 

Test it

# postmap -q testuser2@example.org ldap:/etc/postfix/ldap_smtpd_sender_restrictions.cf
REJECT This account is not allowed to send emails, plz talk to abuse@example.org

Add -v to increase verbosity:

# postmap -v -q testuser2@example.org ldap:/etc/postfix/ldap_smtpd_sender_restrictions.cf

 

Possible Errors

postmap: fatal: unsupported dictionary type: ldap

Check your postfix setup with postconf -m . The result should be something like this:

btree
cidr
environ
fail
hash
internal
ldap
memcache
nis
proxy
regexp
socketmap
static
tcp
texthash
unix

If not, you need to set up Postfix to support the ldap dictionary type.

 

smtpd_sender_restrictions

Modify main.cf to add ldap_smtpd_sender_restrictions.cf:

# applied in the context of the MAIL FROM
smtpd_sender_restrictions =
        check_sender_access ldap:/etc/postfix/ldap_smtpd_sender_restrictions.cf

and reload postfix

# postfix reload

If you keep logs, tail them to see any errors.

 

Thunderbird

[image: smtpd_sender_restrictions rejection shown in Thunderbird]

 

Logs

May 19 13:20:26 centos6 postfix/smtpd[20905]:
NOQUEUE: reject: RCPT from XXXXXXXX[XXXXXXXX]: 554 5.7.1 <testuser2@example.org>:
Sender address rejected: This account is not allowed to send emails, plz talk to abuse@example.org;
from=<testuser2@example.org> to=<postmaster@example.org> proto=ESMTP helo=<[192.168.0.13]>

Tag(s): postfix, ldap

Do European Governments Publish Open Source Software?

Blog – Think. Innovation. | 12:55, Tuesday, 22 May 2018

From time to time I come across news articles about Governmental bodies in Europe adopting the use of Open Source Software. This seems to be a slowly increasing trend. But if European Governments make software for themselves, or are having it made for them, do they publish that software as Open Source?

This was a question that came up in a meeting at one of my clients. To find an answer, I asked my friends at the FSFE NL-team and did a Quick Scan. Here are the results.

The short answer: Yes, they do!

The longer answer: read on.

Note: these results are by no means complete, they are just preliminary evidence that we found in a quick search.

*** Update 24 May 2018: added “GitHub and Government” link, incoming tip from FSFE NL.

GitHub and Government

GitHub has an entire subsite on the use of GitHub in Government. It lists, amongst other things, all governmental bodies in the world that use GitHub (as far as they know). See the “Who's using GitHub?” page here; it jumps directly to The Netherlands.

The Netherlands

We found the following initiatives in The Netherlands:

  • ICTU: various coding tools and infrastructural components, available on the ICTU GitHub Page
  • Ministry of Internal Affairs:
    • All source code of the software for Operation Base Registration for People (operatie Basisregistratie Personen, BRP)
    • Area Law, available on GitLab
    • Data.overheid.nl, The Dutch National Data portal. “This Git provides all code used for the portal that can be open sourced.”
    • Geonovum, a Governmental foundation aiming to make the government more effective with geo information, with code on GitHub.
    • House of Representatives (Tweede Kamer), 4 GitHub repos with 1 actually containing something, an extension to an existing video player, latest update in 2015
  • City of Amsterdam: “The City of Amsterdam is both a user and contributor to and developer of, Open Source projects. Our projects range from a 360° panorama processing system to an OAuth 2.0 server written in Go.”, consisting of a set of repos and a GitHub Pages website.
  • Public Service on the Map, a centralized service providing geo data in national interest, repos on GitHub.
  • Police, various infrastructural software components, on GitHub.
  • Kadaster, available on GitHub.

Besides these Open Source repositories we also found two reports on Open Source relating to the Dutch Government, one by Gartner and the other on the possibility of an Expertise Center on Open Source Software. Both reports are in Dutch.

Europe

Joinup is an initiative financed by the European Commission, meant for “interoperability solutions”. I did not delve into it, but it looks like a code repository for various kinds of solutions. A bit like GitHub, but clumsier in its user interface.

Spain

https://github.com/ctt-gob-es/datos.gob.es

France

The source code of the income tax application, there is a news article in English.

Belgium

An Open Source Web Publishing Platform for police forces, created by the company Timble and used by the Belgian Police.

To be continued?

It would be interesting to do a complete scan and analysis or even a (maturity) benchmark of European Governments publishing Open Source Software. I am sure things can be learned from each other and “good practices” can emerge. At the moment the FSFE is doing their Public Money Public Code Campaign, pushing the Open Sourcing agenda forward.

And the United States Government is way ahead of Europe in publishing Open Source Software, have a look at this 18f page as a starting point.

 

Monday, 21 May 2018

OSCAL'18 Debian, Ham, SDR and GSoC activities

DanielPocock.com - fsfe | 20:44, Monday, 21 May 2018

Over the weekend I've been in Tirana, Albania for OSCAL 2018.

Crowdfunding report

The crowdfunding campaign to buy hardware for the radio demo was successful. The gross sum received was GBP 110.00; after Paypal fees of GBP 6.48, GBP 103.52 remained, which came to EUR 118.29 after currency conversion. Here is a complete list of transaction IDs for transparency, so you can see that if you donated, your contribution was included in the total I have reported in this blog. Thank you to everybody who made this a success.

The funds were used to purchase an Ultracell UCG45-12 sealed lead-acid battery from Tashi in Tirana, here is the receipt. After OSCAL, the battery is being used at a joint meeting of the Prishtina hackerspace and SHRAK, the amateur radio club of Kosovo on 24 May. The battery will remain in the region to support any members of the ham community who want to visit the hackerspaces and events.

Debian and Ham radio booth

Local volunteers from Albania and Kosovo helped run a Debian and ham radio/SDR booth on Saturday, 19 May.

The antenna was erected as a folded dipole with one end joined to the Tirana Pyramid and the other end attached to the marquee sheltering the booths. We operated on the twenty meter band using an RTL-SDR dongle and upconverter for reception and a Yaesu FT-857D for transmission. An MFJ-1708 RF Sense Switch was used for automatically switching between the SDR and transceiver on PTT and an MFJ-971 ATU for tuning the antenna.

I successfully made contact with 9A1D, a station in Croatia. Enkelena Haxhiu, one of our GSoC students, made contact with Z68AA in her own country, Kosovo.

Anybody hoping that Albania was a suitably remote place to hide from media coverage of the British royal wedding would have been disappointed as we tuned in to GR9RW from London and tried unsuccessfully to make contact with them. Communism and royalty mix like oil and water: if a deceased dictator was already feeling bruised about an antenna on his pyramid, he would probably enjoy water torture more than a radio transmission celebrating one of the world's most successful hereditary monarchies.

A versatile venue and the dictator's revenge

It isn't hard to imagine communist dictator Enver Hoxha turning in his grave at the thought of his pyramid being used for an antenna for communication that would have attracted severe punishment under his totalitarian regime. Perhaps Hoxha had imagined the possibility that people may gather freely in the streets: as the sun moved overhead, the glass facade above the entrance to the pyramid reflected the sun under the shelter of the marquees, giving everybody a tan, a low-key version of a solar death ray from a sci-fi movie. Must remember to wear sunscreen for my next showdown with a dictator.

The security guard stationed at the pyramid for the day was kept busy chasing away children and more than a few adults who kept arriving to climb the pyramid and slide down the side.

Meeting with Debian's Google Summer of Code students

Debian has three Google Summer of Code students in Kosovo this year. Two of them, Enkelena and Diellza, were able to attend OSCAL. Albania is one of the few countries they can visit easily and OSCAL deserves special commendation for the fact that it brings otherwise isolated citizens of Kosovo into contact with an increasingly large delegation of foreign visitors who come back year after year.

We had some brief discussions about how their projects are starting and things we can do together during my visit to Kosovo.

Workshops and talks

On Sunday, 20 May, I ran a workshop Introduction to Debian and a workshop on Free and open source accounting. At the end of the day Enkelena Haxhiu and I presented the final talk in the Pyramid, Death by a thousand chats, looking at how free software gives us a unique opportunity to disable a lot of unhealthy notifications by default.

L4Re: Textual Debugging Output on the Framebuffer

Paul Boddie's Free Software-related blog » English | 19:38, Monday, 21 May 2018

I have actually been in the process of drafting another article about writing device drivers to run within the L4 Runtime Environment (L4Re) on top of the Fiasco.OC microkernel, this being for the Ben NanoNote and Letux 400 notebook computers. That article started to trail behind a lot of the work being done, and there are a few loose ends to be tied up before I can finish it.

Meanwhile, on the way towards some kind of achievement with L4Re, confounded somewhat by the sometimes impenetrable APIs, I managed to eventually get something working that I had thought would have been one of the first things to demonstrate. When initially perusing the range of software in the “pkg” directory within the L4Re distribution, I saw a package called “fbterminal” providing a terminal program that shows itself on the framebuffer (or display).

I imagined being able to launch this on top of the graphical user interface multiplexer, Mag, and then have the “hello” program provide some output to this terminal. I even imagined having the terminal accept input from the keyboard, but we aren’t quite at that point, and that is where my other article comes in. Of course, I initially had no idea how to achieve this, and there needed to be a lot of work put in just to get back to this particular point of entry.

Now, however, the act of launching fbterminal and having it work is fairly straightforward. A few additional packages are required, but the framebuffer works satisfactorily as far as the other components are concerned, and the result will be a blank region of the screen with the terminal showing precisely nothing. Obviously, we want it to show something in order to confirm that it is working. I had to get used to seeing this blank terminal for a while.

The intended companion to fbterminal for testing purposes is the hello program which merely writes output to what might be described as a logging destination. This particular output channel is usually the serial console for the device, which meant that when porting the system to the Ben and the Letux, the hello program was of no use to me. But now, with a framebuffer to show things on, and with a terminal that might be able to accept output from other things, it becomes interesting to see if the hello program can be persuaded to send its output elsewhere.

It was useful to investigate how the output from the hello program actually makes its way to its destination. Since it uses standard C library functions, there has to be a mechanism for those functions to use. As far as I know, these would typically involve various system calls concerning files and streams. A perusal of the sources dredged up an interesting symbol called “__rtld_l4re_env_posix_vfs_ops”. Further investigation led me to the L4Re virtual filesystem (Vfs) functionality and the following interesting files:

  • pkg/l4re-core/l4re_vfs/include/vfs.h
  • pkg/l4re-core/l4re_vfs/include/impl/vfs_impl.h

And these in turn led me to the virtual console (Vcon) functionality:

  • pkg/l4re-core/l4re_vfs/include/impl/vcon_stream.h
  • pkg/l4re-core/l4re_vfs/include/impl/vcon_stream_impl.h

It seems that standard output from the hello program goes via the standard C calls and Vfs functions and is packaged up and sent using the Vcon mechanisms to the logging destination, which is typically provided by the root task, Moe. Given that fbterminal understands the Vcon protocol and acts as a console server, there appeared to be some potential in looking at Vcon mechanisms more closely. It seemed that fbterminal might be able to take the place of Moe.

Indeed, the documentation offers some clues. In the description of the init process, Ned, a mention is made of a program loader configuration parameter called “log_fab” that indicates an object that can create a suitable logging destination. When starting a program, the program loader creates such an object using “log_fab” and presents it to the new program as a capability (or object reference).

However, this is not quite what we want because we don’t need anything else to be created: we already have fbterminal ready for us to use. I suppose something could be conjured up to act as a factory and provide a fbterminal instance, and maybe this is not too arduous in the Lua-based configuration environment used by Ned, but I wanted a more direct solution.

Contemplating this, I went and rediscovered the definitions used by Ned to support its configuration scripting (found in pkg/l4re-core/ned/server/src/ned.lua). Here, the workings of the “log_fab” mechanism can be found and studied. But what I started to favour was a way of just indicating a capability to the hello program and not have the loader create something else. This required a simple edit to one of the functions:

function App_env:log()
  Class.check(self, App_env);
  if self.loader.log_fab == nil or self.loader.log_fab.create == nil then
    error ("Starting an application without valid log factory", 4);
  end
  return self.loader.log_fab:create(Proto.Log, table.unpack(self.log_args));
end

Here, we want to ignore “log_fab” and just have our existing capability used instead. So, I introduced another clause to the if statement:

  if self.log_cap then
    return self.log_cap
  elseif self.loader.log_fab == nil or self.loader.log_fab.create == nil then
    error ("Starting an application without valid log factory", 4);
  end

Now, if we specify “log_cap” when starting a program, its logging messages should be directed to the referenced object instead. So, with this available to us, it becomes possible to adjust the way the hello program is started. First of all, we define the way fbterminal is set up and started:

local term = l:new_channel();

l:start({
    caps = {
      fb = mag_caps.svc:create(L4.Proto.Goos, "g=320x230+0+0", "barheight=10"),
      term = term:svr(),
    },
  },
  "rom/fbterminal");

Since fbterminal needs to “export” its console abilities using a capability called “term”, this needs to be indicated in the “caps” table. (It doesn’t matter what the local variable for the channel is called.) So, the hello program is defined accordingly:

l:start({
    log_cap = term,
  },
  "rom/hello");

Here, we make use of “log_cap” and allow the output to be directed to the terminal that has already been started. And the result is this:

fbterminal on the Ben NanoNote showing the hello program's output

And at long last, it becomes possible to see what programs are printing out to the log!

Thursday, 17 May 2018

Summer of Code: Bug found!

vanitasvitae's blog » englisch | 18:29, Thursday, 17 May 2018

BouncyCastle

The mystery has been solved! I finally found out, why the OpenPGP keys I generated for my project had a broken format. Turns out, there was a bug in BouncyCastle.
Big thanks to Heiko Stamer, who quickly identified the issue in the bug report I created for pgpdump, as well as Kazu Yamamoto and David Hook, who helped identify and confirm the issue.

The bug was that BouncyCastle, when exporting a secret key without a password, was appending 20 bytes of SHA1 hash after the secret key material. That is only supposed to happen when the key is in fact password protected. In the case of unprotected keys, BouncyCastle is supposed to add a two-byte checksum instead. BouncyCastle's wrong behaviour caused pgpdump to interpret random bytes as packet tags, which resulted in a wrong key id being printed out.

The relevant part of RFC-4880 is found in section 5.5.3:

      -If the string-to-key usage octet is zero or 255, then a two-octet
       checksum of the plaintext of the algorithm-specific portion (sum
       of all octets, mod 65536).  If the string-to-key usage octet was
       254, then a 20-octet SHA-1 hash of the plaintext of the
       algorithm-specific portion.
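
The two-octet checksum case is simple enough to write down; this is a direct transcription of the paragraph above, not BouncyCastle's actual code:

// Two-octet checksum per RFC 4880, section 5.5.3: sum of all plaintext octets
// of the algorithm-specific portion, mod 65536 (s2k usage octet 0 or 255).
public static byte[] twoOctetChecksum(byte[] algorithmSpecificPortion) {
    int sum = 0;
    for (byte b : algorithmSpecificPortion) {
        sum = (sum + (b & 0xff)) % 65536;
    }
    return new byte[] { (byte) (sum >> 8), (byte) (sum & 0xff) };
}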

Shortly after I filed a bug report for BouncyCastle, Vincent Breitmoser, one of the authors of XEP-0373 and XEP-0374, submitted a fix for the bug. This is a nice little example of how free software projects can work together to improve each other. Big thanks for that :)

Working OX Test Client!

I spent last night creating a command line chat client that can “speak” OX. Everything is a little rough around the edges, but the core functionality works.
The user has to perform actions like publishing and fetching keys by hand, but encrypted, signed messages can be exchanged. Having working code, I can now start to formulate a general API which will enable multiple OpenPGP back-ends; a first rough sketch follows below. I will spend some more time polishing that client up and eventually publish it in a separate git repository.
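
As a first rough sketch, such a back-end-agnostic API could be little more than an interface that each OpenPGP library implements behind the scenes (all names here are made up for illustration):

import java.io.IOException;
import java.util.Set;

// Hypothetical back-end interface: one implementation per OpenPGP library
// (e.g. one backed by PGPainless/Bouncycastle), all hidden behind smack-openpgp.
public interface OpenPgpProvider {

    // sign with the sender's key, encrypt to all recipients' keys
    byte[] signAndEncrypt(byte[] plaintext, String senderJid, Set<String> recipientJids)
            throws IOException;

    // decrypt with our secret key and verify the sender's signature
    byte[] decryptAndVerify(byte[] ciphertext, String senderJid)
            throws IOException;
}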

EFAIL

I totally forgot to talk about EFAIL in my last blog posts. It was a little shock to wake up on Monday, the first day of the coding phase, only to read sentences like “Are you okay?” or “Is the GSoC project in danger?” :D
I'm sure you have all read about the EFAIL attack somewhere in the media, so I'm not going into too much detail here (the EFF already did a great job *cough cough*). The EFAIL website describes the attack as follows:

“In a nutshell, EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs.”

Is EFAIL applicable to XMPP?
Probably not to the XEPs I'm implementing. In the case of email, it is relatively easy to prepend the image tag to the message. XEP-0373, however, specifies that the transported extension elements (e.g. the body of the message) are wrapped inside an additional extension element, which is then encrypted. Additionally, this element (e.g. <signcrypt/>) carries a random-length, random-content padding element, so it is very hard, if not impossible, for an attacker to guess where the actual body starts, and in turn where they'd have to insert an “extraction channel” (e.g. an image tag) into the message.

In legacy OpenPGP for XMPP (XEP-0027) it is theoretically possible to at least execute the first part of the EFAIL attack. An attacker could insert an image tag to turn the message into a link. However, external images are usually shared using XEP-0066 (Out of Band Data) by adding an x-element with the oob namespace to the message, which contains the URL to the image. Note that this element is added outside the body though, so we should be fine; the attack would only work if the user tried to open the linkified message in a browser :)

Another option for the attacker would be to attack XHTML-IM (XEP-0071) messages, but I think those do not support legacy OpenPGP in the first place. Also, XHTML-IM has been deprecated recently *phew*.

In the end, I’m by no means a security expert, so please do not quote me on my wild thoughts here :)
However, it is important to learn from this example and not make the same mistakes some email clients did.

Happy Hacking!

Wednesday, 16 May 2018

Summer of Code: Quick Update

vanitasvitae's blog » englisch | 11:21, Wednesday, 16 May 2018

I noticed that my blog posting frequency is substantially higher than last year. For that reason I’ll try to keep this post shorter.

Yesterday I implemented my first prototype code to encrypt and decrypt XEP-0374 messages! It can process incoming PubkeyElements (the published OpenPGP keys of other users) and create SigncryptElements which contain a signed and encrypted payload. On the receiving side it can also decrypt those messages and verify the signature.

I’m still puzzled about why I’m unable to dump the keys I generate using pgpdump. David Hook from Bouncycastle used my code to generate a key and it worked flawlessly on his machine, so I’m stumped for an answer…

I created a bug report about the issue on the pgpdump repository. I hope that we will get to the cause of the issue soon.

Changes to the schedule

In my original proposal I sketched out a timeline which now (that I'm already making huge steps) looks a little underwhelming. The plan was initially to work on Smack's PubSub API within the first two weeks.
Florian suggested that I should instead create a working prototype of my implementation as soon as possible, so I'm going to modify my schedule to meet the new criteria:

My new plan is to have a fully working prototype implementation at the time of the first evaluation (June 15th).
That prototype (implemented within a small command line test client) will be capable of the following things:

  • storing keys in a rudimentary form on disk
  • automatically creating keys if needed
  • publishing public keys via PubSub (see the sketch after this list)
  • fetching contacts' keys when needed
  • encrypting and signing messages
  • decrypting, verifying and displaying incoming messages
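
For the PubSub step, Smack's existing PubSub API should already cover most of what the prototype needs. A minimal sketch (assuming the classes from org.jivesoftware.smackx.pubsub as of Smack 4.2; the node name follows XEP-0373, but the payload element is heavily simplified):

// publish a (heavily simplified) key element to our PubSub key node
PubSubManager pubSubManager = PubSubManager.getInstance(connection);
LeafNode keyNode = pubSubManager.getOrCreateLeafNode("urn:xmpp:openpgp:0:public-keys");

// real code would publish the actual base64-encoded OpenPGP public key
SimplePayload payload = new SimplePayload("pubkey", "urn:xmpp:openpgp:0",
        "<pubkey xmlns='urn:xmpp:openpgp:0'><data>BASE64-ENCODED-KEY</data></pubkey>");
keyNode.send(new PayloadItem<>(payload));
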
The final goal is still to create a sane, modular implementation. I'm just slightly modifying the path that will take me there :)

Happy Hacking!

Monday, 14 May 2018

Summer of Code: The Plan. Act 1: OpenPGP, Part Two

vanitasvitae's blog » englisch | 19:42, Monday, 14 May 2018

The Coding Phase has begun! Unfortunately my first (official) day of coding was a bad start, as I barely added any new code. Instead I got stuck on a very annoying bug. On the bright side, I'm now writing a lengthy blog post for you all, whining about my issue in all depth. But first let's continue where we left off in this post.

In the last part of my OpenPGP TL;DR, I took a look at the different packet types of which an OpenPGP message may consist. In this post, I'll examine the structure of OpenPGP keys.

Just like an OpenPGP message, a key pair is made up of a variety of sub packets. I will now list some of them.

Key Material

First of all, there are four different types of key material:

  • Public Key
  • Public Sub Key
  • Secret Key
  • Secret Sub Key

It should be clear what Public and Secret keys are.
OpenPGP defines a way to create key hierarchies, where sub keys belong to master keys. A typical use-case for this is when a user has multiple devices but doesn't want to risk losing their main key pair in case a device gets stolen. In such a case, they would create one super key (pair) which has a bunch of sub keys, one for every device of the user. The super key, which represents the identity of the user, is used to sign all sub keys. It then gets locked away and only the sub keys are used. The advantage of this model is that all sub keys clearly belong to the user, as all of them are signed by the master key. If one device gets stolen, the user can simply revoke that single key without losing all their reputation, as the super key is still valid.

I still have to determine whether my implementation should support sub keys, and if so, how that would work.

Signature

This model brings us directly to another very important type of sub packet: the Signature Packet. A signature can be seen as a statement. How that statement is to be interpreted is defined by the type of the signature.
There are currently 3 different types of signatures, all with different meanings:

  • Certification Signature
    Signatures are often used as attestations. If I sign your PGP key, for example, I attest to other users that I have, to some degree, verified that you are the person you claim to be and that the key belongs to you.
  • Sub Key Binding Signature
    If a key has such a signature, the key is a sub key of the key that made the signature. In other words: the signature is a statement that the key that made the signature owns the signed key.
  • Direct-Key Signature
    This type of signature is mostly used to bind additional information in the form of Signature Sub Packets to a key. We will see later what type of information that may be.

Issuing signatures should not be taken too lightly. In the end, a signature is a statement which will be interpreted. Creating trust signatures on keys without verifying their authenticity, for example, may seriously harm ecosystems like the Web of Trust.

Signature Sub Packets

What’s the purpose of Signature Sub Packets?
Signature Sub Packets are used to bind information to a key. Examples are:

  • Creation and expiration dates.
    It might be useful to know how long a key should be in use. For that purpose the owner of the key can set an expiration date, after which the key should no longer be used.
    It is also possible to let signatures expire.
  • Preferred Algorithms
    The user can state which algorithms (hashing, compression and symmetric encryption) they want their interlocutors to use when creating a message.
  • Revocation status
    It might be necessary to revoke a key that has been compromised. That can be done by placing a signature on it, stating that the key should no longer be used.
  • Trust Signature
    If a signature contains a trust signature packet, the signature is to be interpreted as an attestation of trust in that key.
  • Primary User ID
    A user can specify the main user id of a key.

User ID

User IDs state which identity of a user a key belongs to. That might be a real name or a company's name, an email address, or, in our case, a Jabber ID.
Sub keys can have different user IDs; that way a user can differentiate between different roles.

Trust

Trust can not only be expressed by a Trust Signature (mentioned before), but also by a Trust packet. The difference is that the signature is cryptographically backed, while a trust packet is merely an indicator.
Trust packets are mostly used by a user to keep track of which of their contacts' keys they trust themselves.

There is currently no XEP specifying how trust decisions of a user are synchronized across multiple devices in the context of XMPP. #FutureWork? :)

Bouncycastle and (the not so painless) PGPainless

As I mentioned in my very last post, I was able to generate an OpenPGP key pair using GnuPG (using the “--allow-freeform-uid” flag to allow the uid format used by XMPP). The next step was trying to generate keys on my own using Bouncycastle. Bouncy-gpg (the library I forked into PGPainless) does not offer convenient methods for creating keys, so that's one feature I'll add to PGPainless and hopefully upstream to Bouncy-gpg. I already created some basic builder structure for creating OpenPGP key pairs using RSA. In order to generate a key pair, the user would do this:

PGPSecretKeyRing secRing = BouncyGPG.createKeyPair()
        .withRSAKeys()
        .ofSize(PublicKeySize.RSA._2048)
        .forIdentity("xmpp:average@best.net")
        .withPassphrase("monkey123")
        .build()
        .generateSecretKeyRing();

Pretty easy, right? Behind the scenes, PGPainless is generating the key pair using the following code:

KeyPairGenerator pbkcGenerator = KeyPairGenerator.getInstance(
        BuildPGPKeyGeneratorAPI.this.keyType, PROVIDER);
pbkcGenerator.initialize(BuildPGPKeyGeneratorAPI.this.keySize);

// Underlying public-key-cryptography key pair
KeyPair pbkcKeyPair = pbkcGenerator.generateKeyPair();

// hash calculator
PGPDigestCalculator calculator = new JcaPGPDigestCalculatorProviderBuilder()
        .setProvider(PROVIDER)
        .build()
        .get(HashAlgorithmTags.SHA1);

// Form PGP key pair //TODO: Generalize "PGPPublicKey.RSA_GENERAL" to allow other crypto
PGPKeyPair pgpPair = new JcaPGPKeyPair(PGPPublicKey.RSA_GENERAL, pbkcKeyPair, new Date());

// Signer for creating self-signature
PGPContentSignerBuilder signer = new JcaPGPContentSignerBuilder(
        pgpPair.getPublicKey().getAlgorithm(), HashAlgorithmTags.SHA256);

// Encryptor for encrypting the secret key
PBESecretKeyEncryptor encryptor = passPhrase == null ?
        null : // unencrypted key pair, otherwise AES-256 encrypted
        new JcePBESecretKeyEncryptorBuilder(PGPEncryptedData.AES_256, calculator)
                .setProvider(PROVIDER)
                .build(passPhrase);

// Mimic GnuPG's signature sub packets
PGPSignatureSubpacketGenerator hashedSubPackets = new PGPSignatureSubpacketGenerator();

// Key flags
hashedSubPackets.setKeyFlags(false,
        KeyFlags.CERTIFY_OTHER
                | KeyFlags.SIGN_DATA
                | KeyFlags.ENCRYPT_COMMS
                | KeyFlags.ENCRYPT_STORAGE
                | KeyFlags.AUTHENTICATION);

// Encryption Algorithms
hashedSubPackets.setPreferredSymmetricAlgorithms(false, new int[]{
        PGPSymmetricEncryptionAlgorithms.AES_256.getAlgorithmId(),
        PGPSymmetricEncryptionAlgorithms.AES_192.getAlgorithmId(),
        PGPSymmetricEncryptionAlgorithms.AES_128.getAlgorithmId(),
        PGPSymmetricEncryptionAlgorithms.TRIPLE_DES.getAlgorithmId()
});

// Hash Algorithms
hashedSubPackets.setPreferredHashAlgorithms(false, new int[] {
        PGPHashAlgorithms.SHA_512.getAlgorithmId(),
        PGPHashAlgorithms.SHA_384.getAlgorithmId(),
        PGPHashAlgorithms.SHA_256.getAlgorithmId(),
        PGPHashAlgorithms.SHA_224.getAlgorithmId(),
        PGPHashAlgorithms.SHA1.getAlgorithmId()
});

// Compression Algorithms
hashedSubPackets.setPreferredCompressionAlgorithms(false, new int[] {
        PGPCompressionAlgorithms.ZLIB.getAlgorithmId(),
        PGPCompressionAlgorithms.BZIP2.getAlgorithmId(),
        PGPCompressionAlgorithms.ZIP.getAlgorithmId()
});

// Modification Detection
hashedSubPackets.setFeature(false, Features.FEATURE_MODIFICATION_DETECTION);

// Generator which the user can get the key pair from
PGPKeyRingGenerator ringGenerator = new PGPKeyRingGenerator(
        PGPSignature.POSITIVE_CERTIFICATION, pgpPair,
        BuildPGPKeyGeneratorAPI.this.identity, calculator,
        hashedSubPackets.generate(), null, signer, encryptor);

return ringGenerator;

Using the above code, I'm trying to create a key pair constructed in the same way as a key generated using GnuPG. I do this mainly to make sure that I don't have any errors in my code. Also, GnuPG is an implementation of OpenPGP with a lot of reputation. If I do what they do, chances are that I might do it right ;D

Unfortunately I'm not quite sure whether I'm successful with this method or not. To explain my uncertainty, let me show you the output of pgpdump, a tool used to analyse OpenPGP keys:

$pgpdump gnupg.sec
Old: Secret Key Packet(tag 5)(920 bytes)
    Ver 4 - new
    Public key creation time - Tue May  8 15:15:42 CEST 2018
    Pub alg - RSA Encrypt or Sign(pub 1)
    RSA n(2048 bits) - ...
    RSA e(17 bits) - ...
    RSA d(2046 bits) - ...
    RSA p(1024 bits) - ...
    RSA q(1024 bits) - ...
    RSA u(1024 bits) - ...
    Checksum - 3b 8c
Old: User ID Packet(tag 13)(23 bytes)
    User ID - xmpp:juliet@capulet.lit
Old: Signature Packet(tag 2)(334 bytes)
    Ver 4 - new
    Sig type - Positive certification of a User ID and Public Key packet(0x13).
    Pub alg - RSA Encrypt or Sign(pub 1)
    Hash alg - SHA256(hash 8)
    Hashed Sub: issuer fingerprint(sub 33)(21 bytes)
     v4 -    Fingerprint - 1d 01 8c 77 2d f8 c5 ef 86 a1 dc c9 b4 b5 09 cb 59 36 e0 3e
    Hashed Sub: signature creation time(sub 2)(4 bytes)
        Time - Tue May  8 15:15:42 CEST 2018
    Hashed Sub: key flags(sub 27)(1 bytes)
        Flag - This key may be used to certify other keys
        Flag - This key may be used to sign data
        Flag - This key may be used to encrypt communications
        Flag - This key may be used to encrypt storage
        Flag - This key may be used for authentication
    Hashed Sub: preferred symmetric algorithms(sub 11)(4 bytes)
        Sym alg - AES with 256-bit key(sym 9)
        Sym alg - AES with 192-bit key(sym 8)
        Sym alg - AES with 128-bit key(sym 7)
        Sym alg - Triple-DES(sym 2)
    Hashed Sub: preferred hash algorithms(sub 21)(5 bytes)
        Hash alg - SHA512(hash 10)
        Hash alg - SHA384(hash 9)
        Hash alg - SHA256(hash 8)
        Hash alg - SHA224(hash 11)
        Hash alg - SHA1(hash 2)
    Hashed Sub: preferred compression algorithms(sub 22)(3 bytes)
        Comp alg - ZLIB <RFC1950>(comp 2)
        Comp alg - BZip2(comp 3)
        Comp alg - ZIP <RFC1951>(comp 1)
    Hashed Sub: features(sub 30)(1 bytes)
        Flag - Modification detection (packets 18 and 19)
    Hashed Sub: key server preferences(sub 23)(1 bytes)
        Flag - No-modify
    Sub: issuer key ID(sub 16)(8 bytes)
        Key ID - 0xB4B509CB5936E03E
    Hash left 2 bytes - 87 ec
    RSA m^d mod n(2048 bits) - ...
        -> PKCS-1

Above you can see the structure of an OpenPGP RSA key generated by GnuPG. You can see its preferred algorithms, the XMPP UID of Juliet and so on. Now let's analyse a key generated using PGPainless.

$pgpdump pgpainless.sec
Old: Secret Key Packet(tag 5)(950 bytes)
    Ver 4 - new
    Public key creation time - Mon May 14 15:56:21 CEST 2018
    Pub alg - RSA Encrypt or Sign(pub 1)
    RSA n(2048 bits) - ...
    RSA e(17 bits) - ...
    RSA d(2046 bits) - ...
    RSA p(1024 bits) - ...
    RSA q(1024 bits) - ...
    RSA u(1023 bits) - ...
    Checksum - 6d 18
New: unknown(tag 48)(173 bytes)
Old: Signature Packet(tag 2)(until eof)
    Ver 213 - unknown

Unfortunately the output indicates an unknown packet tag, and it looks like something is broken. I'm not sure what's going on, but I suspect either an error in my implementation or a bug in Bouncycastle. I noticed that the output of pgpdump changes drastically if I change the first boolean value in any of the hashedSubPackets setter calls from false to true (that boolean represents whether the set value is “critical”, meaning whether the receiving implementation should throw an error in case the read property is unknown). If I do set it to true, the output looks even more disturbing and broken, since strange unicode symbols start to appear, indicating a bug. Unfortunately my mail to the Bouncycastle mailing list is still unanswered, although I must add that I wrote it only seven hours ago.

It is a real pity that it is so hard to find working example code that is not outdated :( If you can point me in the right direction, please let me know! You can find contact details on my Github page.

My next debugging steps will be to test whether an exported key can successfully be imported back into PGPainless, as well as into GnuPG. Apart from that, I will spend more time thinking about an API which allows different OpenPGP backends.
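
The Bouncycastle half of that round trip could look like this (a sketch reusing the secRing from the builder example above; on the GnuPG side, the armored export would simply be fed to gpg --import):

import java.io.ByteArrayInputStream;

import org.bouncycastle.openpgp.PGPSecretKeyRing;
import org.bouncycastle.openpgp.operator.jcajce.JcaKeyFingerprintCalculator;

// Export the generated secret key ring and parse it back in. If the encoding
// is broken, constructing the ring from the bytes should already fail here.
byte[] encoded = secRing.getEncoded();
PGPSecretKeyRing reRead = new PGPSecretKeyRing(
        new ByteArrayInputStream(encoded), new JcaKeyFingerprintCalculator());

// the key id must survive the round trip unchanged
assert reRead.getSecretKey().getKeyID() == secRing.getSecretKey().getKeyID();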

Happy Hacking!

A closer look at power and PowerPole

DanielPocock.com - fsfe | 19:25, Monday, 14 May 2018

The crowdfunding campaign has so far raised enough money to buy a small lead-acid battery, but hopefully, with another four days to go before OSCAL, we can reach the target of an AGM battery. In the interest of transparency, I will shortly publish a summary of the donations.

The campaign has been a great opportunity to publish some information that will hopefully help other people too. In particular, a lot of what I've written about power sources isn't just applicable for ham radio, it can be used for any demo or exhibit involving electronics or electrical parts like motors.

People have also asked various questions and so I've prepared some more details about PowerPoles today to help answer them.

OSCAL organizer urgently looking for an Apple MacBook PSU

In an unfortunate twist of fate while I've been blogging about power sources, one of the OSCAL organizers has a MacBook and the Apple-patented PSU conveniently failed just a few days before OSCAL. It is the 85W MagSafe 2 PSU and it is not easily found in Albania. If anybody can get one to me while I'm in Berlin at Kamailio World then I can take it to Tirana on Wednesday night. If you live near one of the other OSCAL speakers you could also send it with them.

If only Apple used PowerPole...

Why batteries?

The first question many people asked is why use batteries and not a power supply. There are two answers for this: portability and availability. Many hams like to operate their radios away from their home sometimes. At an event, you don't always know in advance whether you will be close to a mains power socket. Taking a battery eliminates that worry. Batteries also provide better availability in times of crisis: whenever there is a natural disaster, ham radio is often the first mode of communication to be re-established. Radio hams can operate their stations independently of the power grid.

Note that while the battery looks a lot like a car battery, it is actually a deep cycle battery, sometimes referred to as a leisure battery. This type of battery is often promoted for use in caravans and boats.

Why PowerPole?

Many amateur radio groups have already standardized on the use of PowerPole in recent years. The reason for having a standard is that people can share power sources or swap equipment around easily, especially in emergencies. The same logic applies when setting up a demo at an event where multiple volunteers might mix and match equipment at a booth.

WICEN, ARES / RACES and RAYNET-UK are some of the well known groups in the world of emergency communications and they all recommend PowerPole.

Sites like eBay and Amazon have many bulk packs of PowerPoles. Some are genuine, some are copies. In the UK, I've previously purchased PowerPole packs and accessories from sites like Torberry and Sotabeams.

The pen is mightier than the sword, but what about the crimper?

The PowerPole plugs for 15A, 30A and 45A are all interchangeable and they can all be crimped with a single tool. The official tool is quite expensive but there are many after-market alternatives like this one. It takes less than a minute to insert the terminal, insert the wire, crimp and make a secure connection.

Here are some packets of PowerPoles in every size:

Example cables

It is easy to make your own cables or to take any existing cables, cut the plugs off one end and put PowerPoles on them.

Here is a cable with banana plugs on one end and PowerPole on the other end. You can buy cables like this or if you already have cables with banana plugs on both ends, you can cut them in half and put PowerPoles on them. This can be a useful patch cable for connecting a desktop power supply to a PowerPole PDU:

Here is the Yaesu E-DC-20 cable used to power many mobile radios. It is designed for about 25A. The exposed copper section simply needs to be trimmed and then inserted into a PowerPole 30:

Many small devices have these round 2.1mm coaxial power sockets. It is easy to find a packet of the pigtails on eBay and attach PowerPoles to them (tip: buy the pack that includes both male and female connections for more versatility). It is essential to check that the devices are all rated for the same voltage: if your battery is 12V and you connect a 5V device, the device will probably be destroyed.

Distributing power between multiple devices

There are a wide range of power distribution units (PDUs) for PowerPole users. Notice that PowerPoles are interchangeable and in some of these devices you can insert power through any of the inputs. Most of these devices have a fuse on every connection for extra security and isolation. Some of the more interesting devices also have a USB charging outlet. The West Mountain Radio RigRunner range includes many permutations. You can find a variety of PDUs from different vendors through an Amazon search or eBay.

In the photo from last week's blog, I have the Fuser-6 distributed by Sotabeams in the UK (below, right). I bought it pre-assembled but you can also make it yourself. I also have a Windcamp 8-port PDU purchased from Amazon (left):

Despite all those fuses on the PDU, it is also highly recommended to insert a fuse in the section of wire coming off the battery terminals or PSU. It is easy to find maxi blade fuse holders on eBay and in some electrical retailers:

Need help crimping your cables?

If you don't want to buy a crimper or you would like somebody to help you, you can bring some of your cables to a hackerspace or ask if anybody from the Debian hams team will bring one to an event to help you.

I'm bringing my own crimper and some PowerPoles to OSCAL this weekend, if you would like to help us power up the demo there please consider contributing to the crowdfunding campaign.

Sunday, 13 May 2018

USBGuard

Evaggelos Balaskas - System Engineer | 18:42, Sunday, 13 May 2018

Prologue

Security

One of the most common security concerns (especially when traveling) is the attachment of unknown USB devices to our systems.

There are a few ways to protect your system.

 

Hardware Protection

 

Cloud Storage

More and more companies are now moving from local storage to cloud storage as a way to reduce the attack surface on systems:

IBM, a few days ago, banned portable storage devices.

 

Hot Glue on USB Ports

Also, we must not forget the old but powerful advice from security researchers & hackers:

disable the USB ports of a system by inserting glue, for example with a hot glue gun.

Problem solved!

 

USBGuard

I was reading the Red Hat 7.5 release notes and came upon usbguard:

 

USBGuard

The USBGuard software framework helps to protect your computer against rogue USB devices (a.k.a. BadUSB) by implementing basic whitelisting / blacklisting capabilities based on device attributes.

 

USB protection framework

So the main idea is that you run a daemon on your system that tracks the udev monitor. The idea seems similar to a USB kill switch, but in a more controlled manner. You can dynamically whitelist and/or blacklist devices and change the policy on such devices more easily. You can also do all of that via a graphical interface, although I will not cover it here.

 

Archlinux Notes

For Arch Linux users: you can find usbguard in the AUR (Arch User Repository)

AUR : usbguard

or you can try my custom PKGBUILD files

 

How to use usbguard

Generate Policy

The very first thing is to generate a policy with the currently attached USB devices.

sudo usbguard generate-policy

Below is an example output, showing my USB mouse & USB keyboard:

allow id 17ef:6019 serial "" name "Lenovo USB Optical Mouse" hash "WXaMPh5VWHf9avzB+Jpua45j3EZK6KeLRdPcoEwlWp4=" parent-hash "jEP/6WzviqdJ5VSeTUY8PatCNBKeaREvo2OqdplND/o=" via-port "3-4" with-interface 03:01:02

allow id 045e:00db serial "" name "Naturalxc2xae Ergonomic Keyboard 4000" hash "lwGc9o+VaG/2QGXpZ06/2yHMw+HL46K8Vij7Q65Qs80=" parent-hash "kv3v2+rnq9QvYI3/HbJ1EV9vdujZ0aVCQ/CGBYIkEB0=" via-port "1-1.5" with-interface { 03:01:01 03:00:00 }

The default policy for already attached USB devices is allow.

 

We can create our rules configuration file by:

sudo usbguard generate-policy > /etc/usbguard/rules.conf

 

Service

starting and enabling usbguard service via systemd:

systemctl start usbguard.service

systemctl enable usbguard.service

 

List of Devices

You can view the list of attached USB devices with:

sudo usbguard list-devices

 

Allow Device

Attaching a new USB device (in my case, my mobile phone):

$ sudo usbguard list-devices | grep -v allow

we will see that the default policy is to block it:

17: block id 12d1:107e serial "7BQDU17308005969" name "BLN-L21" hash "qq1bdaK0ETC/thKW9WXAwawhXlBAWUIowpMeOQNGQiM=" parent-hash "kv3v2+rnq9QvYI3/HbJ1EV9vdujZ0aVCQ/CGBYIkEB0=" via-port "2-1.5" with-interface { ff:ff:00 08:06:50 }

So we can allow it by:

sudo usbguard allow-device 17

then

sudo usbguard list-devices | grep BLN-L21

we can verify that is okay:

17: allow id 12d1:107e serial "7BQDU17308005969" name "BLN-L21" hash "qq1bdaK0ETC/thKW9WXAwawhXlBAWUIowpMeOQNGQiM=" parent-hash "kv3v2+rnq9QvYI3/HbJ1EV9vdujZ0aVCQ/CGBYIkEB0=" via-port "2-1.5" with-interface { ff:ff:00 08:06:50 }

 

Block USB on screen lock

The policy applied when you (or someone else) insert a new USB device is:

sudo usbguard get-parameter InsertedDevicePolicy
apply-policy

that is, to apply the default policy we have. There is a way to block or reject any new USB device while you have your screen locker on, as a newly inserted device may be a potential attack on your system. In theory, you insert USB devices while you are working on your system, not while your screen lock is on.

I use slock as my primary screen locker, via a keyboard shortcut. So the easiest way to dynamically change the default usbguard policy is via a shell wrapper:

vim /usr/local/bin/slock
#!/bin/sh

# ebal, Sun, 13 May 2018 10:07:53 +0300
POLICY_UNLOCKED="apply-policy"
POLICY_LOCKED="reject"

# function to revert the policy
revert() {
  usbguard set-parameter InsertedDevicePolicy ${POLICY_UNLOCKED}
}

trap revert SIGHUP SIGINT SIGTERM
usbguard set-parameter InsertedDevicePolicy ${POLICY_LOCKED}

/usr/bin/slock

# revert to the default policy after slock exits
revert

(you can find the same example on redhat’s blog post).

Friday, 11 May 2018

CentOS Dist Upgrade

Evaggelos Balaskas - System Engineer | 14:54, Friday, 11 May 2018

Upgrading CentOS 6.x to CentOS 7.x

 

Disclaimer: Create a recent backup of the system. This is an unofficial, unsupported procedure!

 

CentOS 6

CentOS release 6.9 (Final)
Kernel 2.6.32-696.16.1.el6.x86_64 on an x86_64

centos69 login: root
Password:
Last login: Tue May  8 19:45:45 on tty1

[root@centos69 ~]# cat /etc/redhat-release
CentOS release 6.9 (Final)

 

Pre Tasks

There are some tasks you can do to prevent unwanted results, like:

  • Disable selinux
  • Remove unnecessary repositories
  • Take a recent backup!

 

CentOS Upgrade Repository

Create a new centos repository:

cat > /etc/yum.repos.d/centos-upgrade.repo <<EOF
[centos-upgrade]
name=centos-upgrade
baseurl=http://dev.centos.org/centos/6/upg/x86_64/
enabled=1
gpgcheck=0
EOF

 

Install Pre-Upgrade Tool

First install the openscap version from dev.centos.org:

# yum -y install https://buildlogs.centos.org/centos/6/upg/x86_64/Packages/openscap-1.0.8-1.0.1.el6.centos.x86_64.rpm

then install the redhat upgrade tool:

# yum -y install redhat-upgrade-tool preupgrade-assistant-*

 

Import CentOS 7 PGP Key

# rpm --import http://ftp.otenet.gr/linux/centos/RPM-GPG-KEY-CentOS-7 

 

Mirror

to bypass errors like:

Downloading failed: invalid data in .treeinfo: No section: ‘checksums’

add the CentOS Vault URLs to the mirrorlist files:

 mkdir -pv /var/tmp/system-upgrade/base/ /var/tmp/system-upgrade/extras/  /var/tmp/system-upgrade/updates/

 echo http://vault.centos.org/7.0.1406/os/x86_64/       >  /var/tmp/system-upgrade/base/mirrorlist.txt
 echo http://vault.centos.org/7.0.1406/extras/x86_64/   >  /var/tmp/system-upgrade/extras/mirrorlist.txt
 echo http://vault.centos.org/7.0.1406/updates/x86_64/  >  /var/tmp/system-upgrade/updates/mirrorlist.txt 

These are enough to upgrade to 7.0.1406. You can add the mirrors below to upgrade to 7.5.1804:

More Mirrors

 echo http://ftp.otenet.gr/linux/centos/7.5.1804/os/x86_64/  >>  /var/tmp/system-upgrade/base/mirrorlist.txt
 echo http://mirror.centos.org/centos/7/os/x86_64/           >>  /var/tmp/system-upgrade/base/mirrorlist.txt 

 echo http://ftp.otenet.gr/linux/centos/7.5.1804/extras/x86_64/ >>  /var/tmp/system-upgrade/extras/mirrorlist.txt
 echo http://mirror.centos.org/centos/7/extras/x86_64/          >>  /var/tmp/system-upgrade/extras/mirrorlist.txt 

 echo http://ftp.otenet.gr/linux/centos/7.5.1804/updates/x86_64/  >>  /var/tmp/system-upgrade/updates/mirrorlist.txt
 echo http://mirror.centos.org/centos/7/updates/x86_64/           >>  /var/tmp/system-upgrade/updates/mirrorlist.txt 

 

Pre-Upgrade

preupg is actually a python script!

# yes | preupg -v 
Preupg tool doesn't do the actual upgrade.
Please ensure you have backed up your system and/or data in the event of a failed upgrade
 that would require a full re-install of the system from installation media.
Do you want to continue? y/n
Gathering logs used by preupgrade assistant:
All installed packages : 01/11 ...finished (time 00:00s)
All changed files      : 02/11 ...finished (time 00:18s)
Changed config files   : 03/11 ...finished (time 00:00s)
All users              : 04/11 ...finished (time 00:00s)
All groups             : 05/11 ...finished (time 00:00s)
Service statuses       : 06/11 ...finished (time 00:00s)
All installed files    : 07/11 ...finished (time 00:01s)
All local files        : 08/11 ...finished (time 00:01s)
All executable files   : 09/11 ...finished (time 00:01s)
RedHat signed packages : 10/11 ...finished (time 00:00s)
CentOS signed packages : 11/11 ...finished (time 00:00s)
Assessment of the system, running checks / SCE scripts:
001/096 ...done    (Configuration Files to Review)
002/096 ...done    (File Lists for Manual Migration)
003/096 ...done    (Bacula Backup Software)
...
./result.html
/bin/tar: .: file changed as we read it
Tarball with results is stored here /root/preupgrade-results/preupg_results-180508202952.tar.gz .
The latest assessment is stored in directory /root/preupgrade .
Summary information:
We found some potential in-place upgrade risks.
Read the file /root/preupgrade/result.html for more details.
Upload results to UI by command:
e.g. preupg -u http://127.0.0.1:8099/submit/ -r /root/preupgrade-results/preupg_results-*.tar.gz .

This must finish without any errors.

 

CentOS Upgrade Tool

We need to find out what the possible problems are when upgrading:

# centos-upgrade-tool-cli --network=7 \
          --instrepo=http://vault.centos.org/7.0.1406/os/x86_64/

 

Then we can force the upgrade to its latest version:

# centos-upgrade-tool-cli --force --network=7 \
          --instrepo=http://vault.centos.org/7.0.1406/os/x86_64/ \
          --cleanup-post

 

Output

setting up repos...
base                                                          | 3.6 kB     00:00
base/primary_db                                               | 4.9 MB     00:04
centos-upgrade                                                | 1.9 kB     00:00
centos-upgrade/primary_db                                     |  14 kB     00:00
cmdline-instrepo                                              | 3.6 kB     00:00
cmdline-instrepo/primary_db                                   | 4.9 MB     00:03
epel/metalink                                                 |  14 kB     00:00
epel                                                          | 4.7 kB     00:00
epel                                                          | 4.7 kB     00:00
epel/primary_db                                               | 6.0 MB     00:04
extras                                                        | 3.6 kB     00:00
extras/primary_db                                             | 4.9 MB     00:04
mariadb                                                       | 2.9 kB     00:00
mariadb/primary_db                                            |  33 kB     00:00
remi-php56                                                    | 2.9 kB     00:00
remi-php56/primary_db                                         | 229 kB     00:00
remi-safe                                                     | 2.9 kB     00:00
remi-safe/primary_db                                          | 950 kB     00:00
updates                                                       | 3.6 kB     00:00
updates/primary_db                                            | 4.9 MB     00:04
.treeinfo                                                     | 1.1 kB     00:00
getting boot images...
vmlinuz-redhat-upgrade-tool                                   | 4.7 MB     00:03
initramfs-redhat-upgrade-tool.img                             |  32 MB     00:24
setting up update...
finding updates 100% [=========================================================]
(1/323): MariaDB-10.2.14-centos6-x86_64-client.rpm            |  48 MB     00:38
(2/323): MariaDB-10.2.14-centos6-x86_64-common.rpm            | 154 kB     00:00
(3/323): MariaDB-10.2.14-centos6-x86_64-compat.rpm            | 4.0 MB     00:03
(4/323): MariaDB-10.2.14-centos6-x86_64-server.rpm            | 109 MB     01:26
(5/323): acl-2.2.51-12.el7.x86_64.rpm                         |  81 kB     00:00
(6/323): apr-1.4.8-3.el7.x86_64.rpm                           | 103 kB     00:00
(7/323): apr-util-1.5.2-6.el7.x86_64.rpm                      |  92 kB     00:00
(8/323): apr-util-ldap-1.5.2-6.el7.x86_64.rpm                 |  19 kB     00:00
(9/323): attr-2.4.46-12.el7.x86_64.rpm                        |  66 kB     00:00
...
(320/323): yum-plugin-fastestmirror-1.1.31-24.el7.noarch.rpm  |  28 kB     00:00
(321/323): yum-utils-1.1.31-24.el7.noarch.rpm                 | 111 kB     00:00
(322/323): zlib-1.2.7-13.el7.x86_64.rpm                       |  89 kB     00:00
(323/323): zlib-devel-1.2.7-13.el7.x86_64.rpm                 |  49 kB     00:00
testing upgrade transaction
rpm transaction 100% [=========================================================]
rpm install 100% [=============================================================]
setting up system for upgrade
Finished. Reboot to start upgrade.

 

Reboot

The upgrade procedure will download all rpm packages to a directory and create a new grub entry. Then, on reboot, the system will try to upgrade the distribution release to its latest version.
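
Before rebooting you can double-check that the new boot entry is in place; a hedged sketch, assuming CentOS 6's legacy GRUB config at /boot/grub/grub.conf and that the entry title mentions the upgrade tool:

# grep -i -B1 -A3 upgrade /boot/grub/grub.conf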

# reboot 

 

Upgrade

[Screenshots of the in-place upgrade running: centos6_7upgr.png, centos6_7upgr_b.png, centos6_7upgr_c.png]

CentOS 7

CentOS Linux 7 (Core)
Kernel 3.10.0-123.20.1.el7.x86_64 on an x86_64

centos69 login: root
Password:
Last login: Fri May 11 15:42:30 on ttyS0

[root@centos69 ~]# cat /etc/redhat-release
CentOS Linux release 7.0.1406 (Core)

 

Tag(s): centos, centos7

Wednesday, 09 May 2018

Summer of Code: Small steps

vanitasvitae's blog » englisch | 15:18, Wednesday, 09 May 2018

Yesterday I got my first results encrypting and decrypting OpenPGP messages using PGPainless, my fork of bouncy-gpg. There were some interesting hurdles that I want to discuss though.

GnuPG

As a first step towards working encryption and decryption, I obviously needed to create some PGP keys for testing purposes. As a regular user of OpenPGP I knew how to create keys using the command line tool GnuPG, so I started the key creation by typing "gpg --generate-key". I chose the key type to be RSA with a length of 2048 bits, as those settings are also the defaults recommended by GnuPG itself. When it came to entering user id information though, things got a little more complicated. GnuPG asks for the name of the user, their email address and a comment. XEP-0373 states that the user id packet of a PGP key MUST be of the format "xmpp:juliet@capulet.lit". The first thing to figure out was whether I should enter that string as the name, the email address or the comment. I first tried the name, upon which GnuPG complained that neither the name nor the comment is allowed to contain an email address. Logically my next step was to enter the string as the user's email address. Again GnuPG complained, this time stating that "xmpp:juliet@capulet.lit" was not a valid email address. So I got stuck.

Luckily I knew that Philipp Hörist was working on an experimental OX plugin for Gajim. He pointed me to the process of "Unattended Key Generation", which reads its input values from a parameter file. That input is not validated by GnuPG as strictly as in the interactive wizard, so I was able to successfully create my testing keys. Big thanks for the help :)

Update: Apparently GnuPG provides an additional flag "--allow-freeform-uid", which does exactly that: it allows uids of any form. Using that flag allows easy generation and editing of keys with freely chosen uids. Thanks to Wiktor from the Conversations.im chat room :)
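For illustration, unattended key generation with such a freeform uid could look like the following sketch. The filename is made up, the uid is the XEP-0373 format from above, and %no-protection assumes GnuPG 2.1 or newer:

# cat > juliet.params << 'EOF'
%no-protection
Key-Type: RSA
Key-Length: 2048
Name-Real: xmpp:juliet@capulet.lit
Expire-Date: 0
%commit
EOF
# gpg --batch --gen-key juliet.params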

Bouncy-gpg

As a next step, I wrote a small JUnit test which signs and encrypts a little piece of text, then decrypts it again and validates the signature. Here I came across my next problem.

Bouncy-gpg provides a class called Rfc4880KeySelectionStrategy, which is used to select keys from the user's keyring following a certain strategy. In my testing code, I created two keyrings for Romeo and Juliet and added their respective public and private keys, like you would do in a real life scenario. The issue I then encountered was that when I tried to encrypt my message from Juliet for Romeo's public key, I got the error that "no suitable key was found". How could that be? I did some more debugging and was able to verify that the keys were in fact added to the keyrings just as I intended.

To explain the cause of this issue, I have to describe in a little more depth how the user id field is formatted in OpenPGP.
RFC4880 states the following:

A User ID packet consists of UTF-8 text that is intended to represent
the name and email address of the key holder.  By convention, it
includes an RFC 2822 [RFC2822] mail name-addr, but there are no
restrictions on its content.

A “mail name-addr” follows this format: “Juliet Capulet (The Juliet from Shakespeare's play) <juliet@capulet.lit>”.
First comes the name of the key owner, followed by a comment in parentheses, followed by the email address in angle brackets. The brackets make it unambiguous which value is which, so all values are optional.
“<juliet@capulet.lit>” would still be a valid user id for example.

So what was the problem?
The user id of my testing key looked like this: “xmpp:juliet@capulet.lit”. Note that there are no angle brackets or parentheses around the text, so the whole string would be interpreted as a name.
Bouncy-gpg's Rfc4880KeySelectionStrategy, however, contained code which checked whether the query used to search for a key was enclosed in angle brackets, following the email address format. If it wasn't, the code added the angle brackets before executing the search. So instead of searching for “xmpp:juliet@capulet.lit”, the selection strategy would look for keys with the user id “<xmpp:juliet@capulet.lit>”.
My solution to the problem was to create my own KeySelectionStrategy, which leaves the query as is, so that it matches my key's user id. Figuring that out took me quite a while :D
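
Incidentally, GnuPG's own search syntax makes the same distinction visible (per the gpg man page, a leading '=' forces an exact user id match, while angle brackets force an exact email match):

# gpg --list-keys '=xmpp:juliet@capulet.lit'     # exact uid match: finds the key
# gpg --list-keys '<xmpp:juliet@capulet.lit>'    # email match: finds nothing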

Conclusions

So what conclusions can I draw from my experiences?
First of all, I'm not sure if it is a good idea to give the user the ability to import their own PGP keys. GnuPG's behaviour of forbidding user ids which don't follow the mail name-addr format will make it very hard for a user to create a key with a valid user id. (Update: Users can use the flag "--allow-freeform-uid" to generate new keys and edit existing ones with unconventional uids.) Philipp Hörist suggested that implementations of XEP-0373 should instead create a key for the user on first use, and I think I agree with him. As a logical next step I have to figure out how to create PGP keys using Bouncycastle :D

I hope you liked my little update post, which grew longer than I expected :D

Happy Hacking!

Tuesday, 08 May 2018

Summer of Code: The plan. Act 1: OpenPGP

vanitasvitae's blog » englisch | 08:05, Tuesday, 08 May 2018

OpenPGP

OpenPGP (known as RFC4880) defines a format for encrypted and signed data, as well as for encryption keys and signatures.

My main problem with the specification is that it is very noisy. The document is 90 pages long and describes every aspect an implementer needs to know about, from how big numbers are stored, through the magic bits and bytes that mark special regions in a packet, to recommendations about algorithms. Since I'm not going to write a crypto library from scratch, the first step I have to take is to identify which parts are important for me as a user of a (let's call it) mid-level API, and which parts I can ignore. You can see this posting as a kind of hopefully somewhat entertaining jotting paper which I use to note down important parts of the spec while I go through the document.

Let's start by creating a short TL;DR of the OpenPGP specification.
The basic process of creating an encrypted message is as follows:

  • The sender provides a plaintext message
  • That message gets encrypted with a randomly generated symmetric key (called session key)
  • The session key then gets encrypted for each recipient's public key and the resulting block of data gets prepended to the previously encrypted message

As you can see, an OpenPGP message consists of multiple parts, called packets. There is a pretty respectable number of packet types specified in the RFC. Many of them are not very interesting, so let's identify the few which are relevant for our project.

  • Public-Key Encrypted Session Key Packets
    Those packets represent a session key encrypted with the public key of a recipient.
  • Signature Packets
    Digital signatures are used to provide authenticity. If a piece of data is signed using the secret key of the sender, the recipient is able to verify its origin and authenticity. There is a whole load of different signature sub-packets, so for now we just acknowledge their existence without going into too much detail.
  • Compressed Data Packets
    OpenPGP provides the feature of compressing plaintext data prior to encrypting it. This might come in handy, since encrypting files or messages adds quite a bit of overhead. Compressing the original data can compensate for that effect a little bit.
  • Symmetrically Encrypted Data Packets
    This packet type represents data which has been encrypted using a symmetric key (in our case the session key).
  • Literal Data Packets
    The original message we want to encrypt is referred to as literal data. The literal data packet consists of metadata like encoding of the original message, or filename in case we want to encrypt a file, as well as – of course – the data itself.
  • ASCII Armor (not really a Packet)
    Encrypted data is represented in binary form. Since one big use case of OpenPGP encryption is in Email messaging though, it is necessary to bring the data into a form which can be transported safely. The ASCII Armor is an additional layer which encodes the binary data using Base64. It also makes the data identifiable for humans by adding a readable header and footer. XEP-0373 forbids the use of ASCII Armor though, so lets focus on other things instead :D

Those packet types can be nested, as well as concatenated in many different ways. For example, a common arrangement would be a Literal Data Packet holding our original message, which, along with a Signature Packet, is contained inside a Compressed Data Packet to save some space. The Compressed Data Packet is nested inside a Symmetrically Encrypted Data Packet, which lives inside an OpenPGP message along with one or more Public-Key Encrypted Session Key Packets.
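
GnuPG can make this nesting visible with its packet dump. A sketch, assuming a test key for juliet@capulet.lit and its secret key are available; the output is trimmed and the fields are illustrative:

# echo 'O Romeo, Romeo' | gpg --sign --encrypt -r juliet@capulet.lit -o msg.gpg
# gpg --list-packets msg.gpg
:pubkey enc packet: version 3, algo 1, keyid ...
:encrypted data packet:
:compressed packet: algo=2
:onepass_sig packet: ...
:literal data packet:
:signature packet: algo 1, keyid ...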

Each packet carries additional information, for example which compression algorithm is used in the Compressed Data Packet. We will not focus on those details, as we assume that the libraries we use will already handle those specifics for us.

OpenPGP also specifies a way to store and exchange keys. In order to be able to receive encrypted messages, a user must distribute their keys to other users. A key can carry a lot of additional information, like identities and signatures of other keys. Signatures are used to create trust networks like the web of trust, but we will most likely not dive deeper into that.

Signatures on keys can also be used to create key hierarchies like super keys and sub-keys. It still has to be determined whether and how those patterns will be reflected in my code. I can imagine it would be useful to only have sub-keys on mobile devices, while the main super key is hidden away from the orcs in a bunker somewhere, but I also think that it would be a rather complicated task to add support for sub-keys to my project. We will see ;)

That’s it for Part 1 of my sighting of the OpenPGP RFC.

Happy Hacking!

Monday, 07 May 2018

Converting an existing installation to LUKS using luksipc - 2018 notes

Losca | 11:08, Monday, 07 May 2018

Time for a laptop upgrade. Encryption was still not the default for the new Dell XPS 13 Developer Edition (9370) that shipped with Ubuntu 16.04 LTS, so I followed my own notes from 3 years ago together with the official documentation to convert the unencrypted OEM Ubuntu installation to LUKS during the weekend. Altogether this took under an hour.

On this new laptop model, EFI boot was already in use, Secure Boot was enabled and the SSD had GPT from the beginning. The only thing I wanted to change was therefore to have / encrypted.

Some notes for 2018 to clarify what is needed and what is not needed:
  • Before luksipc, remember to resize existing partitions to leave 10 MB of free space at the end of the / partition, and also create a new partition of e.g. 1 GB for /boot.
  • To get the code and compile luksipc on an Ubuntu 16.04.4 LTS live USB, just apt install git build-essential is needed. The cryptsetup package is already installed.
  • After luksipc finishes and you've added your own passphrase and removed the initial key (slot 0), it's useful to cryptsetup luksOpen it and mount it while still in the live session - however, when using ext4, the mounting fails due to a size mismatch in the ext4 metadata! This is simple to correct: sudo resize2fs /dev/mapper/root. Nothing else is needed.
  • I mounted both the newly encrypted volume (to /mnt) and the new /boot volume (to /mnt2, which I created), and moved /boot/* from the former to the latter.
  • I edited /etc/fstab of the encrypted volume to add the /boot partition.
  • Mounted as following in /mnt:
    • mount -o bind /dev dev
    • mount -o bind /sys sys
    • mount -t proc proc proc
  • Then:
    • chroot /mnt
    • mount -a # (to mount /boot and /boot/efi)
    • Edited files /etc/crypttab (added one line: root UUID none luks) and /etc/default/grub (I copied over my overkill configuration that specifies all of cryptopts and cryptdevice, some of which may be obsolete, but at least one of them and root=/dev/mapper/root is probably needed).
    • Ran grub-install ; update-grub ; mkinitramfs -k all -c (notably no other parameters were needed)
    • Rebooted.
  • What I did not need to do:
    • Modify anything in /etc/initramfs-tools.
If the passphrase prompt shows on your next boot but your correct passphrase isn't accepted, it's likely that the initramfs wasn't properly updated. I first forgot to run the mkinitramfs command and faced exactly this.
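
For reference, the conversion step itself looks roughly like the sketch below. The device name is just an example, and the keyfile path is an assumption (luksipc prints the actual path when it generates the initial key):

# ./luksipc -d /dev/nvme0n1p3                  # in-place conversion of the / partition (example device)
# cryptsetup luksAddKey --key-file /root/initial_keyfile.bin /dev/nvme0n1p3   # add your own passphrase; keyfile path is an assumption
# cryptsetup luksKillSlot /dev/nvme0n1p3 0     # remove luksipc's initial key from slot 0
# cryptsetup luksOpen /dev/nvme0n1p3 root
# resize2fs /dev/mapper/root                   # fix the ext4 size mismatch mentioned above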

Powering a ham radio transmitter

DanielPocock.com - fsfe | 08:04, Monday, 07 May 2018

Last week I announced the crowdfunding campaign to help run a ham radio station at OSCAL. Thanks to all those people who already donated or expressed interest in volunteering.

Modern electronics are very compact and most of what I need to run the station can be transported in my hand luggage. The two big challenges are power supplies and antenna masts; this blog post gives more details about the former.

Here is a picture of all the equipment I hope to use:

The laptop is able to detect incoming signals using the RTL-SDR dongle and up-converter. After finding a signal, we can then put the frequency into the radio transmitter (in the middle of the table), switch the antenna from the SDR to the radio and talk to the other station.

The RTL-SDR and up-converter run on USB power and a phone charger. The transmitter, however, needs about 22A at 12V DC. This typically means getting a large linear power supply or a large battery.
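
As a rough back-of-the-envelope check, assuming the nominal 60 Ah capacity of the AGM battery shown below:

P = 12\,\mathrm{V} \times 22\,\mathrm{A} \approx 264\,\mathrm{W}, \qquad
t \approx \frac{60\,\mathrm{Ah}}{22\,\mathrm{A}} \approx 2.7\,\mathrm{h}

That is under three hours of continuous transmission, but since transmitting is intermittent and receive current is far lower, a single charged battery can plausibly cover a day of demonstrations.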

In the photo, I've got a Varta LA60 AGM battery, here is a close up:

There are many ways to connect to a large battery. For example, it is possible to use terminals like these with holes in them for the 8 AWG wire, or to crimp ring terminals onto a wire and screw the ring onto any regular battery terminal. The type of terminal with these extra holes in it is typically sold for car audio purposes. In the photo, the wire is 10 AWG superflex. There is a blade fuse along the wire and the other end has a PowerPole 45 plug. You can easily make cables like this yourself with a PowerPole crimping tool; everything can be purchased online from sites like eBay.

The wire from the battery goes into a fused power distributor with six PowerPole outlets for connecting the transmitter and other small devices, for example, the lamp in the ATU or charging the handheld:

The AGM battery in the photo weighs about 18kg and is unlikely to be accepted in my luggage, hence the crowdfunding campaign to help buy one for the local community. For many of the young people and students in the Balkans, the price of one of the larger AGM batteries is equivalent to about one month of their income so nobody there is going to buy one on their own. Please consider making a small donation if you would like to help as it won't be possible to run demonstrations like this without power.

Friday, 04 May 2018

GoFundMe: errors and bait-and-switch

DanielPocock.com - fsfe | 09:34, Friday, 04 May 2018

Yesterday I set up a crowdfunding campaign to purchase some equipment for the ham radio demo at OSCAL.

It was the first time I tried crowdfunding and the financial goal didn't seem very big (a good quality AGM battery might only need EUR 250) so I only spent a little time looking at some of the common crowdfunding sites and decided to try GoFundMe.

While the campaign setup process initially appeared quite easy, it quickly ran into trouble after the first donation came in. As I started setting up bank account details to receive the money, errors started appearing:

I tried to contact support and filled in the form, typing a message about the problem. Instead of sending my message to support, it started trying to show me long lists of useless documents. Finally, after clicking through several screens of unrelated nonsense, another contact form appeared and the message I had originally typed had been lost in their broken help system and I had to type another one. It makes you wonder, if you can't even rely on a message you type in the contact form being transmitted accurately, how can you rely on them to forward the money accurately?

When I finally got a reply from their support department, it smelled more like a phishing attack, asking me to give them more personal information and email them a high resolution image of my passport.

If that was really necessary, why didn't they ask for it before the campaign went live? I felt like they were sucking people in to get money from their friends and then, after the campaign gains momentum, holding those beneficiaries to ransom and expecting them to grovel for the money.

When a business plays bait-and-switch like this and when their web site appears to be broken in more ways than one (both the errors and the broken contact form), I want nothing to do with them. I removed the GoFundMe links from my blog post and replaced them with direct links to Paypal. Not only does this mean I avoid the absurdity of emailing copies of my passport, but it also cuts out the five percent fee charged by GoFundMe, so more money reaches the intended purpose.

Another observation about this experience is the way GoFundMe encourages people to share the link to their own page about the campaign and not the link to the blog post. Fortunately in most communication I had with people about the campaign I gave them a direct link to my blog post and this makes it easier for me to change the provider handling the money by simply removing links from my blog to GoFundMe.

While the funding goal hasn't been reached yet, my other goal, learning a little bit about the workings of crowdfunding sites, has been helped along by this experience. Before trying to run something like this again I'll look a little harder for a self-hosted solution that I can fully run through my blog.

I've told GoFundMe to immediately refund all money collected through their site so donors can send money directly through the Paypal donate link on my blog. If you would like to see the ham radio station go ahead at OSCAL, please donate, I can't take my own batteries with me by air.

Thursday, 03 May 2018

Turning a dictator's pyramid into a ham radio station

DanielPocock.com - fsfe | 19:44, Thursday, 03 May 2018

(Update: due to concerns about GoFundMe, I changed the links in this blog post so people can donate directly through PayPal. Anybody who tried to donate through GoFundMe should be receiving a refund.)

I've launched a crowdfunding campaign to help get more equipment for a bigger and better ham radio demo at OSCAL (19-20 May, Tirana). Please donate if you would like to see this go ahead. Just EUR 250 would help buy a nice AGM battery - if 25 people donate EUR 10 each, we can buy one of those.

You can help turn the pyramid of Albania's former communist dictator into a ham radio station for OSCAL 2018 on 19-20 May 2018. This will be a prominent demonstration of ham radio in the city center of Tirana, Albania.

Under the rule of Enver Hoxha, Albanians were isolated from the outside world and used secret antennas to receive banned television transmissions from Italy. Now we have the opportunity to run a ham station and communicate with the whole world from the very pyramid where Hoxha intended to be buried after his death.

Donations will help buy ham and SDR equipment for communities in Albania and Kosovo and assist hams from neighbouring countries to visit the conference. We would like to purchase deep-cycle batteries, 3-stage chargers, 50 ohm coaxial cable, QSL cards, PowerPole connectors, RTL-SDR dongles, up-convertors (Ham-it-up), baluns, egg insulators and portable masts for mounting antennas at OSCAL and future events.

The station is co-ordinated by Daniel Pocock VK3TQR from the Debian Project's ham radio team.

Donations of equipment and volunteers are also very welcome. Please contact Daniel directly if you would like to participate.

Any donations in excess of requirements will be transferred to one or more of the hackerspaces, radio clubs and non-profit organizations supporting education and leadership opportunities for young people in the Balkans. Any equipment purchased will also remain in the region for community use.

Please click here to donate if you would like to help this project go ahead. Without your contribution we are not sure that we will have essential items like the deep-cycle batteries we need to run ham radio transmitters.
