Thoughts of the FSFE Community

Tuesday, 22 May 2018

Restrict email addresses for sending emails

Evaggelos Balaskas - System Engineer | 17:12, Tuesday, 22 May 2018

Prologue

 

Maintaining a (public) service can sometimes be troublesome. In the case of an email service, you often need to suspend or restrict users for reasons like spam, scams or phishing. You also have to deal with inactive or even compromised accounts. Protecting your infrastructure means protecting your active users and the service itself. In this article I’ll propose a way to restrict specific accounts from sending email, returning a bounce message that explains why their email was not sent.

 

Reading Material

The reference documentation for using a Directory Service (LDAP) as the user backend with Postfix:

 

ldap_table(5) – Postfix LDAP client configuration

LDAP

In this post we will not get into OpenLDAP internals, but as a reference I’ll show an example user account (this is from my working test lab).

 

dn: uid=testuser2,ou=People,dc=example,dc=org
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: posixAccount
mail: testuser2@example.org
smtpd_sender_restrictions: true
cn: Evaggelos Balaskas
sn: Balaskas
givenName: Evaggelos
uidNumber: 99
gidNumber: 12
uid: testuser2
homeDirectory: /storage/vhome/%d/%n
userPassword: XXXXXXXXXX

As you can see, we have a custom LDAP attribute:

smtpd_sender_restrictions: true

keep that in mind for now.

 

Postfix

The default value of smtpd_sender_restrictions is empty, which means that by default the mail server has no sender restrictions. Depending on policy, we can either whitelist or blacklist in Postfix restrictions; for the purpose of this blog post, we will only restrict (blacklist) specific user accounts.
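
For illustration, the opposite (whitelist) approach would have an LDAP map return OK for permitted accounts and fall through to a blanket reject. A sketch, assuming a hypothetical ldap_allowed_senders.cf map; note that an unconditional reject here would also affect inbound mail, so in practice you would scope this to the submission service:

smtpd_sender_restrictions =
        check_sender_access ldap:/etc/postfix/ldap_allowed_senders.cf
        reject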

 

ldap_smtpd_sender_restrictions

To do that, let’s create a new file that will talk to our OpenLDAP server and query for that specific LDAP attribute.

ldap_smtpd_sender_restrictions.cf

server_host = ldap://localhost
server_port = 389
search_base = ou=People,dc=example,dc=org
query_filter = (&(smtpd_sender_restrictions=true)(mail=%s))
result_attribute = uid
result_filter = uid
result_format = REJECT This account is not allowed to send emails, plz talk to abuse@example.org
version = 3
timeout = 5

This is an anonymous bind, as we do not need to read any protected attribute like userPassword.
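
If your directory does not allow anonymous binds, the same map can authenticate itself using the standard ldap_table bind parameters. A sketch with placeholder credentials:

bind = yes
bind_dn = cn=postfix,ou=Services,dc=example,dc=org
bind_pw = XXXXXXXXXX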

 

Status Codes

The default reply will use status code 554 5.7.1.
Take a look at RFC 3463 – Enhanced Mail System Status Codes for more info.
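
If you need a different reply, the access(5) action syntax also accepts an explicit numeric code, optionally followed by an enhanced status code, instead of the REJECT keyword. A sketch, varying the result_format from above:

result_format = 550 5.7.1 This account is not allowed to send emails, plz talk to abuse@example.org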

 

Test it

# postmap -q testuser2@example.org ldap:/etc/postfix/ldap_smtpd_sender_restrictions.cf
REJECT This account is not allowed to send emails, plz talk to abuse@example.org

Add -v to increase verbosity:

# postmap -v -q testuser2@example.org ldap:/etc/postfix/ldap_smtpd_sender_restrictions.cf

 

Possible Errors

postmap: fatal: unsupported dictionary type: ldap

Check your Postfix setup with postconf -m. The result should be something like this:

btree
cidr
environ
fail
hash
internal
ldap
memcache
nis
proxy
regexp
socketmap
static
tcp
texthash
unix

If ldap is missing from the list, you need to set up Postfix with support for the ldap dictionary type.

 

smtpd_sender_restrictions

Modify main.cf to reference ldap_smtpd_sender_restrictions.cf:

# applied in the context of the MAIL FROM
smtpd_sender_restrictions =
        check_sender_access ldap:/etc/postfix/ldap_smtpd_sender_restrictions.cf

and reload Postfix:

# postfix reload

If you keep logs, tail them to see any errors.

 

Thunderbird

(screenshot: the smtpd_sender_restrictions bounce as shown in Thunderbird)

 

Logs

May 19 13:20:26 centos6 postfix/smtpd[20905]:
NOQUEUE: reject: RCPT from XXXXXXXX[XXXXXXXX]: 554 5.7.1 <testuser2@example.org>:
Sender address rejected: This account is not allowed to send emails, plz talk to abuse@example.org;
from=<testuser2@example.org> to=<postmaster@example.org> proto=ESMTP helo=<[192.168.0.13]>
Tag(s): postfix, ldap

Monday, 21 May 2018

OSCAL'18 Debian, Ham, SDR and GSoC activities

DanielPocock.com - fsfe | 20:44, Monday, 21 May 2018

Over the weekend I've been in Tirana, Albania for OSCAL 2018.

Crowdfunding report

The crowdfunding campaign to buy hardware for the radio demo was successful. The gross sum received was GBP 110.00; there were PayPal fees of GBP 6.48, and the net amount after currency conversion was EUR 118.29. Here is a complete list of transaction IDs for transparency, so you can see that if you donated, your contribution was included in the total I have reported in this blog. Thank you to everybody who made this a success.

The funds were used to purchase an Ultracell UCG45-12 sealed lead-acid battery from Tashi in Tirana, here is the receipt. After OSCAL, the battery is being used at a joint meeting of the Prishtina hackerspace and SHRAK, the amateur radio club of Kosovo on 24 May. The battery will remain in the region to support any members of the ham community who want to visit the hackerspaces and events.

Debian and Ham radio booth

Local volunteers from Albania and Kosovo helped run a Debian and ham radio/SDR booth on Saturday, 19 May.

The antenna was erected as a folded dipole with one end joined to the Tirana Pyramid and the other end attached to the marquee sheltering the booths. We operated on the twenty meter band using an RTL-SDR dongle and upconverter for reception and a Yaesu FT-857D for transmission. An MFJ-1708 RF Sense Switch was used for automatically switching between the SDR and transceiver on PTT and an MFJ-971 ATU for tuning the antenna.

I successfully made contact with 9A1D, a station in Croatia. Enkelena Haxhiu, one of our GSoC students, made contact with Z68AA in her own country, Kosovo.

Anybody hoping that Albania was a suitably remote place to hide from media coverage of the British royal wedding would have been disappointed as we tuned in to GR9RW from London and tried unsuccessfully to make contact with them. Communism and royalty mix like oil and water: if a deceased dictator was already feeling bruised about an antenna on his pyramid, he would probably enjoy water torture more than a radio transmission celebrating one of the world's most successful hereditary monarchies.

A versatile venue and the dictator's revenge

It isn't hard to imagine communist dictator Enver Hoxha turning in his grave at the thought of his pyramid being used for an antenna for communication that would have attracted severe punishment under his totalitarian regime. Perhaps Hoxha had imagined the possibility that people might one day gather freely in the streets: as the sun moved overhead, the glass facade above the entrance to the pyramid reflected the sun under the shelter of the marquees, giving everybody a tan, a low-key version of a solar death ray from a sci-fi movie. Must remember to wear sunscreen for my next showdown with a dictator.

The security guard stationed at the pyramid for the day was kept busy chasing away children and more than a few adults who kept arriving to climb the pyramid and slide down the side.

Meeting with Debian's Google Summer of Code students

Debian has three Google Summer of Code students in Kosovo this year. Two of them, Enkelena and Diellza, were able to attend OSCAL. Albania is one of the few countries they can visit easily and OSCAL deserves special commendation for the fact that it brings otherwise isolated citizens of Kosovo into contact with an increasingly large delegation of foreign visitors who come back year after year.

We had some brief discussions about how their projects are starting and things we can do together during my visit to Kosovo.

Workshops and talks

On Sunday, 20 May, I ran a workshop Introduction to Debian and a workshop on Free and open source accounting. At the end of the day Enkelena Haxhiu and I presented the final talk in the Pyramid, Death by a thousand chats, looking at how free software gives us a unique opportunity to disable a lot of unhealthy notifications by default.

L4Re: Textual Debugging Output on the Framebuffer

Paul Boddie's Free Software-related blog » English | 19:38, Monday, 21 May 2018

I have actually been in the process of drafting another article about writing device drivers to run within the L4 Runtime Environment (L4Re) on top of the Fiasco.OC microkernel, this being for the Ben NanoNote and Letux 400 notebook computers. That article started to trail behind a lot of the work being done, and there are a few loose ends to be tied up before I can finish it.

Meanwhile, on the way towards some kind of achievement with L4Re, confounded somewhat by the sometimes impenetrable APIs, I managed to eventually get something working that I had thought would have been one of the first things to demonstrate. When initially perusing the range of software in the “pkg” directory within the L4Re distribution, I saw a package called “fbterminal” providing a terminal program that shows itself on the framebuffer (or display).

I imagined being able to launch this on top of the graphical user interface multiplexer, Mag, and then have the “hello” program provide some output to this terminal. I even imagined having the terminal accept input from the keyboard, but we aren’t quite at that point, and that is where my other article comes in. Of course, I initially had no idea how to achieve this, and there needed to be a lot of work put in just to get back to this particular point of entry.

Now, however, the act of launching fbterminal and having it work is fairly straightforward. A few additional packages are required, but the framebuffer works satisfactorily as far as the other components are concerned, and the result will be a blank region of the screen with the terminal showing precisely nothing. Obviously, we want it to show something in order to confirm that it is working. I had to get used to seeing this blank terminal for a while.

The intended companion to fbterminal for testing purposes is the hello program which merely writes output to what might be described as a logging destination. This particular output channel is usually the serial console for the device, which meant that when porting the system to the Ben and the Letux, the hello program was of no use to me. But now, with a framebuffer to show things on, and with a terminal that might be able to accept output from other things, it becomes interesting to see if the hello program can be persuaded to send its output elsewhere.

It was useful to investigate how the output from the hello program actually makes its way to its destination. Since it uses standard C library functions, there has to be a mechanism for those functions to use. As far as I know, these would typically involve various system calls concerning files and streams. A perusal of the sources dredged up an interesting symbol called “__rtld_l4re_env_posix_vfs_ops”. Further investigation led me to the L4Re virtual filesystem (Vfs) functionality and the following interesting files:

  • pkg/l4re-core/l4re_vfs/include/vfs.h
  • pkg/l4re-core/l4re_vfs/include/impl/vfs_impl.h

And these in turn led me to the virtual console (Vcon) functionality:

  • pkg/l4re-core/l4re_vfs/include/impl/vcon_stream.h
  • pkg/l4re-core/l4re_vfs/include/impl/vcon_stream_impl.h

It seems that standard output from the hello program goes via the standard C calls and Vfs functions and is packaged up and sent using the Vcon mechanisms to the logging destination, which is typically provided by the root task, Moe. Given that fbterminal understands the Vcon protocol and acts as a console server, there appeared to be some potential in looking at Vcon mechanisms more closely. It seemed that fbterminal might be able to take the place of Moe.

Indeed, the documentation offers some clues. In the description of the init process, Ned, a mention is made of a program loader configuration parameter called “log_fab” that indicates an object that can create a suitable logging destination. When starting a program, the program loader creates such an object using “log_fab” and presents it to the new program as a capability (or object reference).

However, this is not quite what we want because we don’t need anything else to be created: we already have fbterminal ready for us to use. I suppose something could be conjured up to act as a factory and provide an fbterminal instance, and maybe this is not too arduous in the Lua-based configuration environment used by Ned, but I wanted a more direct solution.

Contemplating this, I went and rediscovered the definitions used by Ned to support its configuration scripting (found in pkg/l4re-core/ned/server/src/ned.lua). Here, the workings of the “log_fab” mechanism can be found and studied. But what I started to favour was a way of just indicating a capability to the hello program and not have the loader create something else. This required a simple edit to one of the functions:

function App_env:log()
  Class.check(self, App_env);
  if self.loader.log_fab == nil or self.loader.log_fab.create == nil then
    error ("Starting an application without valid log factory", 4);
  end
  return self.loader.log_fab:create(Proto.Log, table.unpack(self.log_args));
end

Here, we want to ignore “log_fab” and just have our existing capability used instead. So, I introduced another clause to the if statement:

  if self.log_cap then
    return self.log_cap
  elseif self.loader.log_fab == nil or self.loader.log_fab.create == nil then
    error ("Starting an application without valid log factory", 4);
  end

Now, if we specify “log_cap” when starting a program, logging messages should be directed to the referenced object instead. So, with this available to us, it becomes possible to adjust the way the hello program is started. First of all, we define the way fbterminal is set up and started:

local term = l:new_channel();

l:start({
    caps = {
      fb = mag_caps.svc:create(L4.Proto.Goos, "g=320x230+0+0", "barheight=10"),
      term = term:svr(),
    },
  },
  "rom/fbterminal");

Since fbterminal needs to “export” its console abilities using a capability called “term”, this needs to be indicated in the “caps” table. (It doesn’t matter what the local variable for the channel is called.) So, the hello program is defined accordingly:

l:start({
    log_cap = term,
  },
  "rom/hello");

Here, we make use of “log_cap” and allow the output to be directed to the terminal that has already been started. And the result is this:

fbterminal on the Ben NanoNote showing the hello program's output

And at long last, it becomes possible to see what programs are printing out to the log!

Thursday, 17 May 2018

Summer of Code: Bug found!

vanitasvitae's blog » englisch | 18:29, Thursday, 17 May 2018

BouncyCastle

The mystery has been solved! I finally found out why the OpenPGP keys I generated for my project had a broken format. It turns out there was a bug in BouncyCastle.
Big thanks to Heiko Stamer, who quickly identified the issue in the bug report I created for pgpdump, as well as Kazu Yamamoto and David Hook, who helped identify and confirm the issue.

The bug was that BouncyCastle, when exporting a secret key without a password, appended a 20-byte SHA-1 hash after the secret key material. That is only supposed to happen when the key is in fact password protected. In the case of unprotected keys, BouncyCastle is supposed to add a two-octet checksum instead. BouncyCastle’s wrong behaviour caused pgpdump to interpret random bytes as packet tags, which resulted in a wrong key ID being printed out.

The relevant part of RFC 4880 is found in section 5.5.3:

      -If the string-to-key usage octet is zero or 255, then a two-octet
       checksum of the plaintext of the algorithm-specific portion (sum
       of all octets, mod 65536).  If the string-to-key usage octet was
       254, then a 20-octet SHA-1 hash of the plaintext of the
       algorithm-specific portion.
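
For illustration, the two-octet checksum for unprotected keys is trivial to compute. A minimal sketch in Java:

// Two-octet checksum for unprotected secret keys (RFC 4880, section 5.5.3):
// the sum of all octets of the algorithm-specific portion, mod 65536.
static int secretKeyChecksum(byte[] algorithmSpecificPortion) {
    int sum = 0;
    for (byte b : algorithmSpecificPortion) {
        sum = (sum + (b & 0xff)) % 65536;
    }
    return sum; // serialized as two octets, big-endian
}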

Shortly after I filed a bug report for BouncyCastle, Vincent Breitmoser, one of the authors of XEP-0373 and XEP-0374, submitted a fix for the bug. This is a nice little example of how free software projects can work together to improve each other. Big thanks for that :)

Working OX Test Client!

I spent last night creating a command line chat client that can “speak” OX. Everything is a little rough around the edges, but the core functionality works.
The user has to perform actions like publishing and fetching keys by hand, but encrypted, signed messages can be exchanged. Having working code, I can now start to formulate a general API which will enable multiple OpenPGP back-ends. I will spend some more time polishing the client up and eventually publish it in a separate git repository.

EFAIL

I totally forgot to talk about EFAIL in my last blog posts. It was a little shock to wake up on Monday, the first day of the coding phase, only to read sentences like “Are you okay?” or “Is the GSoC project in danger?” :D
I’m sure you have all read about the EFAIL attack somewhere in the media, so I’m not going into too much detail here (the EFF already did a great job *cough cough*). The EFAIL website describes the attack as follows:

“In a nutshell, EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs.”

Is EFAIL applicable to XMPP?
Probably not to the XEPs I’m implementing. In the case of email it is relatively easy to prepend an image tag to the message. XEP-0373, however, specifies that the transported extension elements (e.g. the body of the message) are wrapped inside an additional extension element, which is then encrypted. Additionally, this element (e.g. <signcrypt/>) carries a padding element of random length and random content, so it is very hard to nearly impossible for an attacker to guess where the actual body starts, and in turn where they’d have to insert an “extraction channel” (e.g. an image tag) into the message.
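
For reference, a signcrypt content element looks roughly like this (adapted from the examples in XEP-0373; the rpad element carries the random padding mentioned above):

<signcrypt xmlns='urn:xmpp:openpgp:0'>
  <to jid='juliet@example.org'/>
  <time stamp='2018-05-17T12:00:00Z'/>
  <rpad>f0rm1l4n4-mT8y33j!Y%fRSrcd^ZE4Q7VDt1L%WEgR!kv</rpad>
  <payload>
    <body xmlns='jabber:client'>This is a secret message.</body>
  </payload>
</signcrypt>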

In legacy OpenPGP for XMPP (XEP-0027) it is theoretically possible to execute at least the first part of the EFAIL attack. An attacker could prepend an image tag to the message, effectively turning the message into a link. However, external images are usually shared using XEP-0066 (Out of Band Data), by adding an x element with the oob namespace to the message, which contains the URL of the image. Note that this element is added outside the body though, so we should be fine, as the attack would only work if the user tried to open the linkified message in a browser :)

Another option for the attacker would be to attack XHTML-IM (XEP-0071) messages, but I think those do not support legacy OpenPGP in the first place. Also, XHTML-IM has been deprecated recently *phew*.

In the end, I’m by no means a security expert, so please do not quote me on my wild thoughts here :)
However, it is important to learn from this example and not make the same mistakes some email clients did.

Happy Hacking!

Wednesday, 16 May 2018

Summer of Code: Quick Update

vanitasvitae's blog » englisch | 11:21, Wednesday, 16 May 2018

I noticed that my blog posting frequency is substantially higher than last year. For that reason I’ll try to keep this post shorter.

Yesterday I implemented my first prototype code to encrypt and decrypt XEP-0374 messages! It can process incoming PubkeyElements (the published OpenPGP keys of other users) and create SigncryptElements which contain a signed and encrypted payload. On the receiving side it can also decrypt those messages and verify the signature.

I’m still puzzled about why I’m unable to dump the keys I generate using pgpdump. David Hook from Bouncycastle used my code to generate a key and it worked flawlessly on his machine, so I’m stumped for an answer…

I created a bug report about the issue on the pgpdump repository. I hope that we will get to the cause of the issue soon.

Changes to the schedule

In my original proposal I sketched out a timeline which now (that I’m already making huge steps) looks a little bit underwhelming. The plan was initially to work on Smack’s PubSub API within the first two weeks.
Florian suggested that I should instead create a working prototype of my implementation as soon as possible, so I’m going to modify my schedule to meet the new criteria:

My new plan is to have a fully working prototype implementation by the time of the first evaluation (June 15th).
That prototype (implemented within a small command line test client) will be capable of the following things:

  • Storing keys in a rudimentary form on disk
  • Automatically creating keys if needed
  • Publishing public keys via PubSub
  • Fetching contacts’ keys when needed
  • Encrypting and signing messages
  • Decrypting, verifying and displaying incoming messages
The final goal is still to create a sane, modular implementation. I’m just slightly modifying the path that will take me there :)
Happy Hacking!

Monday, 14 May 2018

Summer of Code: The Plan. Act 1: OpenPGP, Part Two

vanitasvitae's blog » englisch | 19:42, Monday, 14 May 2018

The coding phase has begun! Unfortunately my first (official) day of coding was a bad start, as I barely added any new code. Instead I got stuck on a very annoying bug. On the bright side, I’m now writing a lengthy blog post for you all, whining about my issue in all depth. But first let’s continue where we left off in this post.

In the last part of my OpenPGP TL;DR, I took a look at the different packet types of which an OpenPGP message may consist. In this post, I’ll examine the structure of OpenPGP keys.

Just like an OpenPGP message, a key pair is made up of a variety of sub packets. I will now list some of them.

Key Material

First of all, there are four different types of key material:

  • Public Key
  • Public Sub Key
  • Secret Key
  • Secret Sub Key

It should be clear what public and secret keys are.
OpenPGP defines a way to create key hierarchies, where sub keys belong to master keys. A typical use case is a user with multiple devices who doesn’t want to risk losing their main key pair in case a device gets stolen. In such a case, they would create one master key (pair) with a bunch of sub keys, one for every device. The master key, which represents the identity of the user, is used to sign all sub keys. It then gets locked away and only the sub keys are used. The advantage of this model is that all sub keys clearly belong to the user, as all of them are signed by the master key. If one device gets stolen, the user can simply revoke that single sub key without losing all their reputation, as the master key is still valid.

I still have to determine whether my implementation should support sub keys, and if so, how that would work.

Signature

This model brings us directly to another very important packet type – the Signature Packet. A signature can be seen as a statement. How that statement is to be interpreted is defined by the type of the signature.
RFC 4880 defines quite a number of signature types; three of them are especially relevant here, all with different meanings:

  • Certification Signature
    Signatures are often used as attestations. If I sign your PGP key, for example, I attest to other users that I have, to some degree, verified that you are the person you claim to be and that the key belongs to you.
  • Sub Key Binding Signature
    If a key carries such a signature, the key is a sub key of the key that made the signature. In other words: the signature is a statement that the key that made the signature owns the signed key.
  • Direct-Key Signature
    This type of signature is mostly used to bind additional information, in the form of Signature Sub Packets, to a key. We will see later what kind of information that may be.

Issuing signatures should not be taken too lightly. In the end, a signature is a statement which will be interpreted. Creating trust signatures on keys without verifying their authenticity, for example, may seriously harm ecosystems like the Web of Trust.

Signature Sub Packets

What’s the purpose of Signature Sub Packets?
Signature Sub Packets are used to bind information to a key (see the code sketch after the list). Examples are:

  • Creation and expiration dates
    It might be useful to know how long a key should be in use. For that purpose the owner of the key can set an expiration date, after which the key should no longer be used.
    It is also possible to let signatures expire.
  • Preferred algorithms
    The user can state which algorithms (for hashing, compression and symmetric encryption) they want their interlocutors to use when creating a message.
  • Revocation status
    It might be necessary to revoke a key that has been compromised. That can be done by placing a signature on it, stating that the key should no longer be used.
  • Trust signature
    If a signature contains a trust signature packet, the signature is to be interpreted as an attestation of trust in that key.
  • Primary user ID
    A user can specify the main user ID of a key.
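
For a feeling of how these sub packets surface in code, Bouncycastle exposes the hashed sub packets of a signature via PGPSignatureSubpacketVector. A minimal sketch:

import org.bouncycastle.openpgp.PGPSignature;
import org.bouncycastle.openpgp.PGPSignatureSubpacketVector;

// Inspect some of the signature sub packets mentioned above.
static void inspectSubPackets(PGPSignature signature) {
    PGPSignatureSubpacketVector hashed = signature.getHashedSubPackets();
    if (hashed == null) {
        return; // signature carries no hashed sub packets
    }
    long secondsValid = hashed.getKeyExpirationTime();            // 0 means the key does not expire
    int[] preferredSymmetric = hashed.getPreferredSymmetricAlgorithms();
    int[] preferredHash = hashed.getPreferredHashAlgorithms();
    boolean isPrimaryUserId = hashed.isPrimaryUserID();
    // ... interpret the values as needed
}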

User ID

User IDs state to which identity of a user a key belongs. That might be a real name, a company name, an email address, or in our case a Jabber ID.
Sub keys can have different user IDs; that way a user can differentiate between different roles.

Trust

Trust can not only be expressed by a trust signature (mentioned before), but also by a trust packet. The difference is that the signature is cryptographically backed, while a trust packet is merely an indicator.
Trust packets are mostly used by a user to keep track of which of their contacts’ keys they trust themselves.

There is currently no XEP specifying how trust decisions of a user are synchronized across multiple devices in the context of XMPP. #FutureWork? :)

Bouncycastle and (the not so painless) PGPainless

As I mentioned in my very last post, I was able to generate an OpenPGP key pair using GnuPG (using the “--allow-freeform-uid” flag to allow the uid format used by XMPP). The next step was to generate keys on my own using Bouncycastle. Bouncy-gpg (the library I forked into PGPainless) does not offer convenient methods for creating keys, so that’s one feature I’ll add to PGPainless and hopefully upstream to Bouncy-gpg. I already created some basic builder structure for creating OpenPGP key pairs using RSA. In order to generate a key pair, the user would do this:

PGPSecretKeyRing secRing = BouncyGPG.createKeyPair()
        .withRSAKeys()
        .ofSize(PublicKeySize.RSA._2048)
        .forIdentity("xmpp:average@best.net")
        .withPassphrase("monkey123")
        .build()
        .generateSecretKeyRing();

Pretty easy, right? Behind the scenes, PGPainless is generating the key pair using the following code:

KeyPairGenerator pbkcGenerator = KeyPairGenerator.getInstance(
        BuildPGPKeyGeneratorAPI.this.keyType, PROVIDER);
pbkcGenerator.initialize(BuildPGPKeyGeneratorAPI.this.keySize);

// Underlying public-key-cryptography key pair
KeyPair pbkcKeyPair = pbkcGenerator.generateKeyPair();

// hash calculator
PGPDigestCalculator calculator = new JcaPGPDigestCalculatorProviderBuilder()
        .setProvider(PROVIDER)
        .build()
        .get(HashAlgorithmTags.SHA1);

// Form PGP key pair //TODO: Generalize "PGPPublicKey.RSA_GENERAL" to allow other crypto
PGPKeyPair pgpPair = new JcaPGPKeyPair(PGPPublicKey.RSA_GENERAL, pbkcKeyPair, new Date());

// Signer for creating self-signature
PGPContentSignerBuilder signer = new JcaPGPContentSignerBuilder(
        pgpPair.getPublicKey().getAlgorithm(), HashAlgorithmTags.SHA256);

// Encryptor for encrypting the secret key
PBESecretKeyEncryptor encryptor = passPhrase == null ?
        null : // unencrypted key pair, otherwise AES-256 encrypted
        new JcePBESecretKeyEncryptorBuilder(PGPEncryptedData.AES_256, calculator)
                .setProvider(PROVIDER)
                .build(passPhrase);

// Mimic GnuPGs signature sub packets
PGPSignatureSubpacketGenerator hashedSubPackets = new PGPSignatureSubpacketGenerator();

// Key flags
hashedSubPackets.setKeyFlags(false,
        KeyFlags.CERTIFY_OTHER
                | KeyFlags.SIGN_DATA
                | KeyFlags.ENCRYPT_COMMS
                | KeyFlags.ENCRYPT_STORAGE
                | KeyFlags.AUTHENTICATION);

// Encryption Algorithms
hashedSubPackets.setPreferredSymmetricAlgorithms(false, new int[]{
        PGPSymmetricEncryptionAlgorithms.AES_256.getAlgorithmId(),
        PGPSymmetricEncryptionAlgorithms.AES_192.getAlgorithmId(),
        PGPSymmetricEncryptionAlgorithms.AES_128.getAlgorithmId(),
        PGPSymmetricEncryptionAlgorithms.TRIPLE_DES.getAlgorithmId()
});

// Hash Algorithms
hashedSubPackets.setPreferredHashAlgorithms(false, new int[] {
        PGPHashAlgorithms.SHA_512.getAlgorithmId(),
        PGPHashAlgorithms.SHA_384.getAlgorithmId(),
        PGPHashAlgorithms.SHA_256.getAlgorithmId(),
        PGPHashAlgorithms.SHA_224.getAlgorithmId(),
        PGPHashAlgorithms.SHA1.getAlgorithmId()
});

// Compression Algorithms
hashedSubPackets.setPreferredCompressionAlgorithms(false, new int[] {
        PGPCompressionAlgorithms.ZLIB.getAlgorithmId(),
        PGPCompressionAlgorithms.BZIP2.getAlgorithmId(),
        PGPCompressionAlgorithms.ZIP.getAlgorithmId()
});

// Modification Detection
hashedSubPackets.setFeature(false, Features.FEATURE_MODIFICATION_DETECTION);

// Generator which the user can get the key pair from
PGPKeyRingGenerator ringGenerator = new PGPKeyRingGenerator(
        PGPSignature.POSITIVE_CERTIFICATION, pgpPair,
        BuildPGPKeyGeneratorAPI.this.identity, calculator,
        hashedSubPackets.generate(), null, signer, encryptor);

return ringGenerator;

Using the above code, I’m trying to create a key pair which is constructed identically to a key generated using GnuPG. I do this mainly to make sure that I don’t have any errors in my code. Also, GnuPG is an implementation of OpenPGP with a lot of reputation. If I do what they do, chances are that I might do it right ;D

Unfortunately I’m not quite sure whether I’m successful with this method or not. To explain my uncertainty, let me show you the output of pgpdump, a tool used to analyse OpenPGP keys:

$pgpdump gnupg.sec
Old: Secret Key Packet(tag 5)(920 bytes)
    Ver 4 - new
    Public key creation time - Tue May  8 15:15:42 CEST 2018
    Pub alg - RSA Encrypt or Sign(pub 1)
    RSA n(2048 bits) - ...
    RSA e(17 bits) - ...
    RSA d(2046 bits) - ...
    RSA p(1024 bits) - ...
    RSA q(1024 bits) - ...
    RSA u(1024 bits) - ...
    Checksum - 3b 8c
Old: User ID Packet(tag 13)(23 bytes)
    User ID - xmpp:juliet@capulet.lit
Old: Signature Packet(tag 2)(334 bytes)
    Ver 4 - new
    Sig type - Positive certification of a User ID and Public Key packet(0x13).
    Pub alg - RSA Encrypt or Sign(pub 1)
    Hash alg - SHA256(hash 8)
    Hashed Sub: issuer fingerprint(sub 33)(21 bytes)
     v4 -    Fingerprint - 1d 01 8c 77 2d f8 c5 ef 86 a1 dc c9 b4 b5 09 cb 59 36 e0 3e
    Hashed Sub: signature creation time(sub 2)(4 bytes)
        Time - Tue May  8 15:15:42 CEST 2018
    Hashed Sub: key flags(sub 27)(1 bytes)
        Flag - This key may be used to certify other keys
        Flag - This key may be used to sign data
        Flag - This key may be used to encrypt communications
        Flag - This key may be used to encrypt storage
        Flag - This key may be used for authentication
    Hashed Sub: preferred symmetric algorithms(sub 11)(4 bytes)
        Sym alg - AES with 256-bit key(sym 9)
        Sym alg - AES with 192-bit key(sym 8)
        Sym alg - AES with 128-bit key(sym 7)
        Sym alg - Triple-DES(sym 2)
    Hashed Sub: preferred hash algorithms(sub 21)(5 bytes)
        Hash alg - SHA512(hash 10)
        Hash alg - SHA384(hash 9)
        Hash alg - SHA256(hash 8)
        Hash alg - SHA224(hash 11)
        Hash alg - SHA1(hash 2)
    Hashed Sub: preferred compression algorithms(sub 22)(3 bytes)
        Comp alg - ZLIB <RFC1950>(comp 2)
        Comp alg - BZip2(comp 3)
        Comp alg - ZIP <RFC1951>(comp 1)
    Hashed Sub: features(sub 30)(1 bytes)
        Flag - Modification detection (packets 18 and 19)
    Hashed Sub: key server preferences(sub 23)(1 bytes)
        Flag - No-modify
    Sub: issuer key ID(sub 16)(8 bytes)
        Key ID - 0xB4B509CB5936E03E
    Hash left 2 bytes - 87 ec
    RSA m^d mod n(2048 bits) - ...
        -> PKCS-1

Above you can see the structure of an OpenPGP RSA key generated by GnuPG. You can see its preferred algorithms, the XMPP UID of Juliet and so on. Now lets analyse a key generated using PGPainless.

$pgpdump pgpainless.sec
Old: Secret Key Packet(tag 5)(950 bytes)
    Ver 4 - new
    Public key creation time - Mon May 14 15:56:21 CEST 2018
    Pub alg - RSA Encrypt or Sign(pub 1)
    RSA n(2048 bits) - ...
    RSA e(17 bits) - ...
    RSA d(2046 bits) - ...
    RSA p(1024 bits) - ...
    RSA q(1024 bits) - ...
    RSA u(1023 bits) - ...
    Checksum - 6d 18
New: unknown(tag 48)(173 bytes)
Old: Signature Packet(tag 2)(until eof)
    Ver 213 - unknown

Unfortunately the output indicates an unknown packet tag, and it looks like something is broken. I’m not sure what’s going on, but I suspect either an error in my implementation or a bug in Bouncycastle. I noticed that the output of pgpdump changes drastically if I change the first boolean value in any of the hashedSubPackets setter calls from false to true (that boolean represents whether the set value is “critical”, meaning whether the receiving implementation should throw an error in case the read property is unknown). If I do set it to true, the output looks even more disturbing and broken, since strange unicode symbols start to appear, indicating a bug. Unfortunately my mail to the Bouncycastle mailing list is still unanswered, although I must add that I wrote it only seven hours ago.

It is a real pity that it is so hard to find working example code that is not outdated :( If you can point me in the right direction, please let me know! You can find contact details on my Github page.

My next debugging steps will be to check whether an exported key can successfully be imported back into PGPainless as well as into GnuPG. Apart from that, I will spend more time thinking about an API which allows different OpenPGP backends.

Happy Hacking!

A closer look at power and PowerPole

DanielPocock.com - fsfe | 19:25, Monday, 14 May 2018

The crowdfunding campaign has so far raised enough money to buy a small lead-acid battery, but hopefully, with another four days to go before OSCAL, we can reach the target of an AGM battery. In the interest of transparency, I will shortly publish a summary of the donations.

The campaign has been a great opportunity to publish some information that will hopefully help other people too. In particular, a lot of what I've written about power sources isn't just applicable for ham radio, it can be used for any demo or exhibit involving electronics or electrical parts like motors.

People have also asked various questions and so I've prepared some more details about PowerPoles today to help answer them.

OSCAL organizer urgently looking for an Apple MacBook PSU

In an unfortunate twist of fate while I've been blogging about power sources, one of the OSCAL organizers has a MacBook and the Apple-patented PSU conveniently failed just a few days before OSCAL. It is the 85W MagSafe 2 PSU and it is not easily found in Albania. If anybody can get one to me while I'm in Berlin at Kamailio World then I can take it to Tirana on Wednesday night. If you live near one of the other OSCAL speakers you could also send it with them.

If only Apple used PowerPole...

Why batteries?

The first question many people asked is why use batteries and not a power supply. There are two answers: portability and availability. Many hams like to operate their radios away from home. At an event, you don't always know in advance whether you will be close to a mains power socket. Taking a battery eliminates that worry. Batteries also provide better availability in times of crisis: whenever there is a natural disaster, ham radio is often the first mode of communication to be re-established. Radio hams can operate their stations independently of the power grid.

Note that while the battery looks a lot like a car battery, it is actually a deep cycle battery, sometimes referred to as a leisure battery. This type of battery is often promoted for use in caravans and boats.

Why PowerPole?

Many amateur radio groups have already standardized on the use of PowerPole in recent years. The reason for having a standard is that people can share power sources or swap equipment around easily, especially in emergencies. The same logic applies when setting up a demo at an event where multiple volunteers might mix and match equipment at a booth.

WICEN, ARES / RACES and RAYNET-UK are some of the well known groups in the world of emergency communications and they all recommend PowerPole.

Sites like eBay and Amazon have many bulk packs of PowerPoles. Some are genuine, some are copies. In the UK, I've previously purchased PowerPole packs and accessories from sites like Torberry and Sotabeams.

The pen is mightier than the sword, but what about the crimper?

The PowerPole plugs for 15A, 30A and 45A are all interchangeable and they can all be crimped with a single tool. The official tool is quite expensive but there are many after-market alternatives like this one. It takes less than a minute to insert the terminal, insert the wire, crimp and make a secure connection.

Here are some packets of PowerPoles in every size:

Example cables

It is easy to make your own cables or to take any existing cables, cut the plugs off one end and put PowerPoles on them.

Here is a cable with banana plugs on one end and PowerPole on the other end. You can buy cables like this or if you already have cables with banana plugs on both ends, you can cut them in half and put PowerPoles on them. This can be a useful patch cable for connecting a desktop power supply to a PowerPole PDU:

Here is the Yaesu E-DC-20 cable used to power many mobile radios. It is designed for about 25A. The exposed copper section simply needs to be trimmed and then inserted into a PowerPole 30:

Many small devices have these round 2.1mm coaxial power sockets. It is easy to find a packet of the pigtails on eBay and attach PowerPoles to them (tip: buy the pack that includes both male and female connections for more versatility). It is essential to check that the devices are all rated for the same voltage: if your battery is 12V and you connect a 5V device, the device will probably be destroyed.

Distributing power between multiple devices

There are a wide range of power distribution units (PDUs) for PowerPole users. Notice that PowerPoles are interchangeable and in some of these devices you can insert power through any of the inputs. Most of these devices have a fuse on every connection for extra security and isolation. Some of the more interesting devices also have a USB charging outlet. The West Mountain Radio RigRunner range includes many permutations. You can find a variety of PDUs from different vendors through an Amazon search or eBay.

In the photo from last week's blog, I have the Fuser-6 distributed by Sotabeams in the UK (below, right). I bought it pre-assembled but you can also make it yourself. I also have a Windcamp 8-port PDU purchased from Amazon (left):

Despite all those fuses on the PDU, it is also highly recommended to insert a fuse in the section of wire coming off the battery terminals or PSU. It is easy to find maxi blade fuse holders on eBay and in some electrical retailers:

Need help crimping your cables?

If you don't want to buy a crimper or you would like somebody to help you, you can bring some of your cables to a hackerspace or ask if anybody from the Debian hams team will bring one to an event to help you.

I'm bringing my own crimper and some PowerPoles to OSCAL this weekend, if you would like to help us power up the demo there please consider contributing to the crowdfunding campaign.

Sunday, 13 May 2018

USBGuard

Evaggelos Balaskas - System Engineer | 18:42, Sunday, 13 May 2018

Prologue

Security

One of the most common security concerns (especially when traveling) is the attachment of unknown USB devices to our system.

There are a few ways to protect your system.

 

Hardware Protection

 

Cloud Storage

More and more companies are now moving from local storage to cloud storage as a way to reduce the attack surface on systems:

A few days ago, IBM banned portable storage devices.

 

Hot Glue on USB Ports

We must also not forget the old but powerful advice from security researchers & hackers: disable the USB ports of a system entirely by filling them with glue or using a hot glue gun.

Problem solved!

 

USBGuard

I was reading the Red Hat 7.5 release notes and came upon usbguard:

 

USBGuard

The USBGuard software framework helps to protect your computer against rogue USB devices (a.k.a. BadUSB) by implementing basic whitelisting / blacklisting capabilities based on device attributes.

 

USB protection framework

So the main idea is that you run a daemon on your system that monitors USB device events via udev. The idea seems similar to a USB kill switch, but in a more controlled manner. You can dynamically whitelist and/or blacklist devices and change the policy on such devices more easily. You can also do all that via a graphical interface, although I will not cover that here.

 

Archlinux Notes

For Arch Linux users: you can find usbguard in the AUR (Arch User Repository)

AUR : usbguard

or you can try my custom PKGBUILD files

 

How to use usbguard

Generate Policy

The very first thing is to generate a policy with the currently attached USB devices.

sudo usbguard generate-policy

Below is example output showing my USB mouse & USB keyboard:

allow id 17ef:6019 serial "" name "Lenovo USB Optical Mouse" hash "WXaMPh5VWHf9avzB+Jpua45j3EZK6KeLRdPcoEwlWp4=" parent-hash "jEP/6WzviqdJ5VSeTUY8PatCNBKeaREvo2OqdplND/o=" via-port "3-4" with-interface 03:01:02

allow id 045e:00db serial "" name "Natural® Ergonomic Keyboard 4000" hash "lwGc9o+VaG/2QGXpZ06/2yHMw+HL46K8Vij7Q65Qs80=" parent-hash "kv3v2+rnq9QvYI3/HbJ1EV9vdujZ0aVCQ/CGBYIkEB0=" via-port "1-1.5" with-interface { 03:01:01 03:00:00 }

The default policy for already-attached USB devices is allow.

 

We can create our rules configuration file by:

sudo usbguard generate-policy > /etc/usbguard/rules.conf
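
Rules can also be written by hand using the same rule language. A sketch, using the rule language's wildcard support (interface class 03 is HID, i.e. keyboards and mice):

# allow any HID-class device, regardless of vendor/product id
allow with-interface equals { 03:*:* }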

 

Service

Start and enable the usbguard service via systemd:

systemctl start usbguard.service

systemctl enable usbguard.service

 

List of Devices

You can view the list of attached USB devices with:

sudo usbguard list-devices

 

Allow Device

Attaching a new USB device (in my case, my mobile phone):

$ sudo usbguard list-devices | grep -v allow

we will see that the default policy is to block it:

17: block id 12d1:107e serial "7BQDU17308005969" name "BLN-L21" hash "qq1bdaK0ETC/thKW9WXAwawhXlBAWUIowpMeOQNGQiM=" parent-hash "kv3v2+rnq9QvYI3/HbJ1EV9vdujZ0aVCQ/CGBYIkEB0=" via-port "2-1.5" with-interface { ff:ff:00 08:06:50 }

So we can allow it by:

sudo usbguard allow-device 17

then

sudo usbguard list-devices | grep BLN-L21

we can verify that it is now allowed:

17: allow id 12d1:107e serial "7BQDU17308005969" name "BLN-L21" hash "qq1bdaK0ETC/thKW9WXAwawhXlBAWUIowpMeOQNGQiM=" parent-hash "kv3v2+rnq9QvYI3/HbJ1EV9vdujZ0aVCQ/CGBYIkEB0=" via-port "2-1.5" with-interface { ff:ff:00 08:06:50 }

 

Block USB on screen lock

When you (or someone else) insert a new USB device, the default policy is:

sudo usbguard get-parameter InsertedDevicePolicy
apply-policy

That is, the default policy we already have is applied. There is a way to block or reject any new USB device while your screen locker is active, as a newly inserted device may be a potential attack on your system. In theory, you insert USB devices while you are working on your system, not while your screen is locked.
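
These defaults live in the daemon configuration. A sketch of the relevant usbguard-daemon.conf parameters (paths and values may differ between distributions):

# /etc/usbguard/usbguard-daemon.conf (excerpt)
RuleFile=/etc/usbguard/rules.conf
# target for devices that match no rule: allow, block or reject
ImplicitPolicyTarget=block
# how to treat devices already present when the daemon starts
PresentDevicePolicy=apply-policy
# how to treat devices inserted while the daemon is running
InsertedDevicePolicy=apply-policy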

I use slock as my primary screen locker, invoked via a keyboard shortcut. So the easiest way to dynamically change the default usbguard policy is via a shell wrapper:

vim /usr/local/bin/slock
#!/bin/sh

# ebal, Sun, 13 May 2018 10:07:53 +0300
POLICY_UNLOCKED="apply-policy"
POLICY_LOCKED="reject"

# function to revert the policy
revert() {
  usbguard set-parameter InsertedDevicePolicy ${POLICY_UNLOCKED}
}

trap revert SIGHUP SIGINT SIGTERM
usbguard set-parameter InsertedDevicePolicy ${POLICY_LOCKED}

/usr/bin/slock

# shell function to revert reject policy
revert

(You can find a similar example in Red Hat’s blog post.)

Friday, 11 May 2018

CentOS Dist Upgrade

Evaggelos Balaskas - System Engineer | 14:54, Friday, 11 May 2018

Upgrading CentOS 6.x to CentOS 7.x

 

Disclaimer: Create a recent backup of the system. This is an unofficial, unsupported procedure!

 

CentOS 6

CentOS release 6.9 (Final)
Kernel 2.6.32-696.16.1.el6.x86_64 on an x86_64

centos69 login: root
Password:
Last login: Tue May  8 19:45:45 on tty1

[root@centos69 ~]# cat /etc/redhat-release
CentOS release 6.9 (Final)

 

Pre Tasks

There are some tasks you can do beforehand to prevent unwanted results, like:

  • Disable selinux
  • Remove unnecessary repositories
  • Take a recent backup!

 

CentOS Upgrade Repository

Create a new CentOS repository file:

cat > /etc/yum.repos.d/centos-upgrade.repo <<EOF
[centos-upgrade]
name=centos-upgrade
baseurl=http://dev.centos.org/centos/6/upg/x86_64/
enabled=1
gpgcheck=0
EOF

 

Install Pre-Upgrade Tool

First install the openscap version from buildlogs.centos.org:

# yum -y install https://buildlogs.centos.org/centos/6/upg/x86_64/Packages/openscap-1.0.8-1.0.1.el6.centos.x86_64.rpm

Then install the Red Hat upgrade tool and the preupgrade assistant:

# yum -y install redhat-upgrade-tool preupgrade-assistant-*

 

Import CentOS 7 PGP Key

# rpm --import http://ftp.otenet.gr/linux/centos/RPM-GPG-KEY-CentOS-7 

 

Mirror

To bypass errors like:

Downloading failed: invalid data in .treeinfo: No section: ‘checksums’

append the CentOS Vault URLs to the mirrorlist files:

 mkdir -pv /var/tmp/system-upgrade/base/ /var/tmp/system-upgrade/extras/  /var/tmp/system-upgrade/updates/

 echo http://vault.centos.org/7.0.1406/os/x86_64/       >  /var/tmp/system-upgrade/base/mirrorlist.txt
 echo http://vault.centos.org/7.0.1406/extras/x86_64/   >  /var/tmp/system-upgrade/extras/mirrorlist.txt
 echo http://vault.centos.org/7.0.1406/updates/x86_64/  >  /var/tmp/system-upgrade/updates/mirrorlist.txt 

These are enough to upgrade to 7.0.1406. You can add the mirrors below to upgrade to 7.5.1804:

More Mirrors

 echo http://ftp.otenet.gr/linux/centos/7.5.1804/os/x86_64/  >>  /var/tmp/system-upgrade/base/mirrorlist.txt
 echo http://mirror.centos.org/centos/7/os/x86_64/           >>  /var/tmp/system-upgrade/base/mirrorlist.txt 

 echo http://ftp.otenet.gr/linux/centos/7.5.1804/extras/x86_64/ >>  /var/tmp/system-upgrade/extras/mirrorlist.txt
 echo http://mirror.centos.org/centos/7/extras/x86_64/          >>  /var/tmp/system-upgrade/extras/mirrorlist.txt 

 echo http://ftp.otenet.gr/linux/centos/7.5.1804/updates/x86_64/  >>  /var/tmp/system-upgrade/updates/mirrorlist.txt
 echo http://mirror.centos.org/centos/7/updates/x86_64/           >>  /var/tmp/system-upgrade/updates/mirrorlist.txt 

 

Pre-Upgrade

preupg is actually a Python script!

# yes | preupg -v 
Preupg tool doesn't do the actual upgrade.
Please ensure you have backed up your system and/or data in the event of a failed upgrade
 that would require a full re-install of the system from installation media.
Do you want to continue? y/n
Gathering logs used by preupgrade assistant:
All installed packages : 01/11 ...finished (time 00:00s)
All changed files      : 02/11 ...finished (time 00:18s)
Changed config files   : 03/11 ...finished (time 00:00s)
All users              : 04/11 ...finished (time 00:00s)
All groups             : 05/11 ...finished (time 00:00s)
Service statuses       : 06/11 ...finished (time 00:00s)
All installed files    : 07/11 ...finished (time 00:01s)
All local files        : 08/11 ...finished (time 00:01s)
All executable files   : 09/11 ...finished (time 00:01s)
RedHat signed packages : 10/11 ...finished (time 00:00s)
CentOS signed packages : 11/11 ...finished (time 00:00s)
Assessment of the system, running checks / SCE scripts:
001/096 ...done    (Configuration Files to Review)
002/096 ...done    (File Lists for Manual Migration)
003/096 ...done    (Bacula Backup Software)
...
./result.html
/bin/tar: .: file changed as we read it
Tarball with results is stored here /root/preupgrade-results/preupg_results-180508202952.tar.gz .
The latest assessment is stored in directory /root/preupgrade .
Summary information:
We found some potential in-place upgrade risks.
Read the file /root/preupgrade/result.html for more details.
Upload results to UI by command:
e.g. preupg -u http://127.0.0.1:8099/submit/ -r /root/preupgrade-results/preupg_results-*.tar.gz .

This must finish without any errors.

 

CentOS Upgrade Tool

We need to find out what the possible problems are when upgrading:

# centos-upgrade-tool-cli --network=7 \
          --instrepo=http://vault.centos.org/7.0.1406/os/x86_64/

 

Then we can force the upgrade to its latest version:

# centos-upgrade-tool-cli --force --network=7 \
          --instrepo=http://vault.centos.org/7.0.1406/os/x86_64/ \
          --cleanup-post

 

Output

setting up repos...
base                                                          | 3.6 kB     00:00
base/primary_db                                               | 4.9 MB     00:04
centos-upgrade                                                | 1.9 kB     00:00
centos-upgrade/primary_db                                     |  14 kB     00:00
cmdline-instrepo                                              | 3.6 kB     00:00
cmdline-instrepo/primary_db                                   | 4.9 MB     00:03
epel/metalink                                                 |  14 kB     00:00
epel                                                          | 4.7 kB     00:00
epel                                                          | 4.7 kB     00:00
epel/primary_db                                               | 6.0 MB     00:04
extras                                                        | 3.6 kB     00:00
extras/primary_db                                             | 4.9 MB     00:04
mariadb                                                       | 2.9 kB     00:00
mariadb/primary_db                                            |  33 kB     00:00
remi-php56                                                    | 2.9 kB     00:00
remi-php56/primary_db                                         | 229 kB     00:00
remi-safe                                                     | 2.9 kB     00:00
remi-safe/primary_db                                          | 950 kB     00:00
updates                                                       | 3.6 kB     00:00
updates/primary_db                                            | 4.9 MB     00:04
.treeinfo                                                     | 1.1 kB     00:00
getting boot images...
vmlinuz-redhat-upgrade-tool                                   | 4.7 MB     00:03
initramfs-redhat-upgrade-tool.img                             |  32 MB     00:24
setting up update...
finding updates 100% [=========================================================]
(1/323): MariaDB-10.2.14-centos6-x86_64-client.rpm            |  48 MB     00:38
(2/323): MariaDB-10.2.14-centos6-x86_64-common.rpm            | 154 kB     00:00
(3/323): MariaDB-10.2.14-centos6-x86_64-compat.rpm            | 4.0 MB     00:03
(4/323): MariaDB-10.2.14-centos6-x86_64-server.rpm            | 109 MB     01:26
(5/323): acl-2.2.51-12.el7.x86_64.rpm                         |  81 kB     00:00
(6/323): apr-1.4.8-3.el7.x86_64.rpm                           | 103 kB     00:00
(7/323): apr-util-1.5.2-6.el7.x86_64.rpm                      |  92 kB     00:00
(8/323): apr-util-ldap-1.5.2-6.el7.x86_64.rpm                 |  19 kB     00:00
(9/323): attr-2.4.46-12.el7.x86_64.rpm                        |  66 kB     00:00
...
(320/323): yum-plugin-fastestmirror-1.1.31-24.el7.noarch.rpm  |  28 kB     00:00
(321/323): yum-utils-1.1.31-24.el7.noarch.rpm                 | 111 kB     00:00
(322/323): zlib-1.2.7-13.el7.x86_64.rpm                       |  89 kB     00:00
(323/323): zlib-devel-1.2.7-13.el7.x86_64.rpm                 |  49 kB     00:00
testing upgrade transaction
rpm transaction 100% [=========================================================]
rpm install 100% [=============================================================]
setting up system for upgrade
Finished. Reboot to start upgrade.

 

Reboot

The upgrade procedure will download all rpm packages to a directory and create a new grub entry. Then, on reboot, the system will try to upgrade the distribution release to its latest version.

# reboot 

 

Upgrade

(screenshots of the upgrade process: centos6_7upgr.png, centos6_7upgr_b.png, centos6_7upgr_c.png)

CentOS 7

CentOS Linux 7 (Core)
Kernel 3.10.0-123.20.1.el7.x86_64 on an x86_64

centos69 login: root
Password:
Last login: Fri May 11 15:42:30 on ttyS0

[root@centos69 ~]# cat /etc/redhat-release
CentOS Linux release 7.0.1406 (Core)

 

Tag(s): centos, centos7

Wednesday, 09 May 2018

Summer of Code: Small steps

vanitasvitae's blog » englisch | 15:18, Wednesday, 09 May 2018

Yesterday I got my first results encrypting and decrypting OpenPGP messages using PGPainless, my fork of bouncy-gpg. There were some interesting hurdles that I want to discuss though.

GnuPG

As a first step towards working encryption and decryption, I obviously needed to create some PGP keys for testing purposes. As a regular user of OpenPGP I knew how to create keys using the command line tool GnuPG, so I started the key creation by typing “gpg --generate-key”. I chose the key type to be RSA with a length of 2048 bits, as those settings are also the defaults recommended by GnuPG itself. When it came to entering user ID information though, things got a little more complicated. GnuPG asks for the name of the user, their email address and a comment. XEP-0373 states that the user ID packet of a PGP key MUST be of the format “xmpp:juliet@capulet.lit”. The first thing I had to figure out was whether I should enter that string as the name, the email or the comment. I first tried the name, upon which GnuPG complained that neither the name nor the comment is allowed to contain an email address. Logically, my next step was to enter the string as the user’s email address. Again GnuPG complained, this time stating that “xmpp:juliet@capulet.lit” was not a valid email address. So I got stuck.

Luckily I knew that Philipp Hörist was working on an experimental OX plugin for Gajim. He pointed me to GnuPG’s “Unattended Key Generation”, which reads input values from a batch file. This input is not validated by GnuPG as strictly as when using the wizard, so I was able to successfully create my testing keys. Big thanks for the help :)
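
A minimal sketch of such a batch file (the exact fields may need tweaking; this only illustrates the mechanism):

%no-protection
Key-Type: RSA
Key-Length: 2048
Name-Real: xmpp:juliet@capulet.lit
Expire-Date: 0
%commit

It would then be fed to GnuPG with something like “gpg --batch --generate-key juliet.batch”.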

Update: Apparently GnuPG provides an additional flag “--allow-freeform-uid”, which does exactly that: allowing uids of any form. Using that flag allows easy generation and editing of keys with freely chosen uids. Thanks to Wiktor from the Conversations.im chat room :)

Bouncy-gpg

As a next step, I wrote a little JUnit test which signs and encrypts a little piece of text, followed by decryption and signature validation. Here I came across my next problem.

Bouncy-gpg provides a class called Rfc4880KeySelectionStrategy, which is used to select keys from the user’s keyring following a certain strategy. In my testing code, I created two keyrings for Romeo and Juliet and added their respective public and private keys, just like you would do in a real life scenario. The issue I then encountered was that when I tried to encrypt my message from Juliet to Romeo’s public key, I got the error that “no suitable key was found”. How could that be? I did some more debugging and was able to verify that the keys were in fact added to the keyrings just as I intended.

To explain the cause of this issue, I have to explain in a little more depth how the user ID field is formatted in OpenPGP.
RFC 4880 states the following:

A User ID packet consists of UTF-8 text that is intended to represent
the name and email address of the key holder.  By convention, it
includes an RFC 2822 [RFC2822] mail name-addr, but there are no
restrictions on its content.

A “mail name-addr” follows this format: “Juliet Capulet (The Juliet from Shakespeare’s play) <juliet@capulet.lit>”.
First comes the name of the key owner, followed by a comment in parentheses, followed by the email address in angle brackets. The use of brackets makes it unambiguous which value is which, so all values are optional.
“<juliet@capulet.lit>” would still be a valid user ID, for example.

So what was the problem?
The user ID of my testing key looked like this: “xmpp:juliet@capulet.lit”. Note that there are no angle brackets or parentheses around the text, so the string is interpreted as a name.
Bouncy-gpg’s Rfc4880KeySelectionStrategy however contained some code which checks whether the query the user entered to search for a key is enclosed in angle brackets, following the email address format. In case it isn’t, the code adds the angle brackets prior to executing the search. So instead of searching for “xmpp:juliet@capulet.lit”, the selection strategy would look for keys with the user ID “<xmpp:juliet@capulet.lit>”.
My solution to the problem was to create my own KeySelectionStrategy, which leaves the query as-is, so that it matches my key’s user ID. Figuring that out took me quite a while :D

Conclusions

So what conclusions can I draw from my experiences?
First of all, I'm not sure if it is a good idea to give the user the ability to import their own PGP keys. GnuPG's behaviour of forbidding user ids which don't follow the mail name-addr format will make it very hard for a user to create a key with a valid user id. (Update: Users can use the flag “--allow-freeform-uid” to generate new keys and edit existing ones with unconventional uids.) Philipp Hörist suggested that implementations of XEP-0373 should instead create a key for the user on first use, and I think I agree with him. As a logical next step I have to figure out how to create PGP keys using Bouncycastle :D

I hope you liked my little update post, which grew longer than I expected :D

Happy Hacking!

Tuesday, 08 May 2018

Summer of Code: The plan. Act 1: OpenPGP

vanitasvitae's blog » englisch | 08:05, Tuesday, 08 May 2018

OpenPGP

OpenPGP (specified in RFC4880) defines a format for encrypted and signed data, as well as encryption keys and signatures.

My main problem with the specification is that it is very noisy. The document is 90 pages long and describes every aspect an implementer needs to know about, from how big numbers are stored, through which magic bits and bytes are used to mark special regions in a packet, to recommendations about algorithms. Since I’m not going to write a crypto library from scratch, the first step I have to take is to identify which parts are important for me as a user of a – let's call it mid-level API – and which parts I can ignore. You can see this posting as a kind of – hopefully somewhat entertaining – jotting paper which I use to note down important parts of the spec while I go through the document.

Let's start by creating a short TL;DR of the OpenPGP specification.
The basic process of creating an encrypted message is as follows:

  • The sender provides a plaintext message
  • That message gets encrypted with a randomly generated symmetric key (called session key)
  • The session key then gets encrypted for each recipient's public key and the resulting block of data gets prepended to the previously encrypted message

As you can see, an OpenPGP message consists of multiple parts. Those parts are called packets. There is a pretty respectable number of packet types specified in the RFC. Many of them are not very interesting, so let's identify the few which are relevant for our project.

  • Public-Key Encrypted Session Key Packets
    Those packets represent a session key encrypted with the public key of a recipient.
  • Signature Packets
    Digital signatures are used to provide authenticity. If a piece of data is signed using the secret key of the sender, the recipient is able to verify its origin and authenticity. There is a whole load of different signature sub-packets, so for now we just acknowledge their existence without going into too much detail.
  • Compressed Data Packets
    OpenPGP provides the feature of compressing plaintext data prior to encrypting it. This might come in handy, since encrypting files or messages adds quite a bit of overhead. Compressing the original data can compensate for that effect a little bit.
  • Symmetrically Encrypted Data Packets
    This packet type represents data which has been encrypted using a symmetric key (in our case the session key).
  • Literal Data Packets
    The original message we want to encrypt is referred to as literal data. The literal data packet consists of metadata, like the encoding of the original message or the filename in case we encrypt a file, as well as – of course – the data itself.
  • ASCII Armor (not really a Packet)
    Encrypted data is represented in binary form. Since one big use case of OpenPGP encryption is email messaging though, it is necessary to bring the data into a form which can be transported safely. The ASCII Armor is an additional layer which encodes the binary data using Base64. It also makes the data identifiable for humans by adding a readable header and footer. XEP-0373 forbids the use of ASCII Armor though, so lets focus on other things instead :D

Those packet types can be nested, as well as concatenated in many different ways. For example, a common constellation would consist of a Literal Data Packet of our original message, which is, along with a Signature Packet, contained inside of a Compressed Data Packet to save some space. The Compressed Data Packet is nested inside of a Symmetrically Encrypted Data Packet, which lives inside of an OpenPGP message along with one or more Public-Key Encrypted Session Key Packets.
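
Sketched as a tree, that constellation looks roughly like this (an illustration of the nesting, not a normative layout):

OpenPGP message
├── Public-Key Encrypted Session Key Packet (one per recipient)
└── Symmetrically Encrypted Data Packet
    └── Compressed Data Packet
        ├── Literal Data Packet (the original message)
        └── Signature Packet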

Each packet carries additional information, for example which compression algorithm is used in the Compressed Data Packet. We will not focus on those details, as we assume that the libraries we use will already handle those specifics for us.

OpenPGP also specifies a way to store and exchange keys. In order to be able to receive encrypted messages, a user must distribute their keys to other users. A key can carry a lot of additional information, like identities and signatures of other keys. Signatures are used to create trust networks like the web of trust, but we will most likely not dive deeper into that.

Signatures on keys can also be used to create key hierarchies like super keys and sub-keys. It still has to be determined whether and how those patterns will be reflected in my code. I can imagine it would be useful to only have sub-keys on mobile devices, while the main super key is hidden away from the orcs in a bunker somewhere, but I also think that it would be a rather complicated task to add support for sub-keys to my project. We will see ;)

That’s it for Part 1 of my sighting of the OpenPGP RFC.

Happy Hacking!

Monday, 07 May 2018

Converting an existing installation to LUKS using luksipc - 2018 notes

Losca | 11:08, Monday, 07 May 2018

Time for a laptop upgrade. Encryption was still not the default for the new Dell XPS 13 Developer Edition (9370) that shipped with Ubuntu 16.04 LTS, so I followed my own notes from 3 years ago together with the official documentation to convert the unencrypted OEM Ubuntu installation to LUKS during the weekend. This only took under 1h altogether.

On this new laptop model, EFI boot was already in use, Secure Boot was enabled and the SSD had GPT from the beginning. Thus the only thing I wanted to change was for / to be encrypted.

Some notes for 2018 to clarify what is needed and what is not needed:
  • Before luksipc, remember to resize existing partitions to have 10 MB of free space at the end of the / partition, and also create a new partition of e.g. 1 GB for /boot.
  • To get the code and compile luksipc on an Ubuntu 16.04.4 LTS live USB, just apt install git build-essential is needed. The cryptsetup package is already installed.
  • After luksipc finishes and you've added your own passphrase and removed the initial key (slot 0), it's useful to cryptsetup luksOpen it and mount it still under the live session - however, when using ext4, the mounting fails due to a size mismatch in ext4 metadata! This is simple to correct: sudo resize2fs /dev/mapper/root. Nothing else is needed.
  • I mounted both the newly encrypted volume (to /mnt) and the new /boot volume (to /mnt2, which I created), and moved /boot/* from the former to the latter.
  • I edited /etc/fstab of the encrypted volume to add the /boot partition
  • Performed the following mounts inside /mnt:
    • mount -o bind /dev dev
    • mount -o bind /sys sys
    • mount -t proc proc proc
  • Then:
    • chroot /mnt
    • mount -a # (to mount /boot and /boot/efi)
    • Edited files /etc/crypttab (added one line: root UUID none luks) and /etc/default/grub (I copied over my overkill configuration that specifies all of cryptopts and cryptdevice, some of which may be obsolete, but at least one of them plus root=/dev/mapper/root is probably needed).
    • Ran grub-install ; update-grub ; mkinitramfs -k all -c (notably no other parameters were needed)
    • Rebooted.
  • What I did not need to do:
    • Modify anything in /etc/initramfs-tools.
If the passphrase prompt shows on your next boot but your correct passphrase isn't accepted, it's likely that the initramfs wasn't properly updated yet. I first forgot to run the mkinitramfs command and ran into exactly this.
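
For reference, the relevant lines in the two edited files end up looking roughly like this (a sketch with placeholder UUIDs - use the values blkid reports for your partitions, and note that crypttab wants the UUID of the LUKS partition, not of the filesystem inside it):

# /etc/crypttab
root UUID=<uuid-of-the-luks-partition> none luks

# /etc/fstab
/dev/mapper/root                    /      ext4  errors=remount-ro  0  1
UUID=<uuid-of-the-boot-filesystem>  /boot  ext4  defaults           0  2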

Powering a ham radio transmitter

DanielPocock.com - fsfe | 08:04, Monday, 07 May 2018

Last week I announced the crowdfunding campaign to help run a ham radio station at OSCAL. Thanks to all those people who already donated or expressed interest in volunteering.

Modern electronics are very compact and most of what I need to run the station can be transported in my hand luggage. The two big challenges are power supplies and antenna masts. In this blog post there are more details about the former.

Here is a picture of all the equipment I hope to use:

The laptop is able to detect incoming signals using the RTL-SDR dongle and up-converter. After finding a signal, we can then put the frequency into the radio transmitter (in the middle of the table), switch the antenna from the SDR to the radio and talk to the other station.

The RTL-SDR and up-converter run on USB power and a phone charger. The transmitter, however, needs about 22A at 12V DC. This typically means getting a large linear power supply or a large battery.
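
To put that in perspective (assuming the LA60 battery shown below is rated at about 60 Ah):

22 A x 12 V  = 264 W drawn while transmitting
60 Ah / 22 A = roughly 2.7 hours of continuous transmission

In practice receive current is far lower and the transmit duty cycle is well under 100%, but deep discharges also shorten AGM battery life, so a battery of this size is not oversized for a full day of demonstrations.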

In the photo, I've got a Varta LA60 AGM battery, here is a close up:

There are many ways to connect to a large battery. For example, it is possible to use terminals like these with holes in them for the 8 awg wire or to crimp ring terminals onto a wire and screw the ring onto any regular battery terminal. The type of terminal with these extra holes in it is typically sold for car audio purposes. In the photo, the wire is 10 awg superflex. There is a blade fuse along the wire and the other end has a PowerPole 45 plug. You can easily make cables like this yourself with a PowerPole crimping tool; everything can be purchased online from sites like eBay.

The wire from the battery goes into a fused power distributor with six PowerPole outlets for connecting the transmitter and other small devices, for example, the lamp in the ATU or charging the handheld:

The AGM battery in the photo weighs about 18kg and is unlikely to be accepted in my luggage, hence the crowdfunding campaign to help buy one for the local community. For many of the young people and students in the Balkans, the price of one of the larger AGM batteries is equivalent to about one month of their income so nobody there is going to buy one on their own. Please consider making a small donation if you would like to help as it won't be possible to run demonstrations like this without power.

Friday, 04 May 2018

GoFundMe: errors and bait-and-switch

DanielPocock.com - fsfe | 09:34, Friday, 04 May 2018

Yesterday I set up a crowdfunding campaign to purchase some equipment for the ham radio demo at OSCAL.

It was the first time I tried crowdfunding and the financial goal didn't seem very big (a good quality AGM battery might only need EUR 250) so I only spent a little time looking at some of the common crowdfunding sites and decided to try GoFundMe.

While the campaign setup process initially appeared quite easy, it quickly ran into trouble after the first donation came in. As I started setting up bank account details to receive the money, errors started appearing:

I tried to contact support and filled in the form, typing a message about the problem. Instead of sending my message to support, it started trying to show me long lists of useless documents. Finally, after clicking through several screens of unrelated nonsense, another contact form appeared and the message I had originally typed had been lost in their broken help system and I had to type another one. It makes you wonder, if you can't even rely on a message you type in the contact form being transmitted accurately, how can you rely on them to forward the money accurately?

When I finally got a reply from their support department, it smelled more like a phishing attack, asking me to give them more personal information and email them a high resolution image of my passport.

If that was really necessary, why didn't they ask for it before the campaign went live? I felt like they were sucking people in to get money from their friends and then, after the campaign gains momentum, holding those beneficiaries to ransom and expecting them to grovel for the money.

When a business plays bait-and-switch like this and when their web site appears to be broken in more ways than one (both the errors and the broken contact form), I want nothing to do with them. I removed the GoFundMe links from my blog post and replaced them with direct links to Paypal. Not only does this mean I avoid the absurdity of emailing copies of my passport, but it also cuts out the five percent fee charged by GoFundMe, so more money reaches the intended purpose.

Another observation about this experience is the way GoFundMe encourages people to share the link to their own page about the campaign and not the link to the blog post. Fortunately in most communication I had with people about the campaign I gave them a direct link to my blog post and this makes it easier for me to change the provider handling the money by simply removing links from my blog to GoFundMe.

While the funding goal hasn't been reached yet, my other goal, learning a little bit about the workings of crowdfunding sites, has been helped along by this experience. Before trying to run something like this again I'll look a little harder for a self-hosted solution that I can fully run through my blog.

I've told GoFundMe to immediately refund all money collected through their site so donors can send money directly through the Paypal donate link on my blog. If you would like to see the ham radio station go ahead at OSCAL, please donate, I can't take my own batteries with me by air.

Thursday, 03 May 2018

Turning a dictator's pyramid into a ham radio station

DanielPocock.com - fsfe | 19:44, Thursday, 03 May 2018

(Update: due to concerns about GoFundMe, I changed the links in this blog post so people can donate directly through PayPal. Anybody who tried to donate through GoFundMe should be receiving a refund.)

I've launched a crowdfunding campaign to help get more equipment for a bigger and better ham radio demo at OSCAL (19-20 May, Tirana). Please donate if you would like to see this go ahead. Just EUR 250 would help buy a nice AGM battery - if 25 people donate EUR 10 each, we can buy one of those.

You can help turn the pyramid of Albania's former communist dictator into a ham radio station for OSCAL 2018 on 19-20 May 2018. This will be a prominent demonstration of ham radio in the city center of Tirana, Albania.

Under the rule of Enver Hoxha, Albanians were isolated from the outside world and used secret antennas to receive banned television transmissions from Italy. Now we have the opportunity to run a ham station and communicate with the whole world from the very pyramid where Hoxha intended to be buried after his death.

Donations will help buy ham and SDR equipment for communities in Albania and Kosovo and assist hams from neighbouring countries to visit the conference. We would like to purchase deep-cycle batteries, 3-stage chargers, 50 ohm coaxial cable, QSL cards, PowerPole connectors, RTL-SDR dongles, up-convertors (Ham-it-up), baluns, egg insulators and portable masts for mounting antennas at OSCAL and future events.

The station is co-ordinated by Daniel Pocock VK3TQR from the Debian Project's ham radio team.

Donations of equipment and volunteers are also very welcome. Please contact Daniel directly if you would like to participate.

Any donations in excess of requirements will be transferred to one or more of the hackerspaces, radio clubs and non-profit organizations supporting education and leadership opportunities for young people in the Balkans. Any equipment purchased will also remain in the region for community use.

Please click here to donate if you would like to help this project go ahead. Without your contribution we are not sure that we will have essential items like the deep-cycle batteries we need to run ham radio transmitters.

Summer of Code: Preparations

vanitasvitae's blog » englisch | 13:13, Thursday, 03 May 2018

During preparations for my GSoC project, I'm finding first traces left by developers who dealt with OpenPGP before. It seems that Florian was right when he noted that there is a lack of usable higher-level Java libraries, as I found out when I stumbled across this piece of code. On the other hand I also found a project which strives to simplify OpenPGP encryption as much as possible. bouncy-gpg – while apparently laying its focus on file encryption and GnuPG compatibility – looks like a promising candidate for Smack's OX module. Unfortunately its code contained some very recent Java features like stream semantics, but I managed to modify its source code to make it compatible down to Android's API level 9, the version Smack is currently targeting. My changes will eventually be upstreamed.

While my next target is now to create a very basic prototype of OX encryption, I'm also reading into OpenKeychain's OpenPGP API. It would be very nice to create a universal interface that allows for OX encryption using multiple backends – BouncyCastle on pure Java systems and SpongyCastle / OpenKeychain on Android.

During my work on OX providers for Smack, I stumbled across an interesting issue. When putting a body element as a child into a signcrypt element, the body did not include its namespace, as it is normally only used as a child of the message element. Putting it into a signcrypt element created a special edge case: when a Provider tried to parse the body element, it would falsely interpret the missing namespace as that of the parent element. Florian provided the solution to this problem by modifying the “toXML()” method of all elements to require an enclosing namespace. Now the body is able to include its namespace in the XML in case the enclosing namespace is different from “jabber:client”.
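
For context, a signcrypt content element along the lines of XEP-0373 looks roughly like this (a hand-written sketch, not actual provider output):

<signcrypt xmlns='urn:xmpp:openpgp:0'>
  <to jid='juliet@capulet.lit'/>
  <time stamp='2018-05-03T13:13:00Z'/>
  <payload>
    <body xmlns='jabber:client'>Hello!</body>
  </payload>
</signcrypt>

Without the explicit xmlns on the body, a parser would wrongly assume it lives in the urn:xmpp:openpgp:0 namespace of its parent.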

Happy Hacking!

Dynamic DNS for NSD

Michael Clemens | 02:26, Thursday, 03 May 2018

This post has been moved to my new web site: http://clemens.name/dynamic-dns-for-nsd/

Sunday, 29 April 2018

DNS RPZ with PowerDNS

Evaggelos Balaskas - System Engineer | 14:01, Sunday, 29 April 2018

Domain Name Service Response Policy Zones

from PowerDNS Recursor documentation :

Response Policy Zone is an open standard developed by Paul Vixie (ISC and Farsight) and Vernon Schryver (Rhyolite), to modify DNS responses based on a policy loaded via a zonefile.

Sometimes it is called: DNS Firewall

Reading Material

aka useful links:

Scheme

An example scheme to get a better understanding of the concept behind RPZ.

DNS RPZ

Purpose

The main purposes of implementing DNS RPZ in your DNS infrastructure are to dynamically DNS-sinkhole:

  • Malicious domains,
  • Domains covered by government regulations,
  • Domains that users must be prevented from visiting for legal reasons.

by maintaining a single RPZ zone (or many) or even getting a subscription from another cloud provider.

For SOHO environments, though, I suggest reading this blog post: Removing Ads with your PowerDNS Resolver, and customizing it to your needs.

RPZ Policies

These are the RPZ Policies we can use with PowerDNS.

  • Policy.Custom (default policy)
  • Policy.Drop
  • Policy.NXDOMAIN
  • Policy.NODATA
  • Policy.Truncate
  • Policy.NoAction

Policy.Custom:

Will return a NoError, CNAME answer with the value specified with
defcontent, when looking up the result of this CNAME, RPZ is not taken into account
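
As a sketch of how one of the other policies would be selected (untested; the rpzFile() call is introduced in the Lua configuration section below), returning NXDOMAIN for every listed domain instead of a custom CNAME would look like:

rpzFile("/etc/pdns-recursor/rpz.zone", {defpol=Policy.NXDOMAIN})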

Use Case

Modify the DNS responses for a list of domains to point to a specific sinkhole DNS record.

eg.

  thisismytestdomain.com.org ---> sinkhole.example.net.
*.thisismytestdomain.com.org ---> sinkhole.example.net.
  example.org                ---> sinkhole.example.net.
*.example.org                ---> sinkhole.example.net.
  example.net                ---> sinkhole.example.net.
*.example.net                ---> sinkhole.example.net.

DNS sinkhole record

Create an explicit record outside of the DNS RPZ scheme.

An A Resource Record in a domain zone that points to 127.0.0.1 is okay, or use an explicit hosts file that the resolver can read. In the PowerDNS Recursor, the configuration for this consists of these two lines:

etc-hosts-file=/etc/pdns-recursor/hosts.blocked
export-etc-hosts=on

then

$ echo "127.0.0.5 sinkhole.example.net" >> /etc/pdns-recursor/hosts.blocked

and reload the service.

rpz.zone

RPZ functionality is set by reading a BIND DNS zone file, so create a simple file:

/etc/pdns-recursor/rpz.zone

; Time To Live
$TTL 86400

; Start Of Authority
@       IN  SOA authns.localhost. hostmaster. 2018042901 14400 7200 1209600 86400

; Declare Name Server
@                    IN  NS      authns.localhost.

Lua

RPZ support configuration is done via our Lua configuration mechanism

In the pdns-recursor configuration file: /etc/pdns-recursor/recursor.conf we need to declare a lua configuration file:

lua-config-file=/etc/pdns-recursor/rpz.lua

Lua-RPZ Configuration file

This Lua file points to the rpz.zone file. In this example, we will use Policy.Custom to answer every query for a listed domain with our default content: sinkhole.example.net

/etc/pdns-recursor/rpz.lua

rpzFile("/etc/pdns-recursor/rpz.zone", {defpol=Policy.Custom, defcontent="sinkhole.example.net."})

Restart PowerDNS Recursor

Now restart the PowerDNS Recursor

# systemctl restart pdns-recursor

or

# service pdns-recursor restart

and watch for any error log.

Domains to sinkhole

Append to rpz.zone all the domains you need to sinkhole. With defcontent="sinkhole.example.net." the record content of the zone is ignored, but the records must still be valid, or else pdns-recursor will not read the RPZ BIND zone file.

; Time To Live
$TTL 86400

; Start Of Authority
@   IN  SOA authns.localhost. hostmaster. 2018042901 14400 7200 1209600 86400

; Declare Name Server
@                    IN  NS      authns.localhost.

; Domains to sinkhole
thisisatestdomain.org.  IN  CNAME    sinkhole.example.net.
*.thisisatestdomain.org. IN  CNAME    sinkhole.example.net.
example.org.            IN  CNAME    sinkhole.example.net.
*.example.org.          IN  CNAME    sinkhole.example.net.
example.net.            IN  CNAME    sinkhole.example.net.
*.example.net.          IN  CNAME    sinkhole.example.net.

When finished, you can reload the Lua configuration file that reads the rpz.zone file, without restarting the PowerDNS Recursor.

# rec_control reload-lua-config

Verify with dig

Testing the DNS results with dig:

$ dig example.net.

;; QUESTION SECTION:
;example.net.           IN  A

;; ANSWER SECTION:
example.net.        86400   IN  CNAME   sinkhole.example.net.
sinkhole.example.net.   86261   IN  A   127.0.0.5

$ dig thisisatestdomain.org

;; QUESTION SECTION:
;thisisatestdomain.org.     IN  A

;; ANSWER SECTION:
thisisatestdomain.org.  86400   IN  CNAME   sinkhole.example.net.
sinkhole.example.net.   86229   IN  A   127.0.0.5

Wildcard

test the wildcard record in rpz.zone:

$ dig example.example.net.

;; QUESTION SECTION:
;example.example.net.       IN  A

;; ANSWER SECTION:
example.example.net.    86400   IN  CNAME   sinkhole.example.net.
sinkhole.example.net.   86400   IN  A   127.0.0.5

Tag(s): dns, rpz, PowerDNS

Monday, 23 April 2018

Another Summer of Code with Smack

vanitasvitae's blog » englisch | 17:15, Monday, 23 April 2018

I’m very happy to announce that once again I will participate in the Summer of Code. Last year I worked on OMEMO encrypted Jingle Filetransfer for the XMPP client library Smack. This year, I will once again contribute to the Smack project. A big thanks goes out to Daniel Gultsch and Conversations.im, who act as an umbrella organization.

My new project will be an implementation of XEP-0373 and XEP-0374: OpenPGP for XMPP + Instant Messaging. The project's description can be found here.

Now I will enjoy my last few free days before the hard work begins. I will keep you updated :)

Happy Hacking!

Thursday, 19 April 2018

DataworksSummit Berlin - Wednesday morning

Inductive Bias | 06:50, Thursday, 19 April 2018

Data strategy - cloud strategy - business strategy: Aligning the three was one of the main themes (initially put forward in the opening keynote by Hortonworks CTO Scott Gnau) throughout this week's Dataworks Summit Berlin, kindly organised and hosted by Hortonworks. The event was attended by over 1000 people joining from 51 countries.

The inspiration that Scott put forward in the first keynote was to take a closer look at the data lifecycle - including the fact that a lot of data is being created (and made available) outside the control of those using it: Smart farming users combine weather data with information on soil conditions gathered through sensors out in the field in order to inform daily decisions. Manufacturing is moving towards closer monitoring of production lines to spot inefficiencies. Cities are starting to deploy systems that allow for better integration of public services. UX is being optimized through extensive automation.

When it comes to moving data to the cloud, the speaker gave a nice comparison: To him, explaining the difficulties that moving to the cloud brings is similar to explaining the challenges of moving "stuff" to external storage in the garage: It opens questions of "Where did I put this thing?", but also about access control and security. Much the same way, cloud and on-prem integration means that questions like encryption, authorization, user tracking and data governance need to be answered - but also questions of findability, discoverability and integration for analysis purposes.

The second keynote was given by Mandy Chessell from IBM introducing Apache Atlas for metadata integration and governance.

In the third keynote, Bernard Marr talked about the five promises of big data:

  • Informing decisions based on data: The goal here should be to move towards self service platforms to remove the "we need a data scientist for that" bottleneck. That in turn needs quite some training and hand-holding for those interested in the self-service platforms.
  • Understanding customers and customer trends better: The example given was a butcher shop that installed a mobile phone tracker in its shop window in order to see which advertisement would make more people stop by and look closer. As a side effect the owner noticed an increase in people on the street in the middle of the night (coming from pubs nearby). A decision was made to open at that time and offer what people were searching for then according to Google Trends - by now that one hour in the night makes up a sizeable portion of the shop's income. The second example given was Disney, which already watches all its park visitors through wrist bands, automating line management at popular attractions - but also deploying facial recognition to watch audiences watching shows in order to figure out how well those shows are received.
  • Improve the customer value proposition: The example given was the Royal Bank of Scotland moving closer to its clients, informing them through automated means when interest rates are dropping or when they are double-insured - thus building trust and transparency. The other example given was that of a lift company building sensors into lifts in order to be able to predict failures and repair lifts when they are least used.
  • Automate business processes: Here the example was that of a car insurer offering dynamic rates to people who let themselves be monitored while driving. Those adhering to speed limits and avoiding risky routes and times would get lower rates. Another example was that of automating the creation of sports reports, e.g. for tennis matches, based on deployed sensors, or that of automating Forbes analyst reports, some of which get published without the involvement of a journalist.
  • Last but not least the speaker mentioned the obvious business case of selling data assets - e.g. selling aggregated and refined data gathered through sensors in the field back to farmers. Another example was the automatic detection of events based on sounds - e.g. gun shots close to public squares - and selling that information to the police.


After the keynotes were over, breakout sessions started - including my talk about the Apache Way. It was good to see people show up to learn how all the open source big data projects work behind the scenes - and how they themselves can get involved in contributing to and shaping these projects. I'm looking forward to receiving pictures of feather-shaped cookies.

During lunch there was time to listen in on how Santander operations is using data analytics to drive incident detection, as well as load prediction for capacity planning.

After lunch I had time for two more talks: The first explained how to integrate Apache MXNet with Apache NiFi to bring machine learning to the edge. The second one introduced Apache Beam - an abstraction layer above Apache Flink, Spark and Google's platform.

Both scary and funny: Walking up to the Apache Beam speaker after his talk (having learnt at DataworksSummit that he is PMC Chair of Apache Beam) - only to be greeted with "I know who *you* are" before even getting to introduce oneself...

Tuesday, 17 April 2018

Extending L4Re/Fiasco.OC to the Letux 400 Notebook Computer

Paul Boddie's Free Software-related blog » English | 23:31, Tuesday, 17 April 2018

In my summary of the port of L4Re and Fiasco.OC to the Ben NanoNote, I remarked that progress had been made on supporting other products and hardware peripherals. In fact, such progress occurred more rapidly than I had thought possible, and I have been able to extend the work to support the Letux 400 notebook computer. It is perhaps worth describing the Letux 400 in a bit more detail because it has an interesting place in the history of netbook computers.

Some History

Back in the early 21st century, laptop computers were becoming increasingly popular at the expense of desktop computers, but as laptops began to take the place of desktops in homes and workplaces, this gradually led each successive generation of laptops to sacrifice portability and affordability in favour of larger, faster, higher-resolution screens and general hardware specifications more competitive with the desktop offerings they sought to replace. Laptops were becoming popular but also bigger, heavier and more expensive.

Things took an interesting turn in 2006 with the introduction of the XO-1 from the One Laptop per Child (OLPC) initiative. With rather different goals to those of the mainstream laptop vendors, the focus was to deliver a relatively-inexpensive yet robust portable computer for use by schoolchildren, many of whom might be living in places with limited infrastructure where increasingly power-hungry mainstream laptops would have been unsuitable, even unusable.

One unexpected consequence of the introduction of the XO-1 was the revival in interest in modestly-performing portable computing hardware. People were actually interested in a computer that did the things they needed, rather than having to buy something designed for gamers, software developers, or corporate “power users” (of both the pretend and genuine kinds). Rather than having to haul increasingly big and heavy laptops and all the usual accessories in a big dedicated bag, they liked the idea of slipping a smaller, lighter device into their everyday bag, as had probably been the idea with subnotebooks while they were still a thing.

Thus, the Asus Eee PC came about, regarded as the first widely-available netbook of recent times (acknowledging the earlier Psion netBook, of course), bringing with it the attention of large-volume manufacturers and economies of scale. For “lightweight tasks”, netbooks were enough for many people: a phenomenon that found itself repeating with tablets, particularly as recreational usage of technology became more important to buyers and evolved in certain ways.

Now, one thing that had been a big part of the OLPC initiative’s objectives was a $100 price point. At first, despite fairly radical techniques being used to reduce cost, and despite the involvement of a major original equipment manufacturer in the production of the XO-1, that price point of $100 was out of reach. Even the Eee PC retailed for a few hundred dollars.

This is where a product known as the Skytone Alpha 400 enters the picture. Some vendors, rebranding this product, offered it as possibly the first $100 laptop – or netbook – to be made available for sale. One of the vendors offers it as the Letux 400, and it has been available for as little as €125 during its retail lifespan. Noting that it has rather similar hardware to the Ben NanoNote, but has a more conventional physical profile and four times as much RAM, my brother bought one to investigate a few years ago. That is how I eventually ended up embarking on this experiment.

Extending Recent Work

There are many similarities between the JZ4720 system-on-a-chip (SoC) used in the Ben and the JZ4730 used in the Letux 400. However, it can be said that the JZ4720 is much better understood. The JZ4740 and closely-related devices like the JZ4720 have appeared in a number of different devices, documentation has surfaced for these products, and vendor source code has always been available, typically using or implicitly documenting most of the hardware.

In contrast, limited documentation is known to exist for the JZ4730, and the available vendor source code has not always described every detail of the hardware, even though the essential operations and register details appear to be present. Having looked at the Linux kernel sources that support the JZ4730, together with U-Boot source code, the similarities and differences between the JZ4720 and JZ4730 began to take shape in my mind.

I took an optimistic approach that mostly paid off. The Fiasco.OC kernel needs augmenting with the details of the JZ4730, but these are similar in some ways to the JZ4720 and familiar otherwise. For instance, the JZ4730 has a 32-bit “operating system timer” (OST) that curiously does not appear in the JZ4740 but does appear in more recent products such as the JZ4780. Bearing such things in mind, the timer and interrupt support was easily enough added.

One very different thing about the JZ4730 is that it does not seem to support the “set” and “clear” register locations that are probably common to most modern SoCs. Typically, one might want to update a hardware-related register to change a peripheral’s configuration, and it must have become apparent to hardware designers that such updates mostly want to either set or clear bits. Normally in a program, achieving this involves reading a value, performing a logical operation that combines the value with a description of the bits to be set or cleared, and then writing the result back to where it came from. For example:

define bits to set
load value from location (exposing a hardware register, perhaps)
logical-or value with bits
store result in location

Encapsulating this in a single instruction avoids potential issues with different things competing to update the location at the same time, if the hardware permits this, and just offers something that is more efficient and convenient, anyway. Separate locations are provided for “set” and “clear” operations, and the original location is provided to read and to overwrite the hardware register’s value. Sometimes, such registers might only support read-only access, in fact. But the JZ4730 does not support such additional locations, and so we have to do things the hard way when updating registers and doing things like clearing and setting bits.
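
In C++-like terms, the difference looks roughly like this (a schematic sketch, not code from the actual drivers):

#include <cstdint>

// JZ4740-style peripheral: a write to the dedicated "set" location
// updates the named bits without reading the register first.
void set_bits_with_set_location(volatile std::uint32_t *reg_set,
                                std::uint32_t bits)
{
    *reg_set = bits;
}

// JZ4730: only the plain register location exists, so setting bits
// means read, modify, write - and nothing stops a concurrent update
// from racing in between.
void set_bits_read_modify_write(volatile std::uint32_t *reg,
                                std::uint32_t bits)
{
    *reg = *reg | bits;
}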

One odd thing that caught me out was a strange result from the special “exception base” (EBASE) register that does not seem to return zero for the CPU identifier, something that the bootstrap code in L4Re expects. I suppressed this test and made the kernel always return zero when it asks for this identifier. To debug such things, I could not use the screen as I had done with the Ben since the bootloader does not configure it on the Letux. Fortunately, unlike the Ben, the Letux provides a few LEDs to indicate things like keyboard and network status, and these can be configured and activated to communicate simple status information.

Otherwise, the exercise mostly involved me reworking some existing code I had (itself borrowing somewhat from existing driver code) that provides driver support for the Letux hardware peripherals. The clock and power management (CPM) arrangement is familiar but different from the JZ4720; the LCD driver can actually be used as is; the general-purpose input/output (GPIO) arrangement is different from the JZ4720 and, curiously enough once again, perhaps more similar to the JZ4780 in some ways. To support the LCD panel’s backlight, a pulse-width modulation (PWM) driver needed to be added, but this involves very little code.

I also had to deal with the mistakes I made myself when not concentrating hard enough. Lots of testing and re-testing occurred. But in the space of a weekend or so, I had something to show for all the previous effort plus this round’s additional effort.

The Letux 400 and Ben NanoNote running the "spectrum" example

Here, you can see what kind of devices we are dealing with! The Letux 400 is less than half the width of a normal-size keyboard (with numeric keypad), and the Ben NanoNote is less than half the width of the Letux 400. Both of them were inexpensive computing devices when they were introduced, and although they may not be capable of running “modern” desktop environments or Web browsers, they offer computing facilities that were, once upon a time, “workstation class” in various respects. And they did, after all, run GNU/Linux when they were introduced.

And that is why it is attractive to consider running other “proper” operating system technologies on them now. Maybe we can revisit the compromises that led to the subnotebook and the netbook, perhaps even the tablet, where devices that are not the most powerful still have a place in fulfilling our computing needs.

Apache Breakfast

Inductive Bias | 07:39, Tuesday, 17 April 2018

In case you missed it but are living in Berlin - or are visiting Berlin/ Germany this week: A handful of Apache people (committers/ members) are meeting over breakfast on Friday morning this week. If you are interested in joining, please let me know (or check yourself - in the archives of the mailing list party@apache.org)

FOSS Backstage - Schedule online

Inductive Bias | 07:27, Tuesday, 17 April 2018

In January the CfP for FOSS Backstage opened. By now reviews have been done, speakers notified and a schedule created.

I'm delighted to find both a lot of friends from the Apache Software Foundation and a great many speakers who aren't affiliated with the ASF on the schedule.

If you want to know how Open Source really works, if you want to get a glimpse behind the stage, don't wait too long - grab your ticket now and join us this summer in Berlin, Germany.

If project management is only part of your interests, we have you covered as well: For those interested in storing, searching and scaling data analysis, Berlin Buzzwords is scheduled to take place in the same week. For those interested in Tomcat, httpd, cloud and IoT, the Apache Roadshow is scheduled to happen on the same days as FOSS Backstage - and your FOSS Backstage ticket grants you access to the Apache Roadshow as well.

If you're still not convinced - head over to the conference website and check out the talks available yourself.

Monday, 16 April 2018

Porting L4Re and Fiasco.OC to the Ben NanoNote (Summary)

Paul Boddie's Free Software-related blog » English | 20:36, Monday, 16 April 2018

As promised, here is a summary of the work involved in porting L4Re and Fiasco.OC to the Ben NanoNote. First of all, a list of all the articles with some brief descriptions of what they cover:

  1. Familiarisation with L4Re and Fiasco.OC on the MIPS Creator CI20, adding some missing pieces
  2. Setting up and introducing a suitable compiler for the Ben, also describing the hardware in the kernel
  3. Handling instructions unsupported by the JZ4720 (the Ben’s SoC) in the kernel
  4. Describing the Ben and dealing with unsupported instructions in the L4Re portion of the system
  5. Configuring the memory layout and attempting to bootstrap the kernel
  6. Making the kernel support the MIPS architecture revision used by the JZ4720, also fixing the interrupt system description
  7. Investigating context/thread switching and fixing an inadvertently-introduced fault in the unsupported instruction handling
  8. Configuring user space examples and getting a simple framebuffer demonstration working
  9. Getting the framebuffer driver, GUI multiplexer, and “spectrum” example working

As I may have noted a few times in the articles, this work just builds on previous work done by a number of people over the years, obviously starting with the whole L4 microkernel effort, the development of Fiasco.OC, L4Re and their predecessors, and the work done to port these components to the MIPS architecture. On the l4-hackers mailing list, Adam Lackorzynski was particularly helpful when I ran into obstacles, and Sarah Hoffman provided some insight into problems with the CI20 just as it was needed.

You really don’t have to read all the articles or even any of them! The point of this article is to summarise the work and perhaps make similar porting efforts a bit more approachable for others in the same position: anyone having a vague level of familiarity with L4Re/Fiasco.OC or similar systems, also having a device that might be supported, and being somewhat familiar with writing code that drives hardware.

Practical Details

It might be useful to give certain practical details here, if only to indicate the nature of the development and testing routine employed in this endeavour. First of all, I have been using a chroot containing the Debian “unstable” distribution for the i386 architecture. Although this was essential for a time when building the software for the CI20 and trying to take advantage of Debian’s cross-compiler packages, any fairly recent version of Debian would probably be fine because I ended up using a Buildroot toolchain to be able to target the Ben. You could probably choose any Free Software distribution and reproduce what I have done.

The distribution of patches contains instructions regarding preparation and the building of the software. It isn’t too useful to repeat that information here, but the following things need doing:

  1. Installing packages for build tools
  2. Obtaining or building a cross-compiler
  3. Checking out the source code for L4Re and Fiasco.OC from its repository
  4. Applying the patches
  5. Configuring and building the kernel
  6. Configuring and building the runtime environment
  7. Copying the payload to a memory card
  8. Booting the device

Some scripts have been included in the patch distribution, one of which should do the tricky job of applying patches to the repository checkout according to the chosen device configuration. Because a centralised version control system (Subversion) has been used to publish the L4Re and Fiasco.OC sources, I had to find a way of working with my own local changes. Consequently, I wrote a few scripts to maintain bundles of changes associated with certain files, and I then managed these bundles in a different version control system. Yes, this effectively meant versioning the changes themselves!

Things would be simpler with a decentralised version control system because local commits would be convenient, and upstream updates would be incorporated into the repository separately and merged with local changes in a controlled fashion. One of the corporate participants has made a Git repository for Fiasco.OC available, which may alleviate some issues, although I am increasingly finding larger Git repositories to be unusable on my modest hardware, and I also tend to disagree with everybody deciding to put everything on GitHub.

Fixing and Building

Needing to repeatedly build, test, go back and fix, I found myself issuing the same command sequences a lot. When working with the kernel, I tended to enter the kernel build directory, which I called “mybuild”, edit the kernel sources, and then re-run the make command:

cd mybuild
vi ../src/kern/mips/exception.S # edit a familiar file with vim
make

Having built a new kernel, I would then need to build a new payload to deploy, which meant ascending the directory hierarchy and building an image in the L4Re section of the sources:

cd ../../../l4
make O=mybuild uimage E=mips-qi_lb60-spectrum-example

Given a previously-built “user space”, this would bundle the new kernel together with code that might be able to test it. Of particular importance is the bootstrap code which launches the kernel: without that, there is no point in even trying to test the kernel!

I found that re-building L4Re components seemed to require a general build to be performed:

make O=mybuild

If that proved successful, an image would then be built and tested. In general, focusing on either the kernel or some user space component meant that there was rarely a need to build a new kernel and then build much of the user space.

Work Summary

The patches accumulated during this process cover a range of different areas of functionality. Looking at them organised by functional area, instead of in the more haphazard fashion presented throughout the series of articles, allows for a more convenient review of the work actually needed to get the job done.

Build System Adjustments and Various Fixes

As early as my experiments with the CI20, I experienced the need to fix some things that didn’t work on my system, either due to some Debian peculiarities or differences in compiler behaviour:

  • l4util-mips-thread.diff (fixes a symbol visibility issue with certain compiler versions)
  • mips-gcc-cpload.diff (fixes the initialisation of certain L4Re components)
  • no-at.diff (allows the build to work on Debian for the i386 architecture)

Other adjustments are required to let the build system do its job, setting paths for other components and for the toolchains:

  • conf-makeconf-boot.diff (lets the L4Re build system find things like the kernel, modules and hardware descriptions)
  • qi_lb60-gcc-buildroot-fiasco.diff (changes the compiler and architecture settings)
  • qi_lb60-gcc-buildroot-l4re.diff (changes the compiler, architecture and soft-float settings)

The build system also needs directing towards new drivers, and various files need to be excluded or changed:

  • ingenic-mips-drivers-top.diff (enables drivers added by this work)
  • qi_lb60-fbdrv.diff (changes the splash image for the framebuffer driver)
  • qi_lb60-l4re.diff (includes a temporary fix disabling a Mag plugin)

The first of these is important to remember when adding drivers since it changes the l4/pkg/drivers/Control file and defines the driver “packages” provided by each of the driver libraries. These package definitions help the build system work out which other parts of the system need to be consulted when building a particular driver.

Supporting MIPS32r1 Devices

Throughout the kernel and L4Re, changes need making to support the earlier architecture version provided by the JZ4720. The bulk of the following patch files deals with such changes:

  • qi_lb60-fiasco.diff
  • qi_lb60-l4re.diff

Maybe I will try and break out the architecture version changes into specific patch files, provided this does not result in the original source files ending up being patched by multiple patch files. My aim has been to avoid patches having to be applied in a particular order, and that starts to happen when multiple patches modify the same file.

Describing the Ben NanoNote

The kernel needs some knowledge of the Ben with regard to timers and interrupts. Meanwhile, L4Re needs to set the Ben up correctly when booting. Both sections of the system need an awareness of how memory is going to be used, and extra configuration options need to be provided to merely allow the selection of the Ben for building. Currently, the following patch files include things concerned with such matters:

  • qi_lb60-fiasco.diff (contains timer, interrupt and memory details, plus configuration system changes)
  • qi_lb60-l4re.diff (contains bootstrap and memory details, plus configuration system changes)
  • qi_lb60-platform.diff (platform definitions for the Ben in L4Re)

One significant objective here is to be able to offer the Ben as a “first class” configuration option and have the build system do the right thing, setting up all the components and code regions that the Ben needs to function.

Introducing Driver Code

To be able to activate the framebuffer on the Ben, driver code needs introducing for a few peripherals provided by the JZ4720: CPM (clock/power management), GPIO (general-purpose input/output) and LCD (liquid crystal display, or similar). A few different patch files cover these areas:

  • ingenic-mips-cpm.diff (CPM support for JZ4720 and JZ4780)
  • ingenic-mips-gpio.diff (GPIO support for JZ4720 and JZ4780)
  • qi_lb60-lcd.diff (LCD support for JZ4720)

The JZ4780 support is intended for the CI20 and will not be used with the Ben. However, it is convenient to incorporate support for these different platforms in the same patch file in each instance.

Meanwhile, the LCD driver should work with a range of JZ4700-series devices (labelled as JZ4740 in the patches). While focusing on getting things working, the only panel supported by this work was that provided by the Ben. Since then, support has been made slightly more general, just as was done with the Linux kernel support for products employing this particular SoC family and, subsequently, for panels in general. (Linux has moved towards a “device tree” approach for specifying things like panels and displays, although this is arguably just restating things that were once C-coded structures in another, rather peculiar, format.)

To support these drivers, some useful code has been copied from elsewhere in L4Re:

  • drivers_frst-register-block.diff

This provides a convenient abstraction for registers that is exposed via an include directive:

#include <l4/drivers/hw_mmio_register_block.h>

Indeed, it is worth focusing on the LCD driver briefly. The code has its origins in existing driver code written for the Ben that I adapted to get working as part of a simple “bare metal” payload. I have maintained a separation between the more intricate hardware configuration and aspects that deal with the surrounding software. As part of L4Re, the latter involves obtaining access to memory using the appropriate API calls and invoking other drivers.

In L4Re, there is a kind of framework for LCD drivers, and the existing drivers seem to be written in C rather than C++. Reminiscent of Linux, there is a mechanism for exporting driver operations using a well-defined data structure, and this permits the “probing” of drivers to see if they can be enabled and if meaningful information can be obtained about things like the supported resolution, colour depth and pixel format. To make the existing code compatible with L4Re, a fair amount of the work involves translating the information already known (and used) in the hardware configuration activity to a form that other L4Re components can understand and use.

Originally, for the GPIO driver, I had intended it to operate as part of the Io server framework. Components using GPIO functionality would then employ the appropriate API to configure and interact with the exposed input and output pins. Unfortunately, this proved rather cumbersome, and so I decided to take a simpler approach of providing the driver as an abstraction that a program would use together with explicitly-requested memory. I did decide to preserve the general form of the API for this relocated abstraction, however, meaning that various classes and methods are provided that behave in the same way as those “left behind” in the Io server framework.

Thus, a program would itself request access to the GPIO-related memory, and it would then use GPIO-related abstractions to “do the right thing” with this memory. One would envisage that such a program would not be a “normal”, unprivileged program as such, but instead be more like a server or driver in its own right. Indeed, the LCD driver employs these abstractions to use SPI-based signalling with the LCD panel, and it uses the same techniques to configure the LCD clock frequencies using the CPM-related memory and CPM-related abstractions.

Although the GPIO driver follows existing conventions, the CPM driver has no obvious precedent in L4Re, but I adopted some of the conventions employed in the GPIO driver, adding more specialised methods and functions to expose functionality specific to the SoC. Since I had previously written a CPM driver for the JZ4780, the main objective was to make the JZ4720/JZ4740 driver resemble the existing driver as much as possible.

Introducing and Configuring Example Programs

Throughout the series of articles, I was working towards running one specific example program, making some new ones on the way for testing purposes. These additional programs are provided together with their configuration, accompanied by suitable configurations for existing examples and components, by the following patch files:

  • ingenic-mips-modules.diff (example program definitions)
  • qi_lb60-examples.diff (example program implementations and configuration details)

The additional programs (defined in l4/conf/modules.list) are as follows:

  • mips-qi_lb60-lcd-example (implemented by qi_lb60_lcd, configured by the mips-qi_lb60-lcd files)
  • mips-qi_lb60-lcd-driver-example (implemented by qi_lb60_lcd_driver, configured by the mips-qi_lb60-lcd-driver files)

Configurations are provided for the existing examples and components as follows:

  • mips-qi_lb60-fbdrv-example (configured by the mips-qi_lb60-fbdrv files)
  • mips-qi_lb60-spectrum-example (configured by the mips-qi_lb60-spectrum files)

All configurations reside in the l4/conf/examples directory. All new examples reside in the l4/pkg/examples/misc directory.

Further Work

In the final article in the series, I mentioned a few ideas for further work based on that described above:

Since completing the above work, I have already made some progress on the first two of these topics. More on that in an upcoming post!

Research on the sustainability of participation in FSFE

Giacomo Poderi | 14:45, Monday, 16 April 2018

I’m a sociologist and I currently work as a researcher at IT University of Copenhagen, where I am responsible for “Infrastructuring SuStainable Playbour” (ISSP): a project I received funding for from the EU/H2020 framework, under the Marie Skłodowska-Curie Action – Individual Fellowship fund.

This project investigates the sustainability of collaborative spaces, as commons, and it focuses on participants’ continuous contribution to the maintenance and development of such ‘places’.

The research involves three case studies, and . . . → Read More: Research on the sustainability of participation in FSFE

Thursday, 12 April 2018

Akademy 2018 hotel and flight booked!

TSDgeos' blog | 22:45, Thursday, 12 April 2018

I just booked my flights and hotel for Akademy 2018.

If you're planning to come you should too! [1]

You can find information about the recommended accommodation here.

See you in Vienna!

[1] unless you're applying for sponsored travel+accommodation

Disconnecting from Facebook

Ramblings of a sysadmin (Posts about planet-fsfe) | 11:30, Thursday, 12 April 2018

About 5 months ago I moved away from Whatsapp (owned by Facebook) to Signal and today I moved away from Facebook itself. It has been on my to do list for a while already. Watching Zondag met Lubach this week gave me the final push to put my Facebook account removal at the top of my to do list. Arjen Lubach even created a Facebook event (quite funny) called "Bye Bye Facebook", which was scheduled for yesterday evening at 20.00. He stuck to his word and removed his own account. What made it funnier was that his event was not very easy to find using the search function, which usually works fine.

After a quick online search, I found the official help page "How do I permanently delete my account?". It refers to a "Download a copy of your info" page, so I did that first (since generating this download might take a while). After having received the download link via e-mail and having downloaded my information, I followed the "Let us know" link to really remove my account.

The account removal dialog:

/img/posts/2018/04/12-disconnecting-from-facebook/permanently_delete_account_dialog.thumbnail.png

And the confirmation dialog:

/img/posts/2018/04/12-disconnecting-from-facebook/permanently_delete_account_confirmation.thumbnail.png

The top of my timeline (this morning). Strangely enough my "Bye Bye Facebook" post went missing. I'll just "assume" this was because I put a link to an event in there, which is in the past...

/img/posts/2018/04/12-disconnecting-from-facebook/timeline.thumbnail.png

Now I just have to wait 2 weeks and my account should be "permanently" deleted. Disconnecting from Facebook seems like a big deal, but really isn't. I can still be reached IRL (In Real Life), by sending me an e-mail, by messaging me (using Signal) or by calling me. For reading news I still rely on my ttrss instance; luckily I've never used social media for that purpose.

If you are craving the social media experience and want privacy (yes, the two can be combined), I suggest you try (or look into) the following:

They all offer the ability to join an existing instance or create one yourself. I'll be creating several for my friends and family, if they want me to ;-)

Tip: look through the details of the 'data export'. It creeped me out quite a bit. They were nice enough to make it easily readable and even made it into a simple website (index.html).

Wednesday, 11 April 2018

Tutorial: Writing your first view from scratch (C++20 / P0789)

Posts on Hannes Hauswedell's homepage | 11:45, Wednesday, 11 April 2018

C++17 was officially released last year and the work on C++20 quickly took off. A subset of the Concepts TS was merged and the first part of the Ranges TS has been accepted, too. Currently the next part of the Ranges TS is under review: “Range Adaptors and Utilities”. It brings the much-hyped “Views” to C++, but maybe you have been using them already via the Range-V3 library? In any case you might have wondered what you need to do to actually write your own view. This is the first in a series of blog posts describing complete view implementations (not just adaptations of existing ones).

Introduction (skip this if you have used views before)

Ranges are an abstraction of “a collection of items”, or “something iterable”. The most basic definition requires only the existence of begin() and end(), their comparability and begin being incrementable, but more refined range concepts add more requirements.

The ranges most commonly known are containers, e.g. std::vector. Containers are types of ranges that own the elements in the collection, but in this blog post we are more interested in views.

What are views?

Views are ranges that usually (but not always!) perform an operation on another range. They are lazily evaluated, stateful algorithms on ranges that present their result again as a range. And they can be chained to combine different algorithms, via the | operator like on the UNIX command line.

Ok, sounds cool, what does this mean in practice?

Well, you can, for example, take a vector of ints, apply a view that computes the square of every element, and then apply a view that drops the first two elements:

std::vector<int> vec{1, 5, 6, 8, 5};
auto v = vec | view::transform([] (int const i) { return i*i; }) | view::drop(2);
std::cout << *std::begin(v) << '\n'; // prints '36'

And the point here is that only one “squaring-operation” actually happens and that it happens when we dereference the iterator, not before (because of lazy evaluation!).
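To see the laziness in action, here is a minimal self-contained sketch (assuming range-v3 is available; ranges::view::transform does the work): the print statement inside the lambda only fires when the iterator is dereferenced, not when the view is created.

#include <iostream>
#include <vector>
#include <range/v3/view/transform.hpp>

int main()
{
    std::vector<int> vec{1, 5, 6, 8, 5};

    // creating the view evaluates nothing:
    auto v = vec | ranges::view::transform([] (int const i)
    {
        std::cout << "[evaluating " << i << "] ";
        return i * i;
    });

    std::cout << "view created\n";
    std::cout << *std::begin(v) << '\n'; // only now does the lambda run (once)
    // prints: view created
    //         [evaluating 1] 1
}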

What type is v? It is some implementation-defined type that is guaranteed to satisfy certain range concepts: the View concept and the InputRange concept. The View concept has some important requirements, among them that the type is "light-weight", i.e. copyable in constant time. So while views appear like containers, they behave more like iterators.

If you are lost already, I recommend you check out some of the following resources

Prerequisites

The following sections assume you have a basic understanding of what a view does and have at least tried some of the toy examples yourself.

DISCLAIMER: Although I have been working with views and range-v3 for a while now, I am surprised by things again and again. If you think I missed something important in this article I would really appreciate feedback!

In general this post is aimed at interested intermediate C++ programmers; I try to be verbose with explanations and also provide many links for further reading.

You should have a fairly modern compiler to test the code, I test with GCC7 and Clang5 and compile with -std=c++17 -Wall -Wextra.

I refer to constraints and concepts in some of the examples. These are not crucial for the implementation, so if they are entirely unfamiliar to you, just skip over them. If, on the other hand, you use GCC, you can uncomment the respective sections and add -fconcepts to your compiler call to activate them.

While the views we are implementing are self-contained and independent of the range-v3 library, you should get it now as some of our checks and examples require it.

And you should be curious about how to make your own view, of course 😄

Adapting existing views

Our task in this post is to write a view that works on input ranges of uint64_t and always adds the number 42, i.e. we want the following to work:

int main()
{
    std::vector<uint64_t> in{1, 4, 6, 89, 56, 45, 7};

    for (auto && i : in | view::add_constant)
        std::cout << i << ' ';
    std::cout << '\n'; // should print: 43 46 48 131 98 87 49

    // combine it with other views:
    for (auto && i : in | view::add_constant | ranges::view::take(3))
        std::cout << i << ' ';
    std::cout << '\n'; // should print: 43 46 48
}

Most of the time it will be sufficient to adapt an existing view and whenever this is feasible it is of course recommended. So the recommended solution to the task is to just re-use ranges::view::transform:

#include <cstdint>
#include <iostream>
#include <vector>
#include <range/v3/view/transform.hpp>
#include <range/v3/view/take.hpp>

namespace view
{
auto const add_constant = ranges::view::transform([] (uint64_t const in)
                                                  {
                                                     return in + 42;
                                                  });
}

int main()
{
    std::vector<uint64_t> in{1, 4, 6, 89, 56, 45, 7};

    for (auto && i : in | view::add_constant)
        std::cout << i << ' ';
    std::cout << '\n'; // should print: 43 46 48 131 98 87 49

    // combine it with other views:
    for (auto && i : in | view::add_constant | ranges::view::take(3))
        std::cout << i << ' ';
    std::cout << '\n'; // should print: 43 46 48
}

As you can see, it’s very easy to adapt existing views!

But it's not always possible to re-use existing views, and the task was to get our hands dirty with writing our own view. The official manual has some notes on this. While abstractions are great for code re-use in a large library, and make the code easier to understand for those who know what lies behind them, I would argue that they can also obscure the actual implementation for developers new to the code base, who need to puzzle together the different levels of inheritance and template specialisation typical for C++ abstractions.

So in this post we will develop a view that does not depend on range-v3, especially not the internals.

The components of a view

What is commonly referred to as a view usually consists of multiple entities:

  1. the actual class (template) that meets the requirements of the View concept and at least also the InputRange concept; by convention of the range-v3 library it is called view_foo for the hypothetical view "foo".
  2. an adaptor type which overloads the () and | operators; these facilitate the "piping" capability and return an instance of 1.; by convention of range-v3 it is called foo_fn.
  3. an instance of the adaptor class that is the only user-facing part of the view; by convention of range-v3 it is called foo, in the namespace view, i.e. view::foo.

If the view you are creating is just a combination of existing views, you may not need to implement 1. or even 2., but we will go through all parts now.
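Before we dive into the real implementation, here is a structural sketch of how the three pieces relate, using a hypothetical do-nothing view "foo" (grossly simplified: it stores a raw pointer and only accepts lvalue ranges; the real implementation below does better):

#include <iostream>
#include <iterator>
#include <vector>

// 1. the actual view class template (simplified placeholder state)
template <typename urng_t>
class view_foo
{
private:
    urng_t * urange = nullptr;
public:
    view_foo() = default;
    explicit view_foo(urng_t & u) : urange{&u} {}
    auto begin() const { return std::begin(*urange); }
    auto end()   const { return std::end(*urange); }
};

// 2. the adaptor type providing function-style and pipe-style invocation
struct foo_fn
{
    template <typename urng_t>
    auto operator()(urng_t & urange) const { return view_foo<urng_t>{urange}; }

    template <typename urng_t>
    friend auto operator|(urng_t & urange, foo_fn const &) { return view_foo<urng_t>{urange}; }
};

// 3. the single user-facing instance
namespace view { foo_fn constexpr foo; }

int main()
{
    std::vector<int> vec{1, 2, 3};
    for (int i : vec | view::foo)  // pipe style
        std::cout << i << ' ';
    std::cout << '\n';             // prints: 1 2 3
}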

The actual implementation

preface

#include <range/v3/all.hpp>
#include <iostream>

template <typename t>
using iterator_t = decltype(std::begin(std::declval<t &>()));

template <typename t>
using range_reference_t = decltype(*std::begin(std::declval<t &>()));
  • As mentioned previously, including range-v3 is optional; we only use it for concept checks – and in production code you will want to select concrete headers and not "all".
  • The iterator_t metafunction retrieves the iterator type from a range by checking the return type of begin().
  • The range_reference_t metafunction retrieves the reference type of a range, which is what you get when dereferencing the iterator. It is only needed in the concept checks. 1
  • Both of these metafunctions are defined in the range-v3 library as well, but I have given minimal definitions here to show that we are not relying on any sophisticated magic somewhere else; the quick check after this list illustrates what they resolve to.
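For instance, instantiated with std::vector<uint64_t>, the two metafunctions resolve as one would expect (a small sketch assuming the definitions above are in scope):

#include <cstdint>
#include <type_traits>
#include <vector>

static_assert(std::is_same_v<iterator_t<std::vector<uint64_t>>,
                             std::vector<uint64_t>::iterator>);
static_assert(std::is_same_v<range_reference_t<std::vector<uint64_t>>,
                             uint64_t &>);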

view_add_constant

We start with the first real part of the implementation:

template <typename urng_t>
//     requires (bool)ranges::InputRange<urng_t>() &&
//              (bool)ranges::CommonReference<range_reference_t<urng_t>, uint64_t>()
class view_add_constant : public ranges::view_base
{
  • view_add_constant is a class template, because it needs to hold a reference to the underlying range it operates on; that range's type urng_t is passed in as a template parameter.
  • If you use GCC, you can add -fconcepts and uncomment the requires-block. It enforces certain constraints on urng_t, the most basic constraint being that it is an InputRange. The second constraint is that the underlying range is actually a range over uint64_t (possibly with reference or const).
  • Please note that these constraints are specific to the view we are just creating. Other views will have different requirements on the reference type or even the range itself (e.g. it could be required to satisfy RandomAccessRange).
  • We inherit from view_base which is an empty base class, because being derived from it signals to some library checks that this class is really trying to be a view (which is otherwise difficult to detect sometimes); in our example we could also omit it.
private:
    /* data members == "the state" */
    struct data_members_t
    {
        urng_t urange;
    };
    std::shared_ptr<data_members_t> data_members;
  • The only data member we have is (the reference to) the original range. It may look like we are saving a value here, but depending on the actual specialisation of the class template, urng_t may also contain & or const &.
  • Why do we put the member variables inside an extra data structure stored in a smart pointer? A requirement of views is that they be copy-able in constant time, i.e. there should be no expensive operations like allocations during copying. An easy and good way to achieve implicit sharing of the data members is to put them inside a shared_ptr. Thereby all copies share the data_members and they get deleted with the last copy; a small standalone demonstration follows after this list. 2
  • In cases where we only hold a reference, this is not strictly required, but in those cases we still benefit from the fact that storing the reference inside the smart pointer makes our view default-constructible. This is another requirement of views – and having a top-level reference member prevents this. [Of course you can use a top-level pointer instead of a reference, but we don’t like raw pointers anymore!]
  • Other more complex views have more variables or “state” that they might be saving in data_members.
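As a brief aside (not part of view_add_constant), the implicit-sharing idea can be demonstrated in isolation. In this minimal sketch (toy_view is a made-up name for illustration), copying the view-like object copies only a shared_ptr, never the underlying vector:

#include <cassert>
#include <memory>
#include <vector>

struct toy_view
{
    struct data_members_t { std::vector<int> urange; };
    std::shared_ptr<data_members_t> data_members;
};

int main()
{
    toy_view a{std::make_shared<toy_view::data_members_t>(
        toy_view::data_members_t{std::vector<int>(1'000'000, 42)})};

    toy_view b = a;                           // constant time: one pointer copy
    assert(a.data_members == b.data_members); // both copies share the same state
}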
    /* the iterator type */
    struct iterator_type : iterator_t<urng_t>
    {
        using base = iterator_t<urng_t>;
        using reference = uint64_t;

        iterator_type() = default;
        iterator_type(base const & b) : base{b} {}

        iterator_type operator++(int)
        {
            return static_cast<base&>(*this)++;
        }

        iterator_type & operator++()
        {
            ++static_cast<base&>(*this);
            return (*this);
        }

        reference operator*() const
        {
            return *static_cast<base>(*this) + 42;
        }
    };
  • Next we define an iterator type. Since view_add_constant needs to satisfy basic range requirements, you need to be able to iterate over it. In our case we can stay close to the original and inherit from the original iterator.
  • For the iterator to satisfy the InputIterator concept we need to overload the increment operators so that their return type is of our class and not the base class. The important overload is that of the dereference operation, i.e. actually getting the value. This is the place where we interject and call the base class’s dereference, but then add the constant 42. Note that this changes the return type of the operation (::reference); it used to be uint64_t & (possibly uint64_t const &), now it’s uint64_t → A new value is always generated as the result of adding 42.
  • Note that more complex views might require drastically more complex iterators and it might make sense to define those externally. In general iterators involve a lot of boilerplate code, depending on the scope of your project it might make sense to add your own iterator base classes. Using CRTP also helps re-use code and reduce “non-functional” overloads.

We continue with the public interface:

public:
    /* member type definitions */
    using reference         = uint64_t;
    using const_reference   = uint64_t;
    using value_type        = uint64_t;

    using iterator          = iterator_type;
    using const_iterator    = iterator_type;
  • First we define the member types that are common for input ranges. Of course our value type is uint64_t as we only operate on ranges over uint64_t and we are just adding a number. As we mentioned above, our iterator will always generate new values when dereferenced so the reference types are also value types.
  • Note: Other view implementations might be agnostic of the actual value type, e.g. a view that reverses the elements can do so independent of the type. AND views might also satisfy OutputRange, i.e. they allow writing to the underlying range by passing through the reference. To achieve this behaviour you would write using reference = range_reference_t<urng_t>;. The value type would then be the reference type with any references stripped (using value_type = std::remove_cv_t<std::remove_reference_t<reference>>;); a minimal sketch of this follows after this list.
  • The iterator type is just the type we defined above.
  • In general views are not required to be const-iterable, but if they are the const_iterator is the same as the iterator and const_reference is the same as reference. 3
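Here is a minimal, self-contained sketch of such pass-through member types (passthrough_types is a hypothetical name for illustration, not part of our view):

#include <cstdint>
#include <iterator>
#include <type_traits>
#include <utility>
#include <vector>

// the metafunction from the preface, repeated so this sketch stands alone
template <typename t>
using range_reference_t = decltype(*std::begin(std::declval<t &>()));

// hypothetical member types of a type-agnostic, writable view over urng_t
template <typename urng_t>
struct passthrough_types
{
    using reference  = range_reference_t<urng_t>;
    using value_type = std::remove_cv_t<std::remove_reference_t<reference>>;
};

// for std::vector<uint64_t> the reference type passes through as uint64_t &:
static_assert(std::is_same_v<passthrough_types<std::vector<uint64_t>>::reference, uint64_t &>);
static_assert(std::is_same_v<passthrough_types<std::vector<uint64_t>>::value_type, uint64_t>);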
    /* constructors and destructors */
    view_add_constant() = default;
    constexpr view_add_constant(view_add_constant const & rhs) = default;
    constexpr view_add_constant(view_add_constant && rhs) = default;
    constexpr view_add_constant & operator=(view_add_constant const & rhs) = default;
    constexpr view_add_constant & operator=(view_add_constant && rhs) = default;
    ~view_add_constant() = default;

    view_add_constant(urng_t && urange)
        : data_members{new data_members_t{std::forward<urng_t>(urange)}}
    {}
  • The constructors are pretty much standard. We have an extra constructor that initialises our urange from the value passed in. Note that this constructor covers all cases of input types (&, const &, &&), because additional qualifiers can be encoded in the actual urng_t and because of reference collapsing.
    /* begin and end */
    iterator begin() const
    {
        return std::begin(data_members->urange);
    }
    iterator cbegin() const
    {
        return begin();
    }

    auto end() const
    {
        return std::end(data_members->urange);
    }

    auto cend() const
    {
        return end();
    }
};
  • Finally we add begin() and end(). Since we added a constructor for this above, we can create our view’s iterator from the underlying range’s iterator implicitly when returning from begin().
  • For some ranges the sentinel type (the type returned by end()) is not the same as the type returned by begin(); the two are identical only for BoundedRanges. The only requirement is that the types are comparable with == and !=. We need to take this into account here; that's why the end function returns auto and not iterator (the underlying sentinel is still comparable with our new iterator, because the latter inherits from the underlying range's iterator).
  • As noted above, some views may not be const-iterable, in that case you can omit cbegin() and cend() and not mark begin() and end() as const.
  • Note that if you want your view to be stronger than an InputRange, e.g. also be a SizedRange or even a RandomAccessRange, you might want to define additional member types (size_type, difference_type) and additional member functions (size(), operator[]…). Although strictly speaking the range "traits" are now deduced completely from the range's iterator, so you don't need additional member functions on the range.
template <typename urng_t>
//     requires (bool)ranges::InputRange<urng_t>() &&
//              (bool)ranges::CommonReference<range_reference_t<urng_t>, uint64_t>()
view_add_constant(urng_t &&) -> view_add_constant<urng_t>;
  • We add a user-defined type deduction guide for our view.
  • Class template argument deduction enables people to use your class template without having to manually specify the template parameter.
  • In C++17 there is automatic deduction as well, but we need a user-defined deduction guide here if we want to cover both cases of urng_t (value type and reference type) and don't want to add more complex constructors.
static_assert((bool)ranges::InputRange<view_add_constant<std::vector<uint64_t>>>());
static_assert((bool)ranges::View<view_add_constant<std::vector<uint64_t>>>());
  • Now is a good time to check whether your class satisfies the concepts it needs to meet; this also works on Clang without the Concepts TS or C++20. We have picked std::vector<uint64_t> as an underlying type, but others would work, too.
  • If the checks fail, you have done something wrong somewhere. The compilers don't yet tell you why certain concept checks fail (especially when using the range library's hacked concept implementation), so you need to add more basic concept checks and test which ones succeed and which break, to get hints on which requirements you are failing (a sketch follows below). A likely candidate is your iterator not meeting the InputIterator concept (old, but complete documentation).
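For example, one might narrow a failure down with more granular checks like these (a sketch; the concept names follow the range-v3 version used at the time of writing and may differ in yours):

using test_view = view_add_constant<std::vector<uint64_t>>;

static_assert((bool)ranges::DefaultConstructible<test_view>());
static_assert((bool)ranges::CopyConstructible<test_view>());
static_assert((bool)ranges::InputIterator<test_view::iterator>());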

add_constant_fn

Off to our second type definition, the functor/adaptor type:

struct add_constant_fn
{
    template <typename urng_t>
//         requires (bool)ranges::InputRange<urng_t>() &&
//                  (bool)ranges::CommonReference<range_reference_t<urng_t>, uint64_t>()
    auto operator()(urng_t && urange) const
    {
        return view_add_constant{std::forward<urng_t>(urange)};
    }

    template <typename urng_t>
//         requires (bool)ranges::InputRange<urng_t>() &&
//                  (bool)ranges::CommonReference<range_reference_t<urng_t>, uint64_t>()
    friend auto operator|(urng_t && urange, add_constant_fn const &)
    {
        return view_add_constant{std::forward<urng_t>(urange)};
    }
};
  • The first operator facilitates something similar to the constructor; it enables traditional usage of the view in the so-called function style: auto v = view::add_constant(other_range);. Both notations are demonstrated in the sketch after this list.
  • The second operator enables the pipe notation: auto v = other_range | view::add_constant;. It needs to be friend or a free function and takes two arguments (both sides of the operation).
  • Both operators simply delegate to the constructor of view_add_constant.
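Both invocation styles in action (a small sketch; it assumes view_add_constant and add_constant_fn from above are in the same translation unit):

#include <cstdint>
#include <iostream>
#include <vector>

int main()
{
    std::vector<uint64_t> in{1, 4, 6};

    auto v1 = add_constant_fn{}(in);   // function style via operator()
    auto v2 = in | add_constant_fn{};  // pipe style via operator|

    std::cout << *std::begin(v1) << ' ' << *std::begin(v2) << '\n'; // prints: 43 43
}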

view::add_constant

Finally we add an instance of the adaptor to namespace view:

namespace view
{

add_constant_fn constexpr add_constant;

}

Since the adaptor has no state (in contrast to the view it generates), we can make it constexpr. You can now use the adaptor in the above example.

We are done 😊

Here is the full code: view_add_constant.cpp

Post scriptum

I will follow up on this with a second tutorial; it will cover writing a view that takes arguments, i.e.

std::vector<uint64_t> in{1, 4, 6, 89, 56, 45, 7};
auto v = in | view::add_number(42);
// decide this at run-time     ^

If you found mistakes (of which I am sure there are some) or if you have questions, please comment below via GitHub, Gitea, Twitter or Mastodon!


  1. If you are confused that we are dealing with the “reference type” and not the “value type”, remember that member functions like at() and operator[] on plain old containers also always return the ::reference type.
  2. This is slightly different from range-v3, where views only accept temporaries of other views, not of e.g. containers (containers can only be given as lvalue-references). This enables constant-time copying of the view even without implicit sharing of the underlying range, but it mandates a rather complicated set of techniques to tell apart views from other ranges (the time complexity of a function is not encoded in the language, so tricks like inheriting from ranges::view_base are used). I find the design used here more flexible and robust.
  3. This might be confusing to wrap your head around, but remember that the const_iterator of a container is like an iterator over the const version of that container. The same is true for views, except that since the view does not own the elements, its own const-ness does not "protect" the elements from being written to. Ranges behave similarly to iterators in this regard: an iterator const on a vector can also be used to write to the value it points to. More on this in this range-v3 issue.

Thursday, 05 April 2018

Surveillance Valley – a review

agger's Free Software blog | 14:26, Thursday, 05 April 2018

Note: This post is a book review. I did not buy this book on Amazon, and if, after reading this post, you consider buying it, I strongly urge you not to buy it on Amazon. Amazon is a proprietary software vendor and, more importantly, a company with highly problematic business and labour practices. They should clean up their act and, failing that, we should all boycott them.

Most of us have heard that the Internet started as a research project initiated by the ARPA, the Advanced Research Projects Agency, an agency under the US military conducting advanced research, especially focusing on counter-insurgency and future war scenarios. A common version of this story is that the Internet was originally intended to be a decentralized network, a network with no central hub necessary for its operation, where individual nodes might be taken out without disrupting the traffic, which would just reroute itself through other nodes. A TCP/IP network may indeed work like that, but the true origins of the Internet are far darker.

In the 1940s and '50s, Norbert Wiener's theory of cybernetics became very popular. Wiener was a mathematician who worked for the American military during WWII. The gist of cybernetics is that all systems maintain themselves through feedback between their elements. If one could understand the nature of the feedback that keeps them stable, one could predict their future behaviour. The beauty of this theory is that systems could consist of human beings and machines, and it did not in fact matter if a given element was one or the other; as the systems were supposed to stabilize naturally just like ecosystems, it should be possible to set down mathematical equations they'd need to fulfill to serve their role in the system.

This theory was criticized, in fact even by Wiener himself, for reducing human beings to machines; and the analogy to ecosystems has proven false, as later biological research has shown that ecosystems do not tend to become stable – in fact, they are in constant change. In the 50s, however, this theory was very respected, and ARPA wanted to utilize it for counterinsurgency in Asian countries. For that purpose, they started a detailed anthropological study of tribes in Thailand, recording the people's physical traits as well as a lot of information about their culture, habits and overall behaviour. The intention was to use this information in cybernetic equations in order to be able to predict people's behaviour in wars like the Korean War or, later, the Vietnam War.

In order to do this, they needed computation power – a lot of it. After the Soviets sent up the Sputnik and beat the Americans to space, there was an extraordinary surge of investments in scientific and engineering research, not least into the field of computers. In the early '60s, psychologist and computer scientist J.C.R. Licklider proposed "The Intergalactic Network" as a way to provide sufficient computation power for the things that ARPA wanted to do – by networking the computers, so problems might be solved by more computers than the user was currently operating. In doing so, Licklider predicted remote execution, keyboard-operated screens as well as a network layout that was practically identical to (if much smaller than) the current Internet. Apart from providing the power to crunch the numbers needed to supposedly predict the behaviour of large populations for counterinsurgency purposes, the idea that such a network could be used for control and surveillance materialized very early.

In the 1990s, the foundations of the company currently known as Google were laid at the Stanford Research Institute, a university lab that had for decades been operating as a military contractor. The algorithmic research that gave us the well-known PageRank algorithm was originally funded by grants from the military.

From the very beginning, Google’s source of income was mining the information in its search log. You could say that from the very beginning, Google’s sole business model has been pervasive surveillance, dividing its users into millions of buckets in order to sell as fine-tuned advertising as possible.

At the same time, Google has always been a prolific military contractor, selling upgraded versions of all kinds of applications to help the US military fight their wars. As an example, Google Earth was originally developed by Keyhole, Inc. with military purposes in mind – the military people loved the video game-like interface, and the maps and geographical features could be overlaid with all kinds of tactical information about targets and allies in the area.

More controversially, the Tor project, the free software project so lauded by the Internet Freedom and privacy communities, is not what it has consistently described itself as. It is commonly known that it was originally commissioned by a part of the US Navy as an experimental project for helping their intelligence agents stay anonymous, but it is less known that Tor has, since its inception, been almost exclusively financed by the US government, among others through grants from the Pentagon and the CIA but mainly by BBG, the “Broadcasting Board of Governors”, which originated in the CIA.

The BBG’s original mission was to run radio stations like Voice of America and, more recently, Radio Free Asia, targeting the populations of countries that were considered military enemies of the US. Among other things, BBG has been criticized for simply being a propaganda operation, a part of a hostile operation against political adversaries:

Wherever we feel there is an ideological enemy, we’re going to have a Radio Free Something (…) They lean very heavily on reports by and about dissidents in exile. It doesn’t sound like reporting about what’s going on in a country. Often, it reads like a textbook on democracy, which is fine, but even to an American it’s rather propagandistic.

One could ask, what kind of interest could the BBG possibly have in privacy activism such as that supposedly championed by the Tor project? None, of course. But they might be interested in providing dissidents in hostile countries with a way to avoid censorship, maybe even to plot rebellion without being detected by the regime’s Internet surveillance. Radio Free Asia had for years been troubled by the Chinese government’s tendency to block their transmission frequencies. Maybe Tor could be used to blast a hole in the Great Chinese Firewall?

At the same time, Tor could be used by operatives from agencies like the CIA, the NSA or the FBI to hide their tracks when perusing e.g. Al Qaeda web sites.

But, if the US government promotes this tool to dissidents in Russia, China or Iran as a creation of the US government – why would they trust it? And, if an Al Qaeda site suddenly got a spike of visitors all using Tor – maybe they’d figure it out anyway, if Tor was known as a US government tool? Wouldn’t it be nice if millions of people used Tor because they thought they were “sticking it to the man” and “protecting their privacy”, giving legitimacy with respect to the dissidents and cover to the agents?

And so, Tor the Privacy Tool was born. People were told that if they used Tor and were careful, it was cryptographically impossible that anyone should know which sites they were visiting. Except for the fact that Tor has all the time had serious (unintentional) weaknesses which meant that hidden services might have their IP exposed and web site visitors might, with some probability, be identified even if they were using Tor correctly. And using Tor correctly is already very difficult.

Yes, someone like Edward Snowden who knew about its weaknesses and had considerable insight into its security issues could indeed use Tor safely to perform his leaks and communicate about them, for a short while. But advising people in repressive societies with no technical insight who may have their lives at stake doing really serious things to rely on this tool might be … completely irresponsible. Like sending someone in battle with a wooden toy gun.

And maybe, just maybe, the American government was happy enough letting these pesky privacy activists run around with their wooden toy gun, courtesy of Uncle Sam, instead of doing something stupid like demanding effective regulations. And who better to evangelize this wooden toy gun than Jacob Appelbaum, the now-disgraced Tor developer who toured the world pretending to "stick it to the Man", all the while working for a military contractor and netting a $100,000 paycheck directly from the American government? Maybe, in that sense, Tor as a privacy tool was always worse than nothing.

These are just a few of the topics covered in Yasha Levine's new book Surveillance Valley. Levine's idea is to cover the military roots of the modern computer industry, and he does that in gory and unsettling detail. Apart from cybernetics, ARPA, Google and Tor, he also covers the influence of cybernetics on the counterculture, the later history of WIRED magazine, and the Californian Ideology. The book also offers a critical examination of the consequences of Edward Snowden's leaks.

This is not a flawless book; Levine has a point he wishes to get through, and in order to get there, he occasionally resorts to "hatchet job" journalism, painting people's motives in an artificially unfavourable light or not researching his accusations thoroughly enough. For instance, Levine accuses Dingledine and the Tor project of giving vulnerabilities to the government for possible exploitation before making them public. The example he gives to prove that assertion is wrong, and I guess he makes the mistake because his eagerness to nail them made him sloppy, and because Levine himself lacks the technical expertise to see why the vulnerability he mentions (TLS normalization, detectability of Tor traffic) couldn't possibly have been unknown to others at the time.

But, apart from that, I wholeheartedly recommend the book. It tells a story about Silicon Valley that really isn’t told enough, and it points out some really unpleasant – but, alas, all too true – aspects of the technology that we have all come to depend on. Google, the “cool” and “progressive” do-good-company, in fact a military contractor that helps American drones kill children in Yemen and Afghanistan? As well as a partner in predictive policing and a collector of surveillance data that the NSA may yet try to use to control enemy populations in a Cybernetics War 2.0? The Tor Project as paid shills of the belligerent US foreign policy? And the Internet itself, that supposedly liberating tool, was originally conceived as a surveillance and control mechanism?

Yes, unfortunately – in spite of the book’s flaws, true on all counts. For those of us who love free software because we love freedom itself, that should be an eyeopener.
