Free Software, Free Society!
Thoughts of the FSFE Community (English)

Tuesday, 20 October 2020

Make sure KDE software is usable in your language, join KDE translations!

Translations are a vital part of software. Technical users often overlook them because they understand English well enough to use the software untranslated, but only about 15% of the world understands English, so it's clear we need good translations to make our software useful to the rest of the world.

Translations are an area that [almost] always needs help, so I would encourage you to contact me (aacid@kde.org) if you are interested in helping.

Sadly, some of our teams are not very active, so you may find yourself alone. It can be a bit daunting at the beginning, but the rest of us on kde-i18n-doc will help you along the way :)

This is a list of teams sorted by how many translation commits have happened in the last year. More commits doesn't mean better; even teams with lots of commits will probably welcome help, perhaps not in pure translation but in reviewing. You can also check the statistics at https://l10n.kde.org/stats/gui/trunk-kf5/team/

More than 250 commits


Azerbaijani
Basque
Brazilian Portuguese
Catalan
Estonian
French
Interlingua
Lithuanian
Dutch
Portuguese
Russian
Slovak
Slovenian
Swedish
Ukrainian

Between 100 and 250 commits


German
Greek
Italian
Norwegian Nynorsk
Spanish

Between 50 and 100 commits


Asturian
Catalan (Valencian)
Czech
Finnish
Hungarian
Indonesian
Korean
Norwegian Bokmal
Polish
Vietnamese
Chinese Traditional

Between 10 and 50 commits


British English
Danish
Galician
Hindi
Icelandic
Japanese
Malayalam
Northern Sami
Panjabi/Punjabi
Romanian
Tajik
Chinese Simplified

Between 0 and 10 commits


Albanian
Belarusian
Latvian
Serbian
Telugu
Turkish

No commits


Afrikaans
Arabic
Armenian
Assamese
Bengali
Bosnian
Bulgarian
Chhattisgarhi
Crimean Tatar
Croatian
Esperanto
Farsi
Frisian
Georgian
Gujarati
Hausa
Hebrew
Irish Gaelic
Kannada
Kashubian
Kazakh
Khmer
Kinyarwanda
Kurdish
Lao
Low Saxon
Luxembourgish
Macedonian
Maithili
Malay
Maltese
Marathi
Nepali
Northern Sotho
Occitan
Oriya
Pashto
Scottish Gaelic
Sinhala
Tamil
Tatar
Thai
Tswana
Upper Sorbian
Uyghur
Uzbek
Venda
Walloon
Welsh
Xhosa

P.S.: Please don't suggest web-based translation workflows in the comments of this blog; this is not the place to discuss that.

Monday, 19 October 2020

Akademy-es call for papers extended to October 27

This year Akademy-es is a bit special since it is happening on the Internet, so you don't need to travel to Spain to participate.

If you're interested in giving a talk please visit https://www.kde-espana.org/akademy-es-2020/presenta-tu-charla for more info.

Tuesday, 13 October 2020

The stories we tell each other

I have recently been working on a conversion document that adapts Dungeons & Dragons’ Eberron campaign setting to the Savage Worlds system. I’m not a game designer and I’m not a particularly prolific writer, so this was a bit of a challenge for me. One of the most challenging things to pull off was converting the races. Through writing the document, I think I developed a deeper understanding of the racism inherent in fantasy fiction. And I’d like to talk about that.

Two disclaimers to start off:

  1. Real-world racism is incomparably worse than fictitious racism against fictitious beings.
  2. Fantasy fiction unfortunately uses the word “race” when describing species. I’ll be using both terms interchangeably.

The human problem

Before the rest of this article can make sense, I have to establish that games are generally predicated on some kind of balance. If I choose a dwarf character and you choose an elf character, the advantages and disadvantages we gain from our choices should balance out, and neither of us should be strictly better than the other. This can be difficult to quantify precisely, but it’s possible to get reasonably close, and games require some sort of balance to remain fun for everyone.

Of course, there’s a problem, and it’s humans. What are humans if not strictly worse elves? Elves get to see in the dark, have keen hearing, amazing agility, meditate instead of sleeping, and—in D&D—have resistance against certain types of magic. What do humans get to counterbalance that? From a narrative perspective: Squat.

From a mechanical perspective, game designers often give humans a flat increase to everything, or allow the player to assign some extra points to things of their own choice. The thinking goes that this evens the balancing scales—which is fine, because it’s a game after all. The narrative justification is that humans are exceptionally “adaptive” or “ambitious”.

It’s a lazy justification, but because we might like some explanation, maybe it will do, especially considering that there are worse offenders.

Conflation of culture and species

All elves in D&D receive proficiency in swords and bows. The thinking goes that these weapons hold some significant status in elven society, and therefore it is reasonable to assume that all elves raised in such a society would have received training in these weapons. This is wrong for so many reasons:

  • It assumes that there is such a thing as an elven society and culture.
  • It assumes that that culture values proficiency in swords and bows.
  • It assumes that the player character grew up in this culture.
  • It assumes that the player character actually took classes in these weapons.

If any of the above assumptions are false, then tough luck—you’re going to be good at those weapons regardless, whether or not you want to, as if elves are born with innate knowledge of swinging swords and loosing arrows.

Moreover, if you’re a human living in this elven culture—in spite of it making no narrative sense—you do not get automatic proficiency in these weapons.

Not by the rules, anyway. Put a pin in that.

The bell curve

If we can’t or shouldn’t conflate culture and species, then perhaps we should just stick to biology. This seems promising at first. It doesn’t seem so weird that elves might be a species with better eyesight and hearing—after all, dogs are a species with a veritably better sense of smell. And maybe it’s not so weird that elves can get their night’s rest through meditation instead of a human sleeping pattern.

The most difficult biological trait to justify would be the elves’ heightened dexterity. The stereotype of elves is that they are these highly agile beings with a certain finesse. And before elaborating on that, I think it’s good to stop for a moment to appreciate that this is aesthetically cool. It’s pleasing to imagine cool people doing cool things.

Having stopped to appreciate the aesthetics, we can move on to question them. Are all elves dexterous? Judging by the rules—yes, all elves get a bonus to dexterity. But narratively, surely, this can’t be true. Maybe an elf was born with a physical disability, or acquired such a disability later in life. Maybe they simply don’t get any exercise in. It does not require a lot of imagination to come up with counter-examples. But like with the weapon proficiency, an elf is going to get their bonus to dexterity whether or not they want it. Hold onto that pin.

Let’s loop back to the other biological traits. If it was so easy to find a counter-example to dexterity, maybe it’s possible to do the same for the other traits. To counter improved eyesight, maybe the elf is myopic and requires glasses to see. To counter improved hearing, maybe the elf has some sort of hearing disability. Countering their meditative sleep is possibly the hardest, but it’s not too far-fetched to imagine an excitable elf with an immensely short attention span who never quite got into the whole meditation thing. This elf might still technically be biologically capable of meditative sleep, but if they’ve never done it before, it’s a distinction without a difference.

If we now put all those pieces together, someone might play an elf who requires glasses to see, and gladly uses those glasses to read every magical tome they can find. In their advanced age, they have stopped exercising, and have slowly become hard of hearing. Instead of meditating, they often fall asleep on top of their books, long after the appropriate time to go to bed.

It’s worth noting that the above elf is not an especially exceptional character. They’re an old wizard with glasses who has no time for doing anything other than reading. Nevertheless, this elf gets all the bonuses that run contrary to their concept. And that just can’t be right.

The usual rationale for the racial bonuses of elves comes down to the bell curve. The thinking goes that those bonuses represent the median elf at the centre of the statistical bell curve of the distribution of those traits. If you take the median elf and the median human, the elf will simply be more dexterous as a result of their natural physiology. And if you take the lowest 5th percentile of elves and compare them to the lowest 5th percentile of humans, those elves would still be more dexterous.

This of course completely ignores that the player character can be anywhere on the bell curve. The low-dexterity elf wizard from above could have a highly dexterous human companion. As a matter of fact, that human companion could even be more dexterous than the most dexterous elf. This would be statistically unlikely if we assume that the bell curve is real, but odds have never stopped fantasy heroes.

A professionally drawn bell curve of the distribution of dexterity in humans and elves.

Note that the median (most common) elf is more dexterous than the median human, but that there exist humans on the right side of the curve that are more dexterous than elves on the left side of the curve.
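
To make the overlap argument concrete, here is a small numerical sketch. The means and deviations are invented for illustration; nothing below comes from any rulebook.

```python
# Hypothetical dexterity bell curves; all numbers are invented.
from statistics import NormalDist

humans = NormalDist(mu=10, sigma=3)  # assumed human dexterity distribution
elves = NormalDist(mu=12, sigma=3)   # assumed elf distribution, shifted right

# Fraction of humans who are more dexterous than the median elf:
frac_above_median_elf = 1 - humans.cdf(elves.mean)
print(f"{frac_above_median_elf:.0%} of humans beat the median elf")
```

Even with the elf curve shifted to the right, roughly a quarter of the humans in this made-up example out-class the median elf, which is exactly the kind of exception the system should support.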

In conclusion to this section: Even if traits in fantasy races are distributed along these bell curves, there would still be heaps of exceptions, and the system should support that.

Additionally, I’d like to put a differently coloured pin in the very concept of bell curves. I’ll get back to that later.

Race is real

So far, the problems posed in this article have been fictitious and trivial in nature. It’s past time to get to the point.

Orcs are the traditional fantasy bogeyman. They’re a species characterised by their superlatively barbarous and savage state, brutish virility to the point of bestiality, low intelligence, and dark brown-grey skin tones.

Unless you have been living under a rock, you will notice a blatant parallel to the real world. The above paragraph could—verbatim—be a racist description of real-world black people. And indeed, part of that paragraph was paraphrased from Types of Mankind (1854), and another part was paraphrased from White on Black: images of Africa and Blacks in Western popular culture (1992).

And, you know, that’s bad. Like really, really bad. And it only gets worse. Because unlike in the real world, that characterisation of orcs is real.

In the default campaign setting of Dungeons & Dragons, orcs really are barbarous monsters that roam the lands to plague civilisation as raiders and pillagers. Quoting from Volo’s Guide to Monsters (2016):

[Orcs are] savage and fearless, [and] are ever in search of elves, dwarves, and humans to destroy. […] Orcs survive through savagery and force of numbers. Orcs aren’t interested in treaties, trade negotiations or diplomacy. They care only for satisfying their insatiable desire for battle, to smash their foes and appease their gods.

[…]

[The orcs are led] on a mission of ceaseless slaughter, fuelled by an unending rage that seeks to lay waste to the civilised world and revel in its anguish. […] Orcs are naturally chaotic and [disorganised], acting on their emotions and instincts rather than out of reason and logic.

[…]

In order to replenish the casualties of their endless warring, orcs breed prodigiously (and they aren’t choosy about what they breed with, which is why such creatures as half-orcs and ogrillons are found in the world). Females that are about to give birth are […] taken to the lair’s whelping pens.

Orcs don’t take mates, and no pair-bonding occurs in a tribe other than at the moment when coupling takes place.

It is difficult to overstate how absolutely “savage” and evil these orcs are, as depicted. The above quotation outright states that orcs eschew civilisation in favour of war and destruction, and heavily implies that orcs know no love and leave human women pregnant with half-orcs in their wake. Note also the use of the words “females”, “whelping pens”, and “mates”, as though orcs are nothing short of beasts.

The rationale for this sheer evil is that orcs are under the influence of their creator Gruumsh, an evil god of destruction. “Even those orcs who turn away from his worship can’t fully escape his influence”, says the Player’s Handbook (2014).

As if things couldn’t get any worse, they do, because the above is compounded by D&D’s alignment system: a system of absolute, deterministic good-and-evil morality that can be measured by magic. A character in D&D can be—by their very essence—veritably evil. In this case, the entire orc species is evil owing to the influence of this evil god.

Cutting to the chase, this effectively means that it is morally justifiable for people to commit genocide against orcs. They are evil, after all, without a shred of a doubt.

Worse still, this means that Dungeons & Dragons has effectively created a world in which the most wicked theories of racists are actually true:

  • “Race is real”, meaning that there are measurable differences in physiology between the races. This is represented by the game’s racial traits, which I earlier demonstrated don’t make a whole heap of sense.
  • Some races are evil and/or inferior. This is represented by the utter evil of orcs, and their inferior intelligence in the game’s racial traits.

This is where it might be expedient to take a look at that differently coloured pin with regard to the bell curve. The Bell Curve, as it happens, is a 1994 book written by two racists that states that intelligence is heritable through genes, and that the median intelligence of black people is lower than the median intelligence of white people by mere virtue of those genes. This claim is wrong in the real world, but it appears to be true in the fantasy world.

Now, if one could play pretend in any world, I think I’d like to play pretend in a world in which the racists are wrong. But that’s not the world of Dungeons & Dragons.

Redemption actually makes things worse

This section is a small tangent. Earlier I mentioned that the entire orc species is evil, thereby morally justifying a potential genocide against them. There is no real-world analogy for this—there exists no species on Earth whose sole purpose is the destruction of humans. But if such a species did hypothetically exist, driving it to extinction could realistically be justified as an act of self-defence, lest that species succeed in its goal of wiping out humans.

It’s a questionable thing to focus one’s story on, but at least adventurers in Dungeons & Dragons can rest easy after clearing an entire cave of orcs.

There’s just one small problem: Orcs can be good, actually. In the fantasy world, it is possible for an orc to free themself of the corrupting influence of Gruumsh, and become “one of the good ones”. If even a single orc is capable of attaining moral good, this means that moral determinism is false, and therefore every orc is potentially capable of attaining moral good. This twist just turned an uncomplicated story of a fight against objective evil into a story of the justified genocide of slaves who are forced to fight for their master.

And, you know, that’s a lot to take in. And it’s built right into the game’s premise, and it didn’t take a lot of thinking to come to this utterly disturbing conclusion. Heroes are supposed to slay orcs without giving it much thought, but burdened with this knowledge, is that a morally justifiable thing to do at all?

More importantly, is this a story we want to be telling?

Missing the point

I’m not the first person to point out Dungeons & Dragons’ problem with race and racism, especially as it pertains to orcs and drow (dark elves). Recently, the people behind the game have begun to take some small steps towards solving these fundamental issues, and that’s a good thing. But a lot of people disagree.

I’ve read far too many criticisms of these steps in the writing of this article. Altogether, I think that their arguments can be boiled down to these points:

  • Orcs are not analogous to real-world black people. They’re not even people. Orcs are orcs are orcs.
  • As a matter of fact, if you think that orcs are analogous to real-world black people, you are the racist for seeing the similarities.
  • I really just want monsters to slay in my game. Why are you overthinking this?

I feel that these arguments monumentally miss the point, and countering them head-on would be a good way to waste your time. One could pussyfoot around the first two points, and although I vehemently disagree with their conclusions, it really wouldn’t matter if one conceded them. The third point isn’t even an argument—it’s the equivalent of throwing a tantrum because other people are discussing something you don’t like.

The reason the arguments miss the point is that the point of contention is not whether orcs specifically are “bad”. Rather, the point of contention is that they—and other races like them—exist at all in the way that they do. That is to say: Orcs would be bad even if they didn’t mirror real-world depictions of black people so closely.

Given a choice between anything at all, why choose racism?

When we play pretend, we could imagine any world at all. The only limit is our own imagination, and this is—crudely put—really cool. And when we play pretend, we tell each other stories. And again, we could be telling any story at all. Storytelling being as it is, we will require some conflict to drive the story forward, and we can do this through antagonists—the “bad guys”. Now, not all stories actually require conflict, but I’m going to let that be.

And here’s the point: A story in which the antagonists—the bad guys—are a race of sapient human-like people that are inherently evil through moral determinism is a shitty story.

Phrased differently, the story of “we must kill these people of a different race because that race is inherently evil” is a bad story that is far too close for comfort to real-world stories of racism and genocide. That story is especially bad because—within the context of the story—the protagonists are completely justified in their racial hatred and genocide. This is in stark contrast to the real world, where racism is always completely and utterly unjustifiable.

Phrased differently again: Given a choice to tell any sort of story whatsoever, why choose to tell a racist story?

Systemic racism

I think we’re past the central thesis of this article, but I want to try to actually answer the above question—why choose racism? The lazy answer would be to suppose that the players of the game are racists—wittingly or not. But I’m not very satisfied by that answer.

In an attempt to answer this question, I want to return to the first pin regarding playing by the rules. As a light refresher: We were creating an elf wizard, but none of the racial bonuses were suitable for our elf. The rules forced us to play a certain way, even if that way didn’t make sense for our character.

But that’s a lie, of course. Nobody is forcing us to play a certain way. We could just discard the book; ignore the system and go our own way. We can tweak away to our heart’s content.

But before we do that, I want to emphasise how unlikely it was that we found a flaw in the system in the first place. For most people, when they create a new character, it’s like going to an ice cream parlour. There’s a large selection of flavours available, and you simply pick and choose from the things that appeal to you. By the end, you leave the shop and have your ice cream—close the book and enjoy your character. You may add some additional custom flair, but that is usually after you have already chosen the foundation for your character.

For our elf wizard, this process went differently—atypically. Instead of choosing from a list of available options, we created a character in a free-form manner. Then when it was time to open the book, we found that the options did not support our concept. I want to emphasise here also two additional things: We may not have noticed the discrepancy between our concept and the rules in the first place, and simply gone ahead; or we may have noticed the discrepancy and thereafter discarded the concept in favour of something else that might work.

Regardless, having come so far, it’s time to begin the tweaking. There’s just a small problem… Nobody at the table is a game designer or has ever balanced a race before. Furthermore, the rules don’t exactly give robust hints on how to go about doing this. And if we’re discarding all of the elf racial traits, why are we an elf again? Why is nobody else tweaking their character’s race? Everyone else was perfectly capable of creating their characters within the constraints set by the rules, so why aren’t we? Is it such a big deal that the number next to Dexterity on our character sheet is a little higher? Can’t we simply ignore the additional weapon proficiency? If we never pick up a bow, it will be as if that proficiency was never there.

That is to say: Breaking the rules is hard. There’s a heavy inertia to overcome, and that inertia can stop creativity dead in its tracks.

In summary, any of the following things can stop a person from creating their character outside of the rules:

  • They simply stick to the options provided by the book.
  • They come up with a concept of their own, and just pick from the options provided by the book afterwards.
  • They come up with a concept of their own, find that it is not supported by the book, and discard the concept.
  • They come up with a concept of their own, and—due to peer pressure or fear of being the odd one out—do not want to pursue creating custom mechanics.
  • They come up with a concept of their own, find that the books do not give them any guidance on creating custom mechanics, and give up there.
  • They come up with a concept of their own, successfully create or tweak the mechanics to suit that concept, but the GM does not allow this type of customisation.

One can only conclude that the rulebook—the system—enables certain outcomes much more than others. Even if you encounter a problem with the way that Dungeons & Dragons handles race, the odds of doing anything about it are very much stacked against you.

Involuntary racism

So why choose racism? Because the system has chosen for you. The system all-but-assures that the players will buy into its racism. In this system, all elves are dexterous, all humans are adaptive and ambitious, and all orcs are big and strong. There’s no choice in the system, and any choice to the contrary has to be outside of the system, for which the rulebook offers little to no guidance.

This is further compounded by the default campaign setting, Forgotten Realms, that creates a world in which orcs are unambiguously evil—barbarous savages reminiscent of the worst racist depictions of real-world peoples. It systematically enables a story of justified genocide against a people—a story that might as well be a wet dream for this world’s racists.

And, you know, that sucks.

Creating a better system

I want to end this article with my personal solution. I like fantasy, even though I spent the last however-many words comparing it to racism of the highest order, and I would like to enjoy it without its worst aspects.

A better system has heaps of requirements, but I think it boils down to the following two things:

  • The campaign setting mustn’t enable justified racism, and must be playable without racism entirely.
  • Players must be able to easily decouple mechanics from races.

For the campaign setting, I chose Eberron. I’m not sure if the Forgotten Realms are salvageable. Perhaps Gruumsh could be slain and all orcs could be freed, but there would still be a lot of other racisms that need solving in that campaign setting.

Eberron, on the other hand, is a lot more redeemable. The world is divided into rival nations, and the world’s races are more-or-less evenly distributed throughout the nations, creating a cosmopolitan feel. Moreover, there are no deterministically evil peoples in the world—Eberron’s original druids were orcs, and orcs can be as good or as evil as any other person. Even more importantly, culture and race in Eberron can be completely decoupled. An elf from the main continent is generally of the local nation’s culture, and an elf from the “elven continent” will generally be of one of the two local cultures, and this racial-cultural fluidity is explicitly called out in the campaign setting’s books.

Of course, there are some less likeable aspects of the campaign setting. There exist people with heritable magical tattoos, effectively making them an objectively superior breed. There’s also the fact that the “elven continent” exists at all, when it could instead be mixed-race like the rest of the world (although the racism on this continent is called out as being bad in Exploring Eberron (2020)). There is also racism against the world’s robot people and shape changers, which may not be a theme you want to play around with. But by and large, that’s it, and it’s a huge improvement over other settings.

For the mechanics, I ditched Dungeons & Dragons. Savage Worlds is a system that—unlike Dungeons & Dragons—truly gives you the tools to tweak the system if something is not to your liking. It has an entire section on modifying and creating races, and the rulebook is littered with reminders that you can change things to fit your game, and suggestions on how to do that.

Of course, Savage Worlds is not perfect. Its name is a little ‘eh’, its first campaign setting imagines a world in which the Confederate States of America seceded, and it has this extremely annoying Outsider feature that makes no sense whatsoever. Moreover, for our purposes, it does not explicitly tell the player that they can freely adjust their character’s racial features, but it does give the player the tools to do so, so I guess that’s good enough. Perfect is the enemy of good, and Savage Worlds’ flaws are trivially easy to work around.

Just one problem remains: This imaginary world still holds on to the bell curve. It still imagines a world in which the racists are sort of right—where elves are more dexterous and orcs are taller and stronger. And although the player characters are no longer bound by the bell curve, it still feels a little wrong.

And in truth, I have no solution for this whatsoever. If we want to play in a world where the racists are wrong, then maybe we shouldn’t tell a story in which their central theory of race holds true. It’s completely possible to tell a fantasy story composed of just humans, after all.

But I also feel that we would be losing something if we simply ditched the fantasy concept of race. Earlier in this article, I stopped to appreciate that dexterous elves are cool. And I think that appreciation bears repeating—not just for elves, but for all the fantasy races. When I play an orc, maybe I want to lean into the really cool concept of being inhumanly strong—or break with that stereotype to explore what it means to be weak in a society where everybody can effortlessly lift a hundred kilos.

After all, it’s about the stories we tell each other. And doesn’t a world in which radically different peoples live together and work to oppose bad actors make for a beautiful story?

Thursday, 08 October 2020

20.12 releases schedule finalized

It is available at the usual place https://community.kde.org/Schedules/release_service/20.12_Release_Schedule

Dependency freeze is in four weeks (November 5) and Feature Freeze a week after that, so make sure you start finishing your stuff!

Tuesday, 06 October 2020

OpenBSD Worrying RAID

I wanted to move a couple of USB hard drives from one OpenBSD machine to another. They are configured with softraid(4) as RAID 1 (mirrored). When I plugged the drives into the new machine though, nothing happened with softraid. This was pretty worrying.

Both drives showed up in the dmesg output, so the issue was specifically to do with softraid. The man page for bioctl(8) talks about -c creating a “new” RAID device, which sounded a little too destructive. I asked for help in #openbsd, and apparently the language in the man page is misleading. The -d flag has recently been updated to say “detach” rather than “delete” to try to address this.

I went for it and did:

bioctl -c 1 -l /dev/sd5a,/dev/sd6a

It worked and I got the dmesg output:

softraid0: sd3 was not shutdown properly
sd7 at scsibus4 targ 1 lun 0: <OPENBSD, SR RAID 1, 006>
sd7: 7630885MB, 512 bytes/sector, 15628052512 sectors
softraid0: volume sd7 is roaming, it used to be sd3, updating metadata
softraid0: roaming device sd2a -> sd6a
softraid0: roaming device sd1a -> sd5a

I guess that if it’s not shut down cleanly it doesn’t just automatically set up the RAID device again; it could also have been the device renumbering that stopped it.

Mounting the device:

mount_ffs: /dev/sd7i on /external: filesystem must be mounted read-only; you may need to run fsck
WARNING: R/W mount of /mnt denied.  Filesystem is not clean - run fsck

I guess that was expected. An fsck later and everything was working again.

Back up your stuff.

Sunday, 04 October 2020

Is okular-devel mailing list the correct way to reach the Okular developers? If not what do we use?

After my recent failure to gain traction in getting people to join a potential Okular Virtual Sprint, I wondered: is the okular-devel mailing list representative of the current Okular contributors?

Looking at the sheer number of subscribers, one would think so. There are currently 128 people subscribed to the okular-devel mailing list, and we definitely don't have that many contributors, so it would seem the mailing list is a good place to reach them all. But let's look at the actual numbers.

The Okular git repository has had 46 people contributing code[*] in the last year.


Only 17% of those are subscribed to the okular-devel mailing list.


If we count commits instead of committers, the number rises to 65%, but that's just because I account for more than 50% of the commits; if you remove me from the equation, the number drops to 28%.


If we don't count people who committed only once (on the theory that they may not be really interested in the project), still only 25% of committers, and 30% of commits (ignoring me again), are subscribed to the mailing list.
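
For what it's worth, this kind of overlap number is straightforward to compute once you have the two lists. A rough sketch follows; the addresses and commit counts are made up, not the real Okular data, and in practice the committer list would come from something like git shortlog and the subscriber list from the mailing list roster.

```python
# Hypothetical sketch of the committer/subscriber overlap computation.
# All addresses and counts below are invented for illustration.
commits_by_author = {
    "alice@example.org": 120,
    "bob@example.org": 8,
    "carol@example.org": 1,
    "dave@example.org": 3,
}
subscribers = {"alice@example.org", "dave@example.org"}

overlap = set(commits_by_author) & subscribers
pct_committers = 100 * len(overlap) / len(commits_by_author)
pct_commits = 100 * sum(commits_by_author[a] for a in overlap) / sum(commits_by_author.values())
print(f"{pct_committers:.0f}% of committers and {pct_commits:.0f}% of commits are subscribed")
```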


So it would seem that the answer is leaning towards "no, I can't use okular-devel to contact the Okular developers".


But if not the mailing list, what am I supposed to use? I don't see any other method that would be better.


Suggestions welcome!



[*] Yes, I'm limiting contributors to git committers at this point; it's the only thing I can easily count. I understand there are more contributions than code contributions.

Sunday, 27 September 2020

Multicast IPTV

For almost a decade, I’ve been very slowly making progress on a multicast IPTV system. Recently I’ve made a significant leap forward in this project, and I wanted to write a little on the topic so I’ll have something to look at when I pick this up next. I was aspiring to have a usable system by the end of today, but for a couple of reasons, it wasn’t possible.

When I started thinking about this project, it was still common to watch broadcast television. Over time the design of this system has been changing as new technologies have become available. Multicast IP is probably the only constant, although I’m now looking at IPv6 rather than IPv4.

Initially, I’d been looking at DVB-T PCI cards. USB devices have since become common and are available cheaply, and there are also DVB-T hats available for the Raspberry Pi. I’m now looking at a combination of Raspberry Pi hats and USB devices, with one of each on a couple of Pis.

Two Raspberry Pis with DVB hats installed, TV antenna sockets showing

The Raspberry Pi devices will run DVBlast, an open-source DVB demultiplexer and streaming server. Each of the tuners will be tuned to a different transponder giving me the ability to stream any combination of available channels simultaneously. This is everything that would be needed to watch TV on PCs on the home network with VLC.
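The streaming side needs little more than a channel list. A DVBlast configuration is a plain text file with one line per service, mapping a service ID on the tuned transponder to a multicast group and port, plus an "always on" flag (the groups, ports and service IDs below are made-up examples, not real channel data):

```
239.255.0.1:5004 1 4164
239.255.0.2:5004 1 4228
```

Each PC on the network can then open a stream directly, e.g. in VLC via udp://@239.255.0.1:5004.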

I’ve not yet worked out if Kodi will accept multicast streams as a TV source, but I do know that Tvheadend will. Tvheadend can also act as a PVR to record programmes for later playback, so it is useful even if the multicast streams can be viewed directly.

So how far did I get? I have built two Raspberry Pis in cases with the DVB-T hats on. They need to sit in the lounge as that’s where the antenna comes down from the roof. There’s no wired network connection in the lounge. I planned to use an OpenBSD box as a gateway, bridging the wireless network to a wired network.

Two problems quickly emerged. The first was that the wireless card I had purchased only supported 2.4GHz, not 5GHz, and I have enough noise from neighbours that the throughput rate and packet loss were unacceptable.

The second problem is that I had forgotten the problems with bridging wireless networks. To create a bridge, you need to be able to spoof the MAC addresses of wired devices on the wireless interface, but this can only be done when the wireless interface is in access point mode.

So when I come back to this, I will have to look at routing rather than bridging to work around the MAC address issue, and I’ll also be on the lookout for a cheap OpenBSD supported mini-PCIe wireless card that can do 5GHz.

Saturday, 12 September 2020

VMs on KVM with Terraform

Many thanks to erethon for his help & support on this article.

Working on your home lab, you quite often need to spawn containers or virtual machines to test or develop something. I used to do this kind of testing with public cloud providers, using minimal VMs for short periods of time to keep costs down. In this article I will try to explain how to use libvirt (that means KVM) with Terraform, and provide a simple way to run this on your Linux machine.

Be aware this will be a (long) technical article and some experience with kvm/libvirt & Terraform is needed, but I will try to keep it simple so you can follow the instructions.

Terraform

Install Terraform v0.13 either from your distro or directly from HashiCorp’s site.

$ terraform version
Terraform v0.13.2

Libvirt

Same thing for libvirt:

$ libvirtd --version
libvirtd (libvirt) 6.5.0

$ sudo systemctl is-active libvirtd
active

Verify that you have access to libvirt:

$ virsh -c qemu:///system version
Compiled against library: libvirt 6.5.0
Using library: libvirt 6.5.0
Using API: QEMU 6.5.0
Running hypervisor: QEMU 5.1.0

Terraform Libvirt Provider

To access the libvirt daemon via terraform, we need the terraform-libvirt provider.

Terraform provider to provision infrastructure with Linux’s KVM using libvirt

The official repo is dmacvicar/terraform-provider-libvirt on GitHub; you can download a precompiled version for your distro from that repo, or a generic build from my GitLab repo:

ebal / terraform-provider-libvirt · GitLab

These are my instructions:

mkdir -pv ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64/
curl -sLo ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64/terraform-provider-libvirt https://gitlab.com/terraform-provider/terraform-provider-libvirt/-/jobs/artifacts/master/raw/terraform-provider-libvirt/terraform-provider-libvirt?job=run-build
chmod +x ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64/terraform-provider-libvirt

Terraform Init

Let’s create a new directory and test that everything is fine.

mkdir -pv tf_libvirt
cd !$

cat > Provider.tf <<EOF
terraform {
 required_version = ">= 0.13"
 required_providers {
     libvirt = {
       source  = "dmacvicar/libvirt"
       version = "0.6.2"
     }
 }
}
EOF
$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding dmacvicar/libvirt versions matching "0.6.2"...
- Installing dmacvicar/libvirt v0.6.2...
- Installed dmacvicar/libvirt v0.6.2 (unauthenticated)

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Everything seems okay!

We can verify with tree or find:

$ tree -a
.
├── Provider.tf
└── .terraform
    └── plugins
        ├── registry.terraform.io
        │   └── dmacvicar
        │       └── libvirt
        │           └── 0.6.2
        │               └── linux_amd64 -> /home/ebal/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64
        └── selections.json

7 directories, 2 files

Provider

But did we actually connect to libvirtd via terraform?
Short answer: No.

We just told terraform to use this specific provider.

How do we connect?
We need to add the libvirt connection URI to the provider section:

provider "libvirt" {
    uri = "qemu:///system"
}

So our Provider.tf looks like this:

terraform {
  required_version = ">= 0.13"
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.6.2"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}

Libvirt URI

libvirt is a virtualization API/toolkit that supports multiple drivers, so you can use libvirt against the following virtualization platforms:

  • LXC - Linux Containers
  • OpenVZ
  • QEMU
  • VirtualBox
  • VMware ESX
  • VMware Workstation/Player
  • Xen
  • Microsoft Hyper-V
  • Virtuozzo
  • Bhyve - The BSD Hypervisor

Libvirt also supports multiple transport and authentication mechanisms, such as SSH:

virsh -c qemu+ssh://username@host1.example.org/system

So it is really important to properly define the libvirt URI in terraform!

In this article, I will limit it to a local libvirt daemon, but keep in mind you can use a remote libvirt daemon as well.
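For example, pointing the provider at a remote daemon is only a URI change (the user and host below are placeholders):

```hcl
# Hypothetical remote libvirt daemon, reached over SSH
provider "libvirt" {
  uri = "qemu+ssh://username@host1.example.org/system"
}
```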

Volume

Next thing, we need a disk volume!

Volume.tf

resource "libvirt_volume" "ubuntu-2004-vol" {
  name = "ubuntu-2004-vol"
  pool = "default"
  #source = "https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64-disk-kvm.img"
  source = "ubuntu-20.04.img"
  format = "qcow2"
}

I have already downloaded this image and verified its checksum, so I will use this local image as the base image for my VM’s volume.
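For reference, fetching and verifying the image looks roughly like this. The curl lines show the real upstream URLs but are commented out so the sketch runs offline; the last three lines demonstrate the same sha256sum check against a stand-in file:

```shell
# curl -LO https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64-disk-kvm.img
# curl -LO https://cloud-images.ubuntu.com/focal/current/SHA256SUMS
# sha256sum -c --ignore-missing SHA256SUMS
printf 'stand-in image data' > ubuntu-20.04.img   # placeholder for the real image
sha256sum ubuntu-20.04.img > SHA256SUMS           # upstream publishes this file for you
sha256sum -c SHA256SUMS                           # prints "ubuntu-20.04.img: OK"
```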

By running terraform plan we will see this output:

  # libvirt_volume.ubuntu-2004-vol will be created
  + resource "libvirt_volume" "ubuntu-2004-vol" {
      + format = "qcow2"
      + id     = (known after apply)
      + name   = "ubuntu-2004-vol"
      + pool   = "default"
      + size   = (known after apply)
      + source = "ubuntu-20.04.img"
    }

What we expect is that terraform will use this source image to create a new disk volume (a copy) and put it in the default libvirt storage pool.

Let’s try to figure out what is happening here:

terraform plan -out terraform.out
terraform apply terraform.out
terraform show
# libvirt_volume.ubuntu-2004-vol:
resource "libvirt_volume" "ubuntu-2004-vol" {
    format = "qcow2"
    id     = "/var/lib/libvirt/images/ubuntu-2004-vol"
    name   = "ubuntu-2004-vol"
    pool   = "default"
    size   = 2361393152
    source = "ubuntu-20.04.img"
}

and

$ virsh -c qemu:///system vol-list default
 Name              Path
------------------------------------------------------------
 ubuntu-2004-vol   /var/lib/libvirt/images/ubuntu-2004-vol

Volume Size

Be aware: with this declaration, the produced disk volume image will have the same size as the original source; in this case, ~2G of disk.

We will show later in this article how to expand to something larger.

destroy volume

$ terraform destroy
libvirt_volume.ubuntu-2004-vol: Refreshing state... [id=/var/lib/libvirt/images/ubuntu-2004-vol]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # libvirt_volume.ubuntu-2004-vol will be destroyed
  - resource "libvirt_volume" "ubuntu-2004-vol" {
      - format = "qcow2" -> null
      - id     = "/var/lib/libvirt/images/ubuntu-2004-vol" -> null
      - name   = "ubuntu-2004-vol" -> null
      - pool   = "default" -> null
      - size   = 2361393152 -> null
      - source = "ubuntu-20.04.img" -> null
    }

Plan: 0 to add, 0 to change, 1 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

libvirt_volume.ubuntu-2004-vol: Destroying... [id=/var/lib/libvirt/images/ubuntu-2004-vol]
libvirt_volume.ubuntu-2004-vol: Destruction complete after 0s

Destroy complete! Resources: 1 destroyed.

verify

$ virsh -c qemu:///system vol-list default
 Name             Path
----------------------------------------------------------

reminder: always destroy!

Domain

Believe it or not, we are halfway to our first VM!

We need to create a libvirt domain resource.

Domain.tf

cat > Domain.tf <<EOF
resource "libvirt_domain" "ubuntu-2004-vm" {
  name = "ubuntu-2004-vm"

  memory = "2048"
  vcpu   = 1

  disk {
    volume_id = libvirt_volume.ubuntu-2004-vol.id
  }

}

EOF

Apply the new tf plan

 terraform plan -out terraform.out
 terraform apply terraform.out
$ terraform show

# libvirt_domain.ubuntu-2004-vm:
resource "libvirt_domain" "ubuntu-2004-vm" {
    arch        = "x86_64"
    autostart   = false
    disk        = [
        {
            block_device = ""
            file         = ""
            scsi         = false
            url          = ""
            volume_id    = "/var/lib/libvirt/images/ubuntu-2004-vol"
            wwn          = ""
        },
    ]
    emulator    = "/usr/bin/qemu-system-x86_64"
    fw_cfg_name = "opt/com.coreos/config"
    id          = "3a4a2b44-5ecd-433c-8645-9bccc831984f"
    machine     = "pc"
    memory      = 2048
    name        = "ubuntu-2004-vm"
    qemu_agent  = false
    running     = true
    vcpu        = 1
}

# libvirt_volume.ubuntu-2004-vol:
resource "libvirt_volume" "ubuntu-2004-vol" {
    format = "qcow2"
    id     = "/var/lib/libvirt/images/ubuntu-2004-vol"
    name   = "ubuntu-2004-vol"
    pool   = "default"
    size   = 2361393152
    source = "ubuntu-20.04.img"
}

Verify via virsh:

$ virsh -c qemu:///system list
 Id   Name             State
--------------------------------
 3    ubuntu-2004-vm   running

Destroy them!

$ terraform destroy

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

libvirt_domain.ubuntu-2004-vm: Destroying... [id=3a4a2b44-5ecd-433c-8645-9bccc831984f]
libvirt_domain.ubuntu-2004-vm: Destruction complete after 0s
libvirt_volume.ubuntu-2004-vol: Destroying... [id=/var/lib/libvirt/images/ubuntu-2004-vol]
libvirt_volume.ubuntu-2004-vol: Destruction complete after 0s

Destroy complete! Resources: 2 destroyed.

That’s it !

We have successfully created a new VM from a source image that runs on our libvirt environment.

But we cannot connect to or do anything with this instance yet. We need to add a few more things: a network interface, a console output and a default cloud-init file to auto-configure the virtual machine.

Variables

Before continuing with the user-data (cloud-init), it is a good time to set up some terraform variables.

cat > Variables.tf <<EOF

variable "domain" {
  description = "The domain/host name of the zone"
  default     = "ubuntu2004"
}

EOF

We are going to use this variable within the user-data YAML file.

Cloud-init

The best way to configure a newly created virtual machine is via cloud-init, by injecting a user-data.yml file into the virtual machine via terraform-libvirt.

user-data

#cloud-config

#disable_root: true
disable_root: false
chpasswd:
  list: |
       root:ping
  expire: False

# Set TimeZone
timezone: Europe/Athens

hostname: "${hostname}"

# PostInstall
runcmd:
  # Remove cloud-init
  - apt-get -qqy autoremove --purge cloud-init lxc lxd snapd
  - apt-get -y --purge autoremove
  - apt -y autoclean
  - apt -y clean all

cloud init disk

Terraform will read the above template file, generate a proper user-data.yml, and create a new ISO from it. To use this cloud-init ISO, we need to configure it as a libvirt cloud-init resource.

Cloudinit.tf

data "template_file" "user_data" {
  template = file("user-data.yml")
  vars = {
    hostname = var.domain
  }
}

resource "libvirt_cloudinit_disk" "cloud-init" {
  name           = "cloud-init.iso"
  user_data      = data.template_file.user_data.rendered
}

and we need to modify our Domain.tf accordingly:

cloudinit = libvirt_cloudinit_disk.cloud-init.id

Terraform will create and upload this ISO disk image into the default libvirt storage pool, and attach it to the virtual machine in the boot process.
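Conceptually, the template rendering step is plain variable substitution; here is a rough offline approximation with sed (an illustration of the idea, not what terraform actually runs):

```shell
# Mimic terraform's template rendering: replace ${hostname} with the
# variable's value (sed treats the mid-pattern '$' as a literal here)
printf 'hostname: "${hostname}"\n' > user-data.tpl
sed 's/${hostname}/ubuntu2004/' user-data.tpl   # prints: hostname: "ubuntu2004"
```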

At this moment the tf_libvirt directory should look like this:

$ ls -1
Cloudinit.tf
Domain.tf
Provider.tf
ubuntu-20.04.img
user-data.yml
Variables.tf
Volume.tf

To give you an idea, the abstract design is this:

tf_libvirt.png

apply

terraform plan -out terraform.out
terraform apply terraform.out
$ terraform show

# data.template_file.user_data:
data "template_file" "user_data" {
    id       = "cc82a7db4c6498aee21a883729fc4be7b84059d3dec69b92a210e046c67a9a00"
    rendered = <<~EOT
        #cloud-config

        #disable_root: true
        disable_root: false
        chpasswd:
          list: |
               root:ping
          expire: False

        # Set TimeZone
        timezone: Europe/Athens

        hostname: "ubuntu2004"

        # PostInstall
        runcmd:
          # Remove cloud-init
          - apt-get -qqy autoremove --purge cloud-init lxc lxd snapd
          - apt-get -y --purge autoremove
          - apt -y autoclean
          - apt -y clean all

    EOT
    template = <<~EOT
        #cloud-config

        #disable_root: true
        disable_root: false
        chpasswd:
          list: |
               root:ping
          expire: False

        # Set TimeZone
        timezone: Europe/Athens

        hostname: "${hostname}"

        # PostInstall
        runcmd:
          # Remove cloud-init
          - apt-get -qqy autoremove --purge cloud-init lxc lxd snapd
          - apt-get -y --purge autoremove
          - apt -y autoclean
          - apt -y clean all

    EOT
    vars     = {
        "hostname" = "ubuntu2004"
    }
}

# libvirt_cloudinit_disk.cloud-init:
resource "libvirt_cloudinit_disk" "cloud-init" {
    id        = "/var/lib/libvirt/images/cloud-init.iso;5f5cdc31-1d38-39cb-cc72-971e474ca539"
    name      = "cloud-init.iso"
    pool      = "default"
    user_data = <<~EOT
        #cloud-config

        #disable_root: true
        disable_root: false
        chpasswd:
          list: |
               root:ping
          expire: False

        # Set TimeZone
        timezone: Europe/Athens

        hostname: "ubuntu2004"

        # PostInstall
        runcmd:
          # Remove cloud-init
          - apt-get -qqy autoremove --purge cloud-init lxc lxd snapd
          - apt-get -y --purge autoremove
          - apt -y autoclean
          - apt -y clean all

    EOT
}

# libvirt_domain.ubuntu-2004-vm:
resource "libvirt_domain" "ubuntu-2004-vm" {
    arch        = "x86_64"
    autostart   = false
    cloudinit   = "/var/lib/libvirt/images/cloud-init.iso;5f5ce077-9508-3b8c-273d-02d44443b79c"
    disk        = [
        {
            block_device = ""
            file         = ""
            scsi         = false
            url          = ""
            volume_id    = "/var/lib/libvirt/images/ubuntu-2004-vol"
            wwn          = ""
        },
    ]
    emulator    = "/usr/bin/qemu-system-x86_64"
    fw_cfg_name = "opt/com.coreos/config"
    id          = "3ade5c95-30d4-496b-9bcf-a12d63993cfa"
    machine     = "pc"
    memory      = 2048
    name        = "ubuntu-2004-vm"
    qemu_agent  = false
    running     = true
    vcpu        = 1
}

# libvirt_volume.ubuntu-2004-vol:
resource "libvirt_volume" "ubuntu-2004-vol" {
    format = "qcow2"
    id     = "/var/lib/libvirt/images/ubuntu-2004-vol"
    name   = "ubuntu-2004-vol"
    pool   = "default"
    size   = 2361393152
    source = "ubuntu-20.04.img"
}

Lots of output, but let me explain it really quickly:

  • generate a user-data file from the template, populated with our variables
  • create a cloud-init ISO
  • create a volume disk from the source image
  • create a virtual machine with this new volume disk
  • boot it with the cloud-init ISO

Pretty much, that’s it!!!

$ virsh  -c qemu:///system vol-list --details  default

 Name              Path                                      Type   Capacity     Allocation
---------------------------------------------------------------------------------------------
 cloud-init.iso    /var/lib/libvirt/images/cloud-init.iso    file   364.00 KiB   364.00 KiB
 ubuntu-2004-vol   /var/lib/libvirt/images/ubuntu-2004-vol   file   2.20 GiB     537.94 MiB

$ virsh  -c qemu:///system list
 Id   Name             State
--------------------------------
 1    ubuntu-2004-vm   running

destroy

$ terraform destroy -auto-approve

data.template_file.user_data: Refreshing state... [id=cc82a7db4c6498aee21a883729fc4be7b84059d3dec69b92a210e046c67a9a00]
libvirt_volume.ubuntu-2004-vol: Refreshing state... [id=/var/lib/libvirt/images/ubuntu-2004-vol]
libvirt_cloudinit_disk.cloud-init: Refreshing state... [id=/var/lib/libvirt/images/cloud-init.iso;5f5cdc31-1d38-39cb-cc72-971e474ca539]
libvirt_domain.ubuntu-2004-vm: Refreshing state... [id=3ade5c95-30d4-496b-9bcf-a12d63993cfa]
libvirt_cloudinit_disk.cloud-init: Destroying... [id=/var/lib/libvirt/images/cloud-init.iso;5f5cdc31-1d38-39cb-cc72-971e474ca539]
libvirt_domain.ubuntu-2004-vm: Destroying... [id=3ade5c95-30d4-496b-9bcf-a12d63993cfa]
libvirt_cloudinit_disk.cloud-init: Destruction complete after 0s
libvirt_domain.ubuntu-2004-vm: Destruction complete after 0s
libvirt_volume.ubuntu-2004-vol: Destroying... [id=/var/lib/libvirt/images/ubuntu-2004-vol]
libvirt_volume.ubuntu-2004-vol: Destruction complete after 0s

Destroy complete! Resources: 3 destroyed.

The most important detail is:

Resources: 3 destroyed.

  • cloud-init.iso
  • ubuntu-2004-vol
  • ubuntu-2004-vm

Console

but there are a few things still missing.

For starters, a console, so we can connect to this virtual machine!

To do that, we need to re-edit Domain.tf and add a console output:

  console {
    target_type = "serial"
    type        = "pty"
    target_port = "0"
  }
  console {
    target_type = "virtio"
    type        = "pty"
    target_port = "1"
  }

the full file should look like this:

resource "libvirt_domain" "ubuntu-2004-vm" {
  name = "ubuntu-2004-vm"

  memory = "2048"
  vcpu   = 1

 cloudinit = libvirt_cloudinit_disk.cloud-init.id

  disk {
    volume_id = libvirt_volume.ubuntu-2004-vol.id
  }

  console {
    target_type = "serial"
    type        = "pty"
    target_port = "0"
  }
  console {
    target_type = "virtio"
    type        = "pty"
    target_port = "1"
  }

}

Create the VM again with:

terraform plan -out terraform.out
terraform apply terraform.out

And test the console:

$ virsh -c qemu:///system console ubuntu-2004-vm
Connected to domain ubuntu-2004-vm
Escape character is ^] (Ctrl + ])

ubuntu_console

wow!

We have actually logged in to this VM using the libvirt console!

Virtual Machine

Some interesting details:

root@ubuntu2004:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       2.0G  916M  1.1G  46% /
devtmpfs        998M     0  998M   0% /dev
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           200M  392K  200M   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
/dev/vda15      105M  3.9M  101M   4% /boot/efi
tmpfs           200M     0  200M   0% /run/user/0

root@ubuntu2004:~# free -hm
              total        used        free      shared  buff/cache   available
Mem:          2.0Gi        73Mi       1.7Gi       0.0Ki       140Mi       1.8Gi
Swap:            0B          0B          0B

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0

Do not forget to destroy

$ terraform destroy -auto-approve

data.template_file.user_data: Refreshing state... [id=cc82a7db4c6498aee21a883729fc4be7b84059d3dec69b92a210e046c67a9a00]
libvirt_volume.ubuntu-2004-vol: Refreshing state... [id=/var/lib/libvirt/images/ubuntu-2004-vol]
libvirt_cloudinit_disk.cloud-init: Refreshing state... [id=/var/lib/libvirt/images/cloud-init.iso;5f5ce077-9508-3b8c-273d-02d44443b79c]
libvirt_domain.ubuntu-2004-vm: Refreshing state... [id=69f75b08-1e06-409d-9fd6-f45d82260b51]
libvirt_domain.ubuntu-2004-vm: Destroying... [id=69f75b08-1e06-409d-9fd6-f45d82260b51]
libvirt_domain.ubuntu-2004-vm: Destruction complete after 0s
libvirt_cloudinit_disk.cloud-init: Destroying... [id=/var/lib/libvirt/images/cloud-init.iso;5f5ce077-9508-3b8c-273d-02d44443b79c]
libvirt_volume.ubuntu-2004-vol: Destroying... [id=/var/lib/libvirt/images/ubuntu-2004-vol]
libvirt_cloudinit_disk.cloud-init: Destruction complete after 0s
libvirt_volume.ubuntu-2004-vol: Destruction complete after 0s

Destroy complete! Resources: 3 destroyed.

extend the volume disk

As mentioned above, the volume’s disk size is exactly the same as the original source image’s.
In our case, that’s 2G.

What we need to do is use the source image as a base for a new volume disk. In the new volume disk, we can declare the size we need.

I would like to make this a user variable: Variables.tf

variable "vol_size" {
  description = "The volume size of the VM disk"
  # 10G
  default = 10 * 1024 * 1024 * 1024
}

Arithmetic in terraform!!
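The same arithmetic, checked in the shell (just a sanity check of the default value above):

```shell
# 10 GiB expressed in bytes, as in the terraform default
echo $((10 * 1024 * 1024 * 1024))   # prints 10737418240
```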

So the Volume.tf should be:

resource "libvirt_volume" "ubuntu-2004-base" {
  name = "ubuntu-2004-base"
  pool = "default"
  #source = "https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64-disk-kvm.img"
  source = "ubuntu-20.04.img"
  format = "qcow2"
}

resource "libvirt_volume" "ubuntu-2004-vol" {
  name           = "ubuntu-2004-vol"
  pool           = "default"
  base_volume_id = libvirt_volume.ubuntu-2004-base.id
  size           = var.vol_size
}

base image –> volume image

test it

terraform plan -out terraform.out
terraform apply terraform.out
$ virsh -c qemu:///system console ubuntu-2004-vm

Connected to domain ubuntu-2004-vm
Escape character is ^] (Ctrl + ])

ubuntu2004 login: root
Password:
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-1021-kvm x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Sat Sep 12 18:27:46 EEST 2020

  System load: 0.29             Memory usage: 6%   Processes:       66
  Usage of /:  9.3% of 9.52GB   Swap usage:   0%   Users logged in: 0

0 updates can be installed immediately.
0 of these updates are security updates.

Failed to connect to https://changelogs.ubuntu.com/meta-release-lts. Check your Internet connection or proxy settings

Last login: Sat Sep 12 18:26:37 EEST 2020 on ttyS0
root@ubuntu2004:~# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       9.6G  912M  8.7G  10% /
root@ubuntu2004:~#

10G !

destroy

terraform destroy -auto-approve

Swap

I would like to have a swap partition, and I will use cloud-init to create one.

modify user-data.yml

# Create swap partition
swap:
  filename: /swap.img
  size: "auto"
  maxsize: 2G

test it

terraform plan -out terraform.out && terraform apply terraform.out
$ virsh -c qemu:///system console ubuntu-2004-vm

Connected to domain ubuntu-2004-vm
Escape character is ^] (Ctrl + ])

root@ubuntu2004:~# free -hm
              total        used        free      shared  buff/cache   available
Mem:          2.0Gi        86Mi       1.7Gi       0.0Ki       188Mi       1.8Gi
Swap:         2.0Gi          0B       2.0Gi

root@ubuntu2004:~#

success !!

terraform destroy -auto-approve

Network

How about internet? Network?
Yes, what about it?

I guess you need to connect to the internets, okay then.

The easiest way is to add this to your Domain.tf:

  network_interface {
    network_name = "default"
  }

This will use the default libvirt network resource:

$ virsh -c qemu:///system net-list

 Name              State    Autostart   Persistent
----------------------------------------------------
 default           active   yes         yes
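If you would rather have terraform manage a network of its own instead of relying on the pre-existing default one, the provider also has a libvirt_network resource (the name and subnet below are made up for illustration):

```hcl
# A hypothetical NAT network managed by terraform
resource "libvirt_network" "lab" {
  name      = "lab"
  mode      = "nat"
  addresses = ["10.17.3.0/24"]
}
```

You would then reference it from the domain with network_id = libvirt_network.lab.id instead of network_name.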

If you prefer to directly expose your VM to your local network (be very careful!) then replace the above with a macvtap interface. If your ISP router provides DHCP, your VM will then get an IP from your router.

network_interface {
  macvtap = "eth0"
}

test it

terraform plan -out terraform.out && terraform apply terraform.out
$ virsh -c qemu:///system console ubuntu-2004-vm

Connected to domain ubuntu-2004-vm
Escape character is ^] (Ctrl + ])

root@ubuntu2004:~#

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:36:66:96 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.228/24 brd 192.168.122.255 scope global dynamic ens3
       valid_lft 3544sec preferred_lft 3544sec
    inet6 fe80::5054:ff:fe36:6696/64 scope link
       valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
4: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0

root@ubuntu2004:~# ping -c 5 google.com
PING google.com (172.217.23.142) 56(84) bytes of data.
64 bytes from fra16s18-in-f142.1e100.net (172.217.23.142): icmp_seq=1 ttl=115 time=43.4 ms
64 bytes from fra16s18-in-f142.1e100.net (172.217.23.142): icmp_seq=2 ttl=115 time=43.9 ms
64 bytes from fra16s18-in-f142.1e100.net (172.217.23.142): icmp_seq=3 ttl=115 time=43.0 ms
64 bytes from fra16s18-in-f142.1e100.net (172.217.23.142): icmp_seq=4 ttl=115 time=43.1 ms
64 bytes from fra16s18-in-f142.1e100.net (172.217.23.142): icmp_seq=5 ttl=115 time=43.4 ms

--- google.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 42.977/43.346/43.857/0.311 ms
root@ubuntu2004:~#

destroy

$ terraform destroy -auto-approve

Destroy complete! Resources: 4 destroyed.

SSH

Okay, now that we have network, it is possible to set up SSH to our virtual machine and also auto-create a user. I would like cloud-init to fetch my public key from GitHub and set up my user.

disable_root: true
ssh_pwauth: no

users:
  - name: ebal
    ssh_import_id:
      - gh:ebal
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL

write_files:
  - path: /etc/ssh/sshd_config
    content: |
        AcceptEnv LANG LC_*
        AllowUsers ebal
        ChallengeResponseAuthentication no
        Compression no
        MaxSessions 3
        PasswordAuthentication no
        PermitRootLogin no
        Port "${sshdport}"
        PrintMotd no
        Subsystem sftp  /usr/lib/openssh/sftp-server
        UseDNS no
        UsePAM yes
        X11Forwarding no

Notice that I have added a new variable called sshdport.

Variables.tf

variable "ssh_port" {
  description = "The sshd port of the VM"
  default     = 12345
}
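Since we are on Terraform 0.13, we could even guard this variable with a custom validation block (an optional addition of mine, not part of the original setup):

```hcl
variable "ssh_port" {
  description = "The sshd port of the VM"
  default     = 12345

  # Reject privileged or out-of-range ports at plan time
  validation {
    condition     = var.ssh_port > 1024 && var.ssh_port < 65536
    error_message = "The ssh_port must be an unprivileged TCP port."
  }
}
```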

and do not forget to update your cloud-init tf file:

Cloudinit.tf

data "template_file" "user_data" {
  template = file("user-data.yml")
  vars = {
    hostname = var.domain
    sshdport = var.ssh_port
  }
}

resource "libvirt_cloudinit_disk" "cloud-init" {
  name           = "cloud-init.iso"
  user_data      = data.template_file.user_data.rendered
}

Update VM

I would also like to update & install specific packages on this virtual machine:

# Install packages
packages:
  - figlet
  - mlocate
  - python3-apt
  - bash-completion
  - ncdu

# Update/Upgrade & Reboot if necessary
package_update: true
package_upgrade: true
package_reboot_if_required: true

# PostInstall
runcmd:
  - figlet ${hostname} > /etc/motd
  - updatedb
  # Firewall
  - ufw allow "${sshdport}"/tcp && ufw enable
  # Remove cloud-init
  - apt-get -y autoremove --purge cloud-init lxc lxd snapd
  - apt-get -y --purge autoremove
  - apt -y autoclean
  - apt -y clean all

Yes, I prefer to uninstall cloud-init at the end.

user-data.yml

the entire user-data.yml looks like this:

#cloud-config
disable_root: true
ssh_pwauth: no

users:
  - name: ebal
    ssh_import_id:
      - gh:ebal
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL

write_files:
  - path: /etc/ssh/sshd_config
    content: |
        AcceptEnv LANG LC_*
        AllowUsers ebal
        ChallengeResponseAuthentication no
        Compression no
        MaxSessions 3
        PasswordAuthentication no
        PermitRootLogin no
        Port "${sshdport}"
        PrintMotd no
        Subsystem sftp  /usr/lib/openssh/sftp-server
        UseDNS no
        UsePAM yes
        X11Forwarding no

# Set TimeZone
timezone: Europe/Athens

hostname: "${hostname}"

# Create swap partition
swap:
  filename: /swap.img
  size: "auto"
  maxsize: 2G

# Install packages
packages:
  - figlet
  - mlocate
  - python3-apt
  - bash-completion
  - ncdu

# Update/Upgrade & Reboot if necessary
package_update: true
package_upgrade: true
package_reboot_if_required: true

# PostInstall
runcmd:
  - figlet ${hostname} > /etc/motd
  - updatedb
  # Firewall
  - ufw allow "${sshdport}"/tcp && ufw enable
  # Remove cloud-init
  - apt-get -y autoremove --purge cloud-init lxc lxd snapd
  - apt-get -y --purge autoremove
  - apt -y autoclean
  - apt -y clean all

Output

We need to know the IP to log in, so create a new terraform file to get the IP.

Output.tf

output "IP" {
  value = libvirt_domain.ubuntu-2004-vm.network_interface.0.addresses
}
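While we are at it, we could also emit a ready-made ssh command (a convenience output I am adding for illustration; it is not required):

```hcl
output "ssh_command" {
  value = "ssh ${libvirt_domain.ubuntu-2004-vm.network_interface[0].addresses[0]} -p ${var.ssh_port}"
}
```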

But that means we need to wait for the DHCP lease. Modify Domain.tf to tell terraform to wait:

  network_interface {
    network_name = "default"
    wait_for_lease = true
  }

Plan & Apply

$ terraform plan -out terraform.out && terraform apply terraform.out

Outputs:

IP = [
  "192.168.122.79",
]

Verify

$ ssh 192.168.122.79 -p 12345 uptime
 19:33:46 up 2 min,  0 users,  load average: 0.95, 0.37, 0.14
$ ssh 192.168.122.79 -p 12345
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-1023-kvm x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Sat Sep 12 19:34:45 EEST 2020

  System load:  0.31              Processes:             72
  Usage of /:   33.1% of 9.52GB   Users logged in:       0
  Memory usage: 7%                IPv4 address for ens3: 192.168.122.79
  Swap usage:   0%

0 updates can be installed immediately.
0 of these updates are security updates.

  [figlet banner: ubuntu2004]

Last login: Sat Sep 12 19:34:37 2020 from 192.168.122.1

ebal@ubuntu2004:~$
ebal@ubuntu2004:~$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       9.6G  3.2G  6.4G  34% /

ebal@ubuntu2004:~$ free -hm
              total        used        free      shared  buff/cache   available
Mem:          2.0Gi        91Mi       1.7Gi       0.0Ki       197Mi       1.8Gi
Swap:         2.0Gi          0B       2.0Gi

ebal@ubuntu2004:~$ ping -c 5 libreops.cc
PING libreops.cc (185.199.108.153) 56(84) bytes of data.
64 bytes from 185.199.108.153 (185.199.108.153): icmp_seq=1 ttl=55 time=48.4 ms
64 bytes from 185.199.108.153 (185.199.108.153): icmp_seq=2 ttl=55 time=48.7 ms
64 bytes from 185.199.108.153 (185.199.108.153): icmp_seq=3 ttl=55 time=48.5 ms
64 bytes from 185.199.108.153 (185.199.108.153): icmp_seq=4 ttl=55 time=48.3 ms
64 bytes from 185.199.108.153 (185.199.108.153): icmp_seq=5 ttl=55 time=52.8 ms

--- libreops.cc ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 48.266/49.319/52.794/1.743 ms

what !!!!

awesome

destroy

terraform destroy -auto-approve

Custom Network

One last thing I would like to discuss is how to create a new network and provide a specific IP to your VM. This will separate your VMs/lab, and it is cheap & easy to set up a new libvirt network.

Network.tf

resource "libvirt_network" "tf_net" {
  name      = "tf_net"
  domain    = "libvirt.local"
  addresses = ["192.168.123.0/24"]
  dhcp {
    enabled = true
  }
  dns {
    enabled = true
  }
}

and replace network_interface in Domains.tf

  network_interface {
    network_id     = libvirt_network.tf_net.id
    network_name   = "tf_net"
    addresses      = ["192.168.123.${var.IP_addr}"]
    mac            = "52:54:00:b2:2f:${var.IP_addr}"
    wait_for_lease = true
  }

Look closely: there is a new terraform variable

Variables.tf

variable "IP_addr" {
  description = "The MAC & IP address for this VM"
  default     = 33
}
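One subtlety worth noting: the default value 33 is interpolated verbatim into both the last IP octet (decimal) and the last MAC byte (hex), which only works because 33 happens to be valid in both notations. A hedged sketch of converting an arbitrary octet explicitly (the variable names here are mine, not from the post):

```shell
# Sketch: derive a valid MAC suffix from a decimal IP octet (0-255).
# Interpolating the octet into the MAC as-is breaks for values like 100,
# so convert it to a two-digit hex byte explicitly.
ip_octet=33
mac_suffix=$(printf '%02x' "$ip_octet")
echo "52:54:00:b2:2f:${mac_suffix}"   # prints 52:54:00:b2:2f:21 (hex 21 == decimal 33)
```

With this conversion the IP and MAC stay distinct but deterministic for any octet value.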
$ terraform plan -out terraform.out && terraform apply terraform.out

Outputs:

IP = [
  "192.168.123.33",
]
$ ssh 192.168.123.33 -p 12345
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-1021-kvm x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

 System information disabled due to load higher than 1.0

12 updates can be installed immediately.
8 of these updates are security updates.
To see these additional updates run: apt list --upgradable

Last login: Sat Sep 12 19:56:33 2020 from 192.168.123.1

ebal@ubuntu2004:~$ ip addr show ens3
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:b2:2f:33 brd ff:ff:ff:ff:ff:ff
    inet 192.168.123.33/24 brd 192.168.123.255 scope global dynamic ens3
       valid_lft 3491sec preferred_lft 3491sec
    inet6 fe80::5054:ff:feb2:2f33/64 scope link
       valid_lft forever preferred_lft forever
ebal@ubuntu2004:~$

Terraform files

You can find every terraform example in my GitHub repo:

tf/0.13/libvirt/0.6.2/ubuntu/20.04 at master · ebal/tf · GitHub

That’s it!

If you like this article, consider following me on Twitter at ebalaskas.

3 days to send your talks to Linux App Summit 2020!

Head to https://linuxappsummit.org/cfp/ and talk about all those nice [Linux] Apps you're working on!


The call for papers ends in 3 days (September 15th)

Tuesday, 01 September 2020

PGPainless 0.1.0 released

After two years and a dozen alpha versions I am very glad to announce the first stable release of PGPainless! The release is available on maven central.

PGPainless aims to make using OpenPGP with Bouncycastle fun again by abstracting away most of the complexity and overhead that normally comes with it. At the same time PGPainless remains configurable by making heavy use of the builder pattern for almost everything.

Let's take a look at how to create a fresh OpenPGP key:

        PGPKeyRing keyRing = PGPainless.generateKeyRing()
                .simpleEcKeyRing("alice@wonderland.lit", "password123");

That is all it takes to generate an OpenPGP keypair that uses ECDH+ECDSA keys for encryption and signatures! You can of course also configure a more complex key pair with different algorithms and attributes:

        PGPainless.generateKeyRing()
                .withSubKey(KeySpec.getBuilder(RSA_ENCRYPT.withLength(RsaLength._4096))
                        .withKeyFlags(KeyFlag.ENCRYPT_COMMS, KeyFlag.ENCRYPT_STORAGE)
                        .withDefaultAlgorithms())
                .withSubKey(KeySpec.getBuilder(ECDH.fromCurve(EllipticCurve._P256))
                        .withKeyFlags(KeyFlag.ENCRYPT_COMMS, KeyFlag.ENCRYPT_STORAGE)
                        .withDefaultAlgorithms())
                .withSubKey(KeySpec.getBuilder(RSA_SIGN.withLength(RsaLength._4096))
                        .withKeyFlags(KeyFlag.SIGN_DATA)
                        .withDefaultAlgorithms())
                .withMasterKey(KeySpec.getBuilder(RSA_SIGN.withLength(RsaLength._8192))
                        .withKeyFlags(KeyFlag.CERTIFY_OTHER)
                        .withDetailedConfiguration()
                        .withPreferredSymmetricAlgorithms(SymmetricKeyAlgorithm.AES_256)
                        .withPreferredHashAlgorithms(HashAlgorithm.SHA512)
                        .withPreferredCompressionAlgorithms(CompressionAlgorithm.BZIP2)
                        .withFeature(Feature.MODIFICATION_DETECTION)
                        .done())
                .withPrimaryUserId("alice@wonderland.lit")
                .withPassphrase(new Passphrase("password123".toCharArray()))
                .build();

The API is designed so that it is hard for the user to make mistakes. Inputs are typed, so that, for example, the user cannot input a wrong key length for an RSA key. The “shortcut” methods (e.g. withDefaultAlgorithms()) use sane, secure defaults.

Now that we have a key, let's encrypt some data!

        byte[] secretMessage = message.getBytes(UTF8);
        ByteArrayOutputStream envelope = new ByteArrayOutputStream();

        EncryptionStream encryptor = PGPainless.createEncryptor()
                .onOutputStream(envelope)
                .toRecipients(recipientPublicKey)
                .usingSecureAlgorithms()
                .signWith(keyDecryptor, senderSecretKey)
                .asciiArmor();

        Streams.pipeAll(new ByteArrayInputStream(secretMessage), encryptor);
        encryptor.close();
        byte[] encryptedSecretMessage = envelope.toByteArray();

As you can see, there is almost no boilerplate code! At the same time, the above code creates a stream that encrypts and signs all the data that is passed through it. In the end the envelope stream will contain an ASCII-armored message that can only be decrypted by the intended recipients and that is signed using the sender's secret key.

Decrypting data and/or verifying signatures works very similarly:

        ByteArrayInputStream envelopeIn = new ByteArrayInputStream(encryptedSecretMessage);
        DecryptionStream decryptor = PGPainless.createDecryptor()
                .onInputStream(envelopeIn)
                .decryptWith(keyDecryptor, recipientSecretKey)
                .verifyWith(senderPublicKey)
                .ignoreMissingPublicKeys()
                .build();

        ByteArrayOutputStream decryptedSecretMessage = new ByteArrayOutputStream();

        Streams.pipeAll(decryptor, decryptedSecretMessage);
        decryptor.close();
        OpenPgpMetadata metadata = decryptor.getResult();

The decryptedSecretMessage stream now contains the decrypted message. The metadata object can be used to get information about the message, e.g. which keys and algorithms were used to encrypt/sign the data and whether those signatures were valid.

In summary, PGPainless is now able to create different types of keys, read encrypted and unencrypted keys, encrypt and/or sign data streams as well as decrypt and/or verify signatures. The latest additions to the API contain support for creating and verifying detached signatures.

PGPainless is already in use in Smack's OpenPGP module which implements XEP-0373: OpenPGP for XMPP, and it has been designed primarily with the instant messaging use case in mind. So if you want to add OpenPGP support to your application, feel free to give PGPainless a try!

Monday, 24 August 2020

Hacking GOG.com for fun and profit

If you have a GOG account, you might have received an email announcing a Harvest Sale. It's unusual for a harvest to last only 48 hours, but apart from that naming blunder, the sale is no different from many that came before it. What caught my attention was the somewhat creative spot-the-difference puzzle that accompanied it. Specifically, it gives me a pretext to share some image-processing insights.

Portion of the spot the difference Harvest Sale puzzle

Each identified difference presents a discount code for an exciting game. Surely, one cannot let it go to waste! Alas, years of sitting in a cave in front of a computer destroyed our eyesight; how could we possibly succeed‽ Simple!

Firstly, take a screenshot of the puzzle. The drawing in the email is rather tall, so you might need to take a few screenshots (each capturing a portion of the picture) or zoom out until the entire illustration fits. To practice, feel free to grab the above image of a portion of the puzzle.

With the screenshot ready, open it in an image manipulation program. To make things easier, crop it (the default key binding for the Crop tool is Shift+C) to discard irrelevant parts. Having done that, duplicate the layer; that can be accomplished through the Layer menu or by pressing Ctrl+Shift+D. Now, flip one of the layers. This operation is accessible through the Layer → Transform → Flip Horizontally menu option as well as inside the Layers dialogue.


The Layers dialogue is what we need now, as that's where the magic happens. If you haven't brought it up yet, do it now, for example with the Ctrl+L shortcut. Make sure the top layer is selected and finally choose Difference from the Mode drop-down. Et voilà.

Depending on the cropping done at the beginning, the top layer might need to be moved a little — key binding for the Move tool is M. If done correctly, the differences should be highlighted with bright colours.

Friday, 21 August 2020

Flathub stats for KDE applications [that are part of the release service]

I just discovered https://gitlab.gnome.org/Jehan/gimp-flathub-stats, which tells you the download stats (including updates) for a given Flathub application.


I ran it over the KDE applications that are part of the release service that we have on Flathub.


The results are somewhat surprising:


org.kde.kdenlive
Total:   22649 downloads in 8 days
org.kde.kalzium
Total:   16571 downloads in 9 days
org.kde.kgeography
Total:   16343 downloads in 9 days
org.kde.kbruch
Total:   15744 downloads in 9 days
org.kde.kapman
Total:   15473 downloads in 9 days
org.kde.kblocks
Total:   15426 downloads in 9 days
org.kde.katomic
Total:   15385 downloads in 9 days
org.kde.khangman
Total:   15370 downloads in 9 days
org.kde.kbounce
Total:   15280 downloads in 9 days
org.kde.kdiamond
Total:   15242 downloads in 9 days
org.kde.kwordquiz
Total:   15173 downloads in 9 days
org.kde.ksudoku
Total:   15155 downloads in 9 days
org.kde.kigo
Total:   15125 downloads in 9 days
org.kde.kgoldrunner
Total:   15062 downloads in 9 days
org.kde.knetwalk
Total:   15030 downloads in 9 days
org.kde.palapeli
Total:   14939 downloads in 9 days
org.kde.klickety
Total:   14917 downloads in 9 days
org.kde.klines
Total:   14866 downloads in 9 days
org.kde.knavalbattle
Total:   14848 downloads in 9 days
org.kde.kjumpingcube
Total:   14831 downloads in 9 days
org.kde.ksquares
Total:   14829 downloads in 9 days
org.kde.killbots
Total:   14772 downloads in 9 days
org.kde.kubrick
Total:   14696 downloads in 9 days
org.kde.ktuberling
Total:   14652 downloads in 9 days
org.kde.kontact
Total:   3728 downloads in 458 days
org.kde.kolourpaint
Total:   2542 downloads in 9 days
org.kde.okular
Total:   2304 downloads in 3 days
org.kde.kpat
Total:   1203 downloads in 8 days
org.kde.ktouch
Total:   880 downloads in 9 days
org.kde.kate
Total:   638 downloads in 9 days
org.kde.ark
Total:   571 downloads in 9 days
org.kde.dolphin
Total:   566 downloads in 3 days
org.kde.elisa
Total:   387 downloads in 3 days
org.kde.minuet
Total:   103 downloads in 9 days
org.kde.kwrite
Total:   84 downloads in 9 days
org.kde.kcachegrind
Total:   35 downloads in 9 days
org.kde.kcalc
Total:   23 downloads in 4 days
org.kde.lokalize
Total:   10 downloads in 3 days

kdenlive is the clear winner.

After that there's a compact block of games/edu apps that I think were/are part of the Endless default install, and it shows: the next app after that block has about six times fewer downloads.

"New" (in the Flathub sense) apps like lokalize or kcalc are the ones with the fewest downloads; I guess people haven't seen them there yet.

Thursday, 20 August 2020

Curse of knowledge

[Original Published at Linkedin on October 28, 2018]

 

The curse of knowledge is a cognitive bias that occurs when an individual, communicating with other individuals, unknowingly assumes that the others have the background to understand.

 

Let’s talk about documentation

This is the one big elephant in every team’s room.

TLDR; Increment: Documentation

Documentation empowers users and technical teams to function more effectively, and can promote approachability, accessibility, efficiency, innovation, and more stable development.

Bad technical guides can cause frustration, confusion, and distrust in your software, support channels, and even your brand, and they can hinder progress and productivity internally.

 

so to avoid situations like these:

wisdom_of_the_ancients.png

xkcd - wisdom_of_the_ancients

or/and

TitsMcGee4782

Optipess - TitsMcGee4782

 

documentation must exist!

 

Myths

  • Self-documenting code
  • No time to write documentation
  • There are code examples, what else do you need?
  • There is a wiki page (or 300,000 pages).
  • I’m not a professional writer
  • No one reads the manual

 

Problems

  • Maintaining the documentation (keeping it up to date)
  • Incomplete or Confusing documentation
  • Documentation is platform/version oriented
  • Engineers who may not be fluent in English (or don't speak your language)
  • Too long
  • Too short
  • Documentation structure

 

Types of documentation

  • Technical Manual (system)
  • Tutorial (mini tutorials)
  • HowTo (mini howto)
  • Library/API documentation (reference)
  • Customer Documentation (end user)
  • Operations manual
  • User manual (support)
  • Team documentation
  • Project/Software documentation
  • Notes
  • FAQ

 

Why Documentation Is Important

Communication is a key to success. Documentation is part of the communication process. We either try to communicate or collaborate with our customers or even within our own team. We use our documentation to inform customers of new features and how to use them, to train our internal team (colleagues), to collaborate with them, to reach out, help out, connect, and communicate our work with others.

When writing code, the documentation should be the “one truth”, rather than the code repository. I love working with projects that will not deploy a new feature before updating the documentation first. For example, I read the ‘Release Notes for Red Hat’ and the ChangeLog instead of reading myriads of code repositories.

 

Know Your Audience

Try to answer these questions:

  • Who is reading this documentation?
  • Is it for internal or external users/customers?
  • Do they have a dev background?
  • Are they non-technical people?

Use personas to create different material. Try to remember this one golden rule:

The audience should get value from the documentation (learn something).

 

Guidelines

Here are some guidelines:

  • Tell a story
  • Use a narrative voice
  • Try to solve a problem
  • Simplify - KISS philosophy
  • Focus on approachability and usability

Even on a technical document try to:

  • Write platform-agnostic documentation
  • Reproducibility
  • Not writing in acronyms and technical jargon (explain)
  • Step Approach
  • Towards goal achievement
  • Routines

 

UX

A picture is worth a thousand words

so remember to:

  • visual representation of the information architecture
  • use code examples
  • screencasts
  • CLI –help output
  • usage
  • clear error messages

Customers and users want to type as little as possible.
By reducing user input, your project will be more fault tolerant.

Instead of providing a procedure for a deploy pipeline, give them a deploy bot or a next-next-install GUI/web user interface, and focus your documentation around that.

 

Content

So what should you include in the documentation?

  • How technical should it be?
  • Use cases?
  • General purpose?
  • Article size (small pages are easier to maintain).
  • Minimum Viable Content Vs Too much detail
  • Help them to understand

Imagine your documentation as microservices instead of a huge monolithic project.

Usually a well-defined structure looks like this:

  • Table of Contents (toc)
  • Introduction
  • Short Description
  • Sections / Modules / Chapters
  • Conclusion / Summary
  • Glossary
  • Index
  • References

 

Tools

I prefer wiki pages over a word document because of the features below:

  • Version
  • History
  • Portability
  • Convertibility
  • Accountability

By the way, if you are using Confluence, there is a Markdown plugin.

 

Measurements & Feedback

To understand whether your documentation is good or not, you need feedback. But first there is an evaluation process: review. It is the same as with writing code; you can't review your own code! The first feedback should come from within your team.

Use analytics to measure whether people read your documentation, from ‘hits per page’ to more advanced analytics such as Matomo (formerly Piwik). Profiling helps you understand your audience and what they like in documentation.

Customer satisfaction (CSat) is an important documentation metric.

  • Was this page helpful? Yes/No
  • Allowing customers to submit comments.
  • Upvote/Downvote / Like
  • or even let them contribute to your documentation

Make it easy for people to share their feedback and find a way to include their comments in it.

 

FAQ

Frequently Asked Questions should answer questions in the style of:

  • What would customers ask ?
  • What if
  • How to

A FAQ or Q&A should be as straightforward, short, and simple as it can be. You are writing a FAQ to help customers learn how to use a specific feature, not the entire software. Use links that direct them to your main documentation for more detailed info.

Conclusion

Documentation is about sharing knowledge and shaping the culture of your team and end users. Your documentation should reflect your customers' needs. Everything you do in your business is to satisfy your customers, and documentation is one way to communicate this.

So here are some final points on documentation:

  • User Oriented
  • Readability
  • Comprehensive
  • Keep it up to date
Tag(s): knowledge

Wednesday, 12 August 2020

Computers can't sustain themselves

The situation where an unskilled user can enjoy a well-working computer only lasts so long.1 Either the user becomes good at maintaining the computer, or it will stop working correctly. That’s because computers are not reliable. If not used carefully, at some point, they will behave unexpectedly or stop working. Therefore, one will have to get their hands dirty and most likely learn something along the way.

Some operating systems are more prone to gathering cruft, though. Windows computers are known to decay over time. This is caused by file system fragmentation, the registry getting cluttered, OS / software updates,2 or unwanted software installation. Furthermore, users can install software from any source. As a result, they have to check the software quality by themselves, verify that it's compatible with their system, and perform the installation procedure and the maintenance. This may create problems if any of these tasks is not done well. Conversely, despite giving users a lot of power, GNU/Linux is more likely to stay stable. Even though it depends on the distribution's policies, software installation and updates are done through package-manager repositories that are administered and curated by skilled maintainers. Breaking the system is thus more difficult, but not impossible. After all, with great power comes great responsibility.

Becoming knowledgeable about computers and being able to manage them efficiently is easier with Free Software than with proprietary software. Free Software creates a healthy relationship between developers and users, whereby developers play nice with users because any user may redesign the software to better fit their needs or stop using it completely. Its openness allows everyone to dig into technical details and discover how it works. Therefore, Free Software puts users in control, allowing them to better understand technology.

Regardless of the reason, leaving a computer badly managed will result in cruft creeping in. The solution to this is simple: don’t be afraid of your tools, master them!


  1. Unless the computer is managed remotely. ↩︎

  2. Citing only the latest of a long list: Windows 10 printing breaks due to Microsoft June 2020 updates. ↩︎

Tuesday, 28 July 2020

The power of git-sed

In the recent weeks and months, the FSFE Web Team has been doing some heavy work on the FSFE website. We moved and replaced thousands of files and their respective links to improve the structure of a historically grown website (19+ years, 23243 files, almost 39k commits). But how to do that most efficiently in a version controlled system like Git?

In our scenarios, the steps executed often looked like the following:

  1. Move/rename a directory full of XML files representing website pages
  2. Find all links that pointed to this directory, and change them
  3. Create a rewrite rule

For the first step, using the included git mv is perfectly fine.

For the second, we would usually need a combination of grep and sed, e.g.:

grep -lr "/old/page.html" . | xargs sed -i 's;/old/page.html;/new/page.html;g'

This has a few major flaws:

  • In a Git repository, this also greps inside the .git directory where we do not want to edit files directly
  • The grep is slow in huge repositories
  • The searched old link has to be mentioned twice, which makes semi-manual replacement of a large number of links hard
  • Depending on the Regex complexity we need, the command becomes long, and we need to take care of using the correct flags for grep and sed.
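As an interim step, the first two flaws can already be mitigated by swapping plain grep for git grep, which skips .git/ and uses git's index. A sketch (GNU sed is assumed for the in-place -i flag):

```shell
# git grep honours the repository index and ignores .git/;
# -z/-0 keeps filenames containing spaces intact.
git grep -lz "/old/page.html" | xargs -0 sed -i 's;/old/page.html;/new/page.html;g'
```

The pattern still has to be typed twice, though, which is exactly what git-sed addresses.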

git-sed to the rescue

After some research, I found git-sed, basically a Bash file in the git-extras project. With some modifications (pull request pending) it’s the perfect tool for mass search and replacement.

It solves all of the above problems:

  • It uses git grep, which ignores the .git/ directory and is much faster because it uses git’s index.
  • The command is much shorter and easier to understand and write
  • Flags are easy to add, and this only has to be done once per command

Install

You can just install the git-extras package which also contains a few other scripts.

I opted for using it standalone, so I downloaded the shell file, put it in a directory which is in my $PATH, and removed a dependency on a script which is only available in git-extras (see my aforementioned PR). So, for instance, you could copy git-sed.sh to /usr/local/bin/ and make it executable. To enable calling it via git sed, put this in your ~/.gitconfig:

[alias]
  sed = !sh git-sed.sh

Usage

After installing git-sed, the command above would become:

git sed -f g "/old/page.html" "/new/page.html"

My modifications also allow people to use extended regexes, including things like reference captures, so I hope they will be merged soon. With this, some more advanced replacements are possible:

# Use reference capture (save absolute link as \1)
git sed -f g "http://fsfe.org(/.*?\.html)" "https://fsfe.org\1"

# Optional tokens (.html is optional here)
git sed -f g "/old/page(\.html)?" "/new/page.html"

And if you would like to limit git-sed to a certain directory, e.g. news/, that’s also no big deal:

git sed -f g "oldstring" "newstring" -- news/

You may have noticed the -f flag with the g argument. People used to sed know that g replaces all occurrences of the searched pattern in a file, not only the first one. You can also make it gi if you want a case-insensitive search and replace.

Conclusion

As you can see, using git-sed is really a time and nerve saver when doing mass changes on your repositories. Of course, there is also room for improvement. For instance, it could be useful to use the Perl Regex library (PCRE) for the grep and sed to also allow for look-aheads or look-behinds. I encourage you to try git-sed and make suggestions to upstream directly to improve this handy tool.

Sunday, 26 July 2020

Nextcloud and OpenID-Connect


OpenID-Connect connectors were already available in the Nextcloud app store before, but since Nextcloud 19 it is an officially supported user back-end. So it was time for me to have a closer look at it and try it out.

Get an OpenID-Connect provider

The first step was to get an OpenID-Connect provider. Sure, I could have chosen one of the public services, but why not have a small, nice provider running directly on my machine? Keycloak makes this really simple. By following their Getting Started Guide I could set up an OpenID-Connect provider in just a few minutes and run it directly on my local demo machine. I will show you how I configured Keycloak as an OpenID-Connect provider for Nextcloud, but please keep in mind that this is the first time I configured Keycloak and my main goal was to get it running quickly to test the Nextcloud connector. It is quite likely that I missed some important security setting which you would want to enable on a production system.

After installing Keycloak we go to http://localhost:8080/auth/ which is the default URL in “standalone” mode and login as admin. The first thing we do is to configure a new Realm in the “Realm Settings”. Only “Name” and “Display name” need to be set, the rest can be kept as it is:

Next we move on to the “Clients” tab and create a new client:

Set a random “Client ID” (I chose “nextcloud” in this example) and the root URL of your Nextcloud, which is http://localhost/stable in this case. After that we get redirected to the client configuration page. Most settings are already set correctly; we only need to adjust two more settings.

First we set the “Access Type” to “confidential”; this is needed in order to get a client secret, which we need for the Nextcloud setup later on. While the “Valid Redirect URIs” work as-is with the wildcard, I used the one proposed by the Nextcloud OIDC app, http://localhost/stable/index.php/apps/user_oidc/code. This is the Nextcloud end-point to which Keycloak will redirect the user back after a successful login.

Finally we create a user who should be able to login to Nextcloud later.

While technically the “Username” is enough, I directly set the e-mail address, first and last name. Nextcloud will reuse this information later to nicely pre-fill the user's profile. Don't forget to go to the “Credentials” tab and set a password for your new user.

That’s it. Now we just need to grab a few pieces of information to complete the Nextcloud configuration later on.

First we go back to the “Realm Settings” of the “OIDCDemo” realm; under “OpenID Endpoint Configuration” we get a JSON document with all the parameters of our OpenID-Connect end-point. For Nextcloud we only need the “authorization_endpoint”, which we find in the second line of the JSON file.
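Instead of picking values out of that JSON by hand, the discovery document can also be fetched and filtered on the command line. A sketch, assuming the default standalone URL, a realm named “OIDCDemo”, and that jq is installed:

```shell
# Fetch the realm's OpenID-Connect discovery document and extract the
# authorization endpoint (URL and realm name are assumptions based on
# the Keycloak setup described above).
curl -s http://localhost:8080/auth/realms/OIDCDemo/.well-known/openid-configuration \
  | jq -r '.authorization_endpoint'
```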

The second value is the client secret. We can find it in the “Credentials” tab of the “nextcloud” client settings:

Nextcloud setup

Before we continue, make sure you have this line in your config.php: 'allow_local_remote_servers' => true, otherwise Nextcloud will refuse to connect to Keycloak on localhost.

Now we can move on and configure Nextcloud. As mentioned before, the official OpenID-Connect app was released together with Nextcloud 19, so you need Nextcloud 19 or later. If you go to the Nextcloud app management and search for “openid” you will not only find the official app but also the community apps. Make sure to choose the app called “OpenID Connect user backend”. Just to avoid misunderstandings at this point: the Nextcloud community does an awesome job! I’m sure the community apps work great too; they may even have more features compared to the new official app. But the goal of this article was to try out the officially supported OpenID-Connect app.

After installing the app we go to the admin settings where we will find a new menu entry called “OpenID Connect” on the left sidebar. The setup is quite simple but contains everything needed:

The app supports multiple OpenID-Connect providers in parallel, so the first thing we do is choose an “Identifier”, which will be shown on the login page to let the user pick the right provider. For the other fields we enter the “Client ID”, “Client secret” and “Discovery endpoint” from Keycloak. After accepting the settings by clicking on “Register” we are done. Now let’s try to log in with OpenID Connect:

As you can see, we now have an additional button called “Login with Keycloak”. Once it is clicked, we get redirected to Keycloak:

After we successfully log in to Keycloak we get redirected back to Nextcloud and are logged in. A look at our personal settings shows that all our account details, like the full name and the email address, were added correctly to our Nextcloud account:

Saturday, 18 July 2020

Book your BoF for Akademy 2020 now!

BoF sessions are an integral part of Akademy; they are the “let's discuss and plan how to make things happen” oriented part that follows the more “this is what we've done” oriented part of the talks.

So go to https://community.kde.org/Akademy/2020/AllBoF and book a session to discuss with the rest of the community something you're passionate about! :)

Saturday, 11 July 2020

20.08 releases branches created

Make sure you commit anything you want to end up in the 20.08 releases to them.

We're already past the dependency freeze.

The Feature Freeze and Beta is this Thursday, July 16.

More interesting dates
    July 30, 2020: 20.08 RC (20.07.90) Tagging and Release
  August  6, 2020: 20.08 Tagging
  August 13, 2020: 20.08 Release

https://community.kde.org/Schedules/release_service/20.08_Release_Schedule

Cheers,
  Albert

Friday, 10 July 2020

Light OpenStreetMapping with GPS

Now that lockdown is lifting a bit in Scotland, I’ve been going a bit further for exercise. One location I’ve been to a few times is Tyrebagger Woods. In theory, I can walk here from my house via Brimmond Hill although I’m not yet fit enough to do that in one go.

Instead of following the main path, I took a detour along some route that looked like it wanted to be a path but it hadn’t been maintained for a while. When I decided I’d had enough of this, I looked for a way back to the main path but OpenStreetMap didn’t seem to have the footpaths mapped out here yet.

I’ve done some OpenStreetMap surveying before so I thought I’d take a look at improving this, and moving some of the tracks on the map closer to where they are in reality. In the past I’ve used OSMTracker which was great, but now I’m on iOS there doesn’t seem to be anything that matches up.

My new handheld radio, a Kenwood TH-D74, has the ability to record GPS logs, so I thought I’d give this a go. It records the logs to the SD card with one file per session. It’s a very simple logger that records the NMEA strings as they are received. The only sentences I see in the file are GPGGA (Global Positioning System Fix Data) and GPRMC (Recommended Minimum Specific GPS/Transit Data).
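To get a feel for what the logger writes, here is a made-up GPGGA sentence (all field values are invented for illustration). Fields 3 to 6 carry latitude and longitude as ddmm.mmmm plus a hemisphere letter; a small awk sketch converts them to the decimal degrees that tools like JOSM display:

```shell
# Hypothetical GPGGA sentence; fields 3-6 are latitude/longitude in
# ddmm.mmmm form plus N/S and E/W hemisphere letters.
echo '$GPGGA,165017,5710.0000,N,00215.0000,W,1,08,0.9,120.0,M,48.0,M,,' |
awk -F, '{
  lat = int($3/100) + ($3 - int($3/100)*100)/60;   # degrees + minutes/60
  if ($4 == "S") lat = -lat;
  lon = int($5/100) + ($5 - int($5/100)*100)/60;
  if ($6 == "W") lon = -lon;
  printf "%.4f %.4f\n", lat, lon
}'
# prints: 57.1667 -2.2500
```

gpsbabel does this conversion (and much more) for us, but it helps to know what the raw data looks like.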

I tried to import this directly with JOSM but it seemed to throw an error and crash. I’ve not investigated this, but I thought a way around could be to convert this to GPX format. This was easier than expected:

apt install gpsbabel
gpsbabel -i nmea -f "/sdcard/KENWOOD/TH-D74/GPS_LOG/25062020_165017.nme" \
                 -o gpx,gpxver=1.1 -F "/tmp/tyrebagger.gpx"

This imported into JOSM just fine and I was able to adjust some of the tracks to better fit where they actually are.

I’ll take the radio with me when I go in future and explore some of the other paths, to see if I can get the whole woods mapped out nicely. It is fun to just dive into the trees sometimes, along the paths that look a little forgotten and overgrown, but it’s also nice to be able to find your way out again when you get lost.

Tuesday, 23 June 2020

How to build a SSH Bastion host

[this is a technical blog post, but easy to follow]

Recently I had to set up and present my idea of an SSH bastion host. You may have already heard of this as a jump host, an SSH hopping station, an SSH gateway, or something else.

The main concept

SSH bastion

Disclaimer: This is just a proof of concept (PoC). May need a few adjustments.

The destination VM may be on another VPC; perhaps it does not have a public DNS entry or even a public IP. Think of this VM as not directly accessible: only the ssh bastion server can reach it. So we need to reach the bastion first.

SSH Config

To begin with, I will share my initial sshd_config to get an idea of my current ssh setup

AcceptEnv LANG LC_*
ChallengeResponseAuthentication no
Compression no
MaxSessions 3
PasswordAuthentication no
PermitRootLogin no
Port 12345
PrintMotd no
Subsystem sftp  /usr/lib/openssh/sftp-server
UseDNS no
UsePAM yes
X11Forwarding no
AllowUsers ebal
  • I only allow user ebal to connect via ssh.
  • I do not allow the root user to login via ssh.
  • I use ssh keys instead of passwords.

This configuration is almost identical on both VMs

  • bastion (the name of the VM that acts as a bastion server)
  • VM (the name of the destination VM that is behind a DMZ/firewall)

~/.ssh/config

I am using the ssh config file to have an easier user experience when using ssh

Host bastion
     Hostname 127.71.16.12
     Port 12345
     IdentityFile ~/.ssh/id_ed25519.vm

Host vm
     Hostname 192.13.13.186
     Port 23456

Host *
    User ebal
    ServerAliveInterval 300
    ServerAliveCountMax 10
    ConnectTimeout=60

Create a new user to test this

Let us create a new user for testing.

User/Group

$ sudo groupadd ebal_test
$ sudo useradd -g ebal_test -m ebal_test

$ id ebal_test
uid=1000(ebal_test) gid=1000(ebal_test) groups=1000(ebal_test)

Perms

Copy the .ssh directory from the current user (<== lazy sysadmin)

$ sudo cp -ravx /home/ebal/.ssh/ /home/ebal_test/
$ sudo chown -R ebal_test:ebal_test /home/ebal_test/.ssh

$ sudo ls -la ~ebal_test/.ssh/
total 12
drwxr-x---. 2 ebal_test ebal_test 4096 Sep 20  2019 .
drwx------. 3 ebal_test ebal_test 4096 Jun 23 15:56 ..
-r--r-----. 1 ebal_test ebal_test  181 Sep 20  2019 authorized_keys

$ sudo ls -ld ~ebal_test/.ssh/
drwxr-x---. 2 ebal_test ebal_test 4096 Sep 20  2019 /home/ebal_test/.ssh/

bastion sshd config

Edit the ssh daemon configuration file and append the entries below

cat /etc/ssh/sshd_config

AllowUsers ebal ebal_test

Match User ebal_test
   AllowAgentForwarding no
   AllowTcpForwarding yes
   X11Forwarding no
   PermitTunnel no
   GatewayPorts no
   ForceCommand echo 'This account can only be used for ProxyJump (ssh -J)'

Don’t forget to restart sshd

systemctl restart sshd

As you can see above, I now allow two (2) users to access the ssh daemon (AllowUsers). This can also work with AllowGroups
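For example, the same restriction could be expressed per group rather than per user; the group names below are hypothetical, not part of the setup above:

```
AllowGroups sshusers jumponly

Match Group jumponly
   ForceCommand echo 'This account can only be used for ProxyJump (ssh -J)'
```

Any account added to the jumponly group then gets the ForceCommand restriction without further sshd_config edits.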

Testing bastion

Let’s try to connect to this bastion VM

$ ssh bastion -l ebal_test uptime
This account can only be used for ProxyJump (ssh -J)
$ ssh bastion -l ebal_test
This account can only be used for ProxyJump (ssh -J)
Connection to 127.71.16.12 closed.

Interesting …

We cannot log in to this machine.

Let’s try with our personal user

$ ssh bastion -l ebal uptime
 18:49:14 up 3 days,  9:14,  0 users,  load average: 0.00, 0.00, 0.00

Perfect.

Let’s try from windows (mobaxterm)

mobaxterm is putty on steroids! There is also a portable version, so there is no need for installation: you can just download and extract it.

mobaxterm_proxyjump.png

Interesting…

Destination VM

Now it is time to test our access to the destination VM

$ ssh VM
ssh: connect to host 192.13.13.186 port 23456: Connection refused

bastion

$ ssh -J ebal_test@bastion ebal@vm uptime
 19:07:25 up 22:24,  2 users,  load average: 0.00, 0.01, 0.00
$ ssh -J ebal_test@bastion ebal@vm
Last login: Tue Jun 23 19:05:29 2020 from 94.242.59.170
ebal@vm:~$
ebal@vm:~$ exit
logout

Success !

Explain Command

Using this command

ssh -J ebal_test@bastion ebal@vm

  • tells the ssh client command to use the ProxyJump feature,
  • using the user ebal_test on the bastion machine, and
  • connecting as the user ebal on vm.

So we can have different users!
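If the ssh client predates OpenSSH 7.3, where -J/ProxyJump was introduced, the same hop can be expressed with the older ProxyCommand option. A sketch reusing the host aliases from this article:

```
Host vm
     Hostname 192.13.13.186
     Port 23456
     User ebal
     ProxyCommand ssh -W %h:%p ebal_test@bastion
```

Here %h and %p are expanded by ssh to the target's hostname and port, and -W forwards standard input/output to that destination through the bastion.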

ssh/config

Now, it is time to put everything in our ~/.ssh/config file

Host bastion
     Hostname 127.71.16.12
     Port 12345
     User ebal_test
     IdentityFile ~/.ssh/id_ed25519.vm

Host vm
     Hostname 192.13.13.186
     ProxyJump bastion
     User ebal
     Port 23456

and try again

$ ssh vm uptime
 19:17:19 up 22:33,  1 user,  load average: 0.22, 0.11, 0.03

mobaxterm with bastion

mobaxterm_settings.png

mobaxterm_proxyjump_final.png

Thursday, 11 June 2020

20.08 releases schedule finalized

It is available at the usual place https://community.kde.org/Schedules/release_service/20.08_Release_Schedule

Dependency freeze is in four weeks (July 9) and Feature Freeze a week after that, make sure you start finishing your stuff!

Wednesday, 10 June 2020

How to use cloud-init with Edis

It is a known fact that my favorite hosting provider is edis. I’ve seen them improving their services all these years, without forgetting their customers. Their support is great and I am really happy working with them.

That said, they don’t (yet) offer a public infrastructure API like hetzner, linode or digitalocean, but they do offer an Auto Installer option to configure your VPS via a post-install shell script, add your ssh key and select your basic OS image.

edis_auto_installer.png

I have been experimenting with this option for the last few weeks, but I wanted to use my current cloud-init configuration file without making many changes. The goal is to produce a VPS image that, when finished, will be ready to accept my ansible roles without any additional change, or even logging in to this VPS.

So here is my current solution on how to use the post-install option to provide my current cloud-init configuration!

cloud-init

cloud-init documentation

cloud-init.png
Josh Powers @ DebConf17

I will not get into cloud-init details in this blog post, but tl;dr: it has stages, it has modules, you provide your own user-data file (YAML) and it supports datasources. All these things tell cloud-init what to do, what to configure and when to configure it (in which step).
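As a small taste of a module: a user-data file is just a YAML document starting with the #cloud-config marker, and each top-level key feeds one module. A minimal, hypothetical example that only updates the package index and installs one package:

```
#cloud-config
package_update: true
packages:
  - htop
```

The full user-data file used in this article appears further below.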

NoCloud Seed

I am going to use the NoCloud datasource for this experiment.

so I need to configure these two (2) files

/var/lib/cloud/seed/nocloud-net/meta-data
/var/lib/cloud/seed/nocloud-net/user-data

Install cloud-init

My first entry in the post-install shell script should be

apt-get update && apt-get install -y cloud-init

Thus I can be sure of two (2) things: first, that the VPS already has network access and I don’t need to configure it; and second, that the cloud-init software is installed and there when I need it.

Variables

I try to keep my user-data file very simple, but I would like to configure the hostname and the sshd port.

HOSTNAME="complimentary"
SSHDPORT=22422
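These are plain shell variables. Because the here-documents later in the script use an unquoted delimiter (EOF rather than 'EOF'), the shell substitutes the variables before cloud-init ever reads the generated files. A quick sketch of that behaviour:

```shell
# Unquoted EOF: the shell expands $SSHDPORT while writing the here-document,
# so the generated text contains the real port number, not the variable name.
SSHDPORT=22422
cat <<EOF
Port $SSHDPORT
EOF
# prints: Port 22422
```

Quoting the delimiter ('EOF') would suppress the expansion and leave the literal string $SSHDPORT in the file.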

Users

Add a single user, provide a public ssh key for authentication and enable sudo access to this user.

users:
  - name: ebal
    ssh_import_id:
      - gh:ebal
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL

Hardening SSH

  • Change sshd port
  • Disable Root
  • Disconnect Idle Sessions
  • Disable Password Auth
  • Disable X Forwarding
  • Allow only User (or group)
write_files:
  - path: /etc/ssh/sshd_config
    content: |
        Port $SSHDPORT
        PermitRootLogin no
        ChallengeResponseAuthentication no
        ClientAliveInterval 600
        ClientAliveCountMax 3
        UsePAM yes
        UseDNS no
        X11Forwarding no
        PrintMotd no
        AcceptEnv LANG LC_*
        Subsystem sftp  /usr/lib/openssh/sftp-server
        PasswordAuthentication no
        AllowUsers ebal

enable firewall

ufw allow $SSHDPORT/tcp && ufw enable

remove cloud-init

And last but not least, I need to remove cloud-init at the end

apt-get -y autoremove --purge cloud-init lxc lxd snapd

Post Install Shell script

let’s put everything together

#!/bin/bash

apt-get update && apt-get install -y cloud-init

HOSTNAME="complimentary"
SSHDPORT=22422

mkdir -p /var/lib/cloud/seed/nocloud-net

# Meta Data
cat > /var/lib/cloud/seed/nocloud-net/meta-data <<EOF
dsmode: local
EOF

# User Data
cat > /var/lib/cloud/seed/nocloud-net/user-data <<EOF
#cloud-config

disable_root: true
ssh_pwauth: no

users:
  - name: ebal
    ssh_import_id:
      - gh:ebal
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL

write_files:
  - path: /etc/ssh/sshd_config
    content: |
        Port $SSHDPORT
        PermitRootLogin no
        ChallengeResponseAuthentication no
        UsePAM yes
        UseDNS no
        X11Forwarding no
        PrintMotd no
        AcceptEnv LANG LC_*
        Subsystem sftp  /usr/lib/openssh/sftp-server
        PasswordAuthentication no
        AllowUsers ebal

# Set TimeZone
timezone: Europe/Athens

# Set hostname
hostname: $HOSTNAME

# Install packages
packages:
  - figlet
  - mlocate
  - python3-apt
  - bash-completion

# Update/Upgrade & Reboot if necessary
package_update: true
package_upgrade: true
package_reboot_if_required: true

# PostInstall
runcmd:
  - figlet $HOSTNAME > /etc/motd
  - updatedb
  # Firewall
  - ufw allow $SSHDPORT/tcp && ufw enable
  # Remove cloud-init
  - apt-get -y autoremove --purge cloud-init lxc lxd snapd
  - apt-get -y --purge autoremove
  - apt -y autoclean
  - apt -y clean all

EOF

cloud-init clean --logs

cloud-init init --local

That’s it !

After a while (it needs a few reboots) our VPS is up & running and we can use ansible to configure it.

Tag(s): edis, cloud-init

Monday, 08 June 2020

Racism is bad - a Barcelona centered reflection

I hope most people reading this will not find "Racism is bad" to be controversial.

The problem is that even if most (I'd hope to say all) of us think racism is bad, some of us are still consciously or unconsciously racist.

I am not going to speak about the Black Lives Matter movement, because it's mostly USA centered (which is kind of far from me) and there are much better people to listen to than me, so go and listen to them.

In this case I'm going to speak about the Romani/Roma/gypsies in Barcelona (and, from what I can see in this Pew Research article, most of Europe).

Institutionalized hate against them is so deep that definition 2 of 6 in the Catalan dictionary for the Romani word (gitano) is "those that act egoistically and try to deceive others", and definition 5 of 8 in the Spanish dictionary is "those that with wits and lies try to cheat someone in a particular matter".

Here it is "common" to hear people say "don't be a gitano" meaning to say "don't cheat/play me" and nobody bats an eye when hearing that phrase.

It's a fact that this "community" tends to live in ghettos and has a higher crime rate. I've heard people that probably think of themselves as non-racist say "that's just the lifestyle they like".

SARCASM: Sure, people love living in unsafe neighbourhoods and crappy houses and risking going to jail just to be able to eat.

The thing is, when 50% of the population has an unfavourable view of you just because of who your parents are or what your surname is, it's hard to get a [nice] job, a place to live outside the ghetto, etc.

So please, in addition to saying that you're not a racist (which is a good first step), try to actually not be racist too.

Thursday, 28 May 2020

Send your talks for Akademy 2020 *now*

The Call for Participation is still open for two more weeks, but please do us a favour and send yours *now*.

This way we don't have to panic wondering whether we will need to go chasing people, or whether we're going to have too few or too many proposals.

Also if you ask the talks committee for review, we can review your talk early, give you feedback and improve it, so it's a win-win.

So head over to https://akademy.kde.org/2020/cfp, find the submit link in the middle of that wall of text and click it ;)

Sunday, 24 May 2020

chmk a simple CHM viewer

Okular can view CHM files; to do so it uses KHTML, which makes sense since CHM is basically HTML plus images, all compressed into a single file.

This is somewhat problematic since KHTML is largely unmaintained and I doubt it'll get a Qt6 port.

The problem is that the only other Qt-based HTML rendering engine is QtWebEngine, and while great, it doesn't support the things we would need to use it in Okular: Okular needs access to the rendered image of the page and also to the text, since it uses the same API for all formats, be it CHM, PDF, epub, whatever.

The easiest plan to move forward is probably to drop CHM from Okular, but that means no more CHM viewing in KDE software, which would be a bit sad.

So I thought, OK, maybe I can do a quick CHM viewer based just on QtWebEngine, without trying to fit it into the Okular backend that supports different formats.

And ChmK was born https://invent.kde.org/aacid/chmk.

It's still very simple, but the basics work: if you give it a file on the command line, it'll open it and you'll be able to browse it.



As you can see it doesn't have *any* UI yet, so Merge Requests more than welcome.

Saturday, 16 May 2020

Network Namespaces - Part Three

Previously on … Network Namespaces - Part Two we provided internet access to the namespace, enabled a different DNS than our system's and ran a graphical application (xterm/firefox) from within it.

The scope of this article is to run a VPN service from this namespace. We will run a VPN client and try to enable firewall rules inside it.

ip-netns07

dsvpn

My VPN of preference is dsvpn, and you can read how to set it up in the blog post below.

dsvpn is a TCP, point-to-point VPN, using a symmetric key.

The instructions in this article will give you an understanding how to run a different vpn service.
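dsvpn's symmetric key is simply 32 random bytes shared by server and client; the dsvpn README suggests generating it with dd:

```shell
# Generate a 32-byte random key; the same file is used on both endpoints.
dd if=/dev/urandom of=dsvpn.key count=1 bs=32
```

Copy the resulting dsvpn.key to both machines over a secure channel (e.g. scp).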

Find your external IP

Before running the vpn client, let’s see what our current external IP address is

ip netns exec ebal curl ifconfig.co

62.103.103.103

The above IP is an example.

IP address and route of the namespace

ip netns exec ebal ip address show v-ebal

375: v-ebal@if376: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether c2:f3:a4:8a:41:47 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.10.20/24 scope global v-ebal
       valid_lft forever preferred_lft forever
    inet6 fe80::c0f3:a4ff:fe8a:4147/64 scope link
       valid_lft forever preferred_lft forever

ip netns exec ebal ip route show

default via 10.10.10.10 dev v-ebal
10.10.10.0/24 dev v-ebal proto kernel scope link src 10.10.10.20

Firefox

Open firefox (see part two) and visit ifconfig.co; we notice that the location of our IP is Athens, Greece.

ip netns exec ebal bash -c "XAUTHORITY=/root/.Xauthority firefox"

ip-netns-ifconfig-before.png

Run VPN client

We have the symmetric key dsvpn.key and we know the VPN server’s IP.

ip netns exec ebal dsvpn client dsvpn.key 93.184.216.34 443

Interface: [tun0]
Trying to reconnect
Connecting to 93.184.216.34:443...
net.ipv4.tcp_congestion_control = bbr
Connected

Host

We cannot see this tunnel vpn interface from our host machine

# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 94:de:80:6a:de:0e brd ff:ff:ff:ff:ff:ff

376: v-eth0@if375: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1a:f7:c2:fb:60:ea brd ff:ff:ff:ff:ff:ff link-netns ebal

netns

but it exists inside the namespace; we can see the tun0 interface here

ip netns exec ebal ip link

1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

3: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 500
    link/none

375: v-ebal@if376: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether c2:f3:a4:8a:41:47 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Find your external IP again

Checking your external internet IP from within the namespace

ip netns exec ebal curl ifconfig.co

93.184.216.34

Firefox netns

Running firefox again, we notice that the location of our IP is now Helsinki (the vpn server’s location).

ip netns exec ebal bash -c "XAUTHORITY=/root/.Xauthority firefox"

ip-netns-ifconfig-after.png

systemd

We can wrap the dsvpn client in a systemd service

[Unit]
Description=Dead Simple VPN - Client

[Service]
ExecStart=ip netns exec ebal /usr/local/bin/dsvpn client /root/dsvpn.key 93.184.216.34 443
Restart=always
RestartSec=20

[Install]
WantedBy=network.target

Start systemd service

systemctl start dsvpn.service

Verify

ip -n ebal a

1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none
    inet 192.168.192.1 peer 192.168.192.254/32 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 64:ff9b::c0a8:c001 peer 64:ff9b::c0a8:c0fe/96 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::ee69:bdd8:3554:d81/64 scope link stable-privacy
       valid_lft forever preferred_lft forever

375: v-ebal@if376: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether c2:f3:a4:8a:41:47 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.10.20/24 scope global v-ebal
       valid_lft forever preferred_lft forever
    inet6 fe80::c0f3:a4ff:fe8a:4147/64 scope link
       valid_lft forever preferred_lft forever

ip -n ebal route

default via 10.10.10.10 dev v-ebal
10.10.10.0/24 dev v-ebal proto kernel scope link src 10.10.10.20
192.168.192.254 dev tun0 proto kernel scope link src 192.168.192.1

Firewall

We can also have different firewall policies for each namespace

outside

# iptables -nvL | wc -l
127

inside

ip netns exec ebal iptables -nvL

Chain INPUT (policy ACCEPT 9 packets, 2547 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain OUTPUT (policy ACCEPT 2 packets, 216 bytes)
 pkts bytes target     prot opt in     out     source        destination

So for the VPN service running inside the namespace, we can REJECT all network traffic except the traffic towards our VPN server and, of course, the veth interface (point-to-point) to our host machine.

iptable rules

Enter the namespace

inside

ip netns exec ebal bash

Before

verify that iptables rules are clear

iptables -nvL

Chain INPUT (policy ACCEPT 25 packets, 7373 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain OUTPUT (policy ACCEPT 4 packets, 376 bytes)
 pkts bytes target     prot opt in     out     source        destination

Enable firewall

./iptables.netns.ebal.sh

The content of this file

## iptable rules

iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -m conntrack --ctstate INVALID -j DROP
iptables -A INPUT -p icmp --icmp-type 8 -m conntrack --ctstate NEW -j ACCEPT

## netns - incoming
iptables -A INPUT -i v-ebal -s 10.0.0.0/8 -j ACCEPT

## Reject incoming traffic
iptables -A INPUT -j REJECT

## DSVPN
iptables -A OUTPUT -p tcp -m tcp -o v-ebal -d 93.184.216.34 --dport 443 -j ACCEPT

## net-ns outgoing
iptables -A OUTPUT -o v-ebal -d 10.0.0.0/8 -j ACCEPT

## Allow tun
iptables -A OUTPUT -o tun+ -j ACCEPT

## Reject outgoing traffic
iptables -A OUTPUT -p tcp -j REJECT --reject-with tcp-reset
iptables -A OUTPUT -p udp -j REJECT --reject-with icmp-port-unreachable

After

iptables -nvL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination
    0     0 ACCEPT     all  --  lo     *       0.0.0.0/0     0.0.0.0/0
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0     0.0.0.0/0     ctstate RELATED,ESTABLISHED
    0     0 DROP       all  --  *      *       0.0.0.0/0     0.0.0.0/0     ctstate INVALID
    0     0 ACCEPT     icmp --  *      *       0.0.0.0/0     0.0.0.0/0     icmptype 8 ctstate NEW
    1   349 ACCEPT     all  --  v-ebal *       10.0.0.0/8    0.0.0.0/0
    0     0 REJECT     all  --  *      *       0.0.0.0/0     0.0.0.0/0     reject-with icmp-port-unreachable
    0     0 ACCEPT     all  --  lo     *       0.0.0.0/0     0.0.0.0/0
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0     0.0.0.0/0     ctstate RELATED,ESTABLISHED
    0     0 DROP       all  --  *      *       0.0.0.0/0     0.0.0.0/0     ctstate INVALID
    0     0 ACCEPT     icmp --  *      *       0.0.0.0/0     0.0.0.0/0     icmptype 8 ctstate NEW
    0     0 ACCEPT     all  --  v-ebal *       10.0.0.0/8    0.0.0.0/0
    0     0 REJECT     all  --  *      *       0.0.0.0/0     0.0.0.0/0     reject-with icmp-port-unreachable

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source        destination
    0     0 ACCEPT     tcp  --  *      v-ebal  0.0.0.0/0     95.216.215.96 tcp dpt:8443
    0     0 ACCEPT     all  --  *      v-ebal  0.0.0.0/0     10.0.0.0/8
    0     0 ACCEPT     all  --  *      tun+    0.0.0.0/0     0.0.0.0/0
    0     0 REJECT     tcp  --  *      *       0.0.0.0/0     0.0.0.0/0     reject-with tcp-reset
    0     0 REJECT     udp  --  *      *       0.0.0.0/0     0.0.0.0/0     reject-with icmp-port-unreachable
    0     0 ACCEPT     tcp  --  *      v-ebal  0.0.0.0/0     95.216.215.96 tcp dpt:8443
    0     0 ACCEPT     all  --  *      v-ebal  0.0.0.0/0     10.0.0.0/8
    0     0 ACCEPT     all  --  *      tun+    0.0.0.0/0     0.0.0.0/0
    0     0 REJECT     tcp  --  *      *       0.0.0.0/0     0.0.0.0/0     reject-with tcp-reset
    0     0 REJECT     udp  --  *      *       0.0.0.0/0     0.0.0.0/0     reject-with icmp-port-unreachable

PS: We reject tcp/udp traffic (last 2 lines), but allow icmp (ping).

ip-netns08

End of part three.

Tuesday, 12 May 2020

How much fun is my Ring Fit Adventure² after some time

So after a full “Ring Fit” month – i.e. 30 days played1 – let us see how this workout game held up for me.

Historic context: COVID-19 and Ring Fit Adventure

This might be less interesting for those reading now, but might be relevant information for anyone reading this in a few years.

The first part of the year 2020 has been marked by COVID-19. This disease has since spread to a pandemic, impacting the whole globe, with many countries going into a more or less full lockdown. This situation is slowly changing, as in May many countries in Europe are removing the lockdown.

As everyone who could stayed at home, the demand for Ring Fit Adventure rose drastically. Not only is it almost impossible to get one, due to it being out of stock pretty much everywhere, there are even rumours that in China you can only get one on the black/grey market for $250 (i.e. more than 3x the MSRP).

30 workout days in 76 days

The main question is obviously how much did Ring Fit Adventure get me to work out.

First off, I have to admit that I did not train every work day, as I hoped, but still much more than usual.

One reason I skipped two full weeks was that I had to send the Joy-Cons in for repair, as their notorious drift had started to show quite badly. This prevented me from working out, as without Joy-Cons I could not play it.

I took another two-week break from it before that, due to not feeling so well.

Now let us look at some data.

According to Ring Fit Adventure itself:

  • I reached world 9 of the story3
  • my character is now at level 75
  • and my difficulty setting is at level 20
  • the total time I exercised2 is 8h 18'
  • in which I burned 2670 kcal of energy
  • and ran a total of 28 km

Looking at my own log, this is how much I trained in the past few months:

yoga gymnastics rowing Ring Fit
2019-12 3
2020-01 1 1
2020-02 2 10
2020-03 12
2020-04 7
2020-05 3 1

This table needs some comment:

  • I got my Ring Fit Adventure in early February. But as you can see, my training regime was a bit lackluster in the two months before it. It was somewhat better the previous year, but I lost that piece of paper.
  • Rowing is something I did go to last year, but I have not mustered the courage to go for several months … and then COVID-19 kicked in, so now I am not even allowed to go (ha! another good excuse).
  • We are only half-way through May and I just got my joy-cons back from repair yesterday, so I was not able to play Ring Fit Adventure for the whole first half of the month. And the rest of the month is still ahead of me.

In my opinion, this clearly shows that Ring Fit Adventure did trick me into training more often. Including all the hiccups, it took me 76 days to work out on 30 of them (or 36, if you ask the game1), which is roughly every second day. As a comparison, to reach my goal of exercising every work day, I would have had to play it on 55 days in the same time frame.

In sum, I am happy with the results, but will try to train more in the future, with the hope to soon be able to go work out with others (e.g. go rowing).

Adventure content so far

As mentioned, in those 30 days I reached World 93 – in fact, I could have taken on the boss today and reached World 10, but decided to instead revisit some old levels and side quests.

Following is a short summary of what Worlds 5 to 9 newly bring to the game. I have to say that so far every world brought something new and refreshing, so the gameplay never got stale. In fact, I have a small problem in that I have almost too many fighting skills to choose from. Good thing that by now old fighting skills are getting levelled up, so they are again a reasonable choice to bring to battle.

What also proved as a great feature – especially after a longer break – is that when you start up Adventure mode, it summarises the story so far.

Update: Voices and languages

In the update, they also included more voice languages and the choice for Ring to have either a male or female voice.

I find that a neat addition, and I am currently using a male French voice with English subtitles to brush up on my French while I am working out.

World 5 & 6

  • introduce days that target only specific skills/muscle groups, and do so in a very nice, fun and varied way
  • introduce skill points and a skill tree
  • gyms now also have skill sets – a great and easy way to get some targeted workout in Adventure mode as well

World 7

  • the plot twist is becoming apparent and the story is becoming more varied – while nothing award-winning it is actually enjoyable
  • unlocks All Sets as a choice for setting fighting skills – i.e. choosing to go to fights with workout skills that focus on a certain muscle group, posture, cardio workout, etc.
  • new enemy that can buff defense

World 8 & 9

  • expand the skill tree, including upgrades to previous fighting skills
  • add a new movement skill that enables you to reach alternative paths (also in previous levels), adding to the replayability of past levels; if timed right, it also doubles as battle evasion

Other modes

Since my last blog entry about Ring Fit Adventure, the game has received an update that introduces two new modes:

  • Rhythm game – in this mode you press and pull the RingCon and/or twist your torso to the beat of a song, while also occasionally having to squat or stand up. It seems like a pretty fun party game, especially if they introduce more songs. The only issue I have with it is that I have trouble keeping the RingCon in place when doing the fast twists.
  • Running mode – in this mode, you simply run through a level, without any mob encounters or similar. I have not played this mode much, but for a more lazy day, it seems fitting and relaxing.

Final general thoughts

To start with the negative: in the past weeks my RingCon did not recognise the transition from presses to pulls very well. But that was easily fixed with a simple recalibration from the options menu, and I have not had those issues since.

So, after a full Ring Fit month, how do I feel about it?

While it did not make me lose much weight or gain any major muscles, my back has not ached since. Well, except maybe a little bit when I did not train for two weeks. But even then it took only one or two days of playing for it to stop aching again. I also do feel more fit in general.

In that regard I would say it is a great success.

Did it trick me into training more? Perhaps not as often as were my ambitious expectations, but it has undeniably improved my workout regime immensely.

I do think the story mode is a great way to pull me in, and apart from the fun story and overall atmosphere, I think this is in great part due to the RPG elements. I am not a big fan of the gamification of everything, and “adding RPG elements” is usually just a euphemism for introducing some levels, badges and collectibles to trick the player into investing more time. But in my opinion this game does it right.

That being said, I am continuing to work out using Ring Fit Adventure and will continue to bump up the difficulty to keep my workout to Moderate Workout.

hook out → very happy to be able to move and play again


  1. The game claims I have been playing for 36 days, but I suspect this includes non-adventure modes. 

  2. Ring Fit Adventure records only time actually spent exercising. For comparison, my Switch records that I played the game for 20 h. 

  3. Apparently there are 20 worlds in total (encompassing 100 levels all together) with a NG+ and NG++ as well. So with 30 full days, I am probably not even half way through. If the trend continues, the worlds are only going to get longer, not shorter. 

Network Namespaces - Part Two

Previously on… Network Namespaces - Part One we discussed how to create an isolated network namespace and use a pair of veth interfaces to talk between the host system and the namespace.

In this article we continue our story and try to connect that namespace to the internet.

recap previous commands

ip netns add ebal
ip link add v-eth0 type veth peer name v-ebal
ip link set v-ebal netns ebal
ip addr add 10.10.10.10/24 dev v-eth0
ip netns exec ebal ip addr add 10.10.10.20/24 dev v-ebal
ip link set v-eth0 up
ip netns exec ebal ip link set v-ebal up
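For convenience, the recap above can be wrapped in a small shell function. This is a sketch of my own (the ns_up name and the dry-run prefix argument are not from Part One); the commands still need root to actually run, but passing echo as the argument lets you preview them safely:

```shell
# Recreate the Part One setup in one go.
# Call as: ns_up         (execute, as root)
#      or: ns_up echo    (dry run, only print the commands)
ns_up() {
    pfx="$1"
    $pfx ip netns add ebal
    $pfx ip link add v-eth0 type veth peer name v-ebal
    $pfx ip link set v-ebal netns ebal
    $pfx ip addr add 10.10.10.10/24 dev v-eth0
    $pfx ip netns exec ebal ip addr add 10.10.10.20/24 dev v-ebal
    $pfx ip link set v-eth0 up
    $pfx ip netns exec ebal ip link set v-ebal up
}
```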

Access namespace

ip netns exec ebal bash

# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

3: v-ebal@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e2:07:60:da:d5:cf brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.10.20/24 scope global v-ebal
       valid_lft forever preferred_lft forever
    inet6 fe80::e007:60ff:feda:d5cf/64 scope link
       valid_lft forever preferred_lft forever

# ip r
10.10.10.0/24 dev v-ebal proto kernel scope link src 10.10.10.20

Ping Veth

We can ping the other end of the veth pair. It’s not a gateway; this is a point-to-point connection.

# ping -c3 10.10.10.10

PING 10.10.10.10 (10.10.10.10) 56(84) bytes of data.
64 bytes from 10.10.10.10: icmp_seq=1 ttl=64 time=0.415 ms
64 bytes from 10.10.10.10: icmp_seq=2 ttl=64 time=0.107 ms
64 bytes from 10.10.10.10: icmp_seq=3 ttl=64 time=0.126 ms

--- 10.10.10.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2008ms
rtt min/avg/max/mdev = 0.107/0.216/0.415/0.140 ms

ip-netns03

Ping internet

trying to access anything else …

ip netns exec ebal ping -c2 192.168.122.80
ip netns exec ebal ping -c2 192.168.122.1
ip netns exec ebal ping -c2 8.8.8.8
ip netns exec ebal ping -c2 google.com
root@ubuntu2004:~# ping 192.168.122.80
ping: connect: Network is unreachable

root@ubuntu2004:~# ping 8.8.8.8
ping: connect: Network is unreachable

root@ubuntu2004:~# ping google.com
ping: google.com: Temporary failure in name resolution

root@ubuntu2004:~# exit
exit

Exit from the namespace.

Gateway

We need to define a default gateway route from within the namespace:

ip netns exec ebal ip route add default via 10.10.10.10

root@ubuntu2004:~# ip netns exec ebal ip route list
default via 10.10.10.10 dev v-ebal
10.10.10.0/24 dev v-ebal proto kernel scope link src 10.10.10.20

test connectivity - system

We can reach the host system, but we cannot reach anything else:

# ip netns exec ebal ping -c1 192.168.122.80
PING 192.168.122.80 (192.168.122.80) 56(84) bytes of data.
64 bytes from 192.168.122.80: icmp_seq=1 ttl=64 time=0.075 ms

--- 192.168.122.80 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms

# ip netns exec ebal ping -c3 192.168.122.80
PING 192.168.122.80 (192.168.122.80) 56(84) bytes of data.
64 bytes from 192.168.122.80: icmp_seq=1 ttl=64 time=0.026 ms
64 bytes from 192.168.122.80: icmp_seq=2 ttl=64 time=0.128 ms
64 bytes from 192.168.122.80: icmp_seq=3 ttl=64 time=0.126 ms

--- 192.168.122.80 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2033ms
rtt min/avg/max/mdev = 0.026/0.093/0.128/0.047 ms

# ip netns exec ebal ping -c3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2044ms

root@ubuntu2004:~# ip netns exec ebal ping -c3 google.com
ping: google.com: Temporary failure in name resolution

ip-netns05

Forward

What is the issue here?

We added a default route to the network namespace. Traffic starts from v-ebal (the veth interface inside the namespace), reaches v-eth0 (the veth interface on our system) and then … nothing.

The host receives the network packets but does not know what to do with them. We need to enable forwarding on our host, so the eth0 network interface will forward traffic from the namespace on to the next hop.

echo 1 > /proc/sys/net/ipv4/ip_forward

or

sysctl -w net.ipv4.ip_forward=1
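A quick read-only sanity check can be done before and after. This little snippet is my own addition; it only reads the current value, so no root is needed:

```shell
# Report whether IPv4 forwarding is currently on (read-only check).
fwd=$(cat /proc/sys/net/ipv4/ip_forward)
if [ "$fwd" = "1" ]; then
    echo "forwarding is enabled"
else
    echo "forwarding is disabled - namespace traffic will stop at the host"
fi
```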

permanent forward

If we want the host to permanently forward traffic, then we need to edit /etc/sysctl.conf and add the line below:

net.ipv4.ip_forward = 1

To enable this option without rebooting our system:

sysctl -p /etc/sysctl.conf

verify

root@ubuntu2004:~# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

Masquerade

But if we test again, we will notice that nothing changed. Actually something did happen, just not what we expected. At this point, eth0 knows how to forward network packets to the next hop (perhaps the router or internet gateway), but the next hop will receive a packet from an unknown network.

Remember that our internal address is 10.10.10.20, with a point-to-point connection to 10.10.10.10. So there is no way for the 192.168.122.0/24 network to know how to talk back to 10.0.0.0/8.

We have to masquerade all packets that come from 10.0.0.0/8, and the easiest way to do this is via iptables, using the POSTROUTING chain of the nat table. That means outgoing traffic with source 10.0.0.0/8 will be rewritten to appear to come from 192.168.122.80 (eth0) before going to the next hop (gateway).

# iptables --table nat --flush
iptables --table nat --append POSTROUTING --source 10.0.0.0/8 --jump MASQUERADE
iptables --table nat --list
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  10.0.0.0/8           anywhere
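On hosts that have moved from iptables to nftables, the same NAT rule can be expressed as a ruleset file. This is a sketch based on my understanding of nft syntax, not something from the original setup; it would be loaded with nft -f <file>:

```
table ip nat {
    chain postrouting {
        type nat hook postrouting priority 100; policy accept;
        ip saddr 10.0.0.0/8 masquerade
    }
}
```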

Test connectivity

Test the namespace connectivity again:

# ip netns exec ebal ping -c2 192.168.122.80
PING 192.168.122.80 (192.168.122.80) 56(84) bytes of data.
64 bytes from 192.168.122.80: icmp_seq=1 ttl=64 time=0.054 ms
64 bytes from 192.168.122.80: icmp_seq=2 ttl=64 time=0.139 ms

--- 192.168.122.80 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1017ms
rtt min/avg/max/mdev = 0.054/0.096/0.139/0.042 ms

# ip netns exec ebal ping -c2 192.168.122.1
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
64 bytes from 192.168.122.1: icmp_seq=1 ttl=63 time=0.242 ms
64 bytes from 192.168.122.1: icmp_seq=2 ttl=63 time=0.636 ms

--- 192.168.122.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1015ms
rtt min/avg/max/mdev = 0.242/0.439/0.636/0.197 ms

# ip netns exec ebal ping -c2 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=51 time=57.8 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=51 time=58.0 ms

--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 57.805/57.896/57.988/0.091 ms

# ip netns exec ebal ping -c2 google.com
ping: google.com: Temporary failure in name resolution

success

ip-netns06.png

DNS

Almost!

If you looked carefully above, pinging by IP works, but name resolution does not.

netns - resolv

Reading ip-netns manual

# man ip-netns | tbl | grep resolv

  resolv.conf for a network namespace used to isolate your vpn you would name it /etc/netns/myvpn/resolv.conf.

We can create a resolver configuration file at this location:
/etc/netns/<namespace>/resolv.conf

mkdir -pv /etc/netns/ebal/
echo nameserver 88.198.92.222 > /etc/netns/ebal/resolv.conf

I am using radicalDNS for this namespace.
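The two commands above generalise nicely into a tiny helper. This is a sketch of my own: the netns_dns name, the 9.9.9.9 default server, and the overridable base directory are my additions for illustration (the base directory parameter exists so the function can be tried without touching /etc):

```shell
# Give a network namespace its own DNS resolver.
# Usage: netns_dns <namespace> [nameserver] [basedir]
netns_dns() {
    ns="$1"
    server="${2:-9.9.9.9}"
    base="${3:-/etc/netns}"   # override for testing without root
    mkdir -p "$base/$ns"
    printf 'nameserver %s\n' "$server" > "$base/$ns/resolv.conf"
}
```

With the defaults, `netns_dns ebal 88.198.92.222` recreates exactly the file used above.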

Verify DNS

# ip netns exec ebal cat /etc/resolv.conf
nameserver 88.198.92.222

Connect to the namespace

ip netns exec ebal bash

root@ubuntu2004:~# cat /etc/resolv.conf
nameserver 88.198.92.222

root@ubuntu2004:~# ping -c 5 ipv4.balaskas.gr
PING ipv4.balaskas.gr (158.255.214.14) 56(84) bytes of data.
64 bytes from ns14.balaskas.gr (158.255.214.14): icmp_seq=1 ttl=50 time=64.3 ms
64 bytes from ns14.balaskas.gr (158.255.214.14): icmp_seq=2 ttl=50 time=64.2 ms
64 bytes from ns14.balaskas.gr (158.255.214.14): icmp_seq=3 ttl=50 time=66.9 ms
64 bytes from ns14.balaskas.gr (158.255.214.14): icmp_seq=4 ttl=50 time=63.8 ms
64 bytes from ns14.balaskas.gr (158.255.214.14): icmp_seq=5 ttl=50 time=63.3 ms

--- ipv4.balaskas.gr ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 63.344/64.502/66.908/1.246 ms

root@ubuntu2004:~# ping -c3 google.com
PING google.com (172.217.22.46) 56(84) bytes of data.
64 bytes from fra15s16-in-f14.1e100.net (172.217.22.46): icmp_seq=1 ttl=51 time=57.4 ms
64 bytes from fra15s16-in-f14.1e100.net (172.217.22.46): icmp_seq=2 ttl=51 time=55.4 ms
64 bytes from fra15s16-in-f14.1e100.net (172.217.22.46): icmp_seq=3 ttl=51 time=55.2 ms

--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 55.209/55.984/57.380/0.988 ms

bonus - run firefox from within namespace

xterm

Start with something simple first, like xterm:

ip netns exec ebal xterm

or

ip netns exec ebal xterm -fa liberation -fs 11

ipnetns_xterm.png

test firefox

Trying to run firefox within this namespace will produce an error:

# ip netns exec ebal firefox
Running Firefox as root in a regular user's session is not supported.  ($XAUTHORITY is /home/ebal/.Xauthority which is owned by ebal.)

and xauth info will inform us that the current Xauthority file is owned by our local user.

# xauth info
Authority file:       /home/ebal/.Xauthority
File new:             no
File locked:          no
Number of entries:    4
Changes honored:      yes
Changes made:         no
Current input:        (argv):1

Okay, get inside the namespace:

ip netns exec ebal bash

and provide a new authority file for firefox

XAUTHORITY=/root/.Xauthority firefox

# XAUTHORITY=/root/.Xauthority firefox

No protocol specified
Unable to init server: Could not connect: Connection refused
Error: cannot open display: :0.0

xhost

xhost provides access control to the Xorg graphical server.
By default it should look like this:

$ xhost
access control enabled, only authorized clients can connect

We could also disable access control entirely:

xhost +

but what we actually need is to disable access control only for local connections:

xhost +local:

firefox

And if we put all that together:

ip netns exec ebal bash -c "XAUTHORITY=/root/.Xauthority firefox"

ipnetns-firefox.png
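When the experiment is over, everything from parts one and two can be torn down again. This cleanup function is my own addition (pass echo as the argument to only print the commands). Deleting the namespace also destroys the veth pair; the sysctl and iptables steps are host-wide, so only revert those if nothing else on the host relies on them:

```shell
# Undo the namespace setup from parts one and two.
# Call as: ns_down        (execute, as root)
#      or: ns_down echo   (dry run, only print the commands)
ns_down() {
    pfx="$1"
    $pfx ip netns del ebal
    $pfx iptables --table nat --delete POSTROUTING --source 10.0.0.0/8 --jump MASQUERADE
    $pfx rm -rf /etc/netns/ebal
    $pfx sysctl -w net.ipv4.ip_forward=0
}
```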

End of part two

Saturday, 09 May 2020

Network Namespaces - Part One

Have you ever wondered how containers work at the network level? How do they isolate resources and network access? Linux namespaces are the magic behind all this, and in this blog post I will try to explain how to set up your own private, isolated network stack on your Linux box.

Notes based on ubuntu:20.04, with root access.

current setup

Our current setup is similar to this

ip-netns00

List ethernet cards

ip address list

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:ea:50:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.80/24 brd 192.168.122.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feea:5087/64 scope link
       valid_lft forever preferred_lft forever

List routing table

ip route list

default via 192.168.122.1 dev eth0 proto static
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.80

ip-netns01

Checking internet access and dns

ping -c 5 libreops.cc

PING libreops.cc (185.199.111.153) 56(84) bytes of data.
64 bytes from 185.199.111.153 (185.199.111.153): icmp_seq=1 ttl=54 time=121 ms
64 bytes from 185.199.111.153 (185.199.111.153): icmp_seq=2 ttl=54 time=124 ms
64 bytes from 185.199.111.153 (185.199.111.153): icmp_seq=3 ttl=54 time=182 ms
64 bytes from 185.199.111.153 (185.199.111.153): icmp_seq=4 ttl=54 time=162 ms
64 bytes from 185.199.111.153 (185.199.111.153): icmp_seq=5 ttl=54 time=168 ms

--- libreops.cc ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4004ms
rtt min/avg/max/mdev = 121.065/151.405/181.760/24.299 ms

linux network namespace management

In this article we will mainly use the ip(8) tool and its netns subcommand.

So, let us start working with network namespaces.

list

To view the network namespaces, we can type:

ip netns
ip netns list

This will return nothing, an empty list.

help

So let's quickly view the help of ip-netns:

# ip netns help

Usage:  ip netns list
  ip netns add NAME
  ip netns attach NAME PID
  ip netns set NAME NETNSID
  ip [-all] netns delete [NAME]
  ip netns identify [PID]
  ip netns pids NAME
  ip [-all] netns exec [NAME] cmd ...
  ip netns monitor
  ip netns list-id [target-nsid POSITIVE-INT] [nsid POSITIVE-INT]
NETNSID := auto | POSITIVE-INT

monitor

To monitor any changes in real time, we can open a new terminal and type:

ip netns monitor

Add a new namespace

ip netns add ebal

List namespaces

ip netns list

root@ubuntu2004:~# ip netns add ebal
root@ubuntu2004:~# ip netns list
ebal

We now have one namespace.

Delete Namespace

ip netns del ebal

Full example

root@ubuntu2004:~# ip netns
root@ubuntu2004:~# ip netns list
root@ubuntu2004:~# ip netns add ebal
root@ubuntu2004:~# ip netns list
ebal
root@ubuntu2004:~# ip netns
ebal
root@ubuntu2004:~# ip netns del ebal
root@ubuntu2004:~#
root@ubuntu2004:~# ip netns
root@ubuntu2004:~# ip netns list
root@ubuntu2004:~#

monitor

root@ubuntu2004:~# ip netns monitor
add ebal
delete ebal

Directory

When we create a new network namespace, it creates an object under /var/run/netns/.

root@ubuntu2004:~# ls -l /var/run/netns/
total 0
-r--r--r-- 1 root root 0 May  9 16:44 ebal

exec

We can run commands inside a namespace.

e.g.

ip netns exec ebal ip a

root@ubuntu2004:~# ip netns exec ebal ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

bash

We can also open a shell inside the namespace and run commands through it, e.g.

root@ubuntu2004:~# ip netns exec ebal bash

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

root@ubuntu2004:~# exit
exit

ip-netns02

As you can see, the namespace is isolated from our system. It has only one loopback interface and nothing else.

We can bring the loopback interface up:

root@ubuntu2004:~# ip link set lo up

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

root@ubuntu2004:~# ip r

veth

The veth devices are virtual Ethernet devices. They can act as tunnels between network namespaces to create a bridge to a physical network device in another namespace, but can also be used as standalone network devices.

Think of a veth pair as a physical cable that connects two different computers; each veth interface is one end of this cable.

So we need two virtual interfaces to connect our system and the new namespace.

ip link add v-eth0 type veth peer name v-ebal

ip-netns03

e.g.

root@ubuntu2004:~# ip link add v-eth0 type veth peer name v-ebal

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:ea:50:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.80/24 brd 192.168.122.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feea:5087/64 scope link
       valid_lft forever preferred_lft forever

3: v-ebal@v-eth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d6:86:88:3f:eb:42 brd ff:ff:ff:ff:ff:ff

4: v-eth0@v-ebal: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 3e:85:9b:dd:c7:96 brd ff:ff:ff:ff:ff:ff

Attach veth0 to namespace

Now we are going to move one virtual interface (one end of the cable) into the new network namespace:

ip link set v-ebal netns ebal

ip-netns03

We will see that the interface no longer shows up on our system:

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:ea:50:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.80/24 brd 192.168.122.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feea:5087/64 scope link
       valid_lft forever preferred_lft forever

4: v-eth0@if3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 3e:85:9b:dd:c7:96 brd ff:ff:ff:ff:ff:ff link-netns ebal

inside the namespace

root@ubuntu2004:~# ip netns exec ebal ip a 

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

3: v-ebal@if4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d6:86:88:3f:eb:42 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Connect the two virtual interfaces

outside

ip addr add 10.10.10.10/24 dev v-eth0

root@ubuntu2004:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:ea:50:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.80/24 brd 192.168.122.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feea:5087/64 scope link
       valid_lft forever preferred_lft forever

4: v-eth0@if3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 3e:85:9b:dd:c7:96 brd ff:ff:ff:ff:ff:ff link-netns ebal
    inet 10.10.10.10/24 scope global v-eth0
       valid_lft forever preferred_lft forever

inside

ip netns exec ebal ip addr add 10.10.10.20/24 dev v-ebal

root@ubuntu2004:~# ip netns exec ebal ip a 

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

3: v-ebal@if4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d6:86:88:3f:eb:42 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.10.20/24 scope global v-ebal
       valid_lft forever preferred_lft forever

Both Interfaces are down!

Both interfaces are still down, so we need to bring them up:

outside

ip link set v-eth0 up

root@ubuntu2004:~# ip link set v-eth0 up 

root@ubuntu2004:~# ip link show v-eth0
4: v-eth0@if3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN mode DEFAULT group default qlen 1000
    link/ether 3e:85:9b:dd:c7:96 brd ff:ff:ff:ff:ff:ff link-netns ebal

inside

ip netns exec ebal ip link set v-ebal up

root@ubuntu2004:~# ip netns exec ebal ip link set v-ebal up

root@ubuntu2004:~# ip netns exec ebal ip link show v-ebal
3: v-ebal@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether d6:86:88:3f:eb:42 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Did it work?

Let’s first see our routing table:

outside

root@ubuntu2004:~# ip r
default via 192.168.122.1 dev eth0 proto static
10.10.10.0/24 dev v-eth0 proto kernel scope link src 10.10.10.10
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.80

inside

root@ubuntu2004:~# ip netns exec ebal ip r
10.10.10.0/24 dev v-ebal proto kernel scope link src 10.10.10.20

Ping !

outside

root@ubuntu2004:~# ping -c 5 10.10.10.20
PING 10.10.10.20 (10.10.10.20) 56(84) bytes of data.
64 bytes from 10.10.10.20: icmp_seq=1 ttl=64 time=0.028 ms
64 bytes from 10.10.10.20: icmp_seq=2 ttl=64 time=0.042 ms
64 bytes from 10.10.10.20: icmp_seq=3 ttl=64 time=0.052 ms
64 bytes from 10.10.10.20: icmp_seq=4 ttl=64 time=0.042 ms
64 bytes from 10.10.10.20: icmp_seq=5 ttl=64 time=0.071 ms

--- 10.10.10.20 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4098ms
rtt min/avg/max/mdev = 0.028/0.047/0.071/0.014 ms

inside

root@ubuntu2004:~# ip netns exec ebal bash
root@ubuntu2004:~#
root@ubuntu2004:~# ping -c 5 10.10.10.10
PING 10.10.10.10 (10.10.10.10) 56(84) bytes of data.
64 bytes from 10.10.10.10: icmp_seq=1 ttl=64 time=0.046 ms
64 bytes from 10.10.10.10: icmp_seq=2 ttl=64 time=0.042 ms
64 bytes from 10.10.10.10: icmp_seq=3 ttl=64 time=0.041 ms
64 bytes from 10.10.10.10: icmp_seq=4 ttl=64 time=0.044 ms
64 bytes from 10.10.10.10: icmp_seq=5 ttl=64 time=0.053 ms

--- 10.10.10.10 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4088ms
rtt min/avg/max/mdev = 0.041/0.045/0.053/0.004 ms
root@ubuntu2004:~# exit
exit

It worked!!

ip-netns03

End of part one.

Thursday, 07 May 2020

HamBSD Development Log 2020-05-07

I worked on HamBSD today, still looking at improvements to aprsisd(8). My focus today was on writing unit tests for aprsisd.

I’ve added a few unit tests to test the generation of the TNC2 format packets from AX.25 packets to upload to APRS-IS. There’s still some todo entries there as I’ve not made up packets for all the cases I wanted to check yet.

These are the first unit tests I’ve written for HamBSD and it’s definitely a different experience compared to writing Python unit tests for example. The framework for the tests uses bsd.regress.mk(5). The tests are C programs that include functions from aprsisd.

In order to do this I’ve had to split the function that converts AX.25 packets to TNC2 packets out into a separate file. This is the sort of thing that I’d be more comfortable doing if I had more unit test coverage. It seemed to go OK and hopefully the coverage will improve as I get more used to writing tests in this way.

I also corrected a bug from yesterday where AX.25 3rd-party packets would have their length over-reported, leaking stack memory to RF.

I’ve been reading up on the station capabilities packet and it seems a lot of fields have been added by various software over time. Successful APRS IGate Operation (WB2OSZ, 2017) has a list of some of the fields and where they came from under “IGate Status Beacon” which seems to be what this packet is used for, not necessarily advertising the capabilities of the whole station.

If this packet were to become quite long, there is the possibility of an amplification attack. Someone with a low power transmitter can send an IGate query, and then get the IGate to respond with a much longer packet at higher power. It’s not even clear from the reply why the packet was sent, as the requestor is not mentioned in it.

I think this will be the first place where I implement some rate limiting and see how that works. Collecting some simple statistics like the number of packets uploaded/downloaded would also be useful for monitoring.
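As a thought experiment, the simplest rate limit for these query replies is a minimum interval between transmissions. HamBSD's aprsisd is written in C, so this shell sketch (the names and the 30-second figure are my own) only illustrates the idea, not the eventual implementation:

```shell
# Allow at most one query reply per MIN_INTERVAL seconds.
MIN_INTERVAL=30
last_reply=0

maybe_reply() {
    now=$(date +%s)
    if [ $((now - last_reply)) -ge "$MIN_INTERVAL" ]; then
        last_reply=$now
        echo "reply sent"
    else
        echo "rate limited, reply dropped"
    fi
}
```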

Next steps:

  • Keep track of number of packets uploaded and downloaded
  • Add those statistics to station capabilities packet
