Thoughts of the FSFE Community

Tuesday, 17 October 2017

Fiduciary License Agreement 2.0

Hook’s Humble Homepage | 19:00, Tuesday, 17 October 2017

After many years of working on it, it is with immense pleasure that I see the FLA-2.0 – the full rewrite of the Fiduciary License Agreement – officially launch.

What is the FLA?

In short, the FLA is a well-balanced contributor agreement which gives the trustee responsible for managing the rights within a FOSS project the power and responsibility to make sure the contributed software always remains free and open. This way the project, together with all the respective contributors, is protected against misuse of power by a new holder of exclusive rights.

If you are more of an audio-visual type, you can see my 15' intro at Akademy 2013 or my 5' intro at Akademy 2015 to understand the basics of the FLA. The talks are about FLA-1.2, but the basic gist of it is the same.

Reasons for update and changes

In the decade since the last update of the FLA (version 1.2, back in 2007), the world of IT has changed quite a bit and, apart from copyright, patents and trade marks have become a serious concern for FOSS projects.

For my LL.M. thesis I analysed the FLA-1.2 within its historic context and use in practice. The thesis identified the following topics for improvement:

  • include patents;
  • better compatibility with other jurisdictions (e.g. Belgium and India);
  • more practical selection of outbound licensing options;
  • usability and readability of the text itself.

Trade marks were also identified as an important issue, but not a topic a CA could fix. For that a project might want to look at the FOSSmarks website instead.

To implement the changes, there were two possibilities – either modernise the text of the FLA-1.x to meet modern needs or tweak a more modern CA to include all checks and balances of the FLA.

In the true spirit of FOSS, I decided to re-base the FLA-2.0 on the best-researched CA I could find – the ContributorAgreements.org templates. Luckily, Catharina Maracke was not merely OK with it, but very supportive as well. In fact, several of the changes that the FLA brought with it trickled down into the new versions of (the rest of) the ContributorAgreements.org templates as well.

Changes inherited from ContributorAgreements.org

By simply re-basing the FLA-2.0 on the ContributorAgreements.org templates, we inherited some very cool features:

  • improved compatibility with more jurisdictions – thanks to the academic research invested already into it;
  • changed to an exclusive CLA (previously: a full-on copyright assignment, with an exclusive CLA as a fall-back) – which is both easier to manage as well as less scary to the contributors;
  • added a patent license (based on Apache CLA) – so the project itself can be protected from a potential patent troll contributing to it.

Further changes to FLA-2.0

But we did not stop there! With the combined enthusiasm of both Catharina and yours truly, as well as ample support from a number of very smart people¹, we pushed onward and introduced fixes and new features for both the FLA and the ContributorAgreements.org templates.

Below is a list of only the biggest ones:

  • improved both the legibility of the wording and the usability of the CA chooser;
  • compatibility with even more jurisdictions – we improved the wording even further, so it should work as expected also in countries like India and Belgium;
  • narrower list of allowed outbound licenses – i.e. intersection between Free Software and Open Source licenses (instead of Free Software OR Open Source licenses as is more common);
  • introduced more outbound licensing options:
    • any FOSS license;
    • specific FOSS license(s);
    • separate (re)licensing policy – this is particularly useful for larger and longer-standing projects such as KDE, who have (to have) their own (re)licensing policies in place.

Future plans

While the 2.0 is a huge leap forward, we do not plan to leave it at rest. We are already gathering ideas for a 2.1 update, which we plan to launch much faster than in a decade. Of course, the changes in a minor update will not be as huge either – more fine-tuning. Still, for a legal document such as a license it is in general not a good idea to release soon and release often, so if you are in need of a well-balanced CLA, the FLA-2.0 is here and ready to be used.

hook out → blog is back online, and I’m in Prague for OSSEU. Woot²! \o/


  1. At this point I would like to humbly apologise if I left anyone out. I tried my best to list everyone. 

Monday, 16 October 2017

MiniDebConf Prishtina 2017

English – Kristi Progri | 09:18, Monday, 16 October 2017

On 7 October, the first MiniDebConf was hosted in Prishtina, Kosova's capital.
The MiniDebConf Prishtina was an event open to everyone, regardless of their level of knowledge about Debian or other free and open source projects. MiniDebConf Prishtina offered a range of talks on topics related to Debian and free software, including free software projects in general, the Outreachy internship, privacy, security, digital rights and diversity in IT.

I was happy to be the first speaker and to open the presentations with my talk: “Outreachy”.

It was the first MiniDebConf where, naturally, 50% of the talks were held by women (without any goal for that number), and it always feels good when diversity at Free Software events happens by default.
Also part of the event was a group of women from Prizren (codergals.com). In August they successfully organized a hackathon with more than 25 women involved. The MiniDebConf was a great environment and opportunity to spread the word about Outreachy and other internship opportunities for women and people from underrepresented groups.
I was not the only Outreachy alumna there: Renata Gega was also part of the audience and a speaker.
We both shared our experiences and gave tips on how to make a successful application and how to explore which project best fits one's level of knowledge.
I also presented the work that I did with my mentors and other Mozilla interns in my round, working for the “Diversity and Inclusion” team: how our work was structured, the product we came out with after 3 months, and how it is going now.
Personally, I thought that a presentation on this topic would attract high interest, since the call for applications for Outreachy is still open and lending a hand at this moment would be helpful for everyone who aspires to get a spot.

It is definitely one of the talks that I have enjoyed the most: talking about something you have been working to improve and empower for the last 4 years is always a wonderful experience, and words can hardly describe the feelings I have when I see women inspired after watching examples that show WOMEN CAN DO IT TOO!

See you at the next “Outreachy” experience (hopefully next time as a mentor)!

#FreeasinFreeSoftware.

Sunday, 15 October 2017

Free Software Efforts (2017W41)

Planet FSFE on Iain R. Learmonth | 22:00, Sunday, 15 October 2017

Here’s my weekly report for week 41 of 2017. In this week I have explored some Java 8 features, looked at automatic updates in a few Linux distributions and decided that actually I don’t need swap anymore.

Debian

The issue that was preventing the migration of the Tasktools Packaging Team’s mailing list from Alioth to Savannah has now been resolved.

Ana’s chkservice package that I sponsored last week has been ACCEPTED into unstable and since MIGRATED to testing.

Tor Project

I have produced a patch for the Tor Project website to update links to the Onionoo documentation now that this has moved (#23802). I’ve updated the Debian and Ubuntu relay configuration instructions to use systemctl instead of service where appropriate (#23048).

When a Tor relay is less than 2 years old, an alert will now appear on Atlas linking to the new relay lifecycle blog post (#23767). This should hopefully help new relay operators understand why their relay is not immediately fully loaded but instead takes some time to ramp up.

I have gone through the tickets for Tor Cloud and did not find any that contain important information that would be useful to someone reviving the project. I have closed out these tickets and the Tor Cloud component no longer has any non-closed tickets (#7763, #8544, #8768, #9064, #9751, #10282, #10637, #11153, #11502, #13391, #14035, #14036, #14073, #15821).

I’ve continued to work on turning the Atlas application into an integrated part of Tor Metrics (#23518) and you can see some progress here.

Finally, I’ve continued hacking on a Twitter bot to tweet factoids about the public Tor network and you can now enjoy some JavaDoc documentation if you’d like to learn a little about its internals. I am still waiting for a git repository to be created (#23799) but will be publishing the sources shortly after that ticket is actioned.

Sustainability

I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

I have not had any free software related expenses this week. The current funds I have available for equipment, travel and other free software expenses remains £60.52. I do not believe that any hardware I rely on is looking at imminent failure.

I’d like to thank Digital Ocean for providing me with further credit for their platform to support my open source work.

I do not find it likely that I’ll be travelling to Cambridge for the miniDebConf as the train alone would be around £350 and hotel accommodation a further £600 (to include both me and Ana).

Secure and flexible backup server with dm-crypt and btrfs

Seravo | 17:29, Sunday, 15 October 2017

In our previous article we described an ideal setup for a modern server with btrfs for flexibility and redundancy. In this article we describe another kind of setup that is ideal only for a backup server. For a backup server redundancy and high availability are not important, but instead maximal disk space capacity and the […]

Tuesday, 10 October 2017

Automatic Updates

Planet FSFE on Iain R. Learmonth | 18:00, Tuesday, 10 October 2017

We have instructions for setting up new Tor relays on Debian. The only time the word “upgrade” is mentioned here is:

Be sure to set your ContactInfo line so we can contact you if you need to upgrade or something goes wrong.

This isn’t great. We should have some decent instructions for keeping your relay up to date too. I’ve been compiling a set of documentation for enabling automatic updates on various Linux distributions, here’s a taste of what I have so far:


Debian

Make sure that unattended-upgrades is installed and then enable the installation of updates (as root):

apt install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades
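
The reconfiguration step should end up writing something like the following to /etc/apt/apt.conf.d/20auto-upgrades (you can also create this file by hand):

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";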

Fedora 22 or later

Beginning with Fedora 22, you can enable automatic updates via:

dnf install dnf-automatic

In /etc/dnf/automatic.conf set:

apply_updates = yes

Now enable and start automatic updates via:

systemctl enable dnf-automatic.timer
systemctl start dnf-automatic.timer

(Thanks to Enrico Zini I know all about these timer units in systemd now.)

RHEL or CentOS

For CentOS, RHEL, and older versions of Fedora, the yum-cron package is the preferred approach:

yum install yum-cron

In /etc/yum/yum-cron.conf set:

apply_updates = yes

Enable and start automatic updates via:

systemctl enable yum-cron.service
systemctl start yum-cron.service

I’d like to also collect instructions for other distributions (and *BSD and Mac OS). Atlas knows which platform a relay is running on, so in the future there could be a link to platform-specific instructions on how to keep your relay up to date.

Sunday, 08 October 2017

Free Software Efforts (2017W40)

Planet FSFE on Iain R. Learmonth | 22:00, Sunday, 08 October 2017

Here’s my weekly report for week 40 of 2017. In this week I have looked at censorship in Catalonia and had my “deleted” Facebook account hacked (which made HN front page). I’ve also been thinking about DRM on the web.

Debian

I have prepared and uploaded fixes for the measurement-kit and hamradio-maintguide packages.

I have also sponsored uploads for gnustep-base (to experimental) and chkservice.

I have given DM upload privileges to Eric Heintzmann for the gnustep-base package as he has shown that he cares for the GNUstep packages well. In the near future, I think we’re looking at a transition for gnustep-{base,back,gui} as these packages all have updates.

Bugs filed: #877680

Bugs closed (fixed/wontfix): #872202, #877466, #877468

Tor Project

This week I have participated in a discussion around renaming the “Operations” section of the Metrics website.

I have also filed a new ticket on Atlas, which I am planning to implement, to link to the new relay lifecycle post on the Tor Project blog if a relay is less than a week old to help new relay operators understand the bandwidth usage they’ll be seeing.

Finally, I’ve been hacking on a Twitter bot to tweet factoids about the public Tor network. I’ve detailed this in a separate blog post.

Bugs closed (fixed/wontfix): #23683

Sustainability

I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

I have not had any free software related expenses this week. The current funds I have available for equipment, travel and other free software expenses remains £60.52. I do not believe that any hardware I rely on is looking at imminent failure.

A step change in managing your calendar, without social media

DanielPocock.com - fsfe | 17:36, Sunday, 08 October 2017

Have you been to an event recently involving free software or a related topic? How did you find it? Are you organizing an event and don't want to fall into the trap of using Facebook or Meetup or other services that compete for a share of your community's attention?

Are you keen to find events in foreign destinations related to your interest areas to coincide with other travel intentions?

Have you been concerned when your GSoC or Outreachy interns lost a week of their project going through the bureaucracy to get a visa for your community's event? Would you like to make it easier for them to find the best events in the countries that welcome and respect visitors?

In many recent discussions about free software activism, people have struggled to break out of the illusion that social media is the way to cultivate new contacts. Wouldn't it be great to make more meaningful contacts by attending a more diverse range of events rather than losing time on social media?

Making it happen

There are already a number of tools (for example, Drupal plugins and Wordpress plugins) for promoting your events on the web and in iCalendar format. There are also a number of sites like Agenda du Libre and GriCal who aggregate events from multiple communities where people can browse them.

How can we take these concepts further and make a convenient, compelling and global solution?

Can we harvest event data from a wide range of sources and compile it into a large database using something like PostgreSQL or a NoSQL solution or even a distributed solution like OpenDHT?

Can we use big data techniques to mine these datasources and help match people to events without compromising on privacy?

Why not build an automated iCalendar "to-do" list of deadlines for events you want to be reminded about, so you never miss the deadlines for travel sponsorship or submitting a talk proposal?
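
To make the idea concrete, below is a minimal Python sketch using the icalendar library; the event data here is hypothetical and would, in the real system, come from the harvested event database described above:

from datetime import datetime
from icalendar import Calendar, Todo

# Hypothetical harvested deadlines; a real system would query
# the aggregated event database instead.
deadlines = [
    ("ExampleConf 2018: talk proposals close", datetime(2018, 1, 15)),
    ("ExampleConf 2018: travel sponsorship deadline", datetime(2018, 2, 1)),
]

cal = Calendar()
cal.add('prodid', '-//event-deadline-demo//')
cal.add('version', '2.0')

for summary, due in deadlines:
    todo = Todo()
    todo.add('summary', summary)
    todo.add('due', due)
    cal.add_component(todo)

# Write an .ics file that any calendar client can import or subscribe to.
with open('deadlines.ics', 'wb') as f:
    f.write(cal.to_ical())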

I've started documenting an architecture for this on the Debian wiki and proposed it as an Outreachy project. It will also be offered as part of GSoC in 2018.

Ways to get involved

If you would like to help this project, please consider introducing yourself on the debian-outreach mailing list and helping to mentor or refer interns for the project. You can also help contribute ideas for the specification through the mailing list or wiki.

Mini DebConf Prishtina 2017

This weekend I've been at the MiniDebConf in Prishtina, Kosovo. It has been hosted by the amazing Prishtina hackerspace community.

Watch out for future events in Prishtina; the pizzas are huge, but that didn't stop them disappearing before we finished the photos.

Tor Relays on Twitter

Planet FSFE on Iain R. Learmonth | 14:00, Sunday, 08 October 2017

A while ago I played with a Twitter bot that would track radio amateurs using a packet radio position reporting system, tweeting their location, a picture from Flickr taken near that location, and a link to their packet radio activity on aprs.fi. It’s really not that hard to put these things together and they can be a lot of fun. The tweets looked like this:

(embedded tweet)

This isn’t about building a system that serves any critical purpose, it’s about fun. As the radio stations were chosen essentially at random, there could be some cool things showing up that you wouldn’t otherwise have seen. Maybe you’d spot a callsign of a station you’ve spoken to before on HF or perhaps you’d see stations in areas near you or in cool places.

On Friday evening I had a go at hacking together a bot for Tor relays, the idea being to have regular snippets of information from the Tor network where perhaps you’ll spot something insightful or interesting. Not every tweet is going to be amazing, but it wasn’t running for very long before I spotted a relay very close to its 10th birthday:

(embedded tweet)

The relays are chosen at random, and tweet templates are chosen at random too. So far, tweets about individual relays can be about age or current bandwidth contribution to the Tor network. There are also tweets about how many relays run in a particular autonomous system (again, chosen at random) and tweets about the total number of relays currently running. The total relays tweets come with a map:

(embedded tweet)

The maps are produced using xplanet. The Earth will rotate to show the current side in daylight at the time the tweet is posted.

Unfortunately, the bot currently cannot tweet as the account has been suspended. You should still be able to follow it though, and tweets will begin appearing again once I’ve resolved the suspension.

I plan to rewrite the mess of cron-activated Python scripts into a coherent Python (maybe Java) application and publish the sources soon. There are also a number of new templates for tweets I’d like to explore, including number of relays and bandwidth contributed per family and statistics on operating system diversity.

Update (2017-10-08): The @TorAtlas account should now be unsuspended.

Saturday, 07 October 2017

Twitter for Websites

Planet FSFE on Iain R. Learmonth | 14:00, Saturday, 07 October 2017

In yesterday’s post, I tried out the Hugo shortcode for embedding tweets from Twitter.

After having gone to some effort to remove external assets from my website, it’s not great that this shortcode will automatically include JavaScript from the Twitter website. The way that Twitter for Websites seems to work is that the JavaScript provides enhancement, but it is not required for the content to work. This is great, as it means that content still works when syndicated on planets or viewed in an RSS reader or through a text-only browser.

I haven’t looked at the JavaScript in detail yet, but I did see that there is an option for websites to opt-out of tracking for all users loading the JavaScript as a result of visiting that website. All you need to do is include the following <meta> tag on any page that uses the Twitter widgets.js:

<meta name="twitter:dnt" content="on">

To be safe, I’m including this on every page generated as part of my Hugo site.

In the past, Twitter used to honour the Do Not Track setting in browsers but they have now replaced this with granular controls which make it more difficult to generally opt out of tracking. While I think I trust that the twitter:dnt value will be honoured for now, I don’t believe it will be forever.

I’m thinking about writing a cut-down widgets.js that maybe isn’t as functional but could be self-hosted. This would also allow for it to be fetched via an Onion service. If this already exists, you’ve found another solution, or would like to collaborate on a solution then please let me know.

Thursday, 05 October 2017

Building an IoT dashboard with NASA Open MCT

Henri Bergius | 00:00, Thursday, 05 October 2017

One important aspect of any Internet of Things setup is being able to collect and visualize data for analysis. Seeing trends in sensor readings over time can be useful for identifying problems, and for coming up with new ways to use the data.

We wanted an easy solution for this for the c-base IoT setup. Since the c-base backstory is that of a crashed space station, using space technology for this made sense.

OpenMCT view on c-base

NASA Open MCT is a framework for building web-based mission control tools and dashboards that they’ve released as open source. It is intended for bringing together tools and both historical and real-time data, as can be seen in their Mars Science Laboratory dashboard demo.

c-beam telemetry server

As a dashboard framework, Open MCT doesn’t really come with batteries included. You get a bunch of widgets and library functionality, but out of the box there is no integration with data sources.

However, they do provide a tutorial project for integrating data sources. We started with that, and built the cbeam-telemetry-server project which gives a very easy way to integrate Open MCT with an existing IoT setup.

With the c-beam telemetry server we combine Open MCT with the InfluxDB timeseries database and the MQTT messaging bus. This gives a “turnkey” setup for persisting and visualizing IoT information.

Getting started

The first step is to install the c-beam telemetry server. If you want to do a manual setup, first install an MQTT broker, InfluxDB and Node.js. Optionally you can also install CouchDB for sharing custom dashboard layouts between users.

Then just clone the c-beam telemetry server repo:

$ git clone https://github.com/c-base/cbeam-telemetry-server.git

Install the dependencies and build Open MCT with:

$ npm install

Now you should be able to start the service with:

$ npm start

Running with Docker

There is also an easier way to get going: we provide pre-built Docker images of the c-beam telemetry server for both x86 and ARM.

There are also docker-compose configuration files for both environments. To install and start the whole service with all its dependencies, grab the docker-compose.yml file (or the Raspberry Pi 3 version) and start with:

$ docker-compose up -d

We’re building these images as part of our continuous integration pipeline (ARM build with this recipe), so they should always be reasonably up-to-date.

Configuring your data

The next step is to create a JavaScript configuration file for your Open MCT. This is where you need to provide a “dictionary” listing all data you want your dashboard to track.

Data sets are configured like the following (configuring a temperature reading tracked for the 2nd floor):

var floor2 = new app.Dictionary('2nd floor', 'floor2');
floor2.addMeasurement('temperature', 'floor2_temperature', [
  {
    units: 'degrees',
    format: 'float'
  }
], {
  topic: 'bitraf/temperature/1'
});

You can have multiple dictionaries in the same Open MCT installation, allowing you to group related data sets. Each measurement needs to have a name and a unit.

Getting data in

In the example above we also supply an MQTT topic to read the measurement from. Now sending data to the dashboard is as easy as writing numbers to that MQTT topic. On the command line that would be done with:

$ mosquitto_pub -t bitraf/temperature/1 -m 27.3

If you were running the telemetry server when you sent that message, you should’ve seen it appear in the appropriate dashboard.

Bitraf temperature graph with Open MCT

There are MQTT libraries available for most programming languages, making it easy to connect existing systems with this dashboard.
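
As a small sketch of how that might look from Python (assuming the paho-mqtt library, a broker on localhost and the topic configured above):

import paho.mqtt.publish as publish

# Publish a single temperature reading; the dashboard picks it up
# from the topic configured in the Open MCT dictionary above.
publish.single('bitraf/temperature/1', payload='27.3', hostname='localhost')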

The telemetry server is also compatible with our MsgFlo framework, meaning that you can also configure the connections between your data sources and Open MCT visually in Flowhub.

This makes it possible to utilize the existing MsgFlo libraries for implementing data sources. For example, with msgflo-arduino you can transmit sensor data from Tiva-C or NodeMcu microcontrollers to the dashboard.

Status and how you can help

The c-beam telemetry server is currently in production use in a couple of hackerspaces, and seems to run quite happily.

We’d love to get feedback from other deployments!

If you’d like to help with the project, here are couple of areas that would be great:

  • Adding tests to the project
  • Implementing downsampling of historical data
  • Figuring out ways to control IoT devices via the dashboard (so, to write to MQTT instead of just reading)

Please file issues or make pull requests to the repository.

Wednesday, 04 October 2017

MAC Catching

Planet FSFE on Iain R. Learmonth | 08:00, Wednesday, 04 October 2017

As we walk around with mobile phones in our pockets, there are multiple radios each with identifiers that can be captured and recorded just through their normal operation. Bluetooth and Wifi devices have MAC addresses and can advertise their presence to other devices merely by sending traffic, or by probing for devices to connect to if they’re not connected.

I found a simple tool, probemon, that allows anyone with a wifi card to track who is at which location at any given time. You could deploy a few of these with Raspberry Pis or go even cheaper with a number of ESP8266 boards.

In the news recently was a report from TfL about their WiFi data collection. Sky News reported that TfL “plans to make £322m by collecting data from passengers’ mobiles”. TfL later denied this, but the fact remains that collecting this data is trivial.

I’ve been thinking about ideas for spoofing masses of wireless devices to make the collected data useless. I’ve found that people have had success in using Scapy to forge WiFi frames. When I have some free time I plan to look into some kind of proof-of-concept for this.
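
A rough, untested sketch of what such a proof-of-concept could look like with Scapy (assuming a wireless card already in monitor mode as wlan0mon, and keeping in mind that transmitting forged frames may be restricted where you live):

import random
from scapy.all import RadioTap, Dot11, Dot11ProbeReq, Dot11Elt, sendp

def random_mac():
    # Locally administered unicast MAC (first octet 0x02)
    return '02:' + ':'.join('%02x' % random.randint(0, 255) for _ in range(5))

# A broadcast probe request, the kind of frame passive trackers record
frame = (RadioTap() /
         Dot11(type=0, subtype=4,             # management / probe request
               addr1='ff:ff:ff:ff:ff:ff',     # destination: broadcast
               addr2=random_mac(),            # spoofed source MAC
               addr3='ff:ff:ff:ff:ff:ff') /
         Dot11ProbeReq() /
         Dot11Elt(ID='SSID', info=''))        # wildcard (empty) SSID

sendp(frame, iface='wlan0mon', count=10, inter=0.1)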

On the Underground this is the way to do it, but above ground I’ve also heard of systems that use the TMSI from 3G/4G, rather than WiFi data, to identify mobile phones. You’ll have to be a bit more brave if you want to forge these (please do not; unless you are using alternatively licensed frequencies, you may interfere with mobile service and prevent 999 calls).

If you want to spy on mobile phones near you, you can do this with the gr-gsm package now available in Debian.

Tuesday, 03 October 2017

Facebook Lies

Planet FSFE on Iain R. Learmonth | 12:00, Tuesday, 03 October 2017

In the past, I had a Facebook account. Long ago I “deleted” this account through the procedure outlined on their help pages. In theory, 14 days after I used this process my account would be irrevocably gone. This was all lies.

My account was not deleted and yesterday I received an email:

[Screenshot of the email I received from Facebook]

It took me a moment to figure it out, but what had happened here is someone had logged into my Facebook account using my email address and password. Facebook simply reactivated the account, which had not had its data deleted, as if I had logged in.

This was possible because:

  1. Facebook was clinging to the hope that I would like to return
  2. The last time I used Facebook I didn’t know what a password manager was and was using the same password for basically everything

When I logged back in, all I needed to provide to prove I was me was my date of birth. Given that old Facebook passwords are readily available from dumps (people think their accounts are gone, so why should they be changing their passwords?) and my date of birth is not secret either, this is not great.

I followed the deletion procedure again and in 2 weeks (you can’t immediately request deletion apparently) I’ll check to see if the account is really gone. I’ve updated the password so at least the deletion process can’t be interrupted by whoever has that password (probably lots of people - it’ll be in a ton of dumps where databases have been hacked).

If it’s still not gone, I hear you can just post obscene and offensive material until Facebook deletes you. I’d rather not have to take that route though.

If you’re interested to see if you’ve turned up in a hacked database dump yourself, I would recommend hibp.

Update (2017-10-04): Thanks for all the comments. Sorry I haven’t been able to reply to all of them. Discussion around this post occurred at Hacker News if you would like to read more there. You can also read about a similar, and more frustrating, case that came up in the HN discussion.

Monday, 02 October 2017

CopyCamp: Public Money, Public Code

Posts - Carmen Bianca Bakker | 00:00, Monday, 02 October 2017

This weekend, I attended CopyCamp in Warsaw. I arrived in a hurry and on a whim, because I was substituting for someone who could not attend last-minute.

Erik Da Silva and I together held a talk on the FSFE’s latest campaign, «Public Money, Public Code». It is a campaign that postulates that software used or created by public institutions ought to become Free Software, available to the public that paid for it.

As part of the campaign, we compiled an open letter that will be sent to representatives in the European Parliament and in national parliaments. You can sign the open letter to add your support.

I have uploaded the talk here ☺: https://www.carmenbianca.eu/videos/copycamp-pmpc.webm

Sunday, 01 October 2017

Free Software Efforts (2017W39)

Planet FSFE on Iain R. Learmonth | 22:00, Sunday, 01 October 2017

Here’s my weekly report for week 39 of 2017. In this week I have travelled to Berlin and caught up on some podcasts in doing so. I’ve also had some trouble with the RSS feeds on my blog but hopefully this is all fixed now.

Thanks to Martin Milbret I now have a replacement for my dead workstation, an HP Z600, and there will be a blog post about this new setup next week. Thanks also to Sýlvan and a number of others that made donations towards getting me up and running again. A breakdown of the donations and expenses can be found at the end of this post.

Debian

Two of my packages, measurement-kit (from OONI) and python-azure-devtools (used to build the Azure Python SDK, packaged as python-azure), have been accepted by ftp-master into Debian’s unstable suite.

I have also sponsored uploads for comptext, comptty, fllog, flnet and gnustep-make.

I had previously encouraged Eric Heintzmann to become a DM and I have given him DM upload privileges for the gnustep-make package as he has shown that he cares for the GNUstep packages well.

Bugs closed (fixed/wontfix): #875125¹, #875126¹, #861753, #873083

Tor Project

My Tor Project contributions this week were primarily attending the Tor Metrics meeting which I have reported on in a separate blog post.

Sustainability

I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

The replacement workstation arrived on Friday and is now up and running. In total I received £308.73 in donations and spent £36.89 on video adapters and £141.94 on replacement hard drives for my NAS (which includes my local Debian mirror and backups).

For the Tor Metrics meeting in Berlin, Tor Project paid my flights and accommodation and I paid only for ground transport and food myself. The total cost for ground transport during the trip was £45.92 (taxi to airport, 1 Tageskarte) and total cost for food was £23.46.

The current funds I have available for equipment, travel and other free software expenses is now £60.52. I do not believe that any hardware I rely on is looking at imminent failure.


  1. Fixed by a sponsored upload, not by my changes

Sorry for Spain

TSDgeos' blog | 21:17, Sunday, 01 October 2017

Today the Spanish police have committed in Catalonia what can only be described as barbarism.

Beware of the videos, they may hurt your feelings.

They have hit people on the street and fought the Catalan police over it https://twitter.com/jaumeclotet/status/914484855450333184
They have hit people sitting on stairs https://twitter.com/LaVanguardia/status/914447807754448896
They have hit old ladies https://twitter.com/asiercorc/status/914395993193504768
They have hit people standing on the street https://twitter.com/julia_otero/status/914466508570595329
Did I mention they hit people on the street? https://twitter.com/isaacfcorrales/status/914508531654676480
They also hit someone that was already injured and walking away https://twitter.com/Ulldebou1/status/914497525033390080
They have broken (on purpose) all the fingers of a woman that was already on the floor https://twitter.com/hectorjuanatey/status/914538706299707392
They have hit some more people https://twitter.com/QuicoSalles/status/914504909508218880
They have hit firefighters https://twitter.com/Jorfs_/status/914482953954177024

Currently we're officially speaking of more than 800 injured people https://twitter.com/emergenciescat/status/914584719060275200 but I wouldn't be surprised if the count was much higher.

Meanwhile a dude voting wrapped in a Spanish+bull flag gets a round of clapping https://twitter.com/JoseJPriego/status/914485977158209536

And I'm saying sorry for Spain because it's obvious that after today Catalonia will leave Spain; sooner or later, it's going to happen. But the rest of Spain will have to live with these beasts ingrained in their police and politics.

Sorry and good luck.

Saturday, 30 September 2017

HTML presentations as OER from Org mode with Emacs

Jens Lechtenbörger » English | 13:04, Saturday, 30 September 2017

This post has an exceptional topic given the main theme of my blog, but I’d like to advertise and share what I created during summer term 2017, supported by a fellowship for innovation in digital university teaching funded by the Ministry of Innovation, Science and Research of the State of North Rhine-Westphalia, Germany, and Stifterverband.

I switched my course on Operating Systems from more traditional lectures to Just-in-Time Teaching (JiTT; see here for the Wikipedia entry) as the teaching and learning strategy, where students prepare class meetings at home. In a nutshell, students work through educational resources (texts, presentations, videos, etc.) on their own and submit solutions to pre-class assignments. Students’ solutions are corrected prior to class meetings to identify misunderstandings and incorrect prior beliefs. Based on those findings, class meetings are adjusted just-in-time to create a feedback loop with increased student learning.

As part of the course preparations, I adopted a different textbook, namely “Operating Systems and Middleware: Supporting Controlled Interaction” by Max Hailperin, whose LaTeX sources are available under a Creative Commons license on GitHub, and I decided to publish my teaching and learning material as Open Educational Resources (OER).

I briefly experimented with LaTeX with the Beamer package and LibreOffice Impress to create slides with embedded audio, but eventually I decided to go for the HTML presentation framework reveal.js. To simplify creation of such presentations, I developed my own infrastructure, whose main part, emacs-reveal, is available as free software on GitLab and satisfies the following requirements:

  • Self-contained presentations embedding audio, usable on lots of (including mobile and offline) devices with free software
  • Separation of layout and content for ease of creation and collaboration
  • Text format for diff and merge for ease of collaboration

Technically, presentations are written down in Org mode. The recommended editor to do so is, of course, GNU Emacs. In theory, you could use other editors because HTML presentations are generated from Org files, and you are free to use my infrastructure on GitLab (which, under the hood, is based on a Docker image containing Emacs and other necessary software).
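
For illustration, a presentation source in Org mode might start out like this minimal, made-up sketch (not taken from my course material):

#+TITLE: Example Presentation
#+AUTHOR: Jane Doe

* First slide
  Some introductory text.
* Second slide
  - A bullet point
  - Another bullet point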

You can find the source files for my presentations on Operating Systems on GitLab. The resulting presentations on Operating Systems as OER are published as GitLab Pages.

I created a Howto on GitLab explaining the use of emacs-reveal based on a simple presentation. The Org file of that Howto is translated by a so-called CI runner into an HTML presentation whenever changes are committed. The resulting presentation is then published as the Howto on emacs-reveal as a GitLab Page.

I hope this is useful for somebody else’s talks or teaching as well.

Breaking RSS Change in Hugo

Planet FSFE on Iain R. Learmonth | 12:00, Saturday, 30 September 2017

My website and blog are managed by the static site generator Hugo. I’ve found this to be a stable and flexible system, but at the last upgrade a breaking change occurred that broke the syndication of my blog on various planets.

At first I thought perhaps with my increased posting rate the planets were truncating my posts but this was not the case. The problem was in Hugo pull request #3129 where for some reason they have changed the RSS feed to contain only a “lead” instead of the full article.

I’ve seen other content management systems offer a similar option, but at least they point out that the content is truncated and offer a “read more” link. Here it just looks like I’m publishing truncated, unfinished, really short posts.

If you take a look at the pull request above, you’ll see that the change is in an embedded template and it took a little reading of the docs to work out how to revert it. The steps are actually not that difficult, but it’s still annoying that the change occurred.

In a Hugo site, you will have a layouts directory that will contain your overrides from your theme. Create a new file in the path layouts/_default/rss.xml (you may need to create the _default directory) with the following content:

<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>{{ if eq  .Title  .Site.Title }}{{ .Site.Title }}{{ else }}{{ with .Title }}{{.}} on {{ end }}{{ .Site.Title }}{{ end }}</title>
    <link>{{ .Permalink }}</link>
    <description>Recent content {{ if ne  .Title  .Site.Title }}{{ with .Title }}in {{.}} {{ end }}{{ end }}on {{ .Site.Title }}</description>
    <generator>Hugo -- gohugo.io</generator>{{ with .Site.LanguageCode }}
    <language>{{.}}</language>{{end}}{{ with .Site.Author.email }}
    <managingEditor>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</managingEditor>{{end}}{{ with .Site.Author.email }}
    <webMaster>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</webMaster>{{end}}{{ with .Site.Copyright }}
    <copyright>{{.}}</copyright>{{end}}{{ if not .Date.IsZero }}
    <lastBuildDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}</lastBuildDate>{{ end }}
    {{ with .OutputFormats.Get "RSS" }}
        {{ printf "<atom:link href=%q rel=\"self\" type=%q />" .Permalink .MediaType | safeHTML }}
    {{ end }}
    {{ range .Data.Pages }}
    <item>
      <title>{{ .Title }}</title>
      <link>{{ .Permalink }}</link>
      <pubDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}</pubDate>
      {{ with .Site.Author.email }}<author>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</author>{{end}}
      <guid>{{ .Permalink }}</guid>
      <description>{{ .Content | html }}</description>
    </item>
    {{ end }}
  </channel>
</rss>

If you like my new Hugo theme, please let me know and I’ll bump tidying it up and publishing it further up my todo list.

Friday, 29 September 2017

Tor Metrics Team Meeting in Berlin

Planet FSFE on Iain R. Learmonth | 14:00, Friday, 29 September 2017

We had a meeting of the Metrics Team in Berlin yesterday to organise a roadmap for the next 12 months. This roadmap isn’t yet finalised, as it will now be taken to the main Tor developers meeting in Montreal, where we may find there are things we thought were needed but aren’t, or things that we had forgotten. Still, we have a pretty good draft and we were all quite happy with it.

We have updated tickets in the Metrics component on the Tor trac to include either “metrics-2017” or “metrics-2018” in the keywords field to identify tickets that we expect to be able to resolve either by the end of this year or by the end of next year (again, not yet finalised, but this should give a good idea). In some cases this may mean closing the ticket without fixing it, but only if we believe that either the ticket is out of scope for the metrics team or that it’s an old ticket and no one else has had the same issue since.

Having an in-person meeting has allowed us to have easy discussion around some of the more complex tickets that have been sitting around. In many cases these are tickets where we need input from other teams, or perhaps even just reassigning the ticket to another team, but without a clear plan we couldn’t do this.

My work for the remainder of the year will be primarily on Atlas where we have a clear plan for integrating with the Tor Metrics website, and may include some other small things relating to the website.

I will also be triaging the current Compass tickets as we look to shut down Compass and integrate the functionality into Atlas. Compass-specific tickets will be closed, but some tickets relating to desirable functionality may be moved to Atlas with the fix implemented there instead.

Wednesday, 27 September 2017

REUSE templates and examples

free software - Bits of Freedom | 14:01, Wednesday, 27 September 2017

The FSFE's REUSE initiative, in which we're encouraging the uptake of practices that enable computer-readable licensing and copyright information, is progressing well. In the next couple of days, I'll be working on implementing these practices for a few different projects I know of, to make some examples of what a project needs to do to adhere to the REUSE practices and get a nice REUSE compliant badge!

What we've already done is to create three different Git repositories, each of which is REUSE compliant, and which demonstrate different parts of the REUSE practices. You can already have a look at each of them below. Here's more information about each one:

Simple Hello

https://git.fsfe.org/reuse/simple-hello/
This repository contains perhaps the simplest example of a REUSE compliant program. It has a single source code file, a single license and a single copyright holder. As you can see if you browse it, it has a single LICENSE file, which contains a copy of the license, the GPLv3 in this case.

The LICENSE file is unchanged and used in verbatim format, which makes it possible to get an MD5/SHA1 hash of it to verify it has not been changed from the original.
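
For example, you could compare a checksum of the file against one computed from the canonical license text; something like (just an illustration):

$ sha1sum LICENSE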

There's no way to include a reasonable comment in a Markdown file, so rather than placing the license header in the README.md file, we place it separately, in README.md.license. The header follows a standard format and is the same in the src/server.js source code file.

What's important to keep in mind is that aside from having a consistent style, each header also includes the SPDX-License-Identifier tag which signals which license the file is covered by, and the License-Filename tag which gives a reference to the exact license file in use (relative to the project root).
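
As a purely hypothetical illustration (the names here are made up, but the two tags are the ones described above), such a header in a JavaScript source file could look like this:

// Copyright (C) 2017 Jane Doe <jane@example.org>
//
// SPDX-License-Identifier: GPL-3.0
// License-Filename: LICENSE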

And that's pretty much it! This is a simple, REUSE compliant, project. It may not look like much, but this is now a project which any software tool supporting the REUSE practices can understand.

Included Hello

https://git.fsfe.org/reuse/included-hello/

Building on the simple version before it, this repository looks much the same. The difference is that there are two different licenses involved. The src/index.js file is licensed under an MIT license, and the README.md under GPLv3. Since two license files are involved, we put both of them in the LICENSES/ directory and make sure to explicitly refer to them from the source files.

SPDX Hello

https://git.fsfe.org/reuse/spdx-hello/

The final practice recommended by the REUSE project is to use the best available information in a repository to automatically create an SPDX file with license and copyright information. You should never try to do this by hand: a manually maintained SPDX file gets very difficult to update, and generating it automatically is the only sensible way to make sure it's continuously updated.

The SPDX Hello example is a repository which does exactly this. It's extraordinarily hack-ish and will break on anything which doesn't look exactly like the example, but it may serve as inspiration for further work.

The repository uses two hooks, a pre-commit and a post-commit, which anyone with commit access to the repository must make sure to enable. On each commit, the post-commit hook uses the lint-bom program from https://git.fsfe.org/reuse/lint/ (this is the very hackish part), which goes through all included files, picks out the license headers, looks at the SPDX-License-Identifier and License-Filename tags and assembles what is meant to be a complete SPDX file.

Since this is run automatically on each commit, it should always be accurate. In practice, you would want to do more than this repository does. You may want to verify the SPDX file after creation, look into adding concluded license information, and add more metadata to the SPDX file than what I currently have.
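
For a feel of the output, a simplified excerpt of a generated tag-value SPDX file might look something like this (the values are illustrative, not taken from actual lint-bom output):

FileName: ./src/index.js
SPDXID: SPDXRef-index-js
LicenseConcluded: MIT
LicenseInfoInFile: MIT
FileCopyrightText: <text>Copyright (C) 2017 Jane Doe</text>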

But this is still a functional example of what we hope REUSE will lead to: repositories, big and small, with copyrights and licenses that can be read not just by humans, but by computers too!

"Security Scanners" Again

Planet FSFE on Iain R. Learmonth | 14:00, Wednesday, 27 September 2017

Early this morning I was flying from Aberdeen Airport to Berlin for the Tor Metrics Team meeting. I noticed that they have finally put up some signage before the security area, and writing this blog post I really wish I’d taken a picture of it just to show how ridiculous it was.

It didn’t have much information on it, but the information it had was almost laughable. For example: “The scanner is lower than a mobile phone”. As someone who understands radio, I assume they mean field strength, but they don’t specify this so they could mean height or long distance call prices.

There is also a notice explaining that you are allowed to opt-out from use of the scanner. This is welcome, but the wording of the alternative “enhanced private search” is not great. In practice, this routine involves waiting around for someone to become available as they are understaffed (I’ve waited up to 20 minutes before for someone to become available) and then being taken off into a side room where someone will pat you down and go over you with a handheld metal detector.

Given that every other country I’ve visited in Europe seems to be able to not be dreadful at this, I have no idea how the UK can be so dreadful. If you’re interested in this area, I have previously written up some research at the Open Rights Group wiki.

Nextcloud gets End to End Encryption

Free Software – Frank Karlitschek_ | 12:45, Wednesday, 27 September 2017

Today is a special day for Nextcloud and me because Nextcloud gets a cool and important new capability: end to end encryption for file sync and share. Nextcloud has supported server side encryption for a long time, and all file transfer over the internet is of course encrypted with TLS/SSL. But there is still the need for full end to end encryption to make sure that the data is secure in all scenarios. One example is that the current server side encryption doesn’t protect the data against an evil server admin or a hacked server. The new end to end solution does that.

This feature is more important than ever in the light of Trump and other governments, including western ones like the UK, who want to have access to the private data of users.

Please read this blog post about the upcoming dangers in the next few months. A European datacenter is no solution, as recent developments show.

Most requested feature

End to end encryption is our most requested feature ever. Users and customers have been asking for this for many, many years, so I am super happy that we are finally doing this now. So you might ask “what took you so long?” There are many reasons.

The first is that it is hard. This needs to be done without compromising the user experience. Then we wanted to support as many core Nextcloud features as possible, for example sharing. And we wanted to do this in a way that doesn’t compromise performance. Obviously security is the highest priority and that is hard in itself. But another must have requirement is to make the feature truly enterprise ready. So real key management is necessary and it has to be designed with the assumption that users make mistakes. We don’t need another solution that is aimed at technical users, losing their data when they forget their password for example… Our solution doesn’t even let users pick their own password, taking away the risk of passwords that are easy to hack due to reuse or shortness! We also wanted to implement this feature fully transparent and native in all clients and fully open source instead of integrating a third party tool. It was hard to find a solution that balanced all these requirements. But I’m happy to say that Björn, who already designed and developed the server side architecture and Lukas our security lead, found a good architecture, with a lot of feedback from a number of other team members of course. This has been a real collaborative effort, building on our years of experience and a good understanding of the needs of our users and customers.

How does it work?

The feature consists of several components. There is the actual encryption and decryption code which is implemented in the Nextcloud iOS and Android apps and in the Mac, Windows and Linux clients. And then there is a server component which is implemented as a Nextcloud app to do the key management. This is useful to make it easy for the users to distribute private and public keys to all clients and share with each other. Obviously the private keys are encrypted with very strong auto generated passwords which are only known by the users and clients and are never accessible by the server. The key server also supports an optional recovery key which can be activated to make it possible to recover lost passwords/keys. This feature can be activated or deactivated to balance user convenience and security. The clients will warn users when the feature is or gets enabled.

End to end encryption can be activated by users on a folder by folder basis. Once a user decides to encrypt a folder, everything inside it will be encrypted, including the contents of the files and folders and metadata like filenames. From then on the folder is no longer accessible from the Nextcloud web-interface and WebDAV. But it is still fully readable and writable from iOS, Android and Mac, Windows, Linux. Sharing still works via the public keys of other users. The full design is explained here and the architecture is further documented here.

Enterprise ready

It was a key requirement to implement this feature in a way that it is not only useful for home users who want to protect their data on home servers or at service providers. It had to be done in a way that it is useful for companies and other large organisations. We had conversations with some of our bigger customers over the last few months to make sure that this integrates nicely into enterprise infrastructure and is compliant with existing policies. One example is that we will try to integrate this into desktops like KDE, Gnome, Mac and Windows and will support Hardware Security Modules.

Where are we today?

This feature will be fully production ready and included in Nextcloud 13, which will be out later this year. But we didn’t want to wait until then, and instead announce and release something as soon as possible so we can get feedback from encryption experts and the wider infosec community. So today we have our architecture document ready here. The server component is fully implemented and can be found on our GitHub. There is a preview version of the Android app available which is fully working. The desktop client and the iOS app are in the middle of development. You can expect preview builds in the next few days. You can see the development and give feedback in the repositories on GitHub.

More information can be found here:

The software can be found here:

So please give feedback about the architecture and the code if you want to get involved. This is a big step forward to protect the data of users and companies against hackers and organisations who want to abuse it in various ways!

Rspamd Fast, free and open-source spam filtering system

Evaggelos Balaskas - System Engineer | 00:05, Wednesday, 27 September 2017

Fighting Spam

Fighting email spam in modern times, most of the time, looks like this:

(animated GIF)

Rspamd

Rspamd is a rapid spam filtering system. Written in C with a Lua scripting engine extension, it seems to be really fast and a really good solution for SOHO environments.

In this blog post, I'll try to present a quickstart guide to working with rspamd on a CentOS 6.9 machine running postfix.

DISCLAIMER: This blog post is from a very technical point of view!

Installation

We are going to install rspamd via known rpm repositories:

Epel Repository

We need to install the EPEL repository first:

# yum -y install http://fedora-mirror01.rbc.ru/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Rspamd Repository

Now it is time to set up the rspamd repository:

# curl https://rspamd.com/rpm-stable/centos-6/rspamd.repo -o /etc/yum.repos.d/rspamd.repo

Install the GPG key:

# rpm --import http://rspamd.com/rpm-stable/gpg.key

and verify the repository with # yum repolist

repo id     repo name
base        CentOS-6 - Base
epel        Extra Packages for Enterprise Linux 6 - x86_64
extras      CentOS-6 - Extras
rspamd      Rspamd stable repository
updates     CentOS-6 - Updates

Rpm

Now it is time to install rspamd on our Linux box:

# yum -y install rspamd


# yum info rspamd

Name        : rspamd
Arch        : x86_64
Version     : 1.6.3
Release     : 1
Size        : 8.7 M
Repo        : installed
From repo   : rspamd
Summary     : Rapid spam filtering system
URL         : https://rspamd.com
License     : BSD2c
Description : Rspamd is a rapid, modular and lightweight spam filter. It is designed to work
            : with big amount of mail and can be easily extended with own filters written in
            : lua.

Init File

We need to correct the rspamd init file so that rspamd can find the correct configuration file:

# vim /etc/init.d/rspamd

# ebal, Wed, 06 Sep 2017 00:31:37 +0300
## RSPAMD_CONF_FILE="/etc/rspamd/rspamd.sysvinit.conf"
RSPAMD_CONF_FILE="/etc/rspamd/rspamd.conf"

Start Rspamd

We are now ready to start the rspamd daemon for the first time:

# /etc/init.d/rspamd restart

syntax OK
Stopping rspamd:                                           [FAILED]
Starting rspamd:                                           [  OK  ]

Verify that it is running:

# ps -e fuwww | egrep -i rsp[a]md


root      1337  0.0  0.7 205564  7164 ?        Ss   20:19   0:00 rspamd: main process
_rspamd   1339  0.0  0.7 206004  8068 ?        S    20:19   0:00  _ rspamd: rspamd_proxy process
_rspamd   1340  0.2  1.2 209392 12584 ?        S    20:19   0:00  _ rspamd: controller process
_rspamd   1341  0.0  1.0 208436 11076 ?        S    20:19   0:00  _ rspamd: normal process   

Perfect! Now it is time to enable rspamd to run on boot:

# chkconfig rspamd on

# chkconfig --list | egrep rspamd
rspamd          0:off   1:off   2:on    3:on    4:on    5:on    6:off

Postfix

In a nutshell, postfix will pass an email through the milter protocol to another application (a filter) before queuing it to one of postfix’s mail queues. Think of milter as a bridge that connects two different applications.

[diagram: rspamd milter direct integration]

Rspamd Proxy

In Rspamd 1.6, Rmilter is obsolete; the rspamd proxy worker now speaks the milter protocol itself. That means we need to connect postfix to rspamd_proxy via the milter protocol.

Rspamd has really nice documentation: https://rspamd.com/doc/index.html
You can find more information there under MTA integration.

# netstat -ntlp | egrep -i rspamd

output:

tcp        0      0 0.0.0.0:11332               0.0.0.0:*                   LISTEN      1451/rspamd
tcp        0      0 0.0.0.0:11333               0.0.0.0:*                   LISTEN      1451/rspamd
tcp        0      0 127.0.0.1:11334             0.0.0.0:*                   LISTEN      1451/rspamd
tcp        0      0 :::11332                    :::*                        LISTEN      1451/rspamd
tcp        0      0 :::11333                    :::*                        LISTEN      1451/rspamd
tcp        0      0 ::1:11334                   :::*                        LISTEN      1451/rspamd  

# egrep -A1 proxy /etc/rspamd/rspamd.conf


worker "rspamd_proxy" {
    bind_socket = "*:11332";
    .include "$CONFDIR/worker-proxy.inc"
    .include(try=true; priority=1,duplicate=merge) "$LOCAL_CONFDIR/local.d/worker-proxy.inc"
    .include(try=true; priority=10) "$LOCAL_CONFDIR/override.d/worker-proxy.inc"
}

Milter

If you want to see all of postfix's milter-related configuration parameters:

# postconf | egrep -i milter

output:

milter_command_timeout = 30s
milter_connect_macros = j {daemon_name} v
milter_connect_timeout = 30s
milter_content_timeout = 300s
milter_data_macros = i
milter_default_action = tempfail
milter_end_of_data_macros = i
milter_end_of_header_macros = i
milter_helo_macros = {tls_version} {cipher} {cipher_bits} {cert_subject} {cert_issuer}
milter_macro_daemon_name = $myhostname
milter_macro_v = $mail_name $mail_version
milter_mail_macros = i {auth_type} {auth_authen} {auth_author} {mail_addr} {mail_host} {mail_mailer}
milter_protocol = 6
milter_rcpt_macros = i {rcpt_addr} {rcpt_host} {rcpt_mailer}
milter_unknown_command_macros =
non_smtpd_milters =
smtpd_milters = 

We are mostly interested in the last two, but it is best to follow the rspamd documentation:

# vim /etc/postfix/main.cf

and add the configuration lines below:

# ebal, Sat, 09 Sep 2017 18:56:02 +0300

## A list of Milter (mail filter) applications for new mail that does not arrive via the Postfix smtpd(8) server.
non_smtpd_milters = inet:127.0.0.1:11332

## A list of Milter (mail filter) applications for new mail that arrives via the Postfix smtpd(8) server.
smtpd_milters = inet:127.0.0.1:11332

## Send macros to mail filter applications
milter_mail_macros = i {auth_type} {auth_authen} {auth_author} {mail_addr} {client_addr} {client_name} {mail_host} {mail_mailer}

## let mail pass without milter checks if something goes wrong, e.g. rspamd is down!
milter_default_action = accept

Reload postfix

# postfix reload

postfix/postfix-script: refreshing the Postfix mail system
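
To double-check that the new settings are active, we can list postfix's non-default parameters and look for the milter entries we just added:

# postconf -n | egrep -i milter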

Testing

netcat

From a client:

$ nc 192.168.122.96 25

220 centos69.localdomain ESMTP Postfix
EHLO centos69
250-centos69.localdomain
250-PIPELINING
250-SIZE 10240000
250-VRFY
250-ETRN
250-ENHANCEDSTATUSCODES
250-8BITMIME
250 DSN
MAIL FROM: <root@example.org>
250 2.1.0 Ok
RCPT TO: <root@localhost>
250 2.1.5 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
test
.
250 2.0.0 Ok: queued as 4233520144
^]

Logs

Looking through logs may be a difficult task for many; even so, it is a task that you have to do.

MailLog

# egrep 4233520144 /var/log/maillog


Sep  9 19:08:01 localhost postfix/smtpd[1960]: 4233520144: client=unknown[192.168.122.1]
Sep  9 19:08:05 localhost postfix/cleanup[1963]: 4233520144: message-id=<>
Sep  9 19:08:05 localhost postfix/qmgr[1932]: 4233520144: from=<root@example.org>, size=217, nrcpt=1 (queue active)
Sep  9 19:08:05 localhost postfix/local[1964]: 4233520144: to=<root@localhost.localdomain>, orig_to=<root@localhost>, relay=local, delay=12, delays=12/0.01/0/0.01, dsn=2.0.0, status=sent (delivered to mailbox)
Sep  9 19:08:05 localhost postfix/qmgr[1932]: 4233520144: removed

Everything seems fine with postfix.

Rspamd Log

# egrep -i 4233520144 /var/log/rspamd/rspamd.log

2017-09-09 19:08:05 #1455(normal) <79a04e>; task; rspamd_message_parse: loaded message; id: <undef>; queue-id: <4233520144>; size: 6; checksum: <a6a8e3835061e53ed251c57ab4f22463>

2017-09-09 19:08:05 #1455(normal) <79a04e>; task; rspamd_task_write_log: id: <undef>, qid: <4233520144>, ip: 192.168.122.1, from: <root@example.org>, (default: F (add header): [9.40/15.00] [MISSING_MID(2.50){},MISSING_FROM(2.00){},MISSING_SUBJECT(2.00){},MISSING_TO(2.00){},MISSING_DATE(1.00){},MIME_GOOD(-0.10){text/plain;},ARC_NA(0.00){},FROM_NEQ_ENVFROM(0.00){;root@example.org;},RCVD_COUNT_ZERO(0.00){0;},RCVD_TLS_ALL(0.00){}]), len: 6, time: 87.992ms real, 4.723ms virtual, dns req: 0, digest: <a6a8e3835061e53ed251c57ab4f22463>, rcpts: <root@localhost>

It works!

Training

If you already have a spam or junk folder, it is really easy to train the Bayesian classifier with rspamc.

I use Maildir, so for my setup the initial training is something like this:

 # cd /storage/vmails/balaskas.gr/evaggelos/.Spam/cur/ 

# find . -type f -exec rspamc learn_spam {} \;
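
The Bayesian classifier needs examples of both spam and ham before it can classify reliably, so it is worth feeding it some known-good mail too. A sketch, assuming your read mail lives in the usual Maildir cur directory (adjust the path to your own setup):

# cd /storage/vmails/balaskas.gr/evaggelos/cur/

# find . -type f -exec rspamc learn_ham {} \;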

Auto-Training

I’ve read a lot of tutorials that suggest real-time training via dovecot plugins or something similar. I personally think that approach adds complexity; for small companies or a personal setup, I prefer using the cron daemon:


 @daily /bin/find /storage/vmails/balaskas.gr/evaggelos/.Spam/cur/ -type f -mtime -1 -exec rspamc learn_spam {} \;

In other words: every day, search for new emails in my spam folder and use them to train rspamd.
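
To verify that the cron job actually feeds the classifier, you can check whether the learn counters keep growing over time:

# rspamc stat | egrep -i learn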

Training from mbox

First of all: seriously?

Split mbox

There is a nice and simple way to split an mbox into separate files for rspamc to use.

# awk '/^From / {i++}{print > "msg"i}' Spam

and then feed them to rspamc:

# ls -1 msg* | xargs rspamc --verbose learn_spam

Stats

# rspamc stat


Results for command: stat (0.068 seconds)
Messages scanned: 2
Messages with action reject: 0, 0.00%
Messages with action soft reject: 0, 0.00%
Messages with action rewrite subject: 0, 0.00%
Messages with action add header: 2, 100.00%
Messages with action greylist: 0, 0.00%
Messages with action no action: 0, 0.00%
Messages treated as spam: 2, 100.00%
Messages treated as ham: 0, 0.00%
Messages learned: 1859
Connections count: 2
Control connections count: 2157
Pools allocated: 2191
Pools freed: 2170
Bytes allocated: 542k
Memory chunks allocated: 41
Shared chunks allocated: 10
Chunks freed: 0
Oversized chunks: 736
Fuzzy hashes in storage "rspamd.com": 659509399
Fuzzy hashes stored: 659509399
Statfile: BAYES_SPAM type: sqlite3; length: 32.66M; free blocks: 0; total blocks: 430.29k; free: 0.00%; learned: 1859; users: 1; languages: 4
Statfile: BAYES_HAM type: sqlite3; length: 9.22k; free blocks: 0; total blocks: 0; free: 0.00%; learned: 0; users: 1; languages: 1
Total learns: 1859

X-Spamd-Result

To view the spam score in every email, we need to enable extended reporting headers; to do that, we edit the configuration:

# vim /etc/rspamd/modules.d/milter_headers.conf

and just above the use = [] line, add:

    # ebal, Wed, 06 Sep 2017 01:52:08 +0300
    extended_spam_headers = true;

   use = [];
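
For orientation, here is a minimal sketch of how that part of the file might look after the edit (assuming the stock layout, where the options live inside a milter_headers block; your file may contain more options):

milter_headers {
    # sketch only - keep whatever else your stock file contains
    extended_spam_headers = true;
    use = [];
}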

then reload rspamd:

# /etc/init.d/rspamd reload

syntax OK
Reloading rspamd:                                          [  OK  ]

View Source

If you open the email and view its source, you will see something like this:


X-Rspamd-Queue-Id: D0A5728ABF
X-Rspamd-Server: centos69
X-Spamd-Result: default: False [3.40 / 15.00]

Web Server

Rspamd comes with its own web server. That is really useful if you don't have a web server on your mail server, but exposing it directly is not recommended.

By default, the rspamd web server listens only for local connections. We can see that in the ss output below:

# ss -lp | egrep -i rspamd

LISTEN     0      128                    :::11332                   :::*        users:(("rspamd",7469,10),("rspamd",7471,10),("rspamd",7472,10),("rspamd",7473,10))
LISTEN     0      128                     *:11332                    *:*        users:(("rspamd",7469,9),("rspamd",7471,9),("rspamd",7472,9),("rspamd",7473,9))
LISTEN     0      128                    :::11333                   :::*        users:(("rspamd",7469,18),("rspamd",7473,18))
LISTEN     0      128                     *:11333                    *:*        users:(("rspamd",7469,16),("rspamd",7473,16))
LISTEN     0      128                   ::1:11334                   :::*        users:(("rspamd",7469,14),("rspamd",7472,14),("rspamd",7473,14))
LISTEN     0      128             127.0.0.1:11334                    *:*        users:(("rspamd",7469,12),("rspamd",7472,12),("rspamd",7473,12))

127.0.0.1:11334

So if you want to change that (don't!), you have to edit rspamd.conf (the core file):

# vim +/11334 /etc/rspamd/rspamd.conf

and change this line:

bind_socket = "localhost:11334";

to something like this:

bind_socket = "YOUR_SERVER_IP:11334";

or use sed:

# sed -i -e 's/localhost:11334/YOUR_SERVER_IP:11334/' /etc/rspamd/rspamd.conf

and then fire up your browser:

[screenshot: rspamd web worker interface]

Web Password

It is good practice to change the default password of this web GUI to something else.

# vim /etc/rspamd/worker-controller.inc

  # password = "q1";
  password = "YOUR_STRONG_PASSWORD";

It is always a good idea to restart rspamd after a change like this.

Reverse Proxy

I don't like having any web app exposed without SSL or basic authentication, so I shall put the rspamd web server behind a reverse proxy (apache).

So on httpd-2.2 the configuration is something like this:

ProxyPreserveHost On

<Location /rspamd>
    AuthName "Rspamd Access"
    AuthType Basic
    AuthUserFile /etc/httpd/rspamd_htpasswd
    Require valid-user

    ProxyPass http://127.0.0.1:11334
    ProxyPassReverse http://127.0.0.1:11334 

    Order allow,deny
    Allow from all 

</Location>

Http Basic Authentication

You need to create the file that is going to store usernames and passwords for basic authentication:

# htpasswd -csb /etc/httpd/rspamd_htpasswd rspamd rspamd_passwd
Adding password for user rspamd

Then restart your apache instance.

bind_socket

Of course, for this to work we need to change the bind socket in rspamd.conf back to localhost.
Don't forget this ;)

bind_socket = "127.0.0.1:11334";

Selinux

If there is a problem with SELinux, then:

# setsebool -P httpd_can_network_connect=1

or

# setsebool httpd_can_network_connect_db on

Errors ?

If you see an error like this when running rspamc:

IO write error

then you need to explicitly tell rspamc which controller socket to use:

rspamc -h 127.0.0.1:11334

To prevent any future errors, I’ve created a shell wrapper:

/usr/local/bin/rspamc

#!/bin/sh
# Always point rspamc at the local controller socket.
exec /usr/bin/rspamc -h 127.0.0.1:11334 "$@"
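
Remember to make the wrapper executable, otherwise the shell cannot run it:

# chmod +x /usr/local/bin/rspamc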

Final Thoughts

I have been using rspamd for a while now and I am pretty happy with it.

I’ve set up a spamtrap email address to feed my spam folder and let the cron script train rspamd.

So after a thousand emails:

[screenshot: rspamd statistics after 1,000 emails]

Tuesday, 26 September 2017

SMS Verification

Planet FSFE on Iain R. Learmonth | 14:00, Tuesday, 26 September 2017

I’ve received an email today from Barclaycard with the following:

“From time to time, to make sure it’s you who’s using your Barclaycard online, we’ll send you a text with a verification code for you to use on the Verified by Visa screen that’ll pop up on your payment page.”

The proprietary nature of mobile phones with the hardware specifications and the software being closed off from inspection or audit and considered to be trade secrets make my phone and my tablet the least trusted devices I own and use.

Due to this lack of trust, I’ve often held back from using my phone or tablet for certain tasks where I can still get away with not doing so. I have experimented with having read-only access to my calendars and contacts to ensure that if my phone is compromised they can’t just be wiped out, though in the end I had to give in, as my calendar was becoming too difficult to manage with a paper system as part of the entry process for new events.

I wanted to try to reduce the attractiveness of compromising my phone. Anyone that really wants to have a go at my phone could probably get in. It’s an older Samsung Android phone on a UK network and software updates rarely come through in a timely manner. Anything that I give my phone access to is at risk and that risk needs to be balanced by some real world benefits.

These are just the problems with the phone itself. When you’re using SMS authentication, even with the most secure phone ever, you’re still going to be using the phone network. Against even a mildly motivated attacker, SMS authentication offers about as much security as using your mobile phone number as your password. You probably don’t treat your mobile phone number as a password, nor does your provider or anyone you’ve given it to, so you can assume that it’s compromised.

Why are mobile phones so popular for two-factor (or, in an increasing number of cases, single-factor) authentication? Not because they improve security but because they’re convenient and everyone has one. This seems like a bad plan.

The Python March

free software - Bits of Freedom | 07:01, Tuesday, 26 September 2017

The Python March

A month ago, I asked the FSFE's core team about interest in a friendly walking competition. If you're not familiar with the concept, it's a health initiative whereby groups of people compete over who can walk the most within a given period. It is typically run in companies and public administrations, where employees form teams to compete against each other.

You set the start and end date for the competition, ask everyone to log (on paper or otherwise) their daily step counts with whichever step counter they happen to have (hardware or software), and then have them transfer those counts to a web application where they can track their progress against others.

This is all honour-based, of course. No one is asked to verify that others walk as much as they say they do :-)

I thought it might be fun to run a competition for those in the free software community who are interested, and so yesterday some of our FSFE core team members set out to try it out.

It has long been said that you should walk 10,000 steps a day to maintain a healthy life, but recent studies indicate we should up that a bit. To entice you further to participate in this challenge, here are some of the changes you could experience!

  • Better focus is tightly connected to physical exercise, so you'll get more work done, in less time.
  • Trouble falling asleep? Not much longer. You'll be out before you know it.
  • As if that's not enough, you'll rise earlier in the morning and might even see the sun rise more often!

If you want to walk with us, visit https://pythonmarch.party/ and help us discover the bugs in this Django app! (The name? Oh, some of our core team members have an unhealthy attitude towards Monty Python.)

Monday, 25 September 2017

Nextcloud Conference 2017: Free Software licenses in a Nutshell

English on Björn Schießle - I came for the code but stayed for the freedom | 13:44, Monday, 25 September 2017

At this year's Nextcloud conference I gave a lightning talk about Free Software licenses. Free Software developers often like to ignore the legal aspects of their projects, but I think it is important to know at least some basics. The license you choose and the other legal decisions you make are an important cornerstone in defining the basic rules of the community around your code. Making good choices can enable a level playing field for a large, diverse and growing community.

Explaining this huge topic in just five minutes was a tough challenge. The goal was to explain why we are doing things the way we are doing them: for example, why we introduced the Developer Certificate of Origin, a tool to create legal certainty that is used these days by many large Free Software initiatives such as Linux, Docker or Eclipse. Further, the goal was to transfer some knowledge about license compatibility and to give app developers some useful pointers on how to decide whether a third-party license is compatible or not. If the five-minute lightning talk was too fast (and yes, I talked quite fast to match the time limit) or if you couldn't attend, here are the slides to reread it:


(This blog post contains some presentation slides; you can see them here.)

Sunday, 24 September 2017

Free Software Efforts (2017W38)

Planet FSFE on Iain R. Learmonth | 11:00, Sunday, 24 September 2017

Here’s my weekly report for week 38 of 2017. This week has not been a great week as I saw my primary development machine die in a spectacular reboot loop. Thanks to the wonderful community around Debian and free software (that if you’re reading this, you’re probably part of), I should be back up to speed soon. A replacement workstation is currently moving towards me and I’ve received a number of smaller donations that will go towards video converters and upgrades to get me back to full productivity.

Debian

I’ve prepared and tested backports for 3 packages in the tasktools packaging team: tasksh, bugwarrior and powerline-taskwarrior. Unfortunately I am not currently in the backports ACLs and so I can’t upload these, but I’m hoping this will be resolved soon. Once these are uploaded, the latest upstream release for all packages in the tasktools team will be available either in the stable suite or in the stable backports suite.

In preparation for the shutdown of Alioth mailing lists, I’ve set up a new mailing list for the tasktools team and have already updated the maintainer fields for all the team’s packages in git. I’ve subscribed the old mailing list’s user to the new mailing list in DDPO so there will still be a comprehensive view there during the migration. I am currently in the process of reaching out to the admins of git.tasktools.org with a view to moving our git repositories there.

I’ve also continued to review the scapy package and have closed a couple more bugs that were already fixed in the latest upstream release but had been missed in the changelog.

Bugs closed (fixed/wontfix): #774962, #850570

Tor Project

I’ve deployed a small fix to an update from last week where the platform field on Atlas had been pulled across to the left column. It has now been returned to the right-hand column and is no longer pushed down the page by long family lists.

I’ve been thinking about the merge of Compass functionality into a future Atlas and this is being tracked in #23517.

Tor Project has approved expenses (flights and hotel) for me to attend an in-person meeting of the Metrics Team. This meeting will occur in Berlin on the 28th September and I will write up a report detailing outcomes relevant to my work after the meeting. I have spent some time this week preparing for this meeting.

Bugs closed (fixed/wontfix): #22146, #22297, #23511

Sustainability

I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

The loss of my primary development machine was a setback; however, I have been donated a new workstation which should hopefully arrive soon. The hard drives in my NAS can now also be replaced, as I have budget available for this. I do not see any hardware failures as imminent at this time; however, should they occur, I would not have the budget to replace that hardware. I only have funds to replace the hardware that has already failed.

Onion Services

Planet FSFE on Iain R. Learmonth | 09:45, Sunday, 24 September 2017

In the summer 2017 edition of 2600 magazine there is a brilliant article on running onion services as part of a series on censorship resistant services. Onion services provide privacy and security for readers above that which is possible through the use of HTTPS.

Since moving my website to Netlify, my onion service died as Netlify doesn’t provide automatic onion services (although they do offer automated Let’s Encrypt certificate provisioning). If anyone from Netlify is reading this, please consider adding a one-click onion service button next to the Let’s Encrypt button.

For now though, I have my onion service hosted elsewhere. I’ve got a regular onion service (version 2) and also now a next generation onion service (version 3). My setup works like this:

  • A cronjob polls my website’s git repository that contains a Hugo static site
  • Two versions of the site are built with different base URLs set in the Hugo configuration, one for the regular onion service domain and one for the next generation onion service domain (a sketch of this step follows below the list)
  • Apache is configured for two virtual hosts, one for each domain name
  • tor from the Debian archives is configured for the regular onion service
  • tor from git (to have next generation onion service support) is configured for the next generation onion service
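
A minimal sketch of what the cronjob's build step could look like (the paths here are hypothetical; the onion hostnames are the ones published below):

#!/bin/sh
# Sketch only: pull the latest site sources and build one copy per onion hostname.
# Adjust the source and destination paths to your own setup.
set -e
cd /srv/site-src && git pull --quiet
hugo --baseURL "http://w6d6vblb6vhuqxt6.onion/" --destination /srv/www/onion-v2
hugo --baseURL "http://tvin5bvfwew3ldttg5t6ynlif4t53y3mbmb7sgbyud7h5q6gblrpsnyd.onion/" --destination /srv/www/onion-v3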

The main piece of advice I have for anyone who would like an onion service version of their static website is to make sure that your static site generator handles URLs for you and that your sources use relative URLs as far as possible. Hugo is great at this, and most themes should be using the baseURL configuration parameter where appropriate.

There may be some room for improvement in the polling process; perhaps this could be triggered by a webhook instead.

I’m not using HTTPS on these services, as the HTTPS private key for the domain isn’t even controlled by me (it’s controlled by Netlify), so it wouldn’t really be a great method of authentication, and Tor already provides strong encryption and its own authentication through the URL of the onion service.

Of course, this means you need a secure way to get the URLs, so here’s a PGP-signed pair of URLs:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

As of 2017-09-23, the website at iain.learmonth.me is mirrored by me at
the following onion addresses:

w6d6vblb6vhuqxt6.onion
tvin5bvfwew3ldttg5t6ynlif4t53y3mbmb7sgbyud7h5q6gblrpsnyd.onion

This declaration was written and signed for publication in my blog.
-----BEGIN PGP SIGNATURE-----

iQEzBAEBCgAdFiEEfGEElJRPyB2mSFaW0hedW4oe0BEFAlnG1FMACgkQ0hedW4oe
0BGtTwgAp9PK6x1X9lnPLaeOOEALxn2BkDK5Q6PBt7OfnTh+f53oRrrxf0fmfNMH
Qz/IDY+tULX3TZYbjDsuu+aDpk6YIdOnOzFpIYW9Qhm6jAsX4RDfn1cZoHg1IeM7
bCvrYHA5u753U3Mm+CsLbGihpYZE/FBdc/nE5S6LxYH83QZWLIW19EPeiBpBp3Hu
VB6hUrDz3XU23dXn2U5/7faK7GKbC6TrBG/Z6dUtaXB62xgDIrPEMorwfsAZnWv4
3mAEsYJv9rnIyLbWamXDas8fJG04DOT+2C1NYmZ5CNJ4C7PKZuIYkaoVAp+pzLGJ
6BEBYaRvYIjd5g8xdVC3kmje6IM9cg==
=lUvh
-----END PGP SIGNATURE-----

Note: For the next generation onion service, I do currently have some logging enabled in the tor daemon as I’m running this service as an experiment to uncover any bugs that appear. There is no logging beyond the default for the version 2 hidden service’s tor daemon.

Another note: current stable releases of Tor Browser do not support next generation onion services; you’ll have to grab an experimental build to try them out.

[screenshot: Viewing my next generation onion service in Tor Browser]
