Planet Fellowship (en)

Monday, 21 August 2017

Technoshamanism in the Dome of Visions, Aarhus – review

agger's Free Software blog | 07:35, Monday, 21 August 2017

by Fabiane M. Borges



On 12 August 2017 we held the second meeting of technoshamanism in Aarhus, Denmark; here is the open call: “Technoshamanism in Aarhus, Rethinking ancestrality and technology” (2017).

The first one took place in November 2014 under the name “technomagic and technoshamanism meeting in Aarhus”. It was held at Open Space, a hackerspace in Aarhus.

The second one was held at the Dome of Visions, an ecological geodesic dome located in the Port of Aarhus area and supported by a group of eco-activists. The meeting was organized by Carsten Agger with Ariane Stolfi, Fabiane M. Borges and Raisa Inocêncio, and with the participation of Amalia Fonfara, Rune Hjarnø Rasmussen, Winnie Soon and Sebastian Tranekær. Here you can see the complete programme.

First we held a radio discussion, then performance/ritual presentations, and in the end a jam session of voice and noise with analog and digital instruments, smoking and a clay bath.

AUDIO (by Finetanks): https://archive.org/details/II-tcnxmnsm-aarhus

PHOTOS (by Dome of Visions):

VIDEOS (by tcnxmsnm):

Part 1

Part 2

 

“The venue was really beautiful and well-equipped, its staff was helpful
and people in the audience were friendly and interested. Everything went
completely smoothly and according to plan, and the final ritual was
wonderful with its combination of Arab flute, drumming, noise and visual
performance. All in all a wonderful event.” (Carsten Agger)

“I think it was really instructive and incredibly cool to be with people who have so much knowledge and passion about the subjects they are dealing with. Communication seems to be the focal point, and there was a great willingness to let people express their minds.” (Sebastian Tranekær)

“The meeting was very diverse: the afternoon had speeches and discussion of topics linked to the technoshamanism network, such as self-organization and decolonization of thought; then we discussed technology and cyborg futures, and at the end we talked about noise and feminism. The ritual was open to the participation of other people, and it was very curious to see the engagement – it was a rite of rock!!” (Raisa Inocêncio)

It was so nice to see Aarhus again; this Dome of Visions is a really special place, thank you to all of you!! We did just one day of meeting and could not listen to everybody, but I am sure it is just the beginning!!! I agree with Raisa, it was a rite of rock-noise. (Fabiane M. Borges)

Saturday, 19 August 2017

Technoethical T400s review

FSFE Fellowship Vienna » English | 16:54, Saturday, 19 August 2017

T400s review

This is just to share my experience. (I am in no way affiliated with Technoethical.)

My background

I have been a satisfied Debian user since I moved away from Windows in 2008. Back then I thought I could trick the market by ordering one of the very few systems that didn’t come pre-installed with proprietary software. Therefore I went for a rather cheap Acer Extensa 5220 that came with Linpus Linux. Unfortunately it didn’t even have a GUI, and I was totally new to GNU/Linux. So the first thing I did was to install Debian, because I value the concept of this community-driven project. I never regretted it. But the laptop had the worst possible wireless card built in; it never really worked with free software.

In the meantime I have learned a lot and have started to help others switch to free software. In my experience it is rather daunting to check new hardware for compatibility, and even if you manage to avoid all possible issues, you end up with a system that you cannot fully trust because of the BIOS and the built-in hardware (Intel ME, for example).

The great laptop

Therefore I am very excited that you can nowadays order hardware that others have already checked for best compatibility. Since my old laptop had become very unreliable recently, I wanted to do better this time, so I went for the Technoethical T400s, which comes pre-installed with Trisquel.

I am very pleased with the excellent customer care and the quality of the laptop itself. I was especially surprised by how lightweight and slim this not-so-recent device is.

When the ThinkPad T400s was first released in 2009, it was reviewed as an excellent, well-built but rather expensive system at about 2000 Euros. The mediocre screen was considered its weakest point. The Technoethical team put in a brand new screen which has perfectly neutral colours, very good contrast and good viewing angles. I got 8 GB RAM (the maximum possible), a 128 GB SSD (instead of 64 GB) and the stronger dual-core SP9600 CPU at 2.53 GHz (instead of the SP9400 at 2.40 GHz). In addition I received a caddy adapter for replacing the CD/DVD drive with another hard disk. And all this for less than 900 Euros.

This is the most recent laptop among the very few devices worldwide that come with Libreboot and the FSF RYF label out of the box. The wireless works flawlessly right away with totally free software. This system fulfills everything I need from a PC as a graphic designer: image editing, desktop publishing, multimedia and even light 3D gaming. Needless to say, common office tasks such as emailing and web browsing work flawlessly too. Only few people actually need more powerful machines to get their work done properly.

Even the webcam works out of the box without any issues, and the laptop comes back from its idle state reliably too. I didn’t test the fingerprint reader or Bluetooth.

The battery is a little weak

The only downside for power users on the go might be the limited battery life of about two hours with wireless enabled. It is possible to get a new battery, which might extend this to about three hours, but because the battery is positioned at the bottom front, you can’t use a bigger one. (The only sensible option would be a docking station, but I was never fond of those bulky things that crowd my working space even when the laptop isn’t on the desk.)

Summary

All in all, this is a great device that just works with entirely free software. I thank the Technoethical team for offering this fantastic service, and I can only recommend buying one of these T400s laptops from Technoethical.

Wednesday, 16 August 2017

GSoC Week 11.5: Success!

vanitasvitae's blog » englisch | 14:49, Wednesday, 16 August 2017

Newsflash: My Jingle File Transfer is compatible with Gajim!

The Gajim developers recently fixed a bug in their Jingle Socks5 transport implementation and now I can send and receive files without any problems between Gajim and my test client. Funny side note: The bug they fixed was very familiar to me *cough* :P

Happy Hacking!

Where does our money go?

free software - Bits of Freedom | 08:51, Wednesday, 16 August 2017

Where does our money go?

Each year, the FSFE spends close to half a million Euro raising awareness of and working to support the ecosystem around free software. Most of our permanent funds come from our supporters -- the people who contribute financially to our work each month or year and continue to do so from month to month, and year to year. They are the ones who support our ability to plan for the long term and make long term commitments towards supporting free software.

Thank you!

You might be interested to know a bit more about how our finances are structured and where the money goes. Some of this, especially around the specific activities we have engaged in, you can also read about in our regular yearly reports, like this one. However, here are some further details about what this looks like on the inside, how this differs from what you see in our public reports, and why.

Our budget process and results

Each year, around October, I start putting together a draft budget for the following year, taking input from the people who are directly involved in deciding how we spend our funds: our office staff, president, our executive council, and (from 2017) also our coordinators' team.

As more than half of our budget (about 55%) is employee costs, we are not thinking about how to divide half a million Euro, but about how to divide about 200k. And in reality, it's not even about dividing the budget: the budget is driven by our activities and the needs they have for the next fiscal year. No area should get more than it needs, but each area should have a reasonable budget to be able to carry out its work.

If you are interested in our employee costs, our budgeted costs for 2017 are 273k. You do not see this in our public reports, and this is one area where they differ. When we calculate the results at the end of each year, we collect the time reports from each staff member and divide the total cost for that staff member according to the focus areas on which they have worked.

Our office manager, for example, works almost exclusively on administration and so the cost for her time gets included under the heading for basic infrastructure costs. Our president takes part in this too, but he does much more on public awareness and policy, and so the cost for his time gets split over those areas, according to what he has reported time on.

Basic infrastructure costs

To stay with our basic infrastructure costs: this also includes costs for our office rent in Berlin, staff meetings, our General Assembly of members, lawyers and legal fees, bank and similar fees, fundraising and donor management, and technical infrastructure. The total budget for this in 2017 has been 64k, and the last couple of years' results have looked about the same (55k in 2014, 63k in 2015, 57k in 2016).

Community support

The next budget is community support, which was a new budget area in 2016, when it included the costs for our FSFE summit and an experiment to cover some costs for a local coordinator in one country. We subsequently decided not to continue with that experiment, but we kept the budget in 2017 for local activities and a potential volunteer meeting. We set the total at 11k, and I have recently delegated the part for local activities to the coordinators' team.

Public awareness

Our work on public awareness, which comes next, has a budget of 35k, most of which is event participation (conferences, booths and talks at a total of 18k). The budget includes costs for FOSDEM, the Chaos Communication Congress, and many other events. We also have costs for information material (flyers and similar material at a total of 8k), technical support for our web pages (at 6k), and a smaller budget for public awareness campaigns (like our I Love Free Software campaign and similar, at 3k).

Our budget for 2016 for public awareness was similar, but as you can see on our public pages, the spending for public awareness was 142k that year. The difference between the budget and the spending is the personnel costs, which get included in the published results.

Legal work

In our legal work, aside from a limited travel budget, the only expense we have is the annual legal and licensing workshop. The budget for this in 2016 was 40k; compare that with the spending of 117k, and you again see the personnel costs accounted for in the public report.

Policy work

And so we then come to our policy work, where I feel we need to elaborate a bit on what has changed in 2017. Our budget for 2016 was 4k for policy work (most of the work on policy is staff time). You will see that when we publish the results for 2017, the costs for policy will have shot up remarkably. We have increased the budget to 29k for 2017 to be able to invest in our Public Money - Public Code campaign, which we hope will be a major driver for our work in 2018.

Merchandise

The last and least interesting budget item, in some ways, is our costs for promotional merchandise (t-shirts and so on), as well as related shipping and packaging costs. We have a budget of 23k in total for this in 2017.

Incoming saldo, and what it means for us

By November, we have the results for the current fiscal year up until Q3, so we use that to project the spending for the entire fiscal year to know what we have in "incoming saldo" for the next year.

The incoming saldo is important to us because it is one of the metrics we use internally to get a feeling for the relative health of our finances. We use this in two ways. First, we calculate the income which we know with some certainty we will have in the next fiscal year (like our supporter contributions, and donations which have already been agreed to). This known income, together with our incoming saldo, is what we have to work with for the next fiscal year.

If our budget is larger than what we know we will have, we get a "funding gap", and it becomes the job of primarily the president and executive director to find donors or other ways to close this gap over the year.

The other way in which we use this incoming saldo as a metric is that we calculate how much of our budget for the year is covered by our incoming saldo. A value of 100% or more here means we would be able to survive the full budget year even without getting any other contribution. We have gradually increased this over the years and are now at 54% for the fiscal year of 2017 (for illustration, a saldo of about 270k against a budget of about 500k gives 54% coverage).

Once all the numbers are in place, we send this first as a draft to our members to look at, and then based on their feedback, we finalise the budget and send a final version to the members.

Expenses for events & outreach

Some of the expenses incurred over the year, like bank fees, rent, and so on, get booked directly to the appropriate account. For some budgets, notably travel and event costs, we have an expense request system. The person who would like to incur an expense makes a request, which gets automatically sent to the budget owner (typically me, the president or our office manager), who then decides on it.

I will not go through all the possible expenses, but since I am responsible for the event budget (2513 in our accounts), I thought to give an overview of the requests I've approved on this budget for 2017. These are the approved numbers only though, not the actual expense. In many cases, the expense has been less.

  • 2017-01-31: 1900 EUR for booth at ShaCamp 2017
  • 2017-02-21: 309 EUR for booth at Chemnitzer Linuxtag
  • 2017-02-22: 600 EUR for our president to give a keynote at OSCAL17
  • 2017-05-04: 750 EUR for our legal coordinator to give a talk at OSCAL17
  • 2017-05-29: 200 EUR for our president to give a keynote at the openSUSE Conference
  • 2017-05-29: 660 EUR for booth at the Open Technology Fair in Madrid
  • 2017-06-14: 250 EUR for booth and travel at OpenAg Zurich
  • 2017-06-22: 350 EUR for booth at RMLL
  • 2017-07-31: 60 EUR to design a new booth rollup
  • 2017-08-09: 600 EUR for our president, legal coordinator and one intern to participate at CopyCamp 2017
  • 2017-08-16: 1800 EUR for a booth/assembly at Chaos Communication Congress 2017

There was an experiment earlier to leave part of the decision making for this budget to a group of coordinators, but the way it was done didn't work out in practice. If the delegation of the local activity budget works out well, I believe it would also be time to delegate the events budget to a team and I'll be thinking about this as we get into the budget for 2018.

Tuesday, 15 August 2017

Nagios with PNP4Nagios on CentOS 6.x

Evaggelos Balaskas - System Engineer | 18:18, Tuesday, 15 August 2017

nagios_logo.png

In many companies, nagios is the de-facto monitoring tool. Even with modern alternative solutions available, this open-source project still has a large number of installations in place. This guide is based on a “clean/fresh” CentOS 6.9 virtual machine.

Epel

An official nagios repository exists at this address: https://repo.nagios.com/
I prefer to install nagios via the EPEL repository instead:

# yum -y install http://fedora-mirror01.rbc.ru/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

# yum info nagios | grep Version
Version     : 4.3.2

# yum -y install nagios

Selinux

Every online manual suggests disabling SELinux for nagios. There is a reason for that! But I will try my best to provide info on how to keep SELinux enforcing. To write our own nagios SELinux policies the easy way, we need one more package:

# yum -y install policycoreutils-python

Starting nagios:

# /etc/init.d/nagios restart

will show us some initial errors in the SELinux audit log file, /var/log/audit/audit.log

Filtering the results:

# egrep denied /var/log/audit/audit.log | audit2allow

will display something like this:

#============= nagios_t ==============
allow nagios_t initrc_tmp_t:file write;
allow nagios_t self:capability chown;

To create a policy file based on your errors:

# egrep denied /var/log/audit/audit.log | audit2allow -a -M nagios_t

and to enable it:

# semodule -i nagios_t.pp

BE AWARE: this is not the only problem with SELinux; I will provide more details in a few moments.
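
To verify that the new module is actually loaded, a quick sanity check (using the module name from above) is:

# semodule -l | grep nagios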

Nagios

Now we are ready to start the nagios daemon:

# /etc/init.d/nagios restart

filtering the processes of our system:

# ps -e fuwww | egrep na[g]ios

nagios    2149  0.0  0.1  18528  1720 ?        Ss   19:37   0:00 /usr/sbin/nagios -d /etc/nagios/nagios.cfg
nagios    2151  0.0  0.0      0     0 ?        Z    19:37   0:00  _ [nagios] <defunct>
nagios    2152  0.0  0.0      0     0 ?        Z    19:37   0:00  _ [nagios] <defunct>
nagios    2153  0.0  0.0      0     0 ?        Z    19:37   0:00  _ [nagios] <defunct>
nagios    2154  0.0  0.0      0     0 ?        Z    19:37   0:00  _ [nagios] <defunct>
nagios    2155  0.0  0.0  18076   712 ?        S    19:37   0:00  _ /usr/sbin/nagios -d /etc/nagios/nagios.cfg

super!

Apache

Now it is time to start our web server apache:

# /etc/init.d/httpd restart

Starting httpd: httpd: apr_sockaddr_info_get() failed
httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName

This is a common error, and means that we need to define a ServerName in our apache configuration.

First, we give a name to our host in the hosts file:

# vim /etc/hosts

for this guide I'll go with centos69, but you can edit that according to your needs:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 centos69

then we need to edit the default apache configuration file:

# vim /etc/httpd/conf/httpd.conf

#ServerName www.example.com:80
ServerName centos69

and restart the process:

# /etc/init.d/httpd restart

Stopping httpd:      [  OK  ]
Starting httpd:      [  OK  ]

We can see from the netstat command that it is running:

# netstat -ntlp | grep 80

tcp        0      0 :::80                       :::*                        LISTEN      2729/httpd      

Firewall

It is time to fix our firewall and open the default http port, so that we can view nagios from our browser.
That means we need to fix our iptables!

# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT

this is what we need. For a more permanent solution, we need to edit the default iptables configuration file:

# vim /etc/sysconfig/iptables

and add the below entry on INPUT chain section:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
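
Alternatively, on CentOS 6 the init script can persist the currently active rules for you; it rewrites /etc/sysconfig/iptables with the running rule set:

# service iptables save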

Web Browser

We are ready to fire up our web browser and type the address of our nagios server.
Mine is on a local machine with the IP 192.168.122.96, so:

http://192.168.122.96/nagios/
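
If the page does not come up, a quick test from a shell helps separate firewall problems from apache problems. A 401 Unauthorized response is expected and fine here, since nagios asks for credentials:

# curl -I http://192.168.122.96/nagios/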

User Authentication

The default user authentication credentials are:

nagiosadmin // nagiosadmin

but we can change them!

From our command line, we type something similar:

# htpasswd -sb /etc/nagios/passwd nagiosadmin e4j9gDkk6LXncCdDg

so that htpasswd will update the default nagiosadmin entry in /etc/nagios/passwd with something else, preferably a random and hard-to-guess password.

ATTENTION: e4j9gDkk6LXncCdDg is just that, a random password that I created for this document only. Create your own and don't tell anyone!
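
If you need a quick way to generate such a random password, this is just one option among many:

# openssl rand -base64 12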

Selinux, Part Two

At this moment, if you are tailing the SELinux audit file, you will see some more error messages.

Below you will see my nagios_t SELinux policy file with all the things that are needed for nagios to run properly - at least for the moment!

module nagios_t 1.0;

require {
        type nagios_t;
        type initrc_tmp_t;
        type nagios_spool_t;
        type nagios_system_plugin_t;
        type nagios_exec_t;
        type httpd_nagios_script_t;
        class capability chown;
        class file { write read open execute_no_trans getattr };
}

#============= httpd_nagios_script_t ==============
allow httpd_nagios_script_t nagios_spool_t:file { open read getattr };

#============= nagios_t ==============
allow nagios_t initrc_tmp_t:file write;
allow nagios_t nagios_exec_t:file execute_no_trans;
allow nagios_t self:capability chown;

#============= nagios_system_plugin_t ==============
allow nagios_system_plugin_t nagios_exec_t:file getattr;

Edit your nagios_t.te file accordingly and then build your selinux policy:

# make -f /usr/share/selinux/devel/Makefile

You are ready to update the previous nagios SELinux policy:

# semodule -i nagios_t.pp

Selinux - Nagios package

So, there is an RPM package with the name nagios-selinux, at version 4.3.2. You can install it, but it does not resolve all the SELinux errors in the audit file. I think my way is better, because you can understand some things better and have more flexibility in defining your SELinux policy.

Nagios Plugins

Nagios is the core process, the daemon. We also need the nagios plugins - the checks!
You can do something like this:

# yum install nagios-plugins-all.x86_64

but I don't recommend it.

These are the defaults:

nagios-plugins-load-2.2.1-4git.el6.x86_64
nagios-plugins-ping-2.2.1-4git.el6.x86_64
nagios-plugins-disk-2.2.1-4git.el6.x86_64
nagios-plugins-procs-2.2.1-4git.el6.x86_64
nagios-plugins-users-2.2.1-4git.el6.x86_64
nagios-plugins-http-2.2.1-4git.el6.x86_64
nagios-plugins-swap-2.2.1-4git.el6.x86_64
nagios-plugins-ssh-2.2.1-4git.el6.x86_64

# yum -y install nagios-plugins-load nagios-plugins-ping nagios-plugins-disk nagios-plugins-procs nagios-plugins-users nagios-plugins-http nagios-plugins-swap nagios-plugins-ssh

and if everything is going as planned, you will see something like this:

nagios_checks.jpg
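
To double-check from a shell which plugin binaries actually got installed, you can list the plugin directory (the path below is the usual CentOS 6 x86_64 location; adjust it to your system):

# ls /usr/lib64/nagios/plugins/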

PNP4Nagios

It is time to add pnp4nagios, a simple graphing tool that reads the nagios performance data and represents it as graphs.

# yum info pnp4nagios | grep Version
Version     : 0.6.22

# yum -y install pnp4nagios

We must not forget to restart our web server:

# /etc/init.d/httpd restart

Bulk Mode with NPCD

I’ve spent way too much time trying to understand why the default synchronous mode does not work properly with nagios v4.x and pnp4nagios v0.6.x.
In the end, this is what works - so try not to re-invent the wheel, as I tried to do and lost so many hours.

Performance Data

We need to tell nagios to gather performance data from its checks:

# vim +/process_performance_data /etc/nagios/nagios.cfg

process_performance_data=1

We also need to tell nagios what to do with this data:

nagios.cfg

# *** the template definition differs from the one in the original nagios.cfg
#
service_perfdata_file=/var/log/pnp4nagios/service-perfdata
service_perfdata_file_template=DATATYPE::SERVICEPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tSERVICEDESC::$SERVICEDESC$\tSERVICEPERFDATA::$SERVICEPERFDATA$\tSERVICECHECKCOMMAND::$SERVICECHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$\tSERVICESTATE::$SERVICESTATE$\tSERVICESTATETYPE::$SERVICESTATETYPE$
service_perfdata_file_mode=a
service_perfdata_file_processing_interval=15
service_perfdata_file_processing_command=process-service-perfdata-file

# *** the template definition differs from the one in the original nagios.cfg
#
host_perfdata_file=/var/log/pnp4nagios/host-perfdata
host_perfdata_file_template=DATATYPE::HOSTPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tHOSTPERFDATA::$HOSTPERFDATA$\tHOSTCHECKCOMMAND::$HOSTCHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$
host_perfdata_file_mode=a
host_perfdata_file_processing_interval=15
host_perfdata_file_processing_command=process-host-perfdata-file

Commands

In the above configuration we introduced two new commands:

service_perfdata_file_processing_command  &
host_perfdata_file_processing_command

We need to define them in the /etc/nagios/objects/commands.cfg file :

#
# Bulk with NPCD mode
#
define command {
       command_name    process-service-perfdata-file
       command_line    /bin/mv /var/log/pnp4nagios/service-perfdata /var/spool/pnp4nagios/service-perfdata.$TIMET$
}

define command {
       command_name    process-host-perfdata-file
       command_line    /bin/mv /var/log/pnp4nagios/host-perfdata /var/spool/pnp4nagios/host-perfdata.$TIMET$
}

If everything has gone right, you will be able to see on a nagios check something like this:

nagios_perf.png

Verify

Verify your pnp4nagios setup:

# wget -c http://verify.pnp4nagios.org/verify_pnp_config

# perl verify_pnp_config -m bulk+npcd -c /etc/nagios/nagios.cfg -p /etc/pnp4nagios/ 

NPCD

The NPCD daemon (Nagios Performance C Daemon) is the process that will translate the gathered performance data into graphs, so let's start it:

# /etc/init.d/npcd restart
Stopping npcd:                                             [FAILED]
Starting npcd:                                             [  OK  ]

You should see some warnings but not any critical errors.

Templates

Two new template definitions should be created, one for hosts and one for services:

/etc/nagios/objects/templates.cfg

define host {
   name       host-pnp
   action_url /pnp4nagios/index.php/graph?host=$HOSTNAME$&srv=_HOST_' class='tips' rel='/pnp4nagios/index.php/popup?host=$HOSTNAME$&srv=_HOST_
   register   0
}

define service {
   name       srv-pnp
   action_url /pnp4nagios/graph?host=$HOSTNAME$&srv=$SERVICEDESC$' class='tips' rel='/pnp4nagios/popup?host=$HOSTNAME$&srv=$SERVICEDESC$
   register   0
}

Host Definition

Now we need to apply the host-pnp template to our system:

so this configuration in /etc/nagios/objects/localhost.cfg:

define host{
        use                     linux-server            ; Name of host template to use
                                                        ; This host definition will inherit all variables that are defined
                                                        ; in (or inherited by) the linux-server host template definition.
        host_name               localhost
        alias                   localhost
        address                 127.0.0.1
        }

becomes:

define host{
        use                     linux-server,host-pnp            ; Name of host template to use
                                                        ; This host definition will inherit all variables that are defined
                                                        ; in (or inherited by) the linux-server host template definition.
        host_name               localhost
        alias                   localhost
        address                 127.0.0.1
        }

Service Definition

And we finally must append the pnp4nagios srv-pnp service template to our services:

define service{
        use                             local-service,srv-pnp         ; Name of service template to use
        host_name                       localhost

Graphs

We should be able to see graphs like these:

nagios_ping.png

Happy Measurements!

appendix

These are some extra notes on the above article that you need to keep in mind:

Services

# chkconfig httpd on
# chkconfig iptables on
# chkconfig nagios on
# chkconfig npcd on 

PHP

If you are not running the default PHP version on your system, it is possible to get this error message:

Non-static method nagios_Core::SummaryLink()

There is a simple solution for that: you need to modify the index file to exclude the deprecated PHP error messages:

# vim +/^error_reporting /usr/share/nagios/html/pnp4nagios/index.php   

// error_reporting(E_ALL & ~E_STRICT);
error_reporting(E_ALL & ~E_STRICT & ~E_DEPRECATED);

Monday, 14 August 2017

GSoC Week 11: Practical Use

vanitasvitae's blog » englisch | 14:43, Monday, 14 August 2017

You know what makes code of a software library even better? Client code that makes it do stuff! I present to you xmpp_sync!

Xmpp_sync is a small command line tool which allows you to sync files from one device to one or more other devices via XMPP. It works a little bit like you might know it from e.g. ownCloud or Nextcloud: just drop files into one folder and they automagically appear on your other devices. At the moment it works only unidirectionally, so files get synchronized in one direction, but not in the other.

The program has two modes: master mode and slave mode. In general, a client started in master mode will send files to all clients started in slave mode. So let's say we want to mirror contents from one directory to another. We start the client on our master machine and give it the path to the directory we want to monitor. On the other machines we start the client in slave mode and then add them to the master client. Whenever we now drop a file into the directory, it will automatically be sent to all registered slaves via Jingle File Transfer. Files also get sent when they are modified by the user. I registered a FileWatcher in order to get notified of such events; for this purpose I got in touch with Java NIO again.
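
To illustrate the mechanism, here is a minimal sketch of a directory watch with Java NIO's WatchService; this is not the actual xmpp_sync code, and the watched path and the event handling are placeholders:

import java.nio.file.*;

public class DirWatcher {
    public static void main(String[] args) throws Exception {
        // Hypothetical directory; xmpp_sync takes the monitored path as an argument.
        Path dir = Paths.get("/tmp/sync");
        WatchService watcher = FileSystems.getDefault().newWatchService();
        // Ask to be notified when entries are created or modified in the directory.
        dir.register(watcher,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_MODIFY);
        while (true) {
            WatchKey key = watcher.take(); // blocks until events arrive
            for (WatchEvent<?> event : key.pollEvents()) {
                if (event.kind() == StandardWatchEventKinds.OVERFLOW) {
                    continue; // events may have been lost; nothing to resolve here
                }
                Path changed = dir.resolve((Path) event.context());
                // This is where a master would send the file to all registered slaves.
                System.out.println(event.kind() + ": " + changed);
            }
            if (!key.reset()) {
                break; // the directory is no longer accessible
            }
        }
    }
}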

Currently the transmission is made unencrypted (as described in XEP-0234), but I plan to also utilize my Jingle Encrypted Transports (JET) code/spec to send the files OMEMO encrypted in the future. My plan for the long run is to further improve JET, so that it might get implemented by other clients.

Besides that I found the configuration error in my ejabberd configuration which prevented my Socks5 proxy from working. The server was listening at 127.0.0.1 by default, so the port was not reachable from the outside world. Now I can finally test on my own server :D

I also tested my code against Gajim's implementation and found some more mistakes I had made, which are now fixed. The Jingle InBandBytestream Transport is sort of working, but there are some smaller things I still need to change.

That's all for the week.

Happy Hacking :)

Thursday, 10 August 2017

One Step Forward, Two Steps Back

Paul Boddie's Free Software-related blog » English | 15:00, Thursday, 10 August 2017

I have written about the state of “The Free Software Desktop” before, and how change apparently for change’s sake has made it difficult for those of us with a technology background to provide a stable and reliable computing experience to others who are less technically inclined. It surprises me slightly that I have not written about this topic more often, but the pattern of activity usually goes something like this:

  1. I am asked to upgrade or troubleshoot a computer running a software distribution I have indicated a willingness to support.
  2. I investigate confusing behaviour, offer advice on how to perform certain tasks using the available programs, perhaps install or upgrade things.
  3. I get exasperated by how baroque or unintuitive the experience must be to anyone not “living the dream” of developing software for one of the Free Software desktop environments.
  4. Despite getting very annoyed by the lack of apparent usability of the software, promising myself that I should at least mention how frustrating and unintuitive it is, I then return home and leave things for a few days.
  5. I end up calming down sufficiently about the matter not to be so bothered about saying something about it after all.

But it would appear that this doesn’t really serve my interests that well because the situation apparently gets no better as time progresses. Back at the end of 2013, it took some opining from a community management “expert” to provoke my own commentary. Now, with recent experience of upgrading a system between Kubuntu long-term support releases, I feel I should commit some remarks to writing just to communicate my frustration while the experience is still fresh in my memory.

Back in 2013, when I last wrote something on the topic, I suppose I was having to manage the transition of Kubuntu from KDE 3 to KDE 4 on another person’s computer, perhaps not having to encounter this yet on my own Debian system. This transition required me to confront the arguably dubious user interface design decisions made for KDE 4. I had to deal with things like the way the desktop background no longer behaved as it had done on most systems for many years, requiring things like the “folder view” widget to show desktop icons. Disappointingly, my most recent experience involved revisiting and replaying some of these annoyances.

Actual Users

It is worth stepping back for a moment and considering how people not “living the dream” actually use computers. Although a desktop cluttered with icons might be regarded as the product of an untidy or disorganised user, like the corporate user who doesn’t understand filesystems and folders and who saves everything to the desktop just to get through the working day, the ability to put arbitrary icons on the desktop background serves as a convenient tool to present a range of important tasks and operations to less confident and less technically-focused users.

Let us consider the perspective of such users for a moment. They may not be using the computer to fill their free time, hang out online, or whatever the kids do these days. Instead, they may have a set of specific activities that require the use of the computer: communicate via e-mail, manage their photographs, read and prepare documents, interact with businesses and organisations.

This may seem quaint to members of the “digital native” generation for whom the interaction experience is presumably a blur of cloud service interactions and social media posts. But unlike the “digital natives” who, if you read the inevitably laughable articles about how today’s children are just wizards at technology and that kind of thing, probably just want to show their peers how great they are, there are also people who actually need to get productive work done.

So, might it not be nice to show a few essential programs and actions as desktop icons to direct the average user? I even had this set up having figured out the “folder view” widget which, as if embarrassed at not having shown up for work in the first place, actually shows the contents of the “Desktop” directory when coaxed into appearing. Problem solved? Well, not forever. (Although there is a slim chance that the problem will solve itself in future.)

The Upgrade

So, Kubuntu had been moaning for a while about a new upgrade being available. And being the modern way with KDE and/or Ubuntu, the user is confronted with parade after parade of notifications about all things important, trivial, and everything in between. Admittedly, it was getting to the point where support might be ending for the distribution version concerned, and so the decision was taken to upgrade. In principle this should improve the situation: software should be better supported, more secure, and so on. Sadly, with Ubuntu being a distribution that particularly likes to rearrange the furniture on a continuous basis, it just created more “busy work” for no good reason.

To be fair, the upgrade script did actually succeed. I remember trying something similar in the distant past and it just failing without any obvious remedy. This time, there were some messages nagging about package configuration changes about which I wasn’t likely to have any opinion or useful input. And some lengthy advice about reconfiguring the PostgreSQL server, popping up in some kind of packaging notification, seemed redundant given what the script did to the packages afterwards. I accept that it can be pretty complicated to orchestrate this kind of thing, though.

It was only afterwards that the problems began to surface, beginning with the login manager. Since we are describing an Ubuntu derivative, the default login manager was the Unity-styled one which plays some drum beats when it starts up. But after the upgrade, the login manager was obsessed with connecting to the wireless network and wouldn’t be denied the chance to do so. But it also wouldn’t connect, either, even if given the correct password. So, some work needed to be done to install a different login manager and to remove the now-malfunctioning software.

An Empty Desk

Although changing the login manager also changes the appearance of the software and thus the experience of using it, providing an unnecessary distraction from the normal use of the machine and requiring unnecessary familiarisation with the result of the upgrade, at least it solved a problem of functionality that had “gone rogue”. Meanwhile, the matter of configuring the desktop experience has perhaps not yet been completely and satisfactorily resolved.

When the machine in question was purchased, it was running stock Ubuntu. At some point, perhaps sooner rather than later, the Unity desktop became the favoured environment getting the attention of the Ubuntu developers, and finding that it was rather ill-suited for users familiar with more traditional desktop paradigms, a switch was made to the KDE environment instead. This is where a degree of peace ended up being made with the annoyingly disruptive changes introduced by KDE 4 and its Plasma environment.

One problem that KDE always seems to have had is that of respecting user preferences and customisations across upgrades. On this occasion, with KDE Plasma 5 now being offered, no exception was made: logging in yielded no “folder view” widgets with all those desktop icons; panels were bare; the desktop background was some stock unfathomable geometric form with blurry edges. I seem to remember someone associated with KDE – maybe even the aforementioned “expert” – saying how great it had been to blow away his preferences and experience the awesomeness of the raw experience, or something. Well, it really isn’t so awesome if you are a real user.

As noted above, persuading the “folder view” widgets to return was easy enough once I had managed to open the fancy-but-sluggish widget browser. This gave me a widget showing the old icons that was too small to show them all at once. So, how do you resize it? Since I don’t use such features myself, I had forgotten that it was previously done by pointing at the widget somehow. But because there wasn’t much in the way of interactive help, I had to search the Web for clues.

This yielded the details of how to resize and move a folder view widget. That’s right: why not emulate the impoverished tablet/phone interaction paradigm and introduce a dubious “long click” gesture instead? Clearly because a “mouseover” gesture is non-existent in the tablet/phone universe, it must be abolished elsewhere. What next? Support only one mouse button because that is how the Mac has always done it? And, given that context menus seem to be available on plenty of other things, it is baffling that one isn’t offered here.

Restoring the desktop icons was easy enough, but showing them all was more awkward because the techniques involved are like stepping back to an earlier era where only a single control is available for resizing, where combinations of moves and resizes are required to get the widget in the right place and to be the right size. And then we assume that the icons still do what they had done before which, despite the same programs being available, was not the case: programs didn’t start but also didn’t give any indication why they didn’t start, this being familiar to just about anyone who has used a desktop environment in the last twenty years. Maybe there is a log somewhere with all the errors in it. Who knows? Why is there never any way of troubleshooting this?

One behaviour that I had set up earlier was single-click activation of icons, where programs could be launched with a single click with the mouse. That no longer works, nor is it obvious how to change it. Clearly the usability police have declared the unergonomic double-click action the “winner”. However, some Qt widgets are still happy with single-click navigation. Try explaining such inconsistencies to anyone already having to remember how to distinguish between multiple programs, what each of them does and doesn’t do, and so on.

The Developers Know Best

All of this was frustrating enough, but when trying to find out whether I could launch programs from the desktop or whether such actions had been forbidden by the usability police, I found that when programs were actually launching they took a long time to do so. Firing up a terminal showed the reason for this sluggishness: Tracker and Baloo were wanting to index everything.

Despite having previously switched off KDE’s indexing and searching features and having disabled, maybe even uninstalled, Tracker, the developers and maintainers clearly think that coercion is better than persuasion, that “everyone” wants all their content indexed for “desktop search” or “semantic search” (or whatever they call it now), the modern equivalent of saving everything to the desktop and then rifling through it all afterwards. Since we’re only the stupid users, what would we really have to say about it? So along came Tracker again, primed to waste computing time and storage space, together with another separate solution for doing the same thing, “just in case”, because the different desktop developers cannot work together.

Amongst other frustrations, the process of booting to the login prompt is slower, and so perhaps switching from Upstart to systemd wasn’t such an entirely positive idea after all. Meanwhile, with reduced scrollbar and control affordances, it would seem that the tendency to mimic Microsoft’s usability disasters continues. I also observed spontaneous desktop crashes and consequently turned off all the fancy visual effects in order to diminish the chances of such crashes recurring in future. (Honestly, most people don’t want Project Looking Glass and similar “demoware” guff: they just want to use their computers.)

Of Entitlement and Sustainable Development

Some people might argue that I am just another “entitled” user who has never contributed anything to these projects and is just whining incorrectly about how bad things are. Well, I do not agree. I enthusiastically gave constructive feedback and filed bugs while I still believed that the developers genuinely wanted to know how they might improve the software. (Admittedly, my enthusiasm had largely faded by the time I had to migrate to KDE 4.) I even wrote software using some of the technologies discussed in this article. I always wanted things to be better and stuck with the software concerned.

And even if I had never done such things, I would, along with other users, still have invested a not inconsiderable amount of effort into familiarising people with the software, encouraging others to use it, and trying to establish it as a sustainable option. As opposed to proprietary software that we neither want to use, nor wish to support, nor are necessarily able to support. Being asked to support some Microsoft product is not only ethically dubious but also frustrating when we end up having to guess our way around the typically confusing and poorly-designed interfaces concerned. And we should definitely resent having to do free technical support for a multi-billion-dollar corporation even if it is to help out other people we know.

I rather feel that the “entitlement” argument comes up when both the results of the development process and the way the development is done are scrutinised. There is this continually perpetuated myth that “open source” can only be done by people when those people have “enough time and interest to work on it”, as if there can be no other motivations or models to sustain the work. This cultivates the idea of the “talented artist” developer lifestyle: that the developers do their amazing thing and that its proliferation serves as some form of recognition of its greatness; that, like art, one should take it or leave it, and that the polite response is to applaud it or to remain silent and not betray a supposed ignorance of what has been achieved.

I do think that the production of Free Software is worthy of respect: after all, I am a developer of Free Software myself and know what has to go into making potentially useful systems. But those producing it should understand that people depend on it, too, and that the respect its users have for the software’s development is just as easily lost as it is earned, indeed perhaps more easily lost. Developers have power over their users, and like anyone in any other position of power, we expect them to behave responsibly. They should also recognise that any legitimate authority they have over their users can only exist when they acknowledge the role of those users in legitimising and validating the software.

In a recent argument about the behaviour of systemd, its principal developer apparently noted that as Free Software, it may be forked and developed as anyone might wish. Although true, this neglects the matter of sustainable software development. If I disagree with the behaviour of some software or of the direction of a software project, and if there is no reasonable way to accommodate this disagreement within the framework of the project, then I must maintain my own fork of that software indefinitely if I am to continue using it.

If others cannot be convinced to participate in this fork, and if other software must be changed to work with the forked code, then I must also maintain forks of other software packages. Suddenly, I might be looking at having to maintain an entire parallel software distribution, all because the developers of one piece of software are too precious to accept other perspectives as being valid and are unwilling to work with others in case it “compromises their vision”, or whatever.

Keeping Up on the Treadmill

Most people feel that they have no choice but to accept the upgrade treadmill, the constant churn of functionality, the shiny new stuff that the developers “living the dream” have convinced their employers or their peers is the best and most efficient way forward. It just isn’t a practical way of living for most users to try and deal with the consequences of this in a technical fashion by trying to do all those other people’s jobs again so that they may be done “properly”. So that “most efficient way” ends up incurring inefficiencies and costs amongst everybody else as they struggle to find new ways of doing the things that just worked before.

How frustrating it is that perhaps the only way to cope might be to stop using the software concerned altogether! And how unfortunate it is that for those who do not value Free Software in its own right or who feel that the protections of Free Software are unaffordable luxuries, it probably means that they go and use proprietary software instead and just find a way of rationalising the decision and its inconvenient consequences as part of being a modern consumer engaging in yet another compromised transaction.

So, unhindered by rants like these and by more constructive feedback, the Free Software desktop seems to continue on its way, seemingly taking two steps backward for every one step forward, testing the tolerance even of its most patient users to disruptive change. I dread having to deal with such things again in a few years’ time or even sooner. Maybe CDE will once again seem like an attractive option and bring us full circle for “Unix on the desktop”, saving many people a lot of unnecessary bother. And then the tortoise really will have beaten the hare.

Tuesday, 08 August 2017

Technoshamanism in Aarhus – rethink ancestrality and technology – PROGRAM

agger's Free Software blog | 11:14, Tuesday, 08 August 2017

FREE RADIO
Moderated by Carsten Agger
14:00–14:45  What is technoshamanism? How does it work and what does it want?
What political issues does it raise? Introduction by Fabiane M. Borges, input from various participants
14:45–15:15  What and how may we learn about Chinese text censorship with machine learning? (Winnie Soon)
15:15–15:45  Decolonization and self-organization (Carsten Agger and Raisa Inocêncio)
15:45–16:30  Immigrating ontologies, recovering ancestralities (Rune Hjarnø Rasmussen and Amalia Fonfara)
16:30–17:30  Sonora Network – Collective Feminism in music and gender study (Ariane Stolfi)
17:30–18:00  BREAK
PERFORMANCE
18:00–19:00  Workshop: Invisible Drum (Amalia Fonfara)
19:00–19:30  Political Bath (Raisa Inocêncio)
19:30–20:00  Open Band (Ariane Stolfi)
20:00–21:00  BREAK, preparations for ritual
21:00–24:00  A collective DIY ritual created by the participants.
For the ritual, please bring musical instruments, clothes, whatever props are adequate for your contribution as well as inspiration and ideas.
EXHIBITION
Throughout the event, the installation Hyperelixx by Samuel Capps will be exhibited and can be perused by visitors.
The event will take place in Dome of Visions, Aarhus on August 12, as announced in the Open Call.
BIOGRAPHIES
Amalia Fonfara: (Greenland, 1985) Artist and shamanic practitioner based in Trondheim. Since 2010, Amalia Fonfara has lived in Norway. She holds a bachelor of fine art (2013) and an international master of fine art (2015) from the Norwegian University of Science and Technology in Trondheim. Fonfara has studied different esoteric practices, including spiritism, shamanism and contemplative healing practices. She is currently doing a one-year study program at the Scandinavian Center of Shamanic Studies in Sweden. In her artistic practice, the perception of reality, imagination and spirituality are closely connected and intertwined. She will present the Invisible Drum project at this event. www.amaliafonfara.com
Ariane Stolfi: Architect, composer, programmer and musician who moves between languages. Doctoral candidate in Sonology (ECA-USP) researching interactive interfaces based on web technologies, she has made installations and performances such as “Hexagrama essa é pra tocar” and “Cromocinética”. She has joined festivals such as Submidialogias, #DisExperimental, Virada Cultural and Dissonantes, maintains the finetanks.com experimental netlabel and collaborates with the Sonora feminist collective.
Carsten Agger: Software developer, activist and writer, active for twenty years in social movements for free software and civil rights and against racism and colonial wars. Trained as a theoretical physicist, he works as a free software developer, contributes to the Baobáxia project and co-organized the LibreOffice Conference 2015. He wrote a book about the Qur’an and is currently studying Norse religion and language for a comparative project. He served two years on the board of the hackerspace Open Space Aarhus and co-organized the II International Festival of Technoshamanism and technoshamanism events in Aarhus and Berlin. www.modspil.dk
Fabiane M. Borges holds a post-PhD in Visual Arts and a PhD in Clinical Psychology from the Pontifícia Universidade Católica in São Paulo (Brazil) and works as a psychologist, artist and essayist. She organizes events related to art, technology and social movements; authored two books, Domínios do Demasiado (Hucitec, 2010) and Breviário de Pornografia Esquizotrans (ExLibres, 2010); and coordinated two books with the media, art and technology network Submidialogia (Ideias Perigozas, 2010, and Peixe Morto, 2011). She is one of the organizers of the I and II International Festival of Technoshamanism – http://technoshamanism.wordpress.com/en Blog: https://catahistorias.wordpress.com/ – e-mail: ca t a d o re s@gm a il. c om
Raisa Inocêncio: Brazilian, born in 1989. She studied philosophy at the Federal University of Rio de Janeiro and visual arts at the Parque Lage School, and is now studying at Toulouse University (France) in the Erasmus Mundus Europhilosophie master's programme. She researches the aesthetic-political practices of the post-porn movement through performance actions, drawing on references such as the artists Annie Sprinkle and Diana Torres, the collective Coyote, and Ju Dorneles, among others.
Rune Hjarnø Rasmussen: Graduated from the University of Copenhagen with an MA in History of Religions and Anthropology, which included two fieldwork periods in Brazil and two in Uganda. He has self-published a book on the role of traditional songs in Capoeira and has collaborated on documentary film work on the role of religion in Ghana (2002). He worked for two years in humanitarian work removing landmines in Angola and the Nuba Mountains in Sudan, and he has worked and published on anti-trafficking. He is currently finishing a PhD on ritual technologies in Afro-Brazilian religion.
Samuel Capps is a British artist whose works, among them the recent show ‘Relics from the De-Crypt’, revolve around themes similar to technoshamanism. He also runs the gallery Gossamer Fog in London. www.samuelcapps.com
Winnie Soon: Winnie Soon is an artist-researcher who resides in Hong Kong and Denmark. Her work approach spans the fields of artistic practice and software studies, examining the materiality of computational processes that underwrite our experiences and realities in digital culture. Winnie’s work has been presented at festivals, conferences and museums throughout the Asia Pacific, Europe and America, including but not limited to Transmediale2015/2017, ISEA2015/2016, ARoS Aarhus Art Museum, Si Shang Art Museum, Pulse Art + Technology Festival, Hong Kong Microwave International Media Arts Festival, FutureEverything Art Exhibition. Currently, she is assistant professor at the Department of Digital Design and Information Studies in Aarhus University. More info: www.siusoon.net

Monday, 07 August 2017

GSoC Week 10: Finding that damn little bug

vanitasvitae's blog » englisch | 22:31, Monday, 07 August 2017

Finally!! I found the bug which I was after for the last week. Now I finally got that little bas****.

The bug happened in my code for the Jingle SOCKS5 Bytestream Transport (XEP-0260). SOCKS5 proxies are used whenever the two endpoints can't reach one another directly due to firewalls etc. In such a case, another entity (eg. the XMPP server) can jump in to act as a proxy between both endpoints. For that reason, the initiator (Alice) first collects available proxies and sends them over to the responder (Bob). The responder does the same and sends its candidates back to the initiator. Both then try to connect to the candidates (in this case proxies) they got sent by their peer. In order for the proxy to know who wants to talk to whom, both include a destination address, which is calculated as SHA1(sid, providerJid, targetJid), where the provider is the party which sent the candidates to the target.
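
To make the destination address concrete, here is a small sketch of the hashing step in Java (the class and method names are made up for this example; the concatenation order follows the description above):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DstAddr {
    // SHA-1 over the concatenation of stream sid, provider JID and target JID,
    // returned as a lowercase hex string.
    static String destinationAddress(String sid, String providerJid, String targetJid)
            throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha1.digest((sid + providerJid + targetJid)
                .getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b)); // two hex digits per byte
        }
        return hex.toString();
    }
}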

The alert reader will know that there are two different destination addresses in the game by now: the first one being SHA1(sid, Alice, Bob) and the second one SHA1(sid, Bob, Alice). The issue is that somewhere in the logs I ended up with 3 different destination addresses. How the hell did that happen? For the answer, let's look at an example stanza:

<iq from='romeo@montague.lit/orchard'
    id='xn28s7gk'
    to='juliet@capulet.lit/balcony'
    type='set'>
  <jingle xmlns='urn:xmpp:jingle:1'
          action='session-initiate'
          initiator='romeo@montague.lit/orchard'
          sid='a73sjjvkla37jfea'>
    <content creator='initiator' name='ex'>
      <description xmlns='urn:xmpp:example'/>
      <transport xmlns='urn:xmpp:jingle:transports:s5b:1'
                 dstaddr='972b7bf47291ca609517f67f86b5081086052dad'
                 mode='tcp'
                 sid='vj3hs98y'>
        <candidate cid='hft54dqy'
                   host='192.168.4.1'
                   jid='romeo@montague.lit/orchard'
                   port='5086'
                   priority='8257636'
                   type='direct'/>
      </transport>
    </content>
  </jingle>
</iq>

Here we have a session initiation with a Jingle SOCKS5 Bytestream transport. The transport consists of one candidate. Now where was my error?

You might have noted that there are two attributes with the name 'sid' in the stanza. The first one is the so-called session id, the id of the Jingle session; this is not of interest in our case. The second one, however, is the stream id. That's the sid that gets crunched in the SHA1 algorithm to produce the destination address.

Well, yeah… In one tiny method used to update my transport with the candidates of the responder, I used session.getSid() instead of transport.getSid()… That was the bug that cost me a week.

Still, it wasn’t all bad. While I searched for the error, I read through the XEPs again and again, discovering some more issues which I discussed with other developers. I also began testing my implementation against Gajim, and I’m happy to tell you that the InBand Bytestream code is already sort of working. Sometimes a few bytes get lost, but we live in times of Big Data, so that’s not too bad, am I right :P ?

In the remaining 3 weeks I plan to stabilize the API some more. Currently you can only receive data into files, but I plan to add another method which gives back a bytestream instead.

Also, I need more tests. Things like that nasty sid bug can be prevented and found using JUnit tests, so I'll definitely stock up on that front.

That's all for now :) Happy Hacking!

Thursday, 03 August 2017

Debian Day in Varese

Elena ``of Valhalla'' | 07:54, Thursday, 03 August 2017

Debian Day in Varese

I'm stuck at home instead of being able to go to DebConf, but that doesn't mean that Debian Day will be left uncelebrated!

Since many of the locals are away for the holidays, we of @Gruppo Linux Como and @LIFO aren't going to organize a full day of celebrations, but at the very least we are meeting for a dinner in Varese, at some restaurant that will be open on that date.

Everybody is welcome: to join us, please add your name (nickname or identifier of any kind, as long as it fits in the box) on dudle.inf.tu-dresden.de/debday before Thursday, August 10th, so that we can get a reservation at the restaurant.

Wednesday, 02 August 2017

Free Software Village Europe at SHA 2017

English Planet – Dreierlei | 12:20, Wednesday, 02 August 2017

Tonight, the FSFE team Netherlands will arrive at SHA2017 and set up a village for the FSFE. SHA-camp is a non-profit hacker camp in the Netherlands, similar to the CCCamps in Germany. For five days the FSFE will offer a public space for and by our members, friends and supporters to discuss, meet, hack and organise. Find an overview of our sessions and other specialties in this blog post, and all details and updates on our dedicated FSFE village page. Let's put the hacking back into politics!

Our curated track:

[Image: FSFE village at CCCamp15]

Free Software Song sessions

Every day at the FSFE village, we will run a Free Software Song sing-along session. In addition, and for the first time, we are starting a project to bring together a choir to perform the Free Software Song. You can read additional details and background about it in my previous blog post.

The ultimate Free Software challenge

More or less any time you can come to our village and try the ultimate Free Software challenge, which will let you dig so deep into the history of Free Software that you might reach its big-bang moment. Be prepared for an inspiring and challenging journey, and bring some friends (or any randomly allocated companionship) to take it on together.

New promotion material and textiles

We will bring our all-time favorites as well as new promotion material to our village. New items are the FSFE logo on a die-cut sticker, a Hacking for Freedom sticker and a Free Your Android sticker. New textiles are an FSFE hoodie in burgundy and bibs.

[Image: CCCamp 2007 Datenklo]

Still hacking anyway!

GSoC Week 9: Bringing it back to life.

vanitasvitae's blog » englisch | 10:43, Wednesday, 02 August 2017

The 9th week of GSoC is here. I'm surprised by how fast time went by, but I guess that happens when you enjoy what you do :)

I'm happy to report that the third iteration of my Jingle code is working again. There are still many bugs and Socks5Transport is still missing, but all in all I'm really happy with how it turned out. Next I'll make the implementation more solid and add more features like transport replacement etc.

Apart from normal Jingle file transfer, I also started working on my JET protocol. JET is short for Jingle Encrypted Transfers, which is my approach to combining Jingle sessions with end-to-end encryption. My focus lies on modularity and easy extensibility. Roughly, JET works as follows:

Let's assume Alice wants to send an encrypted file to Bob. Luckily, Alice and Bob already have a secure OMEMO session. Alice now sends Bob a JET file transfer request, which includes a security element containing an OMEMO key transport message. Bob can decrypt the key transport message with his OMEMO session to retrieve an AES key. That key will be used to encrypt/decrypt the file Alice sends to Bob as soon as the Jingle session negotiation is finished.
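The symmetric part of that flow might look roughly like the sketch below. The cipher mode and key size here are my own assumptions for illustration, and the OMEMO key transport itself is left out; consult the JET specification for the actual parameters:

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class JetSketch {
    public static void main(String[] args) throws Exception {
        // Alice generates a fresh AES key. This is the key that would be
        // wrapped into the OMEMO key transport message (not shown here).
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        // Alice encrypts the file contents before sending them over the
        // negotiated Jingle transport.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher encrypt = Cipher.getInstance("AES/GCM/NoPadding");
        encrypt.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = encrypt.doFinal(
                "file contents".getBytes(StandardCharsets.UTF_8));

        // Bob, having recovered the key from the key transport message,
        // decrypts on his side.
        Cipher decrypt = Cipher.getInstance("AES/GCM/NoPadding");
        decrypt.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] plaintext = decrypt.doFinal(ciphertext);
        System.out.println(new String(plaintext, StandardCharsets.UTF_8));
    }
}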

This protocol should in theory work with any end-to-end encryption method, for example also with OpenPGP. JET is also, in theory, not limited to file transfer, but could be used to secure other types of Jingle sessions as well, e.g. audio/video calls. Since the development is in a very early state, it would be nice to get some feedback from more experienced developers and members of the XMPP community. A rendered version of the JET specification can be found here.

I'm very happy that encrypted file transfer already works in my implementation. I created an integration test for it, which transports the encryption key using OMEMO. Apropos tests: I created a basic JingleTransport test, which tests transport methods. Currently SOCKS5 is still failing, but I'm very close to a solution.

During the week I opened another PR against the XEPs repo, which adds a missing attribute to an XML schema in the Jingle File Transfer XEP.

Monday, 31 July 2017

Big day in poppler-land

TSDgeos' blog | 22:24, Monday, 31 July 2017

Today in Poppler:
* Poppler 0.57 got released
* We agreed to stop supporting openjpeg 1.x at the end of the year
* We agreed to stop supporting Qt 4.x at the end of the year
* We merged the better_object branch

The last one is the really big one, since it introduces a major rework of the Object class, a central component of Poppler. Object is used much like a QVariant, i.e. it can hold various kinds of data and you can pass it around.

Unfortunately, the Object implementation we inherited from xpdf was kind of hard to use, as you basically had to do the memory management by hand: destroying the object was not enough to free the memory, you had to call free() on it.

Thanks to C++11 we now have an implementation with move semantics that greatly simplifies the use of Object and will hopefully make for fewer memory management mistakes.

Let's hope we didn't break anything in the process though :D

Wednesday, 26 July 2017

The Mobile Web

Paul Boddie's Free Software-related blog » English | 10:31, Wednesday, 26 July 2017

I was tempted to reply to a comment on LWN.net’s news article “The end of Flash”, where the following observation was made:

So they create a mobile site with a bit fewer graphics and fewer scripts loading up to try to speed it up.

But I found that I had enough to say that I might as well put it here.

A recent experience I had with one airline’s booking Web site involved an obvious pandering to “mobile” users. But to the designers this seemed to mean oversized widgets on any non-mobile device coupled with a frustratingly sequential mode of interaction, as if Fisher-Price had an enterprise computing division and had been contracted to do the work. A minimal amount of information was displayed at any given time, and even normal widget navigation failed to function correctly. (Maybe this is completely unfair to Fisher-Price as some of their products appear to encourage far more sophisticated interaction.)

And yet, despite all the apparent simplification, the site ran abominably slow. Every – single – keypress – took – ages – to – process. Even in normal text boxes. My desktop machine is ancient and mostly skipped the needless opening and closing animations on widgets because it just isn’t fast enough to notice that it should have been doing them before the time limit for doing them runs out. And despite fewer graphics and scripts, it was still heavy on the CPU.

After fighting my way through the booking process, I was pointed to the completely adequate (and actually steadily improving) conventional site that I’d used before but which was now hidden by the new and shiny default experience. And then I noticed a message about customer feedback and the continued availability of the old site: many of their other customers were presumably so appalled by the new “made for mobile” experience and, with some of them undoubtedly having to use the site for their job, booking travel for their colleagues or customers, they’d let the airline know what they thought. I imagine that some of the conversations were pretty frank.

I suppose that when companies manage to decouple themselves from fads and trends and actually listen to their customers (and not via Twitter), they can be reminded to deliver usable services after all. And I am thankful for the “professional customers” who are presumably all that stand in the way of everyone being obliged to download an “app” to book their flights. Maybe that corporate urge will lead to the next reality check for the airline’s “digital strategists”.

GSoC Week 8: Reworking

vanitasvitae's blog » englisch | 09:57, Wednesday, 26 July 2017

The 8th week of GSoC is here. The second evaluation phase has begun.

This week I was not as productive as in the weeks before, due to my last (hopefully :D ) bachelor's exam. But that is now done, so I can finally give 100% to coding :) .

I made some more progress on my Jingle code rework. Most of the transports code is now finished. I started to rework the descriptions code, which includes the base classes as well as the JingleFileTransfer code. I'm very happy with the new design, since it is way more modular and less interlocked than the last iteration. Below you can find a UML-like diagram of the current structure.

[Image: UML-like diagram of the current Jingle implementation]

During the rework I stumbled across a slight ambiguity in the Jingle XEP(s), which made me wonder. There are multiple Jingle actions, which denote the purpose of a Jingle stanza (e.g. transport-replace to replace the used transport method, session-initiate to initiate a session (duh), and so forth). Now there is the session-info Jingle action, which is used to announce session-specific events. This is used, for example, in RTP sessions to let the peer's client ring during a call, or to send the checksum of a file in a file transfer session. My problem with this is that such use cases are, in my opinion, highly description-related, so the description-info action should be used instead. The description part of a Jingle session is the part that represents the actual purpose of the session (e.g. file transfer, video calls etc.).

The session itself is description-agnostic, since it only bundles together a set of contents. One content is composed of one description, one transport and optionally one security element. Within a content, you should be able to combine different description, transport and security components in arbitrary ways; that's the whole purpose of the Jingle protocol. In my opinion it does not make much sense to denote description-related informational stanzas with the session-info action.

My proposal to make more use of the description-info action is also consistent with the other *-info actions. The transport-info action, for example, is used to denote transport-related stanzas, while the security-info action is used for security-related information.

But why do I even care?

Let's get back to my implementation for that :) . As you can see in the diagram above, I split the different Jingle components into different classes like JingleTransport, JingleSecurity, JingleDescription and so on. Now I'd like to pass component-related stanzas down to the respective classes (a transport-info usually only contains information valuable to the transport component), and I'd like to do the same for the JingleDescription. At the moment I have no real "recipient" for the session-info action: it might contain session-related info, but might also be interesting for the description. As a consequence, I have to make exceptions for those actions, which makes the code more bloated and less logical.

Another point is that such session-info elements (due to the fact that they target a single content in most cases) often contain a "name" attribute that matches the name of the targeted content. I'd propose to not only replace session-info with description-info, but to also specify that the description-info MUST have one or more content child elements that denote the targeted contents. That would make parsing much easier, since the parser can always expect content elements.
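To make the proposal concrete, a description-info stanza following it might look like this. The payload is made up for illustration; none of this is taken from a published XEP:

<iq from='romeo@montague.lit/orchard'
    to='juliet@capulet.lit/balcony'
    type='set'>
  <jingle xmlns='urn:xmpp:jingle:1'
          action='description-info'
          sid='a73sjjvkla37jfea'>
    <content creator='initiator' name='ex'>
      <!-- description-specific payload goes here,
           e.g. the checksum of a transferred file -->
    </content>
  </jingle>
</iq>

That way a parser handling description-info never has to guess which content an informational payload belongs to.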

That’s all for now :)

Happy Hacking!

Monday, 24 July 2017

Let's Encrypt - Auto Renewal

Evaggelos Balaskas - System Engineer | 22:03, Monday, 24 July 2017

Let’s Encrypt

I've written some posts on Let's Encrypt before, but the most frequent question is how to auto-renew a certificate every 90 days.

Disclaimer

This is my mini how-to for CentOS 6, with a custom-compiled Python 2.7.13 running the latest git checkout of certbot inside a virtualenv. Not a copy/paste solution for everyone!

Cron

Cron does not seem to have anything useful for a 90-day comparison:

[Image: crond.png]

Modification Time

The most obvious answer is to look at the modification time of the Let's Encrypt directory:

e.g. for the domain balaskas.gr:

# find /etc/letsencrypt/live/balaskas.gr -type d -mtime +90 -exec ls -ld {} \;

# find /etc/letsencrypt/live/balaskas.gr -type d -mtime +80 -exec ls -ld {} \;

# find /etc/letsencrypt/live/balaskas.gr -type d -mtime +70 -exec ls -ld {} \;

# find /etc/letsencrypt/live/balaskas.gr -type d -mtime +60 -exec ls -ld {} \;

drwxr-xr-x. 2 root root 4096 May 15 20:45 /etc/letsencrypt/live/balaskas.gr

OpenSSL

# openssl x509 -in <(openssl s_client -connect balaskas.gr:443 2>/dev/null) -noout -enddate

Email

If you have registered your email with Let's Encrypt, then you get your first email after 60 days!

Renewal

Here are my own custom steps:

#  cd /root/certbot.git
#  git pull origin

#  source venv/bin/activate

#  monit stop httpd

#  ./venv/bin/certbot renew --cert-name balaskas.gr --standalone

#  monit start httpd

#  deactivate

Script

I use monit; you can adjust the script according to your needs:

#!/bin/sh

DOMAIN=$1

## Update certbot
cd /root/certbot.git
git pull origin 

# Enable Virtual Environment for python
source venv/bin/activate

## Stop Apache
monit stop httpd 

sleep 5

## Renewal
./venv/bin/certbot renew  --cert-name ${DOMAIN} --standalone 

## Exit virtualenv
deactivate 

## Start Apache
monit start httpd

All Together

# find /etc/letsencrypt/live/balaskas.gr -type d -mtime +80 -exec /usr/local/bin/certbot.autorenewal.sh balaskas.gr \;

Systemd Timers

or put it on cron

whatever :P

Tag(s): letsencrypt

Tuesday, 18 July 2017

GSoC: Week 7

vanitasvitae's blog » englisch | 16:04, Tuesday, 18 July 2017

This is my update post for the 7th week of GSoC. The next evaluation phase is slowly approaching, and there is still a lot of work to do.

This week I started to work on integrating encryption into my Jingle file transfer code. I'm very pleased that only very minor changes to my OMEMO code are required: the OmemoManager just has to implement a single interface with two methods, and that's it. The interface should be pretty straightforward to implement for other encryption methods as well.
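To illustrate the shape of such a hook (the names and signatures here are hypothetical, not Smack's actual interface), something along these lines is enough for an encryption method to plug into a Jingle transfer:

import java.io.InputStream;
import java.io.OutputStream;

// Hypothetical sketch: an encryption method such as the OmemoManager only
// needs to wrap the transferred streams in both directions.
public interface JingleEncryptionMethod {

    /** Wrap the outgoing stream so its contents are encrypted for the peer. */
    OutputStream encrypt(OutputStream plain, String peerJid, String sessionId);

    /** Wrap the incoming stream so its contents are decrypted from the peer. */
    InputStream decrypt(InputStream cipher, String peerJid, String sessionId);
}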

Unfortunately, the same is not true for the code I wrote during GSoC. There are many things I hadn't thought about which require major changes, so it looks like I'll have to rethink the concept once again. My goal is to make the implementation so flexible that description (e.g. video chat, file transfer…), transport (e.g. SOCKS5, IBB…) and security (XTLS, my JET spec etc.) can be mixed in arbitrary ways without adding glue code for new specifications. Flow told me this is going to get complicated, but I want to try anyway :D . For "safety" reasons, I'll keep a separate working branch in case the next iteration does not turn out as I want.

Yesterday Flow found an error in smack-omemo which caused bundles not to get fetched. The mistake I made was that a synchronous CarbonListener was registered in the packet-reader thread. This caused the packet-reader thread to time out on certain messages, even though the stanzas arrived. It is nice to have this out of the way, and it was a good lesson about the pitfalls of parallel programming.

While reading the Jingle File Transfer XEP I also found some missing XML schemas and proposed replacing the usage of xs:nonNegativeInteger and xs:positiveNumber with xs:unsignedLong, to simplify/unify the process of implementing the XEP.

That's basically it for this week. Unfortunately I have an upcoming exam at university next week, which means even less free time for me, but I'll manage. In the worst case I always get a second try at the exam :)

Happy Hacking!

Tuesday, 11 July 2017

Flowhub IoT hack weekend at c-base: buttons, sensors, the Big Switch

Henri Bergius | 00:00, Tuesday, 11 July 2017

Last weekend we held the c-base IoT hack weekend, focused on the Flowhub IoT platform. This was a continuation of the workshop we organized at the Bitraf makerspace a week earlier: same tools and technologies, but slightly different focus areas.

c-base is one of the world’s oldest hackerspaces and a crashed space station under Berlin. It is also one of the earliest users of MsgFlo with quite a lot of devices connected via MQTT.

Hack weekend debriefing

Hack weekend

Just like at Bitraf, the workshop aimed to add new IoT capabilities to c-base, as well as to increase the number of members who know how to make the station’s setup do new things. For this, we used three primary tools:

Internet of Things

The workshop started on Friday evening, after a lecture on nuclear pulse propulsion ended in the main hall. We continued all the way to late Sunday evening, with some sleep breaks in between. There is something about c-base that makes you want to work there at night.

Testing a humidity sensor

By Sunday evening, we had built and deployed 15 connected IoT devices, with five additional ones pretty far in development. You can find the source code in the c-flo repository.

Idea wall

Sensor boxes

Quite a lot of c-base was already instrumented when we started the workshop. We had details on electricity consumption, internet traffic, and more. But one thing we didn’t have was information on the physical environment at the station. To solve this, we decided to build a set of sensor boxes that we could deploy in different areas of the hackerspace.

Building sensors

The capabilities shared by all the sensor boxes we deployed were:

  • Temperature
  • Humidity
  • Motion (via passive infrared)

For some areas of interest we provided some additional sensors:

  • Sound level (for the workshop)
  • Light level (for c-lab)
  • Carbon dioxide
  • Door open/closed
  • Gravity

Workshop sensor on a breadboard

We found a set of nice little electrical boxes that provided a convenient housing for these sensor boxes. This way we were able to mount them in proper places quickly. This should also protect them from dust and other elements to some degree.

Installed weltenbaulab sensor

The Big Switch

The lights of the c-base main hall are controllable via MsgFlo, and we have a system called farbgeber to produce pleasing color schemes for any given time.

However, when there are events we need to enable manual control of all lights and sound. To make this “MsgFlo vs. IP lounge” control question clearer, we built a Big Switch to decide which one controls the lights:

Big Switch in action

The switch is an old electric mains switch from an office building. It makes a satisfying sound when you turn it, and is big enough that you can see which way the setting is from across the room.

To complement the Big Switch we also added a “c-boom” button to trigger the disco mode in the main hall:

c-boom button

Info screens

One part of the IoT setup was to make statistics and announcements about c-base visible in different areas of the station. We did this by rolling out a set of displays with Raspberry Pi 3s connected to the MsgFlo MQTT environment.

Info screens ready for installing

The announcements shown on the screens range from mission critical information like station power consumption or whether the bar is open, to more fictional ones like the NoFlo-powered space station announcements.

Air lock

We also built an Android version of the info display software, which enabled deploying screens using some old donated tablets.

Info screen tablet

Conclusions

This was another successful workshop. Participants got to do new things, and we got lots of new IoT infrastructure installed around c-base. The Flowhub graph is definitely starting to look populated:

c-base is a graph

We also deployed NASA OpenMCT so that we get a nice overview of the station status. Our telemetry server provides MsgFlo participants that receive data via MQTT and store it in InfluxDB, which then gets visualized on the dashboard:

OpenMCT view on c-base
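Conceptually, the bridge between the two is a small MQTT subscriber that forwards readings via InfluxDB's HTTP line protocol. The sketch below assumes the Eclipse Paho MQTT client; the broker, database and topic names are placeholders, and this is not the actual cbeam-telemetry-server code:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import org.eclipse.paho.client.mqttv3.MqttClient;

public class TelemetryBridge {
    public static void main(String[] args) throws Exception {
        // Subscribe to sensor readings on the MQTT network...
        MqttClient client = new MqttClient("tcp://mqtt.example.org:1883",
                "telemetry-bridge");
        client.connect();
        client.subscribe("sensors/+/temperature", (topic, message) -> {
            String value = new String(message.getPayload(), StandardCharsets.UTF_8);
            // ...and persist each reading via InfluxDB's HTTP line protocol.
            String line = "temperature,topic=" + topic.replace('/', '_')
                    + " value=" + value;
            HttpURLConnection conn = (HttpURLConnection) new URL(
                    "http://influxdb.example.org:8086/write?db=telemetry")
                    .openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(line.getBytes(StandardCharsets.UTF_8));
            }
            conn.getResponseCode(); // fire the request
            conn.disconnect();
        });
    }
}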

All the c-base IoT software is available on Github:


If you’d like to have a similar IoT workshop at your company, we’re happy to organize one. Get in touch!

Monday, 10 July 2017

GSoC Week 6 – Tests and Excitement

vanitasvitae's blog » englisch | 11:20, Monday, 10 July 2017

Time is flying by. The sixth week is nearly over. I hope I haven't miscounted so far :)

This week I made some more progress working on the file transfer code. I read the existing StreamInitialization code and fixed some typos I found. I then took some inspiration from the SI code to improve my Jingle implementation. Most notably, I created a class FileTransferHandler, which the client can use to control the file transfer and get information on its status. Most of its functionality is yet to be implemented, but at least getting notified when the transfer has ended already works. This allowed me to bring the first integration test for basic Jingle file transfer to life. Previously I had the issue that the transfer was started in a new thread which was then out of scope, so the test had no way to tell if and when the transfer succeeded. This is now fixed :)

Besides that integration test, I also worked on creating more JUnit tests for my Jingle classes and found some more bugs that way. Tests are tedious, but the results are worth the effort. I hope to keep Smack's code coverage at least at a constant level – it already dropped a little with my commits getting merged, but I'm working on correcting that. While testing, I found a small bug in Smack's SOCKS5 proxy tests: there were simultaneous insertions into an ArrayList and a HashSet with a subsequent comparison, which failed under certain circumstances (in my university's network) due to the properties of the set. I fixed the issue by replacing the ArrayList with a LinkedHashSet.
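As a toy illustration of that failure mode (this is not Smack's actual test code), a HashSet gives no ordering guarantees, while a LinkedHashSet preserves insertion order:

import java.util.ArrayList;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class SetVsListDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        Set<String> hashSet = new HashSet<>();
        Set<String> linkedSet = new LinkedHashSet<>();
        for (String addr : new String[]{"198.51.100.1", "192.0.2.1", "203.0.113.1"}) {
            list.add(addr);
            hashSet.add(addr);
            linkedSet.add(addr);
        }
        System.out.println(list);      // insertion order, always
        System.out.println(hashSet);   // some hash-dependent order
        System.out.println(linkedSet); // insertion order, always

        // Comparing the list against the LinkedHashSet is stable...
        System.out.println(new ArrayList<>(linkedSet).equals(list)); // true
        // ...while an element-by-element comparison against the HashSet
        // depends on the elements' hash codes (here: the addresses).
    }
}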

Speaking of tests – I created a "small" test app that utilizes NIO for non-blocking IO operations. I put the word small in quotation marks because NIO blows up the code by a factor of at least 5. My implementation consists of a server and a client. The client sends a string to the server, which then 1337ifies the string and sends it back. The goal of NIO is to use few threads to handle all the connections at once. It works quite well, I'd say: I can handle around 10000 simultaneous connections using a single thread. The next step will be working NIO into Smack.
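Here is a heavily condensed sketch of what such a selector-based server can look like. My actual test app is bigger and handles things like partial reads and writes, which this sketch skips for brevity:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

public class LeetServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(1337));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // One thread, many connections: the selector tells us which
        // channels are ready, and we only ever touch those.
        while (true) {
            selector.select();
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buffer = ByteBuffer.allocate(1024);
                    if (client.read(buffer) == -1) {
                        client.close();
                        continue;
                    }
                    buffer.flip();
                    String input = StandardCharsets.UTF_8.decode(buffer).toString();
                    // 1337ify the input and send it back.
                    String leet = input.replace('e', '3').replace('l', '1')
                                       .replace('t', '7').replace('o', '0');
                    client.write(StandardCharsets.UTF_8.encode(leet));
                }
            }
        }
    }
}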

Last but not least, I once again got excited about the XMPP community :)
As some of you might know, I started to dig into XMPP roughly 8 months ago as part of my bachelor's thesis on OMEMO encryption. Back then I wrote a mail to Daniel Gultsch, asking if he could give me some advice on how to start working on an OMEMO implementation.
Now, eight months later, I received a mail from another student asking me basically the same question! I'm blown away by how fast one can go from being the one asking to being the one who gets asked. For me this is another beautiful example of truly working open standards and free software.

Thank you :)

Sunday, 09 July 2017

DRM free Smart TV

tobias_platen's blog | 09:54, Sunday, 09 July 2017

Today is the Day Against DRM, so I'll post a short update about a DRM-free TV setup that I have built over the last two months.

My TV set only supports DVB-T and old analogue cable TV. Because I don't want to buy a new one with even harder DRM and the patented H.265 codec, I am now using Kodi to watch TV. Kodi is free software and runs on a ThinkPad T400, which I also use as a DVD player. I installed Libreboot and removed the internal screen, which had caused problems with the external TV set connected via VGA.

Libreboot is a free BIOS replacement that removes the Intel Management Engine. The Intel Management Engine is proprietary malware which includes a back door and some DRM functions. Netflix uses this hardware DRM, called the Protected Audio/Video Path, on Windows 10 when watching 4K videos. The ThinkPad T400 does not even have an HDMI port, which is known to be encumbered by HDCP, an ineffective DRM scheme that has been cracked.

Instead of using DRM-encumbered streaming services such as Netflix, Entertain or Vodafone TV, I still buy DVDs and pay for them anonymously with cash. In my home there is a DVB-C connector, which I have connected to a FRITZ!WLAN Repeater DVB-C that streams the TV signal to the ThinkPad. The TV set is switched on and off using a FRITZ!DECT 200, which I control using a Python script running on the ThinkPad. I also reuse an old IR remote and an IRDuino to control the ThinkPad.

Welcome to my new Homepage

English on Björn Schießle - I came for the code but stayed for the freedom | 09:04, Sunday, 09 July 2017

Finally, I moved my homepage to a completely static site powered by Hugo. Here I want to document some challenges I faced during the transition and how I solved them.

Basic setup

As already said, I use Hugo to generate the static sites. My theme is based on Sustain. I made some changes and uploaded my version to GitLab.

I want to have all dependencies like fonts and JavaScript libraries locally, so this was one of the largest changes to the original theme. Further, I added an easy way to add share buttons to a blog post, like the ones you can see at the end of this article. The theme also contains a nice and easy way to add presentations or general slide shows to the webpage; some examples can be seen here. The theme contains an example site which shows all these features.

Comments

This was one of the biggest challenges. I had some quite good discussions on my old blog powered by Wordpress, so I didn't want to lose this feature completely. There are some solutions for static pages, but none of them is satisfying. For example, Staticman looks really promising, but sadly it only works with GitHub. Please let me know if you know of something similar which doesn't depend on GitHub.

For now I decided to do two things. By default I add a short text at the end of each article, telling people to send me an e-mail if they want to share or discuss their view on the topic. Additionally, I can add a link to a Friendica post to the metadata of each post. In this case the link will be added at the end of the article, inviting people to discuss the topic on this free, decentralised and federated network. I have chosen Friendica because it allows users to interact with my blog posts not only with a Friendica account but also with a Diaspora, GNU Social, Mastodon or Hubzilla account. If you have an account on one of these networks and want to get updates about new blog posts in order to participate in conversations around them, follow this Friendica account. I also created a more detailed description for people new to the world of free social networking.

Deployment

After all the questions above were answered and a first version of the new webpage was in place, I had to find an easy way to deploy it. I host the source code of my homepage on GitLab, which has a nicely integrated CI service that can be used to deploy the webpage to any server.

Therefore we need to add a CI script called .gitlab-ci.yml to the root of the repository. This script needs to contain the following (please adjust the paths):

image: publysher/hugo

before_script:
  - apt-get update
  - apt-get --yes --force-yes install git ssh rsync
  - git submodule update --init --recursive

pages:
  script:
  - hugo
  - mkdir "${HOME}/.ssh"
  - echo "${SSH_HOST_KEY}" > "${HOME}/.ssh/known_hosts"
  - echo "${SSH_PRIVATE_KEY}" > "${HOME}/.ssh/id_rsa"
  - chmod 700 "${HOME}/.ssh/id_rsa"
  - rsync -hrvz --delete --exclude=_ public/ schiesbn@schiessle.org:/home/schiesbn/websites/schiessle.org/htdocs/
  artifacts:
    paths:
    - public
  only:
  - master

We need to create an ssh key pair to deploy the webpage. For security reasons it is highly recommended to create an ssh key used only for the deployment.

The variables SSH_HOST_KEY and SSH_PRIVATE_KEY need to be set at GitLab in the CI settings. SSH_PRIVATE_KEY contains the private ssh key which is located in the ~/.ssh directory.

To get the right value for SSH_HOST_KEY, we run ssh-keyscan <our-webpage-host>. Once we have executed that command, we should see something similar to schiessle.org ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCtwsSpeNV.... Let's copy this to the SSH_HOST_KEY value in our GitLab settings.

Finally we need to copy the public ssh key to the .ssh/authorized_keys file on the web-server to allow GitLab to access it.

Now we are already done. The next time we push changes to the GitLab repository, GitLab will build the page and sync it to the web-server.

Using the private key stored in the GitLab settings allows everyone with access to the key to log in to our web-server, which is something we don't want. Therefore I recommend limiting the ssh key to only this one rsync command from the .gitlab-ci.yml file. In order to do this, we need to find the exact command sent to the web-server by adding -e'ssh -v' to the rsync command.

Executing the rsync command with the additional option should result in something like:

debug1: Sending command: rsync --server -vrze.iLsfxC --delete . /home/schiesbn/websites/schiessle.org/htdocs/

We copy this command to create the following .ssh/authorized_keys entry:

command="rsync --server -vrze.iLsfxC --delete . /home/schiesbn/websites/schiessle.org/htdocs/",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Sf/PDty0d0SQPg9b+Duc18RxPGaBKMzlKR0t1Jz+0eMhTkXRDlBMrrkMIdXJFfJTcofh2nyklp9akUnKA4mRBVH6yWHI+j0aDIf5sSC5/iHutXyGZrLih/oMJdkPbzN2+6fs2iTQfI/Y6fYjbMZ+drmTVnQFxWN8D+9qTl49BmfZk6yA1Q2ECIljpcPTld7206+uaLyLDjtYgm90cSivrBTsC4jlkkwwYnCZo+mYK4mwI3On1thV96AYDgnOqCn3Ay9xiemp7jYmMT99JhKISSS2WNQt2p4fVxwJIa6gWuZsgvquP10688aN3a222EfMe25RN+x0+RoRpSW3zdBd

Now it is no longer possible to use the private key stored at GitLab to log in to the web-server or to perform any command other than this specific rsync command.

Interesting observation

I have been running this static webpage for a few weeks now. During these weeks I got quite a few emails from people interested in topics I write about in my blog. These are not new blog articles, but posts which have been online for quite some time. Somehow it looks like more people are finding these articles after the transition to a static site. Maybe search engines rate the static site higher than the old Wordpress page? I don't know, maybe it is just a coincidence… but it is interesting.

Friday, 07 July 2017

PHP Sorting Iterators

Evaggelos Balaskas - System Engineer | 20:24, Friday, 07 July 2017

Iterator

A few months ago, I wrote an article on RecursiveDirectoryIterator; you can find it here: PHP Recursive Directory File Listing. If you run the code example, you'll see that the output is not sorted.

Object

A Recursive Iterator is actually an object: a special object that we can use to iterate over a sequence (collection) of data. That makes it a little difficult to sort with the well-known PHP functions. Let me give you an example:

$Iterator = new RecursiveDirectoryIterator('./');
foreach ($Iterator as $file)
    var_dump($file);
object(SplFileInfo)#7 (2) {
  ["pathName":"SplFileInfo":private]=>
  string(12) "./index.html"
  ["fileName":"SplFileInfo":private]=>
  string(10) "index.html"
}

You see here that the iterator yields objects of the SplFileInfo class.

Internet Answers

Unfortunately, stackoverflow and other related online results provide the most complicated answers on this matter. Of course this is not stackoverflow's fault, and it is really not an easy subject to discuss or understand, but personally I don't get the extra fuss (complexity) in some of the responses.

Back to basics

So let us go back a few steps and understand what an iterator really is. An iterator is an object that we can iterate over! That means we can use a loop to walk through its data. Reading the above output, you can (hopefully) get a better idea.

We can also loop over the Iterator like a simple array.

e.g.

$It = new RecursiveDirectoryIterator('./');
foreach ($It as $key=>$val)
    echo $key.":".$val."\n";

output:

./index.html:./index.html

Arrays

It is difficult to sort Iterators, but it is really easy to sort arrays!
We just need to convert the Iterator into an Array:

// Copy the iterator into an array
$array = iterator_to_array($Iterator);

that’s it!

Sorting

For my needs I need to reverse sort the array by key (filename on a recursive directory), so my sorting looks like:

krsort( $array );

easy, right?

Just remember that you cannot sort before the array has been defined. You need to take two steps, and that is OK.

Convert to Iterator

After sorting, we need to convert the array back into an Iterator object:

// Convert Array to an Iterator
$Iterator = new ArrayIterator($array);

and that’s it !

Full Code Example

Here is the entire code in one piece:

<?php
// ebal, Fri, 07 Jul 2017 22:01:48 +0300

// Directory to Recursive search
$dir = "/tmp/";

// Iterator Object
$files =  new RecursiveIteratorIterator(
          new RecursiveDirectoryIterator($dir)
          );

// Convert to Array
$Array = iterator_to_array ( $files );
// Reverse Sort by key the array
krsort ( $Array );
// Convert to Iterator
$files = new ArrayIterator( $Array );

// Print the file name
foreach($files as $name => $object)
    echo "$name\n";

?>
Tag(s): php, iterator

Tuesday, 04 July 2017

GSoC Week 5: Tests, fallbacks and politics

vanitasvitae's blog » englisch | 19:32, Tuesday, 04 July 2017

This is my blog post for the 5th week of the Google Summer of Code. I passed the first evaluation phase, so now the really hard work can begin.
(Also the first paycheck came in, and I bought a new laptop. Obviously my only intention is to reduce compile times for Smack to be more productive *cough*).

The last week was mostly spent writing JUnit tests and finding bugs that way. I found that it is really hard to write unit tests for certain methods, which might be an indicator that there are too many side effects in my code and that there is room to improve. Sometimes when I need to save state from within a method, I just go the easy way and create a new attribute for it. I should really try to improve on that front.

I also tried to create an integration test for my Jingle file transfer code. Unfortunately, the sendFile method creates a new thread and returns, so for now I have no real way of knowing when the file transfer is complete, which keeps me from creating a proper integration test. My plan is to go with a Future task to solve this issue, but I'll have to figure out the best way to bring Futures, threads, (NIO) and the asynchronous Jingle protocol together. This will probably be the topic of the second coding phase :)
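As a rough sketch of that idea (the method and class names here are hypothetical, not Smack's actual API), the transfer could hand back a Future that a test can simply block on:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class FutureTransferSketch {

    private static final ExecutorService POOL = Executors.newCachedThreadPool();

    // Hypothetical: instead of detaching a thread, sendFile returns a Future
    // that completes when the transfer is done.
    public static Future<Void> sendFile(byte[] data) {
        return POOL.submit(() -> {
            // ... perform the actual Jingle file transfer here ...
            return null;
        });
    }

    public static void main(String[] args) throws Exception {
        Future<Void> transfer = sendFile(new byte[1024]);
        // An integration test can now wait with a timeout instead of guessing.
        transfer.get(30, TimeUnit.SECONDS);
        System.out.println("Transfer finished.");
        POOL.shutdown();
    }
}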

The implementation of the Jingle transport fallback functionality works! When a transport method (e.g. SOCKS5) fails for some reason (e.g. the proxy servers are not reachable), my implementation can fall back to another transport instead. In my case the session switches to an InBandBytestream transport, since I have no other transports implemented yet, but in the future the fallback method will try all available transports.

I started creating a small client/server application that utilizes NIO to handle multiple connections on a single thread, as a small apprentice piece. I hope to get more familiar with NIO before integrating non-blocking IO into the Jingle file transfer code.

In the last days I got some questions about my OMEMO module, which illustrated very nicely that developing a piece of software does not only mean writing the code, but also maintaining it and supporting the people that end up using it. My focus lies on my GSoC project though, so I mostly tell those people how to fix their issues on their own.

Last but not least, a sort of political remark: at the end of GSoC, students will receive a little gift from Google (typically a T-shirt). Unfortunately, not all successful students will receive one, due to some countries being "restricted". Among those countries are Russia, Ukraine and Kazakhstan, but also Mexico. It is sad to see that politics made by a few can affect so many.

Working together, regardless of where we come from, where we live and of course who we are, that is something that the world can and should learn from free software development.

Happy Hacking.

Malicious ReplyTo

Evaggelos Balaskas - System Engineer | 07:44, Tuesday, 04 July 2017

Prologue

Part of my day job is to protect a large mail infrastructure. That means that on a daily basis we are fighting SPAM and trying to protect our customers from any suspicious/malicious mail traffic. This is not an easy job. Actually, globally it is not an easy job. But we are trying, and trying hard.

ReplyTo

Over the last couple of months, I have been running a project on gitlab, gathering the malicious ReplyTo addresses from already identified spam emails. I was looking for a pattern or something I could feed our antispam engines with, so that we can identify spam more accurately. It doesn't seem to work as I thought: spammers can alter their ReplyTo in a matter of minutes!

TheList

Here is the list for the last couple of months: ReplyTo
I will, from time to time, try to update it, and hopefully someone will find it useful.

Free domains

It's not much yet, but even with this small sample you can see that ~50% of phishing goes back to gmail!

    105 gmail.com
     49 yahoo.com
     18 hotmail.com
     17 outlook.com

More Info

You can contact me with various ways if you are interested in more details.

Preferably via encrypted email: PGP: 0x1c8968af8d2c621f
or via DM on twitter: @ebalaskas

PS

I also keep another list of suspicious fwds, but keep in mind that it might have some false positives.

Tag(s): spam

Flowhub IoT workshop at Bitraf: sensors, access control, and more

Henri Bergius | 00:00, Tuesday, 04 July 2017

I just got back to Berlin from the Bitraf IoT hackathon we organized in Oslo, Norway. This hackathon was the first of two IoT workshops around MsgFlo and Flowhub IoT. The second will be held at c-base in Berlin this coming weekend.

Bitraf and the existing IoT setup

Bitraf is a large non-profit makerspace in the center of Oslo. It provides co-working facilities, as well as labs and a large selection of computer controlled tools for building things. Members have 24/7 access to the space, and are provided with everything needed for CNC milling, laser cutting, 3D-printing and more.

The space uses the Flowhub IoT stack of MsgFlo and Mosquitto for business-critical things like the door locks that members can open with their smartphone.

Bitraf lock system

In addition to access control, they also had various environmental sensors available on the MQTT network.

With the workshop, our aim was to utilize these existing things more, as well as to add new IoT capabilities. And of course to increase the number of Bitraf members with the knowledge to work with the MsgFlo IoT setup.

Preparations

Being a makerspace, Bitraf already had everything needed for the physical side of the workshop — tons of sensors, WiFi-enabled microcontrollers, tools for building cases and mounting solutions. So the workshop preparations mostly focused on the software side of things.

The primary tools for the workshop were:

To help visualize the data coming from the sensors people were building, I integrated the NASA OpenMCT dashboard with MsgFlo and the InfluxDB time series database. This setup is available as the cbeam-telemetry-server project.

OpenMCT at Bitraf

This gave us a way to send data from any interesting sensors in the IoT network to a dashboard and visualize it. Down the line the persisted data can also be interesting for further analysis or machine learning.

Kick-off session

We started the workshop with a quick intro session about Flowhub, MsgFlo, and MQTT development. There is unfortunately no video, but the slides are available:

[Embedded slides: https://docs.google.com/presentation/d/1Xo7RxPTOOcgJpVc4rl-xuzwxtStpDWwdun4fCYCcbV8/embed]

After the intro, we did a round of all attendees to see what skills people already had and what they were interested in learning. Then we started collecting ideas on what to work on.

Bitraf IoT ideas

People picked their ideas, and the project work started.

Idea session at Bitraf IoT

I'd like to highlight a couple of the projects.

New sensors for the makerspace

Teams at work

Building new sensors was a major part of the workshop. There were several projects, all built on top of msgflo-arduino and the ESP8266 microcontroller:

Working on a motion sensor

There was also a project to automatically open and close windows, but this one didn’t get completed over the weekend. You can follow the progress in the altF4 GitHub repo.

Tool locking

All hackerspaces have the problem that people borrow tools and then don’t return them when finished. This means that the next person needing the tool will have to spend time searching for it.

To solve this, the team designed a system that enables tools to be locked to a wall, with a web interface where members can "check out" a tool they want to use. This way the system constantly knows which tools are in their right places, which are in use, and by whom.

You can see the tool lock system in action in this demo video:

[Embedded video: https://www.youtube.com/embed/3u51ZDOo7UQ]

Source code and schematics: https://github.com/einsmein/bitraf-thelock.

After the hackathon

Before my flight out, we sat down with Jon to review how things went. In general, I think it is clear the event was a success — people got to learn and try new things, and all projects except one were completed during the two days.

Our unofficial goal was to double the number of nodes in the Bitraf Flowhub graph, and I think we succeeded in this:

Bitraf as a graph

Here are a couple of comments from the attendees:

Really fun and informative. The development pipeline also seems complete. Made it a lot easier for beginner to get started.

this was a very fantastic hackathon! Lots of interesting things to learn, very enthusiastic participants, great stewardship and we actually got quite a few projects finished. Well done everbody.

In general the development tools we provided worked well. Everybody was able to run the full Flowhub IoT environment on their own machines using the Docker setup we provided. And apart from a couple of corner cases, msgflo-arduino was easy to get going on the NodeMCUs.

With these two, everybody could easily wire up some sensors and see their data in both Flowhub and the OpenMCT dashboard. From the local setup going to production was just a matter of switching the MQTT broker configuration.


If you’d like to have a similar IoT workshop at your company, we’re happy to organize one. Get in touch!

Friday, 30 June 2017

Back to the Hurd

David Boddie - Updates (Full Articles) | 17:09, Friday, 30 June 2017

Last year I looked at Debian GNU/Hurd, using the network installer to set up a working environment in kvm. Since then I haven't really looked at it very much, so when I saw the announcement of the latest release I decided to check it out and see what has changed over the last few months. I also thought it might be interesting to try to run some of my own software on the system to see if there are any compatibility issues I need to be aware of. This resulted in a detour to port some code to Python 3, and a few surprises when code written on a 64-bit system found itself running on a 32-bit system.

A New Installation

As before, I created a blank disk image, downloaded the network installer and booted it using kvm:

qemu-img create hurd-install-2017.qemu 5G
wget http://ftp.ports.debian.org/debian-ports-cd/hurd-i386/debian-hurd-2017/debian-hurd-2017-i386-NETINST-1.iso
sha512sum debian-hurd-2017-i386-NETINST-1.iso
# Check the hash of this against those listed on this page.
kvm -m 1024M -drive file=hurd-install-2017.qemu,cache=writeback -cdrom debian-hurd-2017-i386-NETINST-1.iso -boot d -net user -net nic

The default pseudo-graphical installation method seems to work well, though the graphical one also worked nicely. The text-based method didn't seem to work at all. After doing all the usual things a Debian installation process requires, such as defining the keyboard layout and partitioning disks, it's possible to boot the hard disk image and get going with GNU/Hurd again. I use the -redir option to allow me to log into a running environment with ssh via a non-standard port on the host machine:

kvm -m 1024M -drive file=hurd-install-2017.qemu,cache=writeback -cdrom debian-hurd-2017-i386-NETINST-1.iso -boot c -net user -net nic -redir tcp:2222::22

The Debian GNU/Hurd Configuration page covered all the compatibility issues I encountered, though some issues mentioned there did not cause problems for me. For example, I didn't need to explicitly enable the swap partition. On the other hand, I needed to reconfigure Xorg, as suggested, to allow any user to start an X session; not the "Console Users Only" option, but the "Anybody" option.

I tried running a few desktop environments to see which of those that run I would like to use, and which of those run acceptably in kvm without any graphics acceleration. Although MATE, LXDE and XFCE4 all run, I found that I preferred LXQt. However, none of these were as responsive as Blackbox which, for the moment, is as much as I need in a window manager.

A Python 3 Diversion


The end result.

With a graphical environment in place, I wanted to try some software I'd written to see if there were any compatibility issues with running it on GNU/Hurd. I decided to try one of my tools for editing retro game maps. However, it turned out that this PyQt 4 application wouldn't run correctly, crashing with a bus error. This seems to be a compatibility problem with Qt 4 because simple tests with widgets would fail with this library while similar tests with Qt 5's widgets worked fine. At this point it seemed like a good idea to port the tool to PyQt 5.

Since PyQt 5 is compatible with versions of Python from 2.6 up to the latest 3.x releases, I could have just tweaked the tool to use PyQt 5 and left it at that. However, I get the impression that many of the developers working with PyQt 5 are using Python 3, so I also thought it would be a good excuse to try and port the tool to Python 3 at the same time.

One of the first things that many people think about when considering porting from Python 2 to Python 3, apart from the removal of the print statement, is the change to the way Unicode strings are handled. In this application we hardly care about Unicode at all because, in the back end modules at least, all our strings contain ASCII characters. However, these strings are really 8-bit strings containing binary data rather than printable text, so we might welcome the opportunity to stop misusing strings for this purpose and embrace Python 3's byte strings (bytes objects). This is where the fun started.

First of all, we have to think about all the places where we open files, ensuring that those files are opened in binary mode, using the "rb" mode string. I've been quite careful over the years to do this for binary files, even though you could get away with using "r" on its own on many platforms. Still, it's good to be explicit and Python 3 now rewards us by returning byte strings. So we now pass these around in our application and process them a bit like the old-style strings. We should still be able to use ord to translate single byte characters to integer values; chr is no longer used for the reverse translation. The problems start when we start slicing up the data.

In Python 2, we can use the subscript or slicing notation to access parts of strings that we want to convert to integer values, perhaps using the struct module to ensure that we are decoding and encoding data consistently. When we access a string in this way, we get a string of zero or more 8-bit characters:

# Python 2:
>>> a = "\x10ABC\0"
>>> print repr(a[0]), repr(a[1:5]), repr(a[5:])
'\x10' 'ABC\x00' ''

In Python 3, using an equivalent byte string, we find that we get something different for the case where we access a single 8-bit character:

# Python 3:
>>> a = b"\x10ABC\0"
>>> print(repr(a[0]), repr(a[1:5]), repr(a[5:]))
16 b'ABC\x00' b''

In some ways it's more convenient to get an integer instead of a single byte string. It means we can remove lots of ord calls. The problem is that it introduces inconsistency in the way we process the data: we can no longer treat single byte accesses in the same way as slices or join a series of single bytes together using the + operator. The workaround for this is to use slices for single byte accesses, too, but it seems slightly cumbersome:

# Python 3:
>>> print(repr(a[0:1]), repr(a[1:5]), repr(a[5:]))
b'\x10' b'ABC\x00' b''

This little trap means that we need to be careful in other situations. For example, where we might have iterated over a string to extract the values of each byte, we now need to think of an alternative way to do this:

# Python 2:
>>> a = b"\x10ABC\0"
>>> map(ord, a)
[16, 65, 66, 67, 0]
# Python 3:
>>> a = b"\x10ABC\0"
>>> list(map(ord, a))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: ord() expected string of length 1, but int found

We could use the struct module's unpack function or pass a lambda that returns the value passed to it. Both of these seem a bit unwieldy for the case where we just want to access single bytes sequentially. There's probably an easy way to do this; it's just that I haven't learned the Python 3 idioms for this yet.

We also run into an interesting problem when we want to convert integers back into a byte string. For a list of integers, we use the bytes class as you might expect:

>>> a = [16, 65, 66, 67, 0]
>>> bytes(a)
b'\x10ABC\x00'

However, for a single integer, what do we do? Let's try passing the single value:

>>> a = 3
>>> bytes(a)
b'\x00\x00\x00'

That's not what we wanted. We can't use the chr function instead because that's now used for creating Unicode strings. The answer is to wrap the value in a list:

>>> a = 3
>>> bytes([a])
b'\x03'

The conclusion here seems to be to keep all the values extracted from byte strings in lists and only use slices on them so that we can reconstruct byte strings more easily later. Most of the other problems I encountered were due to the lazy evaluation of built-in functions like map and range. Where appropriate, these had to be wrapped in calls to list.

Converting the GUI code to PyQt 5 was a minor task after the porting to Python 3 since the classes in the QtWidgets module behave more or less the same as before. For example, QFileDialog.getOpenFileName returns a tuple instead of a single file name, but this was quickly fixed, and I could discard a few obsolete calls to Python 2's unicode class.

Python 3's handling of byte strings is a mixed bag. On one hand I can see the benefits of exposing single bytes as integers, and understand that there is a certain logical consistency in expecting developers to use slices everywhere when handling byte strings. On the other hand it seems like a solution based on an idea of theoretical purity more than practicality, and it seems inconsistent with the approach of returning different item types for single and multiple values when accessing what is effectively still a string of characters.

32-Bit Surprise

With the Python 3 porting project out of the way, I turned my attention to a current Python 2 project. I wanted to see if my DUCK tools would run without problems. Initially, everything looked fine, as you might expect from taking something developed on one flavour of Debian and running it on another. However, testing the packages produced by the compiler led to unexpected crashes. To cut a long story short, the problem was due to an inconsistency in the Python type system on architectures of different sizes.

To illustrate the problem, let's assign an integer value to a variable on our 32-bit and 64-bit Python installations. Here's the 64-bit version:

# 64-bit
>>> 0x7fffffff
2147483647
>>> 0xffffffff
4294967295

That looks fine. Just what we would expect. Let's see the 32-bit version:

# 32-bit
>>> 0x7fffffff
2147483647
>>> 0xffffffff
4294967295L

So the second value is a long value in this case. That's useful to know, but it means we cannot rely on Python's type system to give us a single type for values up to the precision of Dalvik's long type. Another related problem is that the struct module defines different sizes for the long type depending on whether the platform is 32-bit or 64-bit.

These issues can be worked around. They help remind us that we need to test our software on different configurations. Incidentally, it seems that the int type is finally unified in Python 3, though the sizes of long integers are still dependent on the platform's underlying architecture.

What's Next?

I'll continue to play with GNU/Hurd for a while. The system seems pretty stable so far, with the only instabilities I've encountered coming from running different graphical environments under Xorg. I'll try to start looking at Hurd-specific features now that I have something I can conveniently dip into from time to time.

Categories: Free Software, Android, Python

A FOSScamp by the beach

DanielPocock.com - fsfe | 08:47, Friday, 30 June 2017

I recently wrote about the great experience many of us had visiting OSCAL in Tirana. Open Labs is doing a great job promoting free, open source software there.

They are now involved in organizing another event at the end of the summer, FOSScamp in Syros, Greece.

Looking beyond the promise of sun and beach, FOSScamp is also just a few weeks ahead of the Outreachy selection deadline so anybody who wants to meet potential candidates in person may find this event helpful.

If anybody wants to discuss the possibilities for involvement in the event then the best place to do that may be on the Open Labs forum topic.

What will tomorrow's leaders look like?

While watching a talk by Joni Baboci, head of Tirana's planning department, I was pleasantly surprised to see this photo of Open Labs board members attending the town hall for the signing of an open data agreement:

It's great to see people finding ways to share the principles of technological freedoms far and wide and it will be interesting to see how this relationship with their town hall grows in the future.

Thursday, 29 June 2017

STARTTLS with CRAM-MD5 on dovecot using LDAP

Evaggelos Balaskas - System Engineer | 23:06, Thursday, 29 June 2017

Prologue

I should have written this post like a decade ago, but laziness got the better of me.

I use TLS with my IMAP and SMTP mail server. That means I encrypt the connection at the protocol level against the mail server, and not by port (SSL vs TLS). Although I do not accept any authentication before the STARTTLS command has been issued (that means no cleartext passwords in authentication), I was leaving the PLAIN text authentication mechanism in the configuration. That's not an actual problem unless you are already on the server and trying to connect to localhost, but I can do better.

LDAP

I use OpenLDAP as my backend authentication database. First of all, the LDAP password attribute must be changed from cleartext to CRAM-MD5.

Running dovecot's doveadm pw command:

# doveadm pw

Enter new password:    test
Retype new password:   test
{CRAM-MD5}e02d374fde0dc75a17a557039a3a5338c7743304777dccd376f332bee68d2cf6

will return the CRAM-MD5 hash of our password (test).
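
As a side note (not part of the setup itself), CRAM-MD5 is a challenge-response mechanism: the client proves it knows the password by sending an HMAC-MD5 of the server's challenge, keyed with the password, so the password never crosses the wire in cleartext. A minimal sketch of the client side in Python, with a made-up challenge:

import hmac, hashlib

def cram_md5_response(username, password, challenge):
    # RFC 2195: the response is the username, a space, and the
    # hex-encoded HMAC-MD5 of the challenge, keyed with the password.
    digest = hmac.new(password.encode(), challenge.encode(),
                      hashlib.md5).hexdigest()
    return '%s %s' % (username, digest)

print(cram_md5_response('USERNAME@example.org', 'test',
                        '<1896.697170952@example.org>'))

What doveadm stores above is, as far as I know, a precomputed intermediate state of exactly this HMAC, which is why Dovecot can verify such responses without ever keeping the cleartext password.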

Then we need to edit our DN (distinguished name) with ldapvi:

From:

uid=USERNAME,ou=People,dc=example,dc=org
userPassword: test

To:

uid=USERNAME,ou=People,dc=example,dc=org
userPassword: {CRAM-MD5}e02d374fde0dc75a17a557039a3a5338c7743304777dccd376f332bee68d2cf6
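
If you would rather script this change than edit it interactively, a sketch with the python-ldap module could look like the following (the admin DN and its password are placeholders for whatever has write access in your directory):

import ldap

# Connect and bind as an account that may modify userPassword.
conn = ldap.initialize('ldap://localhost')
conn.simple_bind_s('cn=admin,dc=example,dc=org', 'ADMIN_PASSWORD')

# Replace the cleartext value with the doveadm-generated hash.
conn.modify_s(
    'uid=USERNAME,ou=People,dc=example,dc=org',
    [(ldap.MOD_REPLACE, 'userPassword',
      b'{CRAM-MD5}e02d374fde0dc75a17a557039a3a5338c7743304777dccd376f332bee68d2cf6')]
)
conn.unbind_s()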

Dovecot

Dovecot is not only the IMAP server but also the “Simple Authentication and Security Layer” aka SASL service. That means that IMAP & SMTP talk to Dovecot for authentication, and Dovecot uses LDAP as the backend. To change AUTH=PLAIN to CRAM-MD5, we need to make the change below:

file: 10-auth.conf

From:

auth_mechanisms = plain

To:

auth_mechanisms = cram-md5

Before restarting Dovecot, we need to make one more change. This step took me a couple of hours to figure out! In our dovecot-ldap.conf.ext configuration file, we need to tell Dovecot NOT to bind to LDAP for authentication but to let Dovecot handle the authentication process itself. With CRAM-MD5 the client never sends the password in a form Dovecot could use for an LDAP bind, so Dovecot must instead read the stored hash from LDAP and verify the challenge response on its own:

From:

# Enable Authentication Binds
# auth_bind = yes

To:

# Enable Authentication Binds
auth_bind = no

To guarantee that the entire connection is protected by TLS encryption, change the setting below in 10-ssl.conf:

From:

ssl = yes

To:

ssl = required

SSL/TLS is always required, even if non-plaintext authentication mechanisms are used. Any attempt to authenticate before SSL/TLS is enabled will cause an authentication failure.

After that, restart your dovecot instance.

Testing

# telnet example.org imap

Trying 172.12.13.14 ...
Connected to example.org.
Escape character is '^]'.
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=CRAM-MD5] Dovecot ready.

1 LOGIN USERNAME@example.org test

1 NO [ALERT] Unsupported authentication mechanism.
^]
telnet> clo

That means no cleartext authentication is permitted.
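
For a scripted version of the same test, something like the following sketch with Python 3's imaplib should work (the hostname and credentials are the placeholder values used above):

import imaplib
import ssl

# Connect on the plain IMAP port, then upgrade with STARTTLS
# before anything sensitive is sent (needs Python 3.2+).
conn = imaplib.IMAP4('example.org')
conn.starttls(ssl_context=ssl.create_default_context())

# A plain LOGIN would now fail just like in the telnet session;
# login_cram_md5() performs the challenge-response instead.
conn.login_cram_md5('USERNAME@example.org', 'test')
print(conn.select('INBOX'))
conn.logout()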

MUA

Now the hard part, the mail clients:

RainLoop

My default webmail client supports CRAM-MD5 since v1.10.1.123.
To verify that, open the application.ini file under your data folder and search for something like this:

    imap_use_auth_plain = On
    imap_use_auth_cram_md5 = On
    smtp_use_auth_plain = On
    smtp_use_auth_cram_md5 = On

As a bonus, RainLoop supports STARTTLS and authentication for IMAP & SMTP, even when talking to 127.0.0.1.

Thunderbird

[Screenshot: Thunderbird account settings with CRAM-MD5 authentication]

K9

[Screenshot: K9 account settings with CRAM-MD5 authentication]

How F-Droid is Bringing Apps to Cuba

Free Software – | 18:23, Thursday, 29 June 2017

Only in 2015, when the government opened the first public WiFi hotspots in the country, did internet access become available to ordinary Cubans. Before that, even though modern mobile phones had already found their way into the country, they were mostly used off-line. Now, all these phones can be connected to the internet. However, at 1.50 CUC per hour, this is still not affordable to most Cubans whose average salary is only 20 CUC per month.

Mobile Phone Shop offering Apps and Updates

So it is not surprising that most Cubans do not use what little expensive bandwidth they have available to download apps, but use it for other things such as communicating with their relatives. If they want apps, they can go to one of the many mobile phone repair shops and get them installed there for a nominal fee.

In order to distinguish themselves from the tough competition, some shops take an interesting approach: They open up their own app store in a local (offline) WiFi network to attract customers.

The only existing technology that allows everybody to open their own app store is called F-Droid. It is a full-blown app store kit that comes with all the tools and documentation required to open up your own store: an app store server, a client app for installing apps on your phone, and a curation tool to manage your store with ease.

DroidT’s F-Droid Repository

Normally, the F-Droid app that you can download comes with thousands of useful Free Software apps without advertising and user tracking. These, however, require an internet connection to be downloaded from F-Droid's official server. Thankfully, F-Droid allows you to add other repositories to it. These work similarly to the package repositories that you might know from GNU/Linux distributions. Everybody can create their own repository, even in an offline network. As long as people are connected to the same WiFi, they can use your repository to install and update your apps.

WiFi Antenna Outside of the Shop

This is exactly what the DroidT shop that I visited in Sancti Spíritus, Cuba, does. They run their own repository within their local WiFi network. Currently, they offer more than 2000 apps, mostly games, but also other useful apps, for free to everybody within the range of their WiFi router. Since their store location is worse than their competitors', the repository really helps them drive people to their shop and build up their reputation within the local community. So far, they are the only store in their city offering this free and convenient service to their customers.

 

A Cuban child looking through the print-out of available apps

The apps are downloaded by the store employees once, put into the repository and then available to an unlimited number of customers without ever needing to connect to the internet again. The DroidT team is even going the extra mile by also offering Spanish app metadata, such as summaries and app descriptions, to make it easier for people to find the app that they are looking for.

At the moment, they are working on making use of the new screenshot feature that was recently introduced to the F-Droid app, so their customers can browse through screenshots before deciding whether or not to install an app.

A Screenshot of DroidT’s app categories

Another nice service they offer is app updates, securely delivered by F-Droid. When somebody who has already downloaded apps from their repository comes into range of their WiFi again, F-Droid will check for updates and offer to install the ones that are available. This is a huge advantage in a country where updates are normally neglected, at the expense of security.

So it comes as no surprise that their slogan is “BE UPDATE”, which even hangs in huge letters right above the desk in their shop. The repository, by the way, is hosted on the server that you can see directly above the banner with their slogan.

It is great to see that the free technology that the F-Droid community worked on for so many years to liberate Android users is put to unexpected use by talented individuals in other countries like Cuba and helps to make a difference there.


For the sake of completeness, however, it should be mentioned that ordinary downloads and F-Droid are not the only ways for Cubans to get apps. There are weekly data packages that are sent through the country on hard drives by bus. This Paquete Semanal contains mostly movies and TV shows, but also news and apps.

Then, there is a proprietary app called Zapya that is very popular among young Cubans, because it allows them to wirelessly share apps and other files between two phones. Although F-Droid offers the same functionality, this is not yet widely known, and the feature is currently being improved to match the simplicity that Zapya offers to Cubans today.

A Cuban sharing apps with Zapya


Technoshamanism in Aarhus – rethink ancestrality and technology

agger's Free Software blog | 04:35, Thursday, 29 June 2017

OPEN CALL

On August 12, 2017 from 14:00 to 22:00 there will be a technoshamanism meeting in Dome of Visions, Aarhus, Denmark.

The purpose of the meeting is to unite people who are interested in the combination of DIY technologies such as free software and permaculture with ancestral, ancestorfuturist and shamanistic practices. We are calling all the cyborgs, witches, heretics, technoshamans, programmers, hackers, artists, alchemists, thinkers and everyone who might be curious to join us and explore the possibilities of combining ancestorfuturism, perspectivism, and new and old indigenism in the middle of the climate changes of the Anthropocene.

If you feel attracted by the combination of these terms, techno + shamanism and ancestrality + futurism, and if you're worried about the destruction of the Earth and the hegemony of capital and neoliberal ontologies, this event is for you. In view of Aarhus' slogan as European Capital of Culture 2017, the theme of this event could be: Rethink ancestrality and technology!

We welcome all proposals for rituals, musical and artistic performances, talks, discussions and technological workshops. Please send your proposal to xamanismotecnologico@gmail.com.

Proposals should be short (250 words) and include a link to your web site (if any) and a short bio.

PROGRAM

FREE RADIO

The talks will be structured as roundtable discussions with several participants, which will be recorded and simultaneously live-streamed as Internet radio.

Topics:

  • TECHNOSHAMANISM – What is it?
  • Ancestrality and ancestrofuturism
  • Experiences from the II International Festival and other events
  • Immigration and new ontologies arriving in Europe
  • Self-organizing with free software and DIY technology
  • Your proposal!

PERFORMANCE AND RITUAL

A collaborative DIY ritual to end the event – bring costumes, proposals, visual effects, ideas and musical instruments.

We welcome proposals for all kinds of performance, rituals and narratives along the lines of this open call – all proposals to be sent to xamanismotecnologico@gmail.com.

NOTE

When we have received your proposals, we will organize them and publish a detailed program around August 1, for the discussions and workshops as well as for the rituals.

ACCOMMODATION

If you don’t live in Aarhus and need accommodation, that can be arranged (for free). Bring your sleeping bag!

WHO ARE WE?

This encounter is organized by Carsten Agger, Beatriz Ricci, Fabiane M. Borges and Ouafa Rian.

TECHNOSHAMANISM – THE NETWORK

Technoshamanism is an international network for people who are interested in living out their ideas in everyday life, who focus on open science, open technology and free, DIY cosmological visions, and who feel the necessity of maintaining a strong connection to the Earth as a living, ecological organism.

In recent years, we have had meetings in Spain, England, Denmark, Ecuador, Colombia, Brazil, Germany, and Switzerland. In November 2016, we had the II International Festival of Tecnoxamanism in the indigenous Pataxó village of Pará in Bahia, Brazil. The purpose of this meeting is to discuss technoshamanism as outlined above and to strengthen and grow this network, hopefully reaching out to new partners in Denmark and beyond. The network is based in Brazil but draws inspiration from all over the world.

You can find more information on technoshamanism in these articles:

ACKNOWLEDGEMENTS

This event is supported by Digital Living Research Commons, Aarhus University.
