Planet Fellowship (en)
Friday, 22 May 2015
bb's blog | 10:31, Friday, 22 May 2015
This green paper summarizes user comments regarding needs and wishes for a new KSysGuard and presents a couple of mockups to initiate the discussion at the KDE forums. [Read the full article]
Don't Panic » English Planet | 05:32, Friday, 22 May 2015
Ever wondered about the brilliance of the night sky you’re looking at? Well, honestly, I already liked NASA before, especially for producing and offering royalty-free content. But this week, I encountered something that made me like NASA even … Continue reading
Tuesday, 19 May 2015
freedom bits | 07:02, Tuesday, 19 May 2015
If you are a user of Roundcube, you want to contribute to roundcu.be/next. If you are a provider of services, you definitely want to get engaged and join the advisory group. Here is why.
Free Software has won. Or has it? Linux is certainly dominant on the internet. Every activated Android device is another Linux kernel running. At the same time we see a shift towards “dumber” devices which are in many ways more like thin clients of the past. Only they are not connected to your own infrastructure.
Alerted by the success of Google Apps, Microsoft has launched Office 365 to drive its own transformation from a software vendor into a cloud provider. Amazon and others have also joined the race to provide your collaboration platform. The pull of these providers is already enormous. Thanks to network effects, economies of scale, and the ability to leverage deliberate technical incompatibilities to their advantage, the drawing power of these providers is only going to increase.
Open Source has managed to catch up to the large providers in most functions, bypassing them in some, being slightly behind in others. Kolab has been essential in providing this alternative, especially where cloud based services are concerned. Its web application is on par with Office 365 and Google Apps in usability, attractiveness and most functions. It is the only fully Open Source alternative that offers scalability to millions of users and allows sharing of all data types in ways that are superior to what the proprietary competition has to offer.
Collaborative editing, chat, voice, video – all the forms of synchronous collaboration – are next and will be added incrementally. Meanwhile, Kolab Systems will keep driving the commercial ecosystem around the solution, allowing application service providers (ASPs), institutions and users to run their own services with full professional support. And all parts of Kolab will remain Free and Open, as well as committed to the upstream, according to best Free Software principles. If you want to know what that means, please take a look at Thomas Brüderli’s account of how Kolab Systems contributes to Roundcube.
TL;DR: Around 2009, Roundcube founder Thomas Brüderli was contacted by Kolab at a time when his day job left him so little time to work on Roundcube that he had toyed with the thought of just stepping back. Kolab Systems hired the primary developers of Roundcube to finish the project, contributing on the order of 95% of all code in all releases since 0.6 and driving it to its 1.0 release and beyond. At the same time, Kolab Systems carefully avoided imposing itself on the Roundcube project itself.
From a Kolab perspective, Roundcube is the web mail component of its web application.
The way we pursued its development made sure that it could be used by any other service provider or ISV. And it was. Roundcube has seen enormous adoption, with millions of downloads, hundreds of thousands of sites, and an uncounted number of users beyond the tens of millions. According to cPanel, 62% of their users choose Roundcube as their web mail application. It’s been used in a wide range of other applications, including several service providers that offer mail services that are more robust against commercial and governmental spying. Everyone at Kolab considers this a great success, and finds it rewarding to see our technology contribute essential value to society in so many different ways.
But while adoption sky-rocketed, contribution did not grow in the same way. It’s still Kolab Systems driving the vast majority of all code development in Roundcube, along with a small number of occasional contributors. And as a direct result of the Snowden revelations, the development of web collaboration solutions fragmented further. There are a number of proprietary approaches, which should be self-evidently disqualified from being taken seriously based on what we have learned about how solutions get compromised. But there are also Open Source solutions.
The Free Software community has largely responded in one of two ways. Many people felt reinforced in their opinion that people just “should not use the cloud.” Many others declared self-hosting the universal answer to everything, and started to focus on developing solutions for the crypto-hermit.
The problem with that is that it takes an all-or-nothing approach to privacy and security. It also requires users to become more technical than most of them ever wanted to be, and to give up features, convenience and ease of use as the price of privacy and security. In my view that ignores the most fundamental lesson we have learned about security throughout the past decades: people will work around security when they consider it necessary in order to get the job done. So the adoption rate of such technologies will necessarily remain limited to a very small group of users whose concerns are unusually strong.
These groups are often more exposed, more endangered, and more in need of protection, and they contribute to society in unusually large ways. So developing technology they can use is clearly a good thing.
It just won’t solve the problem at scale.
To do that we would need a generic web application geared towards all of tomorrow’s form factors and devices. It should be collaboration centric and allow deployment in environments from a single user to hundreds of millions of users. It should enable meshed collaboration between sites, be fun to use, elegant, beautiful, and provide security in a way that does not get into the user’s face.
Fully Free Software, that solution should be the generic collaboration application that could become in parts or as a whole the basis for solutions such as mailpile, which focus on local machine installations using extensive cryptography, intermediate solutions such as Mail-in-a-Box, all the way to generic cloud services by providers such as cPanel or Tucows. It should integrate all forms of on-line collaboration, make use of all the advances in usability for encryption, and be able to grow as technology advances further.
That, in short, is the goal Kolab Systems has set out to achieve with its plans for Roundcube Next.
While we can and of course will pursue that goal independently in incremental steps, we believe doing so alone would mean missing two rather major opportunities. The first is the opportunity to tackle this together, as a community. We have a lot of experience, a great UI/UX designer excited about the project, and many good ideas.
But we are not omniscient and we also want to use this opportunity to achieve what Roundcube 1.0 has not quite managed to accomplish: To build an active, multi-vendor community around a base technology that will be fully Open Source/Free Software and will address the collaborative web application need so well that it puts Google Apps and Office 365 to shame and provides that solution to everyone. And secondly, while incremental improvements are immensely powerful, sometimes leapfrogging innovation is what you really want.
All of that is what Roundcube Next really represents: The invitation to leapfrog all existing applications, as a community.
So if you are a user who has appreciated Roundcube in the past, or a user who would like to be able to choose fully featured services that leave nothing to be desired but do not compromise your privacy and security, please contribute to pushing the fast-forward button on Roundcube Next.
And if you are an Application Service Provider, but your name is not Google, Microsoft, Amazon or Apple, Roundcube Next represents the small, strategic investment that might just put you in a position to remain competitive in the future. Become part of the advisory group and join the ongoing discussion about where to take that application, and how to make it reality, together.
Monday, 18 May 2015
DanielPocock.com - fsfe | 17:48, Monday, 18 May 2015
Some key points about the Fedora service:
- The web code is all available in a GitHub repository so people can extend it.
- Anybody who can authenticate against the FedOAuth OpenID is able to get a fedrtc.org test account immediately.
- The server is built entirely with packages from CentOS 7 + EPEL 7, except for the SIP proxy itself. The SIP proxy is reSIProcate, which is available as a Fedora package and builds easily on RHEL / CentOS.
Testing it with WebRTC
Testing it with other SIP softphones
The process to replicate the server for another domain is entirely described in the Real-Time Communications Quick Start Guide.
The FreeRTC mailing list is a great place to discuss any issues involving this site or free RTC in general.
WebRTC opportunities expanding
Just this week, the first batch of Firefox OS televisions are hitting the market. Every one of these is a potential WebRTC client that can interact with free communications platforms.
Wednesday, 13 May 2015
Seravo | 12:27, Wednesday, 13 May 2015
The first ever WordCamp in Finland was held on May 8th and 9th in Tampere. Many from our staff participated in the event, and Seravo was also one of the sponsors.
On Friday Otto Kekäläinen gave a talk with the title “Contributing to WordPress.org – Why you (and your company) should publish plugins at WordPress.org”. On Saturday he held a workshop titled “How to publish a plugin at WordPress.org” and Onni Hakala held a workshop about how to develop with WordPress using Git, Composer, Vagrant and other great tools.
Below are the slides from these presentations and workshops:
- https://www.slideshare.net/slideshow/embed_code/47901678
- https://www.slideshare.net/slideshow/embed_code/47942413
WordCamp Workshop on modern dev tools by Onni Hakala (in Finnish)
See also our recap on WordCamp Finland 2015 in Finnish: WP-palvelu.fi/blogi
(Photos by Jaana Björklund)
Mario Fux | 07:07, Wednesday, 13 May 2015
If you are interested in participating in this year’s Randa Meetings and want to have a chance to be financially supported to travel to Randa, then the last 24 hours of the registration period have just begun.
So it’s now or never – or maybe next year.
Tuesday, 12 May 2015
Paul Boddie's Free Software-related blog » English | 19:39, Tuesday, 12 May 2015
Having recently seen an article about the closure of a project featuring that project’s usage of proprietary tools from Atlassian – specifically JIRA and Confluence – I thought I would share my own experiences from the migration of another project’s wiki site that had been using Confluence as a collaborative publishing tool.
Quite some time ago now, a call for volunteers was posted to the FSF blog, asking for people familiar with Python to help out with a migration of the Mailman Wiki from Confluence to MoinMoin. Subsequently, Barry Warsaw followed up on the developers’ mailing list for Mailman with a similar message. Unlike the project at the start of this article, GNU Mailman was (and remains) a vibrant Free Software project, but a degree of dissatisfaction with Confluence, combined with the realisation that such a project should be using, benefiting from, and contributing to Free Software tools, meant that such a migration was seen as highly desirable, if not essential.
Up and Away
Initially, things started off rather energetically, and Bradley Dean initiated the process of fact-finding around Confluence and the Mailman project’s usage of it. But within a few months, things apparently became noticeably quieter. My own involvement probably came about through seeing the ConfluenceConverter page on the MoinMoin Wiki, looking at the development efforts, and seeing if I couldn’t nudge the project along by pitching in with notes about representing Confluence markup format features in the somewhat more conventional MoinMoin wiki syntax. Indeed, it appears that my first contribution to this work occurred as early as late May 2011, but I was more or less content to let the project participants get on with their efforts to understand how Confluence represents its data, how Confluence exposes resources on a wiki, and so on.
But after a while, it occurred to me that the volunteers probably had other things to do and that progress had largely stalled. Although there wasn’t very much code available to perform concrete migration-related tasks, Bradley had gained some solid experience with the XML format employed by exported Confluence data. Combined with my own day-job experience of dealing with very large XML files, this suggested an approach that had worked rather well with such files: performing an extraction of the essential information, including the identifiers and references that communicate the actual structure of the information, as opposed to the hierarchical structure of the XML data itself. With the data available in a more concise and flexible form, it can then be processed in a more convenient fashion, and within a few weeks I had something ready to play with.
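The extraction idea described above can be sketched in a few lines of Python. This is only an illustration of the technique, not the actual migration code: the element and attribute names below (`object`, `id`, `property`, `name`) are invented to resemble the general shape of an XML export, and a real Confluence dump has far more structure.

```python
import xml.etree.ElementTree as ET
from io import StringIO

def extract_objects(stream):
    """Stream through an XML export, keeping only identifiers and
    named properties of each object instead of the full element tree."""
    objects = []
    # iterparse lets us process very large files without building
    # the whole document in memory
    for event, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag == "object":
            record = {"class": elem.get("class")}
            for child in elem:
                if child.tag in ("id", "property"):
                    record[child.get("name")] = (child.text or "").strip()
            objects.append(record)
            elem.clear()  # discard the processed subtree to keep memory flat
    return objects

# a tiny, made-up export fragment for demonstration
sample = """<export>
  <object class="Page">
    <id name="id">42</id>
    <property name="title">HomePage</property>
  </object>
</export>"""

pages = extract_objects(StringIO(sample))
```

The resulting flat records carry the identifiers and references needed to reconstruct the wiki’s structure later, which is much easier to work with than the deeply nested export itself.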
With a day job and other commitments, it isn’t usually possible to prioritise volunteer projects like this, and I soon discovered that some other factors were involved: technological progress, and the tendency for proprietary software and services to be upgraded. What had initially involved the conversion of textual content from one markup format to another now seemed to involve the conversion of content from two rather different markup formats. All the effort documenting the original Confluence format now seemed to be almost peripheral if not superfluous: any current content on the Mailman Wiki would now be in a completely different format. And volunteer energy seemed to have run out.
Time passed. And then the Mailman developers noticed that the Confluence upgrade had made the wiki situation even less bearable (as indeed other Confluence users had noticed and complained about), and that the benefits of such a solution were being outweighed by the inconveniences of the platform. And it was at this point that I realised that it was worthwhile continuing the migration effort: it is bad enough that people feel constrained by a proprietary platform over which they have little control, but it is even worse when it appears that they will have to abandon their content and start over with little or no benefit from all the hard work they have invested in creating and maintaining that content in the first place.
And with that, I started the long process of trying to support not only both markup formats, but also all the features likely to have been used by the Mailman project and those using its wiki. Some might claim that Confluence is “powerful” by supporting a multitude of seemingly exotic features (page relationships, comments, “spaces”, blogs, as well as various kinds of extensions), but many of these features are rarely used or never actually used at all. Meanwhile, as many migration projects can attest, if one inadvertently omits some minor feature that someone regards as essential, one risks never hearing the end of it, especially if the affected users have been soaking up the propaganda from their favourite proprietary vendor (which was thankfully never a factor in this particular situation).
Despite the “long tail” of feature support, we were able to end 2012 with some kind of overview of the scope of the remaining work. And once again I was able to persuade the concerned parties that we should focus on MoinMoin 1.x, not 2.x, which has proved to be the correct decision given the still-ongoing status of the latter even now in 2015. Of course, I didn’t at that point anticipate how much longer the project would take…
Over the next few months, I found time to do more work and to keep the Mailman development community informed again and again, which is a seemingly minor aspect of such efforts but is essential to reassure people that things really are happening: the Mailman community had, in fact, forgotten about the separate mailing list for this project long before activity on it had subsided. One benefit of this was to get feedback on how things were looking as each iteration of the converted content was made available, and with something concrete to look at, people tend to remember things that matter to them that they wouldn’t otherwise think of in any abstract discussion about the content.
In such processes, other things tend to emerge that initially aren’t priorities but which have to be dealt with eventually. One of the stated objectives was to have a full history, meaning that all the edits made to the original content would need to be preserved, and for an authentic record, these edits would need to preserve both timestamp and author information. This introduced complications around the import of converted content – it being no longer sufficient to “replay” edits and have them assume the timestamp of the moment they were added to the new wiki – as well as the migration and management of user profiles. This latter area in particular posed a problem: the exported data from Confluence only contained page (and related) content, not user profile information.
Now, one might not have expected user details to be exportable anyway: there are potential security issues with people having sufficient privileges to request a data dump directly from Confluence and thereby being able to obtain potentially sensitive information about other users. But this presented another challenge where the migration of an entire site is concerned. On this matter, a very pragmatic approach was taken: any user profile pages (of which there were thankfully very few) were retrieved directly over the Web from the existing site, and the existence of user profiles was deduced from the author metadata present in the actual exported wiki content. Since we would be asking existing users to re-enable their accounts on the new wiki once it became active, and since we would be avoiding the migration of spammer accounts, this approach seemed to be a reasonable compromise between convenience and completeness.
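Deducing the set of users from author metadata can be illustrated with a small sketch. The field names here (`revisions`, `author`, `when`) are hypothetical stand-ins for whatever the extracted page records actually carried; the point is simply that the distinct contributors fall out of the edit history without any access to account data.

```python
def deduce_authors(pages):
    """Collect the distinct author names appearing in page edit
    metadata, so profiles need only be handled for users who
    actually contributed content."""
    authors = set()
    for page in pages:
        # each preserved revision records who made the edit and when
        for revision in page.get("revisions", []):
            name = revision.get("author")
            if name:
                authors.add(name)
    return sorted(authors)

# made-up records in the shape the extraction step might produce
history = [
    {"title": "HomePage",
     "revisions": [{"author": "alice", "when": "2011-05-30"},
                   {"author": "bob", "when": "2011-06-02"}]},
    {"title": "FAQ",
     "revisions": [{"author": "alice", "when": "2012-01-10"}]},
]

contributors = deduce_authors(history)
```

Spammer accounts that never edited anything simply never appear in the result, which is one reason the approach sidesteps the spam-account migration problem.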
By November 2013, the end was in sight, with coverage of various “actions” supported by Confluence also supported in the migrated wiki. Such actions are a good example of how things that are on the edges of a migration can demand significant amounts of time. For instance, Confluence supports a PDF export action, and although one might suggest that people just print a page to file from their browser, choosing PDF as the output format, there are reasonable arguments to be made that a direct export might also be desirable. Thus, after a brief survey of existing options for MoinMoin, I decided it would be useful to provide one myself. The conversion of Confluence content had also necessitated the use of more expressive table syntax. Had I not been sufficiently interested in implementing improved table facilities in MoinMoin prior to this work, I would have needed to invest quite a bit of effort in this seemingly peripheral activity.
Again, time passed. Much of the progress occurred off-list at this point. In fact, a degree of confusion, miscommunication and elements of other factors conspired to delay the availability of the infrastructure on which the new wiki would be deployed. Already in October 2013 there had been agreement about hosting within the python.org infrastructure, but the matter seemed to stall despite Barry Warsaw trying to push it along in February and April 2014. Eventually, after I complained on the PSF members’ mailing list at the end of May, some motion occurred on the matter, and in July the task of provisioning the necessary resources began.
After returning from a long vacation in August, the task of performing the final migration and actually deploying the content could finally begin. Here, I was able to rely on expert help from Mark Sapiro who not only checked the results of the migration thoroughly, but also configured various aspects of the mail system functionality (one of the benefits of participating in a mail-oriented project, I guess), and even enhanced my code to provide features that I had overlooked. By September, we were already playing with the migrated content and discussing how the site would be administered and protected from spam and vandalism. By October, Barry was already confident enough to pre-announce the migrated site!
At Long Last
Alas, things stalled again for a while, perhaps due to other commitments of some of the volunteers needed to make the final transition occur, but in January the new Mailman Wiki was finally announced. But things didn’t stop there. One long-standing volunteer, Jim Tittsler, decided that the visual theme of the new wiki would be improved if it were made to match the other Mailman Web resources, and so he went and figured out how to make a MoinMoin theme to do the job! The new wiki just wouldn’t look as good, despite all the migrated content and the familiarity of MoinMoin, if it weren’t for the special theme that Jim put together.
There have been a few things to deal with after deploying the new wiki. Spam and vandalism have not been a problem because we have implemented a very strict editing policy where people have to request editing access. However, this does not prevent people from registering accounts, even if they never get to use them to do anything. To deal with this, we enabled textcha support for new account registrations, and we also enabled e-mail verification of new accounts. As a result, the considerable volume of new user profiles that were being created (potentially hundreds every hour) has been more or less eliminated.
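For reference, textcha support in MoinMoin is enabled in the wiki’s Python configuration file. The sketch below shows the general shape under stated assumptions: the questions, answer patterns and group name are invented for illustration, and the Mailman wiki’s actual settings are of course not public.

```python
# wikiconfig.py (excerpt) -- a minimal, illustrative sketch of
# MoinMoin 1.9 textcha configuration; values here are made up.
from MoinMoin.config.multiconfig import DefaultConfig

class Config(DefaultConfig):
    # Questions asked before account registration (and saves by
    # unknown users); the answers are matched as regular expressions.
    textchas = {
        'en': {
            u"What is the name of this mailing list manager?": u"(?i)mailman",
            u"How many legs does a spider have?": u"8|eight",
        },
    }
    # Members of this (hypothetical) group are never asked textchas.
    textchas_disabled_group = u"TrustedGroup"
```

Combined with e-mail verification, even a very simple question like this raises the cost of automated registration enough to stop the bulk of throwaway accounts.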
It has to be said that throughout the process, once it got started in earnest, the Mailman development community has been fantastic, with constructive feedback and encouragement throughout. I have had disappointing things to say about the experience of being a volunteer with regard to certain projects and initiatives, but the Mailman project is not that kind of project. Within the limits of their powers, the Mailman custodians have done their best to enable this work and to see it through to the end.
I am sure that offers of “for free” usage of certain proprietary tools and services are made in a genuinely generous way by companies like Atlassian who presumably feel that they are helping to make Free Software developers more productive. And I can only say that those interactions I experienced with Contegix, who were responsible for hosting the Confluence instance through which the old Mailman Wiki was deployed, were both constructive and polite. Nevertheless, proprietary solutions are ultimately disempowering: they take away the control over the working environment that users and developers need to have; they direct improvement efforts towards themselves and away from Free Software solutions; they also serve as a means of dissuading people from adopting competing Free Software products by giving an indication that only they can meet the rigorous demands of the activity concerned.
I saw a position in the Norwegian public sector not so long ago for someone who would manage and enhance a Confluence installation. While it is not for me to dictate the tools people choose to do their work, it seems to me that such effort would be better spent enhancing Free Software products and infrastructure instead of remedying the deficiencies of a tool over which the customer ultimately has no control, to which the customer is bound, and where the expertise being cultivated is relevant only to a single product for as long as that product is kept viable by its vendor. Such strategic mistakes occur all too frequently in the Norwegian public sector, with its infatuation with proprietary products and services, but those of us not constrained by such habits can make better choices when choosing tools for our own endeavours.
I encourage everyone to support Free Software tools when choosing solutions for your projects. Such tools may not at first offer precisely the features you are looking for, and you might be tempted to accept an offer of a “for free” product or to use a no-cost “cloud” service; such things may appear to offer an easier path when you might otherwise be confronted with a choice of hosting solutions and deployment issues. But there are whole communities out there who can offer advice and will support their Free Software project, and there are Free Software organisations who will help you deploy your choice of tools, perhaps even having it ready to use as part of their existing infrastructure.
In the end, by embracing Free Software, you get the control you need over your content in order to manage it sustainably. Surely that is better than having some random company in charge, with the ever-present risk of them one day deciding to discontinue their service and/or, with barely enough notice, discard your data.
Friday, 22 May 2015
Don't Panic » English Planet | 05:32, Friday, 22 May 2015
Last week, I was invited to talk for the “Concurso Universitario de Software Libre (CUSL)” in Zaragoza, Spain. The objective of this “concurso” (contest) is to promote the use and development of Free Software by organising an annual contest among various … Continue reading
Sunday, 10 May 2015
Paul Boddie's Free Software-related blog » English | 15:09, Sunday, 10 May 2015
It didn’t all start with a poorly-considered April Fools’ joke about hosting a Python conference in Cuba, but the resulting private mailing list discussion managed to persuade me not to continue as a voting member of the Python Software Foundation (PSF). In recent years, one of the chores of being a member was returning from vacation to discover tens if not hundreds of messages whipping up a frenzy about some topic supposedly pertinent to the activities of the PSF, and then reading through those messages as if to inform my own position on the matter. This time, my vacation plans were slightly unusual, so I was at least spared the surprise of getting the bulk of people’s opinions in one big serving.
I was invited to participate in the PSF at a time when it was an invitation-only affair. My own modest contributions to the EuroPython conference were the motivating factor, and it would seem that I hadn’t alienated enough people for my nomination to be opposed. (This cannot be said for some other people who did eventually become members as well after their opponents presumably realised the unkindness of their ways.) Being asked to participate was an honour, although I remarked at the time that I wasn’t sure what contribution I might make to such an organisation. Becoming a Fellow of the FSFE was an active choice I made myself because I align myself closely with the agenda the FSFE chooses to pursue, but the PSF is somewhat more vague or more ambivalent about its own agenda: promoting Python is all very well, but should the organisation promote proprietary software that increases Python adoption, or would this undermine the foundations on which Python was built and is sustained? Being invited to participate in an organisation with often unclear objectives combines a degree of passivity with an awareness that some of the decisions being taken may well contradict some of the principles I have actively chosen to support in other organisations. Such as the FSFE, of course.
Don’t get me wrong: there are a lot of vital activities performed within the PSF. For instance, the organisation has a genuine need to enforce its trademarks and to stop other people from claiming the Python name as their own, and the membership can indeed assist in such matters, as can the wider community. But looking at my archives of the private membership mailing list, a lot of noise has been produced on other, more mundane matters. For a long time, it seemed as if the only business of the PSF membership – as opposed to the board who actually make the big decisions – was to nominate and vote on new members, thus giving the organisation the appearance of only really existing for its own sake. Fortunately, organisational reform has made the matter of recruiting members largely obsolete, and some initiatives have motivated other, more meaningful activities. However, I cannot be the only person who has noted that such activities could largely be pursued outside the PSF and within the broader community instead, as indeed these activities typically are.
Some of the more divisive topics that have caused the most noise have had some connection with PyCon: the North American Python conference that mostly replaced the previous International Python Conference series (from back when people thought that conferences had to be professionally organised and run, in contrast to PyCon and most, if not all, other significant Python conferences today). Indeed, this lack of separation between the PSF and PyCon has been a significant concern of mine. I will probably never attend a PyCon, partly because it resides in North America as a physical event, partly because its size makes it completely uninteresting to me as an attendee, and largely because I increasingly find the programme uninteresting for a variety of other reasons. When the PSF members’ time is spent discussing or at least exposed to the discussion of PyCon business, it can just add to the burden of membership for those who wish to focus on the supposed core objectives of the organisation.
What may well be worse, however, is that PyCon exposes the PSF to substantial liability issues. As the conference headed along a trajectory of seemingly desirable and ambitious growth, it collided with the economic downturn caused by the global financial crisis of 2008, incurring a not insignificant loss. Fortunately, this outcome has not since been repeated, and the organisation had sufficient liquidity to avoid any serious consequences. Some have argued that it was precisely because profits from previous years’ conferences had been accumulated that the organisation was able to pay its bills, but such good fortune cannot explain away the fundamental liability and the risks it brings to the viability of the organisation, especially if fortune happens not to be on its side in future.
In recent times, I have been more sharply focused on the way volunteers are treated by organisations who rely on their services to fulfil their mission. Sadly, the PSF has exhibited a poor record in various respects on this matter. Once upon a time, the Python language Web site was redesigned under contract, but the burden of maintenance fell on community volunteers. Over time, discontentment forced the decision to change the technology, and a specification was drawn up under a degree of consultation. Unfortunately, the priorities of certain stakeholders – that is, community volunteers doing a fair amount of hard work in their own time – were either ignored or belittled, leaving them with a choice: adapt to a suboptimal workflow not of their own choosing, spending time and energy developing that workflow along the way, or quit and leave it to other people to tidy up the mess that those other people (and the hired contractors) had made.
Understandably, the volunteers quit, leaving a gap in the Web site functionality that took a year to reinstate. But what was most disappointing was the way those volunteers were branded as uncooperative and irresponsible in an act of revisionism by those who clearly failed to appreciate the magnitude of the efforts of those volunteers in the first place. Indeed, the views of the affected volunteers were even belittled when efforts were championed to finally restore the functionality, with it being stated by one motivated individual that the history of the problem was not of his concern. When people cannot themselves choose the basis of their own involvement in a volunteer-run organisation without being vilified for letting people down or for “holding the organisation to ransom”, the latter being a remarkable accusation given the professionalism that was actually shown in supporting a transition to other volunteers, one must question whether such an organisation deserves to attract any volunteers at all.
As discussion heated up over the PyCon Cuba affair, the usual clash of political views emerged, with each side accusing the other of ignorance and not understanding the political or cultural situation, apparently blinkered by their own cultural and political biases. I remember brazen (and ill-informed) political advocacy being a component in one of the Python community blogging “planets” before I found the other one, back when there was a confusing level of duplication between the two and when nobody knew which one was the “real” one (which now appears to consist of a lot of repetition and veiled commercial advertising), and I find it infuriating when people decide to use such matters as an excuse to lecture others and to promote their own political preferences.
I have become aware of a degree of hostility within the PSF towards the Free Software Foundation, with the latter being regarded as a “political” organisation, perhaps due to hard feelings experienced when the FSF had to educate the custodians of Python about software licensing (which really came about in the first place because of the way Python development had been moved around, causing various legal representatives to play around with the licensing, arguably to make their own mark and to stop others getting all the credit). And I detect a reluctance in some quarters to defend software freedom within the PSF, with a reluctance to align the PSF with other entities that support software and digital freedoms. At least the FSF can be said to have an honest political agenda, where those who support it more or less know where they stand.
In contrast, the PSF seems to cultivate all kinds of internal squabbling and agenda-setting: true politics in the worst sense of the word. On one occasion I was more or less told that my opinion was neither welcome nor, indeed, ever likely to be of interest on a topic related to diversity. Thankfully, diversity politics moved to a dedicated mailing list and I was thereafter mostly able to avoid being told by another Anglo-Saxon male that my own perspectives didn’t matter on that or on any other topic. How it is that someone I don’t actually know can presume to know in any detail what perspectives or experiences I might have to offer on any matter remains something of a mystery to me.
Looking through my archives, there appears to be a lot of noise, squabbling, quipping, and recrimination over the last five years or so. In the midst of the recent finger-wagging, someone dared to mention that maybe Cubans, wherever they are, might actually deserve to have a conference. Indeed, other places were mentioned where the people who live there, through no fault of their own, would also be the object of political grandstanding instead of being treated like normal people wanting to participate in a wider community.
I mostly regard April Fools’ jokes as part of a tedious tradition, part of the circus that distracts people away from genuine matters of concern, perhaps even an avenue of passive aggression in certain circles, a way to bully people and then insist – as cowards do – that it was “just a joke”. The lack of a separation of the PSF’s interests combined with the allure of the circus conspired to make fools out of the people involved in creating the joke and of many in the accompanying debate. I find myself uninterested in spending my own time indulging such distractions, especially when those distractions are products of flaws in the organisation that nobody wishes to fix, and when there are more immediate and necessary activities to pursue in the wider arena of Free Software that, as a movement in its own right, some in the PSF practically refuse to acknowledge.
Leaving the PSF won’t really change any of my commitments, but it will at least reduce the level of background noise I have to deal with. Such an underwhelming and unfortunate assessment is something the organisation will have to rectify in time if it wishes to remain relevant and to deserve the continued involvement of its members. I do have confidence in some of the reform and improvement processes being conducted by volunteers with too little time of their own to pursue them, and I hope that they make the organisation a substantially better and more effective one, as they continue to play to an audience of people with much to say but, more often than not, little to add.
I would have been tempted to remain in the PSF and to pursue various initiatives if the organisation were a multiplier of effect for any given input of effort. Instead, it currently acts as a divider of effect for all the effort one would apparently need to put in to achieve anything. That isn’t how any organisation, let alone one relying on volunteer time and commitment, should be functioning.
On political matters and accusations of ignorance being traded, my own patience is wearing thin indeed, and this probably nudged me into finally making this decision. It probably doesn’t help that I recently made a trip to Britain where election season has been in full swing, with unashamed displays of wilful idiocy openly paraded on a range of topics, indulged by the curated ignorance of the masses, with the continued destruction of British society, nature and the economy looking inevitable as the perpetrators insist they know best now and will undoubtedly in the future protest their innocence when challenged on the legacy of their ruinous rule, adopting the “it wasn’t me” manner of a petulant schoolchild so befitting of the basis of the nepotism that got most of them where they are today.
Friday, 22 May 2015
bb's blog | 10:31, Friday, 22 May 2015The KDE system monitor needs an update. As a first step, we would like to ask you to join the brainstorming about requirements. What do you want to see integrated into KSysGuard? KSysGuard has been described as visually outdated and suboptimal in terms of functionality. The competitors have beautiful layouts that make it easy to grasp [...]
Friday, 08 May 2015
the_unconventional's blog » English | 17:00, Friday, 08 May 2015
A while ago, I got my hands on two Lenovo ThinkPad T60s. Both were broken and incomplete, but I was able to mix and match the parts and turn them into a single working machine.
As the T60 is supported by coreboot, I naturally wanted to replace the proprietary Lenovo BIOS. Normally, I would have used Libreboot, but unfortunately both motherboards had an ATI GPU, which requires a proprietary VGABIOS. (Contrary to popular belief, this blob is only 64kB.)
So I pulled in the Libreboot build infrastructure and compiled everything, including a coreboot ROM with the ATI VGABIOS included (using the Libreboot configuration). Libreboot’s patched Flashrom worked the first time, and I was able to boot up with SeaBIOS. You then have to flash the same ROM a second time because of BUC.TS (the Top Swap bit in the chipset’s Back Up Control register). And that’s where everything went wrong and I ended up with a brick.
Fortunately, not all hope was lost. I bought a SOIC clip online, and I was able to borrow a spare Raspberry Pi from a friend. Technically, one could use the GPIO pins on the Raspberry Pi to flash coreboot with Flashrom over the linux_spi interface. Documented, however, this is not.
In order to unbrick the laptop, you’ll have to disassemble it almost completely. This is documented rather well on the Libreboot web site. The only part I’d ignore is the part about removing the screen from the frame. You don’t actually have to do that.
When you’ve taken out the motherboard, attach the SOIC clip to the BIOS chip. It’s probably best to get a numbered GPIO cable so you’ll only have to mess about with the wires on one side.
The chip’s pins are numbered counter-clockwise starting at the bottom left. So the row near the RAM slots consists of pins 1, 2, 3, 4, and the row near the northbridge consists of pins 8, 7, 6, 5.
Getting the right pins connected to the right GPIO headers was basically guesswork, because it’s not described correctly anywhere, and even Flashrom IRC couldn’t help me any further than “rtfm noob”. But sadly, there is no manual entry about this.
bibanon on GitHub came closest, but was also wrong at the time of writing. So I simply started testing pins until I finally got Flashrom to recognize the chip. The correct, tested and working pinout can be found here.
1 - CS - Pi Pin #24
2 - MISO - Pi Pin #21
3 - Don't connect
4 - GND - Pi Pin #25
5 - MOSI - Pi Pin #19
6 - CLK - Pi Pin #23
7 - Don't connect
8 - 3.3V - Pi Pin #17 with an SST chip, or Don't connect with a Macronix chip
Because I have a Macronix chip, I did not connect pin 8, but plugged in the T60’s power adapter. Beware that you should never connect both pin 8 and an external PSU at the same time.
Setting up the Raspberry Pi
For some reason, SPI seems to be broken in the latest Raspbian images, so I downloaded an older one which could even fit on an old 1GB SD card.
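For what it’s worth, on Raspbian images of that era the SPI driver was blacklisted by default and had to be enabled by hand. A minimal sketch (the file path and module name are as I recall them for 2013/2014-era Raspbian, so treat them as assumptions; newer images enable SPI via raspi-config instead):

```shell
# Stop the kernel from blacklisting the SPI driver (old Raspbian path).
sudo sed -i 's/^blacklist spi-bcm2708/#blacklist spi-bcm2708/' \
    /etc/modprobe.d/raspi-blacklist.conf

# Load the driver now rather than waiting for a reboot.
sudo modprobe spi-bcm2708

# If it worked, the device node that Flashrom's linux_spi driver needs exists:
ls /dev/spidev0.0
```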
Then, I downloaded the Flashrom source code and compiled it for armhf.
sudo apt-get install build-essential pciutils usbutils libpci-dev libusb-dev libftdi-dev zlib1g-dev subversion
svn co svn://flashrom.org/flashrom/trunk flashrom
cd flashrom
make
If you’re not in the mood to compile it yourself on a slow Raspberry Pi, I have a precompiled armv6l Flashrom binary here. See above for obtaining the source code.
Once I was able to successfully read the chip’s contents, I was confident enough to flash the coreboot ROM.
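That read-based sanity check can be sketched as follows: read the chip twice and compare the dumps, so that a flaky clip connection shows up before any write is attempted (the chip name matches the Macronix part mentioned earlier; the dump file names are mine):

```shell
# Read the flash contents twice through the SOIC clip.
sudo ./flashrom -c "MX25L1605" -p linux_spi:dev=/dev/spidev0.0 -r dump1.bin
sudo ./flashrom -c "MX25L1605" -p linux_spi:dev=/dev/spidev0.0 -r dump2.bin

# Identical dumps suggest a stable connection; differing dumps mean the
# clip needs reseating before risking a write.
cmp dump1.bin dump2.bin && echo "reads match - connection looks stable"
```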
sudo ./flashrom -c "MX25L1605" -p linux_spi:dev=/dev/spidev0.0 -w ../coreboot.rom
The erase failure is normal and is not really an error. (It finally does erase the chip as you can see; it just fails the first two times.)
So I crossed my fingers, removed the SOIC clip and reassembled the ThinkPad.
It booted to SeaBIOS immediately, allowing me to load the Trisquel netinstaller. All that was left to do was to install an Atheros AR9280 WiFi card from my collection, and I now have a working T60 with only 64kB of proprietary code on it. (Aside from the SSD controller, probably.)
Max's weblog » English | 07:34, Friday, 08 May 2015
After two months in Tanzania, working every day in the computer education centre, I have learnt a lot about the local culture in terms of people’s viewpoint on information technology. And in the same way I had to accept that my initial mental image of people’s behaviour was (at least in parts) very wrong. So in this article I try to explain how I see the situation of modern technologies and the usage and understanding of Free Software in the region of Tanzania where I live.
Free Software guarantees the full rights to use, study, share and improve it (but is not necessarily free of charge). This sounds like something only interesting for IT specialists and nerds. But given the importance of software in our lives, one has to reconsider: software controls our mobile phones, cars, air planes, heating systems, power plants, bank accounts and medical equipment. Whoever controls this software also controls most parts of our lives. Questions like “Does all my data belong to someone else?”, “Is my data safe?” and “Who knows how much about me?” can only be answered when we start thinking about Free Software. Some people also refer to Free Software as Open Source. More about Free Software.
Let’s start with a list of what I thought and what’s in fact the reality:
Before I went to Tanzania it was quite clear to me that people here value Free Software quite much. This is because a lot of Free Software is also free of cost. Why should people use Windows, Adobe Photoshop and Microsoft Office when there’s also GNU/Linux, GIMP and LibreOffice/OpenOffice?
“Free Software? What is this and can I eat it?” It’s not quite that drastic, but the core message remains the same: the broad average population doesn’t know about Free Software and Open Source, or even the applications I listed. When I gave a small workshop about GNU/Linux, none of my students knew about it. But as we installed replacements for popular non-free software like LibreOffice, GIMP or VLC, the question marks in my students’ heads became almost visible. Although they liked the idea of the whole world working on this software and of it being free of charge, they asked me afterwards “…and how can we install Microsoft Frontpage?”. This is the perfect time for misconception 2.
“Free Software is cool”. This is what I and many other people think. It takes power away from single, very large IT companies and gives it to us, the users, and to small companies. It enables free and fair market competition and can support our data privacy and the protection of civil rights in various ways. In western countries I can almost understand that there are people who mistakenly think that only expensive products by big brands can be quality products. But in Africa? Never ever! The people are quite poor, so why shouldn’t they value products which are gratis and good?
Apart from the fact that many people don’t know about alternatives to popular non-free software, they also cannot believe that something is free of charge. Many people here have settled into sharing illegal (and often virus-infested) copies of Windows and Microsoft Office. And especially in the rather “rich” northern Tanzania, everything is about money. Asking to take a photo of a group of Maasai people in a nice background setting? 2000 Shilling. Somebody escorting you to a place you couldn’t find? 500-1000 Shilling.
However, I was able to convince my students that in the case of Free Software most software really is free as in free beer, but only after clearing up many questions about it. The idea that something so valuable, created by so many people over so many working hours, really is gratis – almost unbelievable, even for my local co-teachers.
I’ve been tinkering with computers and software since my youth, when I reinstalled my operating system at least once a month and started exploring the internet. I did this because I was interested in technology and wanted to explore its limits and mine, but also because even back then I knew that IT would become more and more important, and that those who don’t understand it would be left behind.
I thought it would be a similar situation in Tanzania, but somehow easier for the population. I thought that they have very limited technology here, but that they know about the importance of computers and software in the industrial countries – and it’s quite obvious that, with several years’ delay, they will reach the same level of IT dependency as we have today. So I thought the people here would care about technology and would try to learn as much as possible about it to improve their career chances and catch up with the industrial countries.
(Disclaimer: this is just my personal and, at the moment, very subjective view.) It’s not that the people here are lazy and are missing the future. They already have the future, and it’s too much for them. Most Tanzanians in the city have a smartphone; some even have several. The mobile internet is partly better than in Germany, many companies already depend heavily on computers, and I’m asked for my Facebook and WhatsApp contact details almost every day.
The somewhat funny thing is that they know all this (modern smartphone apps, the newest iPhone’s details), but if they’re asked to download and install an application on their own Windows computer, even my IT students reach their limits.
That’s one of the questions on my mind every day. Why don’t they know about software other than the most popular (and often not the best)? Why do they refuse alternatives even when those would benefit their financial and infrastructural situation (no money, old computers, slow internet)? And why don’t they know even the most basic things while enjoying quite modern technologies?
I assume it’s because of the very rapid and overwhelming change that the people here have experienced. Before the smartphones they only had very old computers, mostly donated or from the trash bins of the industrial countries. While we were already enjoying the internet, they had to make do with ancient machines. And on these machines Windows was preinstalled, perhaps along with applications like Photoshop and Microsoft Office. It was almost impossible for them to download OpenOffice or GIMP because landline internet (ISDN, DSL) is very uncommon here.
So they didn’t know about any alternatives and were happy to be able to use at least some applications. And here a second reason kicks in: how do you learn to use software properly? There are almost no schools that teach the usage of computers and their applications. And small companies cannot afford the expense of training their employees in IT. So the limited supply of technology is further limited by the missing knowledge. As a side note: a volunteer friend of mine here told me that he fascinated his whole workplace by showing people that there’s something like =SUM() in Excel. Before that, they wrote down long lists of numbers in Excel but calculated the totals by hand. It was a micro-finance organisation which lends small amounts of money to communities and individuals…
And then the smartphones came, and with them the companies offering mobile internet at affordable rates. People no longer rely on home computers and landline connections but can chat and surf everywhere. They’ve been given the technology but not the knowledge. Although it works like a charm, many don’t know anything about it or how it works. When one of my students asked me how I learnt web programming, I showed him how to use internet search engines properly. He was stunned the whole day by sources of knowledge like wikibooks.org. And when I told another student that apps on smartphones are just like programs on classic computers, he asked where to find the Google Play Store on his Windows laptop.
Tanzanians are not stupid and they’re not lazy. The students I referred to in this article are keen on learning new things and improving their lives. However, it’s hard for first-world people like me to understand how they behave and think about many things. To me, many people here seem somehow paralysed by the modern technologies rushing in from the industrialised nations without any education about them. So I’m still trying to find a good way to teach my students and co-teachers the importance of computer and software knowledge, as well as the benefits of Free Software.
And as another important note: not all Tanzanians are helpless when it comes to IT. I also met people who run very successful IT businesses, and some who know crazy software tricks that make my jaw drop. They somehow found a way to teach themselves, although that is very hard to do here. I hope there will be more people like this in the future. But for this, Tanzania needs more and better education, more political support for IT schools, better infrastructure and better future perspectives for workers in IT businesses. Sounds like a harsh roadmap? It is…
Thursday, 07 May 2015
Paul Boddie's Free Software-related blog » English | 14:06, Thursday, 07 May 2015
Once again, I was reading an article, became tempted to comment, and then found myself writing such a long response that I felt it would be better here. (The article is initially for subscribers of LWN.net, with which I have a subscription thanks to their generosity in giving FSFE Fellows access upon their Fellowship renewal. The article will eventually become available to everyone after one week, which is the site’s policy. Maybe this blog post will encourage you to read the article, either eventually or as a subscriber.)
It was asserted that Haskell – a statically-typed language – doesn’t need type annotations because its type inference mechanisms are usually good enough. But generally, functional programming languages have effective type inference because of other constraints described by the author of the program. Now, Python could also benefit from such an approach if the language developers were willing to concede a few important properties of the language, but up to now the view has been that if a single property of Python is sacrificed then “it isn’t Python”. That’s why PyPy has had to pursue full compatibility, why the Nuitka author (who has done a heroic job despite detractors claiming it isn’t a worthwhile job) is trying to provide full compatibility, and why things like Cython and Shedskin get pushed to one side and/or ignored by everybody as they keep adding more stuff to the language themselves, which actually isn’t going to help make programs more predictable for the most part.
In the debate, an apparently irritated BDFL was served up a statement of his own from twelve years previously on the topic of Python no longer being Python once people start to change the syntax of the language to meet new needs. What type annotations risk is Python, as people read it, becoming something very different to what they are used to and what they expect. Look at some of the examples of type annotations and apart from the shapes of the brackets, you could start to think you were looking at parameterised template code written in C++. Python’s strength is the way you can write generic code that will work as long as the values you give a part of the program can support the operations being done, and the way that non-compliant values are properly rejected through exceptions being raised. If everybody starts writing “int, int, int” everywhere, the re-usability of code will really suffer; if people still want to express the type constraints properly, the conciseness of the code will really suffer because the type annotations will be necessarily complicated.
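To make that tension concrete, here is a small sketch of my own (the function names are illustrative; the annotation style follows the typing module proposed for Python 3.5 in PEP 484):

```python
# Duck-typed Python: this works for any values supporting "+", so one
# function serves ints, floats, strings and lists alike. Non-compliant
# values are properly rejected through a TypeError being raised.
def combine(values, start):
    result = start
    for value in values:
        result = result + value
    return result

print(combine([1, 2, 3], 0))      # 6
print(combine(["a", "b"], ""))    # ab

# The same logic with concrete annotations: the signature now reads like
# parameterised template code, and callers are nudged away from the
# generic reuse that the unannotated version permitted.
from typing import List

def combine_ints(values: List[int], start: int) -> int:
    result = start
    for value in values:
        result = result + value
    return result

print(combine_ints([4, 5], 0))    # 9
```

Expressing the real constraint – “any values supporting addition” – rather than “int, int, int” would require considerably more elaborate annotations, which is precisely the conciseness cost described above.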
I think the BDFL has heard too many complaints about Python programs crashing over the years, but nobody has given him a better strategy than type annotations/declarations for dealing with such criticism. But then again, the dominant conservatism of Python language and CPython implementation development has perhaps resulted in that group being ill-equipped or ill-positioned to offer anything better. Personally, I see the necessary innovation coming from beyond the core development community, but then again, my own perspective is coloured by my own experiences and disappointments with the direction of Python.
Maybe progress – and a satisfactory remedy for negative perceptions of Python program reliability – will only be made when everybody finally has the debate about what Python really is that so many have tried to avoid having over the years. And no: “exactly what CPython 3.x happens to support” is not – and never has been – a valid circuit-breaker on that particular debate.
Wednesday, 06 May 2015
tobias_platen's blog | 19:13, Wednesday, 06 May 2015
I do not use iTunes because it is non-free software. While iTunes music is DRM-free, other media distributed through iTunes contain DRM. iTunes also has long EULAs which further restrict what you can do with the music. If you buy a CD in a store there is no such EULA, and you can pay anonymously with cash. iTunes does not accept cash; it only accepts credit cards, which tell Big Brother about your purchases, and gift cards, which are known to be insecure. Therefore I do not use iTunes; instead I buy CDs in a store or directly from artists who self-publish their music.
As an anime fan I often go to conventions where you can easily buy CDs with Japanese music (including VOCALOID music). Many VOCALOID producers sell their music online using iTunes, while others offer gratis copies for download after you have “liked” them on Facebook, which is a Monstrous Surveillance Engine. Because I don’t like surveillance, I have never used Facebook. The only way to support these artists is to buy a CD. I have also started writing my own music using only free software, and I plan to sell physical copies of my music at anime conventions.
I don’t use VOCALOID because it is non-free software with DRM and a surveillance feature. I also think that there is need for a free iTunes replacement, where one can pay anonymously with GNU Taler and download copies of the music in patent free formats such as FLAC and Ogg Vorbis.
Paul Boddie's Free Software-related blog » English | 11:13, Wednesday, 06 May 2015
A discussion on the International Day Against DRM got my attention, and instead of replying on the site in question, I thought I’d write something about it here. The assertion was that “this war has been lost“, to which it was noted that “ownership isn’t for everyone”.
True enough: people are becoming conditioned to accept that they can enjoy nice things but not have any control of them or, indeed, any right to secure them for themselves. So you have the likes of Spotify effectively reinventing commercial radio, where the interface is so soul-crushingly awful that it’s almost more convenient to call the radio station and request that they play a track. Or at least it was when I was confronted with it on someone’s smartphone fairly recently.
Meanwhile, the ignorant will happily trumpet the corporate propaganda claiming that those demanding digital rights are “communists”, when the right to own things to enjoy on your own terms has actually been taken away by those corporations and their pocket legislators. Maybe people should remember that when they’re next out shopping for gadgets or, heaven forbid, voting in a public election.
An Aside on Music
Getting older means that one can happily and justifiably regard a lot of new cultural output as inferior to what came before, which means that if one happened to stop buying music when DRM got imposed, deciding not to bother with new music doesn’t create such a big problem after all. I have plenty of legitimately purchased music to listen to already, and I didn’t need to have the potential enjoyment of any new work inconvenienced by only being able to play that work on certain devices or on somebody else’s terms.
Naturally, the music industry blames the decline in new music sales on “piracy”, but in fact people just got used to getting their music in more convenient ways, or they decided that they already have enough music and don’t really need any more. I remember how some people would buy a CD or two every weekend just as a treat or to have something new to listen to, and the music industry made a very nice living from this convenient siphoning of society’s disposable income, but that was just a bubble: the prices were low enough for people to not really miss the money, but the prices were also high enough and provided generous-enough margins for the music industry to make a lot of money from such casual purchasers while they could.
Note that I emphasised “potential” above. That’s another thing that the music business got away with for years: the loyalty of their audiences. How many people bought new material from an artist they liked only to discover that it wasn’t as good as they’d hoped? After a while, people just lose interest. This despite the effective state subsidy of the music business through public broadcasters endlessly and annoyingly playing and promoting that industry’s proprietary content. And there is music from even a few years ago that you wouldn’t be able to persuade anyone to sell you any more. It is like they don’t want your money, or at least if it is not being handed over on precisely their terms, which for a long time now has seemed to involve the customer going back and paying them again and again for something they already bought (under threat of legal penalties for “format shifting” in order to compel such repeat business).
It isn’t a surprise that the focus now is on music (and video) streaming and that actually buying media to play offline is becoming harder and harder. The focus of the content industries is on making it more and more difficult to acquire their content in ways that make it possible to experience that content on sustainable terms. Just as standard music CDs became corrupted with DRM mechanisms that bring future access to the content into doubt, so have newer technologies been encumbered with inconvenient and illegitimate mechanisms to deny people legitimate access. And as the campaign against DRM notes, some of the outcomes are simply discriminatory and shameful.
Even content that has not been “protected” has proven difficult to recover simply due to technological progress and material, cultural and intellectual decay. It would appal many people that anyone would put additional barriers around content just to maximise revenues when the risk is that the “protectors” of such content will either inadvertently (their competence not being particularly noted) or deliberately (their vindictiveness being especially noted) consign that content to the black hole of prehistory just to stop anyone else actually enjoying it without them benefiting from the act. In some cases, one would think that content destruction is really what the supposed guardians of the content actually want, especially when there’s no more easy money to be made.
Of course, such profiteers don’t actually care about things like cultural legacy or the historical record, but society should care about such things. Regardless of who paid for something to be made – and frequently it was the artist, with the publishers only really offering financing that would most appropriately be described as “predatory” – such content is part of our culture and our legacy. That is why we should resist DRM, we should not support its proponents when buying devices and content or when electing our representatives, and it is why we should try and limit copyright terms so that legacy content may stand a chance of being recovered and responsibly archived.
We owe it to ourselves and to future generations to resist DRM.
Tuesday, 05 May 2015
Nico Rikken » fsfe | 11:14, Tuesday, 05 May 2015
A couple of years ago I started using my own laptop at work under a bring-your-own-device policy, although to me it wasn’t about the device: it was about bringing a free computing platform I can trust, a GNU/Linux distribution, and many free and powerful applications, all to improve my short-term and long-term effectiveness as an engineer. This however changed my needs regarding battery life, screen brightness and form factor. And with the recent death of my Lenovo Thinkpad T60p, I wanted an upgrade, so that my current Lenovo Thinkpad T61 could become my backup machine.
Currently there are plenty of interesting developments regarding more free laptop projects, which are even destined to pass the FSF Respects Your Freedom certification. Specifically the Novena laptop, the Libreboot X200, other corebooted Thinkpads, the Librem laptop, and the EOMA68 15 inch laptop. Of this set only the Libreboot X200 and the Librem seem to provide the desired technical upgrade relative to my T61. The Librem seems more ideal, but import taxes would drastically increase the already hefty price. Eventually the Libreboot X200 seemed to be the best deal around, and that’s probably why it is used by many FSFE contributors.
However, eventually I decided to go another route, in retrospect mainly driven by the technical prospects of an even longer battery life, an even brighter screen, and an even smaller form factor. I decided to use a converted Acer C720 Chromebook, more specifically the C720P model, which brings more RAM, a larger SSD, a touchscreen and a white housing. There are also quite a number of units available second-hand, reducing the cost. Having an x86 architecture supported by SeaBIOS, the level of freedom on the C720P can be increased rather easily. Removing an internal screw allows the BIOS to be reflashed, and thanks to John Lewis, amongst others, a coreboot build with a free SeaBIOS payload is available for flashing the laptop. I must admit this laptop wouldn’t pass the FSF’s Respects Your Freedom certification, due to the Intel Management Engine and the VGA BIOS, which are on the coreboot task list.
Although I’m somewhat of a tinkerer, I left the freedomification to my Dutch FSFE Fellow Kevin Keijzer. He has flashed his own Acer C720 with coreboot, having used it since early 2014. We agreed on a fair price, as free software isn’t about free as in gratis, it’s about free as in freedom. I must admit I was pleasantly surprised by the level of service I was given. The laptop was buffed to remove scratches, reflashed, pre-installed with Ubuntu GNOME according to my specifications, configured with all the right shortcuts and device-specific configurations, and subjected to a test run to make sure everything was working correctly. As a finishing touch, to remind me about practices by other vendors preloading unwanted media, I was given my best preloaded media yet. I donated my defect T60p to Kevin’s effort on creating freedom respecting laptops from discarded Thinkpads.
Now, having two operational laptops with two slightly different use cases, I’m even more encouraged to finish my syncing setup. So far my synchronization is done using Syncthing, Mozilla Sync, my own FreeNAS build, and a remote ownCloud server, but more on that later.
Friday, 22 May 2015
bb's blog | 10:31, Friday, 22 May 2015: The Libreoffice Human Interface Guidelines (HIG) have been given a new lease on life. In the first step, generic artifacts including the vision, personas, and a UX manifesto are presented. The work on usability and design is often seen as some kind of anarchistic creativity: some people receive divine inspiration out of the blue and [...]
bb's blog | 10:31, Friday, 22 May 2015: KSysGuard underperforms in both visual and functional respects. Unfortunately it is not actively maintained, so we are looking for developers first before starting with ideas about the redesign. Recently there was a request on the forums to update KSysGuard. The dialog looks outdated and unappealing from the user perspective. From the [...]
Thursday, 30 April 2015
Seravo | 08:21, Thursday, 30 April 2015
WordCamps are casual, locally-organized conferences covering everything related to WordPress, the free and open source personal publishing software that powers over 75 million sites on the web.
In May of 2015, WordCamp will finally have its debut in Finland. The event is set to take place at the home base of Seravo in Tampere.
WordCamps come in all different flavours, based on the local communities that produce them. In general, WordCamps include sessions on how to use WordPress more effectively, plugin and theme development, advanced techniques and security. Conference talks can also dig into such topics as how WordPress can be used in marketing or media businesses.
WordCamps are attended by people ranging from blogging newbies to professional WordPress developers and consultants, and usually combine scheduled programming with unconference sessions and other activities.
This year’s WordCamp Finland will be held on May 8–9. The conference will bring together more than two hundred WordPress enthusiasts. Seravo is proud to participate by sponsoring the event and helping with the organising effort. Tickets to the event have already sold out, but you can always participate in the discussions by following @WordCampFinland and #wcfi on Twitter.
See you there!
More information: WordCamp Finland 2015
Tuesday, 28 April 2015
I love it here » English | 05:36, Tuesday, 28 April 2015
Did you try to share several URLs at once on Android before? Until now I copied and pasted each one of the links step-by-step into an e-mail or a text. While checking F-Droid for new programs last month, I discovered bulkshare, which offers an easier way to achieve this task.
First you share each of the links with bulkshare through Android’s share menu. Then you open bulkshare and re-share them with another program. In this step you can choose which of the links you want to share (all of them by default).
This way you can share the link list for example with K-9 mail or other programs, edit the text around it and send it out.
Thanks to the author Alex Gherghișan for this nice program.
Thursday, 23 April 2015
I love it here » English | 12:22, Thursday, 23 April 2015
We are currently wrapping up the PDFreaders campaign, and we need your help to measure our success.
Started in 2009, FSFE’s goal with the campaign was to get rid of advertisements for proprietary PDF readers. We focused on the websites of public administrations, and many people helped us gather contact details for over 2000 public websites which advertised non-free software. Many people helped us contact the public administrations; governments were made aware of the issue and published guidelines. So far we know that 772 of the 2110 bugs have been fixed, a 36% success rate.
But for most countries we have not checked the status for several months. That’s why we need your help now to make one final round. We are looking for volunteers who can help us check websites in their native language.
Here is a step-by-step guide:
- Check the etherpad to see if someone is already working on your country list
- If yes, please coordinate directly who takes care of what, so you do not waste your time
- If no, please indicate in the pad that you start to work on it.
- For each web page listed on the page or in the XML file, go to the web page and check whether there is still an advertisement for non-free PDF readers
- If yes, keep the bug open.
- If no, use your favourite search engine with a query like: “site:DomainNameOfOrganisation.TLD adobe acrobat pdf reader”
- If you have no results, close the bug by adding the current date in the “closed” field in the xml file
- If you have some results and there is still advertisement without Free Software PDF readers also being listed, leave the bug open and change the link in the “institution-url” field to one of the results you just found.
- If the link is broken, use the query from the point above
- If you have some results and there is just advertisement for non-free PDF readers, replace the broken URL with a new one in the “institution-url” field.
- If you have no result close the bug by adding the current date in the “closed” field.
- When you have finished updating, please inform others by updating the status on the public pad and send the XML file to our web team.
- Now, you have all our gratitude. Thank you very much!
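When checking a page by hand, a quick grep over a saved copy of the page can pre-screen for reader adverts. This is merely a suggested shortcut, not part of the official instructions; the patterns below are illustrative and a manual check is still needed:

```shell
#!/bin/sh
# Sketch: pre-screening a saved web page for non-free PDF reader adverts.
# The sample page and the grep patterns are illustrative only.

page=/tmp/sample-page.html
cat > "$page" <<'EOF'
<p>Download <a href="http://get.adobe.com/reader/">Adobe Reader</a>
to view our PDF documents.</p>
EOF

if grep -Eiq 'adobe (acrobat|reader)|get\.adobe\.com' "$page"; then
  echo "possible non-free PDF reader advert - keep the bug open"
else
  echo "no advert found - candidate for closing the bug"
fi
```

A hit only means the page deserves a closer look; the bug should still be closed or kept open according to the steps above.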
Afterwards we will send an update about how many institutions removed the advertisement, and what else we achieved together with you in the campaign.
Monday, 20 April 2015
Told to blog - Entries tagged fsfe | 19:16, Monday, 20 April 2015
This post was originally written as an email to the mailing list of the FSFE web team. In the text I describe my progress in rewriting the build system and integrating campaign sites. I also provide an overview about the road map and give some leads on how to run a local web site build.
Introducing: The build scripts
This mail is particularly long. Unfortunately I'm not really the blogger type, so I'll have to use your email inbox to keep you posted about what happens on the technical side. Today I'm going to post some command line examples that will help you perform local builds and test your changes to the web site. This email will also provide technical background about what happens behind the scenes when the build is performed.
But first things first...
External site integration
As announced in the last report we moved pdfreaders.org and drm.info to the main web server where the sites are served as part of the main web site.
Did you know? Instead of accessing the domains pdfreaders.org or drm.info, you can also go to http://fsfe.org/pdfreaders or http://fsfe.org/drm.info/.
This means the sites are also integrated in the FSFE test builds, and parts of the FSFE web site, such as news lists, can be integrated into the pages. The build process can be observed together with the main build at http://status.fsfe.org.
Production/Test script unification
We used to have different scripts for building the test branch and the production branch of the website. This means the four files build.sh, build.pl, build-test.sh, and build-test.pl were each present in the test branch and the trunk. As a result, there was not only a logical difference between the build.* and build-test.* scripts, but also between the build.* scripts in test and trunk and the build-test.* scripts in test and trunk; in effect we had to consider four sets of build scripts.
Obviously the existence of the scripts in different branches should by itself capture all version differences, and should be enough for testing too.
Since the only intended difference between the scripts was a set of hard-coded file system paths, I added support for call parameters to select the different build flavours. This allowed me to unify the scripts, so the test and production builds now work in identical ways.
Paths are also no longer hard-coded, allowing for local builds. For example, to perform a site build in your home directory you can issue a command like:
~/fsfe-web/trunk/tools/build.sh -f -d ~/www_site -s ~/buildstatus
NOTE: Don't trust the process to be free of side effects! I recommend you set up a distinct user in your system to run the build!
Another note: you will require some perl libraries for the build and some LaTeX packages for building the pdf files included in the site. These are requirements of the old build; I aim to document the exact dependencies for my reimplementation of the build process.
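To illustrate the flavour selection, here is a minimal sketch of how such call parameters might be parsed. The flag names (-f for a full build, -d for the destination, -s for the status directory) mirror the example invocation above; the internals shown are my assumption, not the actual build.sh code:

```shell
#!/bin/sh
# Sketch: flavour selection via call parameters in a unified build script.
# Flag semantics mirror the example invocation; internals are assumed.

parse_flags() {
  BUILD_MODE=differential
  DEST_DIR=""
  STATUS_DIR=""
  OPTIND=1
  while getopts "fd:s:" opt; do
    case "$opt" in
      f) BUILD_MODE=full ;;      # force an unconditional rebuild
      d) DEST_DIR=$OPTARG ;;     # output tree for the generated site
      s) STATUS_DIR=$OPTARG ;;   # directory for build status files
    esac
  done
}

parse_flags -f -d "$HOME/www_site" -s "$HOME/buildstatus"
echo "mode=$BUILD_MODE dest=$DEST_DIR"
```

Selecting flavours by parameter rather than by file name is what lets a single script serve the test build, the production build, and local builds alike.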
Reimplementing the build scripts
Our goals here are
- to be able to extend the build scripts by other functions
- to speed up the build
- to simplify local builds
Of course it is practical to start from a point where we don't have to fundamentally change the way our site is set up. That's why I'd like the new build process to work with all the existing data. Basically it should build the site in a very similar way to the scripts we use today.
Which brings us to:
Understanding the build
The first part of the above-mentioned build system is a shell script (tools/build.sh) that triggers the build and does some set-up and job control. The actual workhorse is the perl script, tools/build.pl, which is called by the shell script. On our webserver the shell program is executed every few minutes by cron and tries to update the website source tree from SVN. If there are updates, the build is started. Unconditional builds are run only once a night.
The first stage of the site build is performed by make, which is also called from the shell script. You can find makefiles all over the SVN tree of our website and it is easy to add new make rules to them. Existing make rules call the LaTeX interpreter to generate pdf documents from some of our pages, assemble parts of the menu from different pages, build includable XML items from news articles, etc. The initial make run on a freshly checked out source tree will take a long time, easily more than an hour. Subsequent runs only touch updated files and run through very quickly.
The perl script then traverses the source tree of our web site. Its goal is to generate an html file for each xhtml file in the source. Since not every source document is available in every language, there will be more html files on the output side than there are xhtml files in the source. Missing translations are padded with a file that contains the navigation menu and a translation warning in the respective language, plus the document text in English.
Each HTML file is generated from more than just its corresponding xhtml source! The basic processing mechanism is XSLT, and build.pl automatically selects the XSL processing rules for each document depending on its location and file name. The input to the XSL processor is assembled from various files, including the original xhtml file, translation files, menu files and other referenced files such as XML news items.
Assembling this input tree for the XSLT processor is the most complex task performed by the perl script.
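As an illustration, the assembled processor input can be thought of as a wrapper document that pulls the source page and its translation strings together. The sketch below builds such a wrapper; the element names (buildinfo, document, textset) are hypothetical stand-ins, not the actual names used by build.pl:

```shell
#!/bin/sh
# Sketch: building the aggregate XML input fed to the XSLT processor.
# Wrapper element names are hypothetical, not those used by build.pl.

assemble_input() {  # $1 = source xhtml, $2 = translation file, $3 = language
  printf '<buildinfo language="%s">\n' "$3"
  printf '  <document>%s</document>\n' "$(cat "$1")"
  printf '  <textset>%s</textset>\n' "$(cat "$2")"
  printf '</buildinfo>\n'
}

# Demonstration with tiny stand-in files:
echo '<p>Hello</p>' > /tmp/page.en.xhtml
echo '<text id="menu">Menu</text>' > /tmp/texts.en.xml
out=$(assemble_input /tmp/page.en.xhtml /tmp/texts.en.xml en)
echo "$out"
```

The XSL rules then see one coherent input tree per page instead of having to fetch the pieces themselves.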
The reimplementation of the build script drops the perl dependency and happens entirely in shell script. It uses xsltproc as the main XSLT processor. The program is already used during the make stage of the build and replaces the perl module which was previously required. The new shell script makes extensive use of shell functions and is much more modular than the old builder. I started by rewriting the functions that assemble the processor input, hence the name of the shell script: build/xmltree.sh. At some point I will choose a more expressive name.
While the perl script went out of its way to glue XML snippets onto each other, a much larger portion of the rewrite is devoted to resolving interdependencies. With each site update we want to process as little source material as possible, rebuilding only those files that are actually affected by the change. The make program provides a good basis for resolving those requirements. The new script is built around generating make rules for the entire site. Make can then perform an efficient differential update, setting up parallel processing threads wherever possible. The backend functions are provided to make, again, by the same shell script.
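The rule-generation idea can be sketched as follows: for every source page, emit a make rule whose prerequisites are the files the page depends on, then let make do the differential work. The rule layout and recipe here are simplified illustrations, not the actual output of xmltree.sh:

```shell
#!/bin/sh
# Sketch: emitting one make rule per page so that make can drive
# differential rebuilds. Rule layout and recipe are simplified.

emit_rule() {        # $1 = source xhtml file, $2 = an extra dependency
  page=${1%.xhtml}
  printf '%s.html: %s %s\n' "$page" "$1" "$2"
  printf '\txsltproc rules.xsl %s > $@\n' "$1"   # simplified recipe
}

rules=$(emit_rule index.en.xhtml tools/texts-en.xhtml)
echo "$rules"
```

Once such rules exist for the whole site, an invocation like make -j4 rebuilds only the pages whose prerequisites changed, in parallel where possible.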
To perform a site build in your home directory using the new script you can run:
~/fsfe-web/trunk/build/xmltree.sh -s ~/buildstatus build_into ~/www_site/
Try building only a single page for review:
cd ~/fsfe-web/trunk
./build/xmltree.sh process_file index.it.xhtml > ~/index.it.html
Note, when viewing the result in a browser, that some resources are referenced live from fsfe.org, making it hard to review e.g. changes to CSS rules. This is a property of the XSL rules, not of the build program. You can set up a local web server and point fsfe.org to your local machine via /etc/hosts if you want to perform more extensive testing.
xmltree.sh provides some additional functions. View build/xmltree.sh --help to get a more complete overview. If you are bold enough you can even source the script into your interactive shell and use its functions directly. This might help you debug some more advanced problems.
The new script still has some smaller issues with regard to compatibility with the current build process.
- Source files are not yet copied to the destination tree (the old script does this to help translators)
- The XSL rule files for generating RSS and ICS output are not yet executed, resulting in RSS and ICS files not being built automatically. (manual build is possible)
- Documents not present in english are not yet built at all.
While make drastically limits the processing work required for a page rebuild, the generation of the make rules alone still takes a lot of time. I intend to relieve this problem by making use of the SVN log.
SVN update provides clear information about the changes made to the source tree, making it possible to limit even the rule generation to a very small subset.
In addition it may be possible to refine the dependency model further, resulting in even fewer updates.
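The idea could be sketched as follows: parse the file list printed by svn update and generate rules only for the affected pages. The update output below is simulated, and the filtering logic is my assumption of how this could work:

```shell
#!/bin/sh
# Sketch: restricting rule generation to files reported as changed by
# "svn update". The update output is simulated here.

svn_output='U    news/index.en.xhtml
A    about/team.en.xhtml
U    look/fsfe.css'

# Keep only updated (U) or added (A) xhtml sources; only these pages
# need their make rules regenerated.
changed=$(printf '%s\n' "$svn_output" \
  | awk '$1 == "U" || $1 == "A" { print $2 }' \
  | grep '\.xhtml$')

echo "$changed"
```

In this simulated run only the two xhtml pages survive the filter, so rule generation would skip the rest of the tree entirely.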
Some files have an unduly large part of the site depending on them. Even a minor update to the file "tools/texts-en.xhtml", for example, would result in a rebuild of almost every page, which in the new build script is actually slower than before. For this reason, if not for a hundred others, this file should be split into local sections, which in turn requires a slight adaptation of the build logic and possibly some XSLT files.
documentfreedom.org never benefited much from the development on the main web site. The changes to the XSL files, as well as to the build script, should be applied to the DFD site as well. This should happen before the start of next year's DFD campaign.
Currently our language support is restricted to the space provided by ISO 639-1 language tags. During the most recent DFD campaign we came to perceive this as a limitation when working with concurrent Portuguese and Brazilian translations. Adding support for the more distinct RFC 5646 (IETF) language tags requires some minor adaptations to the build script. The IETF language codes extend the currently used two-letter codes by adding refinement options while maintaining compatibility.
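For illustration, an IETF tag such as pt-BR can fall back to its ISO 639-1 primary subtag when no regional translation exists. The sketch below shows such a lookup; the fallback policy is an assumption of mine, not the current build behaviour:

```shell
#!/bin/sh
# Sketch: resolving an RFC 5646 tag to an available translation,
# falling back from e.g. "pt-BR" to the primary subtag "pt".

available="en de pt pt-BR fr"

resolve_lang() {                         # $1 = requested tag
  for cand in "$1" "${1%%-*}"; do        # exact tag, then primary subtag
    for have in $available; do
      [ "$have" = "$cand" ] && { echo "$cand"; return 0; }
    done
  done
  echo en                                # last resort: English
}

echo "pt-PT -> $(resolve_lang pt-PT)"
echo "pt-BR -> $(resolve_lang pt-BR)"
```

This keeps compatibility: a site that only knows "pt" still serves Portuguese readers, while a site with a "pt-BR" translation can serve it where requested.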
To help contributors set up local builds, the build script should check its own dependencies at start-up and give hints about which packages to install.
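Such a self-check could be as simple as probing for each required tool with command -v. The list of tools passed in would be based on the build's actual requirements; the function below is a sketch, not existing code:

```shell
#!/bin/sh
# Sketch: dependency self-check at script start. The caller supplies
# the tool list; the reporting format is an assumption.

check_deps() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  if [ -n "$missing" ]; then
    echo "Missing tools:$missing - please install them first." >&2
    return 1
  fi
  return 0
}

check_deps sh sed && echo "all dependencies found"
```

Running this before any real work turns a cryptic mid-build failure into a clear installation hint for contributors.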
DanielPocock.com - fsfe | 12:15, Monday, 20 April 2015
It was a great opportunity to meet more people for the first time and share ideas.
Unfortunately, we had some wifi problems that delayed the demonstration but we did eventually see it work successfully towards the end of the talk.
Friday, 22 May 2015
bb's blog | 10:31, Friday, 22 May 2015: The Libreoffice UX team presents ideas for an updated workflow to insert charts. We show mockups of how the dialog may look and ideas on how to tweak your graph to perfection. Probably everyone working with Libreoffice Calc uses charts. And maybe some have wondered why the dialog to insert a chart has a wizard-like workflow [...]
Thursday, 16 April 2015
DanielPocock.com - fsfe | 17:48, Thursday, 16 April 2015
The date scheduled for the jessie release, 25 April 2015, is also ANZAC day and the 100th anniversary of the Gallipoli landings. ANZAC day is a public holiday in Australia, New Zealand and a few other places, with ceremonies remembering the sacrifices made by the armed services in all the wars.
Gallipoli itself was a great tragedy. Australian forces were not victorious. Nonetheless, it is probably the most well remembered battle from all the wars. There is even a movie, Gallipoli, starring Mel Gibson.
It is also the 97th anniversary of the liberation of Villers-Bretonneux in France. The previous day had seen the world's first tank vs tank battle between three British tanks and three German tanks. The Germans won and captured the town. At that stage, Britain didn't have the advantage of nuclear weapons, so they sent in Australians, and the town was recovered for the French. The town has a rue de Melbourne and rue Victoria and is also the site of the Australian National Memorial for the Western Front.
It's great to see that projects like Debian are able to span political and geographic boundaries and allow everybody to collaborate for the greater good. ANZAC day might be an interesting opportunity to reflect on the fact that the world hasn't always enjoyed such community.
Monday, 13 April 2015
Mario Fux | 19:29, Monday, 13 April 2015
The dates for the sixth edition of the Randa Meetings are set: Sunday, 6th to Sunday 13th of September 2015. The first Sunday will be the day of arrival and the last Sunday accordingly the day of departure.
So what about you? If you know about Qt and touch gesture support, want to bring your KDE application to Android and Co, plan to work on KDE infrastructure for mobile systems, are a UI or UX designer for mobile and touch interfaces, want to make your software more accessible or just want to work on your already ported KDE application please register as soon as possible on our Sprints page.
The registration is open until the 13th of May 2015. Please add your estimated travel cost and what you plan to work on in Randa this September. You don’t need to include any accommodation costs as we organize this for you (see the Randa Meetings wiki page for further information about the building). After this date we will present a budget and work on a fundraiser (together with you) to make it possible for as many people as possible to come to Randa.
If there are any questions or further ideas don’t hesitate to contact me via email or on freenode.net IRC in #randa.
Friday, 10 April 2015
Seravo | 10:27, Friday, 10 April 2015
OpenFOAM (Open source Field Operation And Manipulation) is a numerical CFD (Computational Fluid Dynamics) solver and a pre/postprocessing software suite.
Special care has been taken to enable automatic parallelization of applications written using OpenFOAM high-level syntax. Parallelization can be further extended by using a clustering software such as OpenMPI that distributes simulation workload to multiple worker nodes.
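Domain decomposition for such a parallel run is typically configured in the case's system/decomposeParDict file. A minimal fragment might look like the following; the values shown are illustrative, not taken from a specific case:

```
numberOfSubdomains 4;

method          simple;

simpleCoeffs
{
    n           (2 2 1);   // split the domain 2 x 2 x 1
    delta       0.001;
}
```

After decomposition, the solver can be launched across the subdomains through the MPI runtime, with each worker handling one piece of the mesh.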
Pre/post-processing tools like ParaView enable graphical examination of the simulation set-up and results.
A parallel project, OpenFOAM-extend, is a fork maintained by Wikki Ltd that provides a large collection of community-generated code contributions that can be used with the official OpenFOAM version.
What does it actually do?
OpenFOAM is aimed at solving continuum mechanical problems. Continuum mechanics deals with the analysis of kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles.
OpenFOAM has an extensive range of features to solve complex gas/fluid flows involving chemical reactions, turbulence, heat transfer, solid dynamics, electromagnetics and much more!
The software suite is used widely in the engineering and scientific fields concerning simulations of fluid flows in pipes, engines, combustion chambers, pumps and other diverse use cases.
How is it used?
In general, the workflow adheres to the following steps:
- physical modeling
- input mesh generation
- visualizing the input geometry
- setting simulation parameters
- running the simulation
- examining output data
- visualizing the output data
- refining the simulation parameters
- rerunning the simulation to achieve desired results
Later we will see an example of a 2d water flow simulation following these steps.
What can Seravo do to help a customer running OpenFOAM?
Seravo can help your organization by building and maintaining a platform for running OpenFOAM and related software.
Our services include:
- installing the host platform OS
- host platform security updates and maintenance
- compiling, installing and updating the OpenFOAM and OpenFOAM-extend suites
- cluster set-up and maintenance
- remote use of visualization software
Seravo has provided the above-mentioned services in building a multinode OpenFOAM cluster for its customers.
OpenFOAM example: a simplified laminar flow 2d-simulation of a breaking water dam hitting an obstacle in an open container
N.B. Some steps are omitted for brevity!
Input files for a simulation are ASCII text files in a defined open format.
Inside the working directory of a simulation case, we have many files defining the simulation environment and parameters, for example (click filename for sample view):
- defines the physical geometries; walls, water, air
- simulation parameters that define the time range and granularity of the run
- defines material properties of air and water used in simulation
- numerous other control files define properties such as gravitational acceleration, physical properties of the container materials and so on
In this example, the simulated timeframe will be one second, with an output snapshot every 0.01 seconds.
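The time range and output granularity live in the case's system/controlDict. A fragment consistent with a one-second run and 0.01 s snapshots might look like this; the time step and any values beyond those stated above are assumptions:

```
application     interFoam;

startTime       0;
endTime         1;          // simulate one second
deltaT          0.001;      // solver time step (assumed)

writeControl    adjustableRunTime;
writeInterval   0.01;       // output snapshot every 0.01 s
```

With adjustableRunTime write control, the solver may adapt its internal time step but still writes output exactly at the requested intervals.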
After the input files have been massaged to the desired consistency, commands are executed to check and process them for the actual simulation run:
- process input mesh (blockMesh)
- initialize input conditions (setFields)
- optional: visually inspect start conditions (paraFoam/paraview)
The solver application in this case will be the OpenFOAM-provided “interFoam”, a solver for two incompressible fluids that tracks the material interface and mesh motion.
After setup, the simulation is executed by running the interFoam command (sample output).
After about 40 seconds, the simulation is complete and results can be visualized and inspected with ParaView:
And here is a fancy gif animation of the whole simulation output covering one second of time:
Thursday, 09 April 2015
DanielPocock.com - fsfe | 15:05, Thursday, 09 April 2015
This is the first time we've flown out to Australia with Etihad and it may also be the last.
We were due to fly back into Europe at CDG and head down to Lyon for the mini-DebConf this weekend.
Let's look at how our Etihad experience worked out:
21:00 UTC Tuesday - waking up on Wednesday morning in Melbourne (UTC+10)
13:00 UTC Wednesday - leaving Melbourne about 11pm Wednesday night, a 12-13 hour flight to Abu Dhabi. We had heard about the air traffic control strikes in France (where we are going) and asked the airline if we should fly and they told us everything would be OK.
02:30 UTC Thursday - touchdown in Abu Dhabi, 6:30 am local time. Go to the transfer counter to ask for our boarding passes to CDG. At this stage, we were told that the connecting flight to CDG had been delayed 20 hours due to French strikes. As we are trying to reach Lyon for the mini-DebConf this weekend, we asked if we could leave Abu Dhabi on a 09:00 flight to Geneva. The Etihad staff told us to contact our travel agent (the flight was booked through Expedia) and for the next hour everybody's time was wasted making calls to Expedia who kept telling us to speak to Etihad. Whenever the Etihad customer service staff tried to speak to Expedia, the Expedia call center would hang up.
Eventually, the Etihad staff told us that the deadline for putting us on the Geneva flight had passed and we would be stuck in Abu Dhabi for at least 20 hours.
For flights to and from Europe, airlines have a responsibility to arrange hotels for passengers if there is a lengthy delay. If the airline is at fault, they must also pay some extra cash compensation but for a strike situation that is not applicable.
Etihad has repeatedly fobbed us off. Initially we were given vouchers for Burger King or a pizza slice and told to hang around the transfer counter.
By about 12:00 UTC (4pm local time, nine hours of waiting around the transfer counter) there was still no solution. One passenger was so upset that airport security were called to speak to him and he was taken away. The airline staff kept giving excuses. Some passengers had been sent to a hotel but others were left behind. I asked them again about our hotel and they kept trying to fob me off.
Faced with the possibility that I would miss two nights of sleep and eight hours time difference coming into Europe, I continued asking the Etihad staff to own up to their responsibilities and they eventually offered us access to their airport lounge. We discovered some other passengers in the lounge too, including the passenger who had earlier been escorted away by security.
This is unlike anything we've experienced with any other airline.
At every opportunity (the check-in at Melbourne, or when the Geneva flight was boarding), the airline has failed to make arrangements that would have avoided cost and inconvenience.
Assuming the flight goes ahead with a 20 hour delay, we will arrive in CDG some time Friday morning and not really sleep in a proper bed again until Friday night, about 70 hours after getting up in Melbourne on Wednesday morning. Thanks Etihad, you are a waking nightmare.
The airline has been evasive about how they will deal with our onward travel from CDG to Lyon. We had booked a TGV train ticket already but it is not valid after such a long delay and it seems quite possible that trains will be busier than usual thanks to the air traffic control strike. So we don't even know if we will be loitering around a Paris airport or railway station for hours on Friday and nobody from the airline or Expedia really seems to care.
The only conclusion I can reach from this experience is that Etihad can't be trusted, certainly not for long journeys such as Australia to Europe. Having flown through Singapore, Kuala Lumpur and Hong Kong, I know that air passengers have plenty of options and that many airlines do go the extra mile to look after passengers, especially on such long journeys. Etihad missed chances to re-route us at every turn. It looks like they help some passengers (like those who did get to hotels) but leave many others high and dry just to stay within their budget.
Wednesday, 08 April 2015
Paul Boddie's Free Software-related blog » English | 16:37, Wednesday, 08 April 2015
It is hard to believe that almost two years have passed since I criticised the Ubuntu Edge crowd-funding campaign for being a distraction from true open hardware initiatives (becoming one which also failed to reach its funding target, but was presumably good advertising for Ubuntu’s mobile efforts for a short while). Since then, the custodians of Ubuntu have pressed on with their publicity stunts, the most recent of which involving limited initial availability of an Ubuntu-branded smartphone that may very well have been shipping without the corresponding source code for the GPL-licensed software being available, even though it is now claimed that this issue has been remedied. Given the problems with the same chipset vendor in other products, I personally cannot help feeling that the matter might need more investigation, but then again, I personally do not have time to chase up licence compliance in other people’s products, either.
Meanwhile, some genuine open hardware initiatives were mentioned in that critique of Ubuntu’s mobile strategy: GTA04 is the continuing effort to produce a smartphone that continues the legacy of the Openmoko Neo FreeRunner, whose experiences are now helping to produce the Neo900 evolution of the Nokia N900 smartphone; Novena is an open hardware laptop that was eventually successfully crowd-funded and is in the process of shipping to backers; OpenPandora is a handheld games console, the experiences from which have since been harnessed to initiate the DragonBox Pyra product with a very similar physical profile and target audience. There is a degree of collaboration and continuity within some of these projects, too: the instigator of the GTA04 project is assisting with the Neo900 and the Pyra, for example, partly because these projects use largely the same hardware platform. And, of course, GNU/Linux is the foundation of the software for all this hardware.
But in general, open hardware projects remain fairly isolated entities, perhaps only clustering into groups around particular chipsets or hardware platforms. And when it comes to developing a physical device, the amount of re-use and sharing between projects is perhaps less than we might have come to expect from software, particularly Free Software. Not that this has necessarily slowed the deluge of boards, devices, products and crowd-funding campaigns: everywhere you look, there’s a new Arduino variant or something claiming to be the next big thing in the realm of the “Internet of Things” (IoT), but after a while one gets the impression that it is the same thing being funded and sold, over and over again, with the audience probably not realising that it has all mostly been done before.
The Case for Modularity
Against this backdrop, there is one interesting and somewhat unusual initiative that I have only briefly mentioned before: the development of the EOMA-68 (Embedded Open Modular Architecture 68) standard along with products to demonstrate it. Unlike the average single-board computer or system-on-module board, EOMA-68 attempts to define a widely-used modular computing unit which is also a complete computing device, delegating input (keyboards, mice, storage) and output (displays) to other devices. It has often been repeated that today phones are just general-purpose computers that happen to be able to make calls, and the same can be said for a lot of consumer electronics equipment that traditionally were either much simpler devices or which only employed special-purpose computing units to perform their work: televisions are a reasonably illustrative example of this.
And of course, computers as we know them come in all shapes and sizes now: phones, media players, handhelds, tablets, netbooks, laptops, desktops, workstations, and so on. But most of these devices are not built to be upgraded when the core computing part of them becomes obsolete or, at the very least, less attractive than the computing features of newer devices, nor can the purchaser mix and match the computing part of one device with the potentially more attractive parts of another: one kind of smart television may have a much better screen but a poorer user interface that one would want to replace, for example. There are workarounds – some people use USB-based “plug computers” to give their televisions “smart” capabilities – but when you buy a device, you typically have to settle for the bundled software and computing hardware (even if the software might eventually be modifiable thanks to the role of the GPL, subject to constraints imposed by manufacturers that might prevent modification).
With a modular computing unit, the element of choice is obviously enhanced, but it also helps those developing open hardware. First of all, the interface to the computing unit is well-defined, meaning that the designers of a device need not be overly concerned with the way the general-purpose computing functionality is to be provided beyond the physical demands of that particular module and the facilities provided by it. Beyond such constraints, being able to rely on a tested functional element, designers can focus on the elements of their device that differentiate it from other devices without having to master the integration of their own components of interest with those required for the computing functionality in one “make or break” hardware design that might prove too demanding to get right first time (or even second or third time). Prototyping complicated circuit designs can quickly incur considerable costs, and eliminating complexity from what might be described as the “peripheral board” – the part providing the input and output capabilities and the character of a particular device – not only reduces the risk of getting things wrong, but it could make the production of that board cheaper, too. And that might open up device design to a broader group of participants.
As Nico Rikken explains, EOMA-68 promises to offer benefits for hardware designers, software developers and customers. Modularity only makes sense if properly considered, which is perhaps why other modularity initiatives like Phonebloks have plenty of critics even though they share the same worthy objectives of reducing waste and avoiding device obsolescence: with vague statements about modularity and the hint of everything being interchangeable and interoperating with everything, one cannot help but be skeptical about the potential complexity and interoperability problems that could result, not to mention the ergonomic issues that most people can easily relate to. By focusing on the general-purpose computing aspect of modularity, EOMA-68 addresses the most important part of the hardware for Free Software and delegates modularity elsewhere in the system to other initiatives that do not claim to do it all.
A Few False Starts
Unfortunately, not everything has gone precisely according to schedule with EOMA-68 so far. After originally surfacing as part of an initiative to make a competitive ARM-based netbook, the plan was to make computing modules and “engineering boards” on the way to delivering a complete product, and the progress of the first module can be followed on the Allwinner A10 news page on the Rhombus Tech Web site. From initial interest from various parties at the start of 2012, and through a considerable amount of activity, by May 2013, working A10 boards were demonstrated running Debian Wheezy. And a follow-up board employing the Allwinner A20 instead of the A10 was demonstrated running Debian at the end of October 2014 as part of a micro-desktop solution.
One might have thought that these devices would be more widely available by now, particularly as development began in 2012 on a tablet board to complement the computing modules, with apparently steady progress being made. The development of this tablet was driven by the opportunity to collaborate with the Vivaldi tablet project, whose own product had been rendered unusable for Free Software by the contract manufacturer's usual behind-the-scenes product iteration: changing the components in use without notice (as is often experienced by those buying computers to run Free Software operating systems, only to discover that the wireless chipset, say, is no longer one that is supported by Free Software). With this increased collaboration with KDE-driven hardware initiatives (Improv and Vivaldi), efforts seemingly became directed towards satisfying potential customers within the framework of those other initiatives: to acquire the micro-engineering board one would purchase an Improv board instead, and to obtain a complete tablet product one would place an advance order for the Vivaldi tablet instead of anything previously under development.
Somehow during 2014, the collaboration between the participants in this broader initiative appears to have broken down, with there undoubtedly being different perspectives on the sequence of events that led to the cancellation of Improv and Vivaldi. Trawling the mailing list archives gives more detail but not much more clarity, and it can perhaps only be said that mistakes may have been made and that everybody learned new things about certain aspects of doing business with other people. The effect, especially in light of the deluge of new and shiny products for casual observers to purchase instead of engaging in this community, and with many people presumably being told that their Vivaldi tablet would not be shipping after all, was probably that many people lost interest and, indeed, hope that there would be anything worth holding out for.
The Show Goes On
One might have thought that such a setback would have brought about the end of the initiative, but its instigator shows no sign of quitting, probably because genuine hardware has been made, and other opportunities and collaborations have been created on the way. Initially, the focus was on an ARM-based netbook or tablet that would run Free Software without the vendor neglecting to provide the complete corresponding source for things like the Linux kernel and bootloader required to operate the device. This requirement for licence compliance has not disappeared or diminished, with continuing scrutiny placed on vendors to make sure that they are not just throwing binaries over the wall.
But as experience was gained in evaluating suitable CPUs, it was not only ARM CPUs that were found to have the necessary support characteristics for software freedom as well as for low power consumption. The Ingenic jz4775, a sibling of the rather less capable jz4720 used by the Ben NanoNote, uses the MIPS architecture and may well be fully supported by the mainline Linux kernel in the near future; the ICubeCorp IC1T is a more exotic CPU that can be supported by Free Software toolchains and should be able to run the Linux kernel in addition to Android. Alongside these, the A20 remains the most suitable of the options from Allwinner, whose products have always been competitively priced (which has also been a consideration), but there are other ARM derivatives that would be more interesting from a vendor cooperation perspective, notably the TI AM389x series of CPUs.
Meanwhile, after years of questions about whether a crowd-funding campaign would be started to attract customers and to get the different pieces of hardware built in quantity, plans for such a campaign are now underway. While initial calls for a campaign may have been premature, I now think that the time is right: people have been using the hardware already produced for some time, and considerable experience has been amassed along the way; the risks should be substantially lower than those of quite a few other crowd-funding campaigns that seem to be approved and funded these days. Not that anyone should seek to conceal the nature of crowd-funding and the in-built element of risk associated with such campaigns, of course: it is not the same as buying a product from a store.
Nevertheless, I would be very interested to see this hardware being made, and I am even on record as having said so. Part of this is selfishness: I could do with some newer, quieter, less power-consuming hardware. But I also think that a choice of different computing modules, supporting Free Software operating systems out of the box, with some of them candidates for FSF endorsement, and offering a diversity of architectures, would be beneficial to a sustainable computing infrastructure in the longer term. If you also think so, maybe you should follow the progress of EOMA-68 over the coming weeks and months, too.
Friday, 22 May 2015
bb's blog | 10:31, Friday, 22 May 2015 The LibreOffice UX team presents two proposals on how to access shapes from the sidebar, with the goal of unifying the look and feel of LibreOffice tools and gaining more space for additional categories of shapes. LibreOffice Draw has been somewhat neglected lately. But we didn’t forget it and started to improve it [...]