Russel Winder's Website

N900 Upgrade -- Joke?

My N900 is telling me there is a Maemo upgrade, and the list of new features looks enticing. However, if you try the upgrade, it goes through the first few stages of preparation and then stops, telling you that you can't do it over the air: you have to download the install file onto a PC and use the USB connection to do the upgrade.

When you read the instructions, they tell you a whole collection of bizarre things you have to do to get the upgrade to work. OK, so it seems you have to have a PhD in Linux administration to upgrade a Maemo N900. What is actually worse are the warnings about how the upgrade will completely annihilate all data on the phone. That isn't going to win friends and influence people -- at least not positively. It is making me think twice about doing the upgrade at all, even though I back up my data on the phone and then copy the backup file out to my fileserver. The thought of being left with a phone that doesn't work looms large.

If this is the upgrade strategy then this phone is definitely not a general consumer device; it is a programmer's toy. I wonder if Android phones are as bad?

The US Patent System in Flux?

The US Supreme Court today issued its opinion in the case most people know as "In Re Bilski". Although there is actually much more to it, this case is about whether business processes can be patented. A lot of people (mostly large corporates) have a lot riding on the patentability of business processes, so this is an important opinion: it has the potential to fundamentally change the US patent system. The software industry is interested in this case because of the side issue of whether software is patentable. Currently the US system allows software to be patented, but many people think this is wrong. The UK patent system technically does not allow software patents, but is coming under increasing pressure (mostly from big US corporates) to allow them. The EU patent system appears already to have succumbed to pressure (mostly from big US corporates) to allow software patents, which leads to real problems in the EU since there is conflict between the two systems. The correct outcome would be for the EU Patent Office to agree that software itself cannot be patented and for the UK Patent Office to reaffirm that this is its position.

Why are people worrying about business and software patents? Patents are weird things in that you can be in violation of the law without knowing it. A patent is a legal instrument given to someone to create a monopoly on exploitation of an invention. The patent holder can licence others to manufacture and sell products based on the invention. Anyone found to be manufacturing a product that uses the same information as in the patent documentation is in violation of the patent and can be sued in the courts. Not only does this lead to (potentially huge) damages, but there are the legal costs as well. The really important issue here is that ignorance is not a defence: you can be guilty of violating a patent even if you didn't know the patent existed, indeed even if you knew nothing at all about the invention that was patented.

Some people argue that patents are tools that allow small players to benefit from inventions. Someone can create an invention but not have the resources to exploit it. By taking out a patent they can benefit by getting a big player, one who does have the resources, to manufacture and sell products based on the invention and pay royalties to the inventor. Whilst this scenario is possible, and indeed does happen, it is no longer the real purpose of the patent system.

Patents are now tools for businesses to do business with each other, and to ensure their ability to stop new players from entering into a market, i.e. to preserve monopoly positions against all comers. Businesses build up patent portfolios as an offensive or defensive measure.

  • As offence you create (or more usually buy) patents and then search through all the products on sale looking for ones that violate (or appear to violate) your patent. You then threaten to sue the manufacturers of violating (or apparently violating) products unless they pay royalties. If they don't pay up you actually do sue them, if you think you can win - notice that the patent holder may not actually believe they can win, they may just be trying it on. Without the patent involved this sort of behaviour would be labelled extortion, or demanding money with menaces, and be a criminal offence; with the patent it is a business process.
  • As defence you stockpile patents that others might find useful, so that when an organization approaches you wanting to talk about some product or other, you have bargaining chips: you can cross-licence the patents you are interested in at zero cost, enabling you to make products that would otherwise be blocked by the need to make royalty payments.

The only defence for a small organization against a patent is "prior art" or proof that the invention is obvious - large organizations generally have patent portfolios and can find a patent the other party is interested in, and so can do a deal. Any legal action is hugely costly, and this is the main barrier: unless you have deep pockets you cannot afford to enter into litigation at all. Patents are tools for big organizations.

OK, so that is the backdrop: why are software developers worried?

Let's use an extreme example. Say I took out a patent on using a for loop to create the sum of the values in a sequence:

var sum = 0.0
for ( item in sequence ) { sum += item }

As soon as I have my patent in a given jurisdiction, everyone who uses code such as this in that jurisdiction owes me royalties for every use - and remember, this is true even if they didn't know that a patent had been granted on it.
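To make the point concrete, here is the same construct sketched in Go, the language discussed further down this page (the names and values are purely illustrative). The patent would read on the idea, not on any particular syntax, so this code would be just as much in violation:

    package main

    import "fmt"

    func main() {
        sequence := []float64{1.0, 2.0, 3.0}
        sum := 0.0
        for _, item := range sequence {
            sum += item // the "patented" accumulation step
        }
        fmt.Println(sum) // prints 6
    }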

So in a world with software patents what role is there for the independent developer of software? There isn't one. Large corporates would be the only people capable of developing any software, because only large corporates have the resources to take a playable position in the patent wars. It's no wonder that the large corporates want software patents: they want to kill off free and open source software. The large corporates want a world in which they control all software.

Unless you are one of the monopolistic corporates desperate to remove all potential opposition, you should be against software patents.

Reports to date seem to indicate that "In Re Bilski" finds that the Bilski process patent is not allowed, but the court has not offered an opinion on business processes in general and definitely not on software patents. So this is not the end of the matter, just the beginning of the next phase of activity.

_ I am not a lawyer; the above is not a legal opinion, just my understanding as a person involved in software development. _

Groklaw is also a good place to follow this sort of thing.

Go Tutorial Session

Yesterday (yes, Saturday!) the London Google Technology User Group put on an all-day session about the Go programming language. Initially this was going to be a 90-minute talk followed by a "hack" session, but at the last minute it got turned into an exercise-led tutorial. Andrew Gerrand (a member of the Go development team within Google) led the session, which had been organized by Anup Jadhav. In effect this was a re-run of the session Russ Cox did at USENIX 2010.

I think I would have started the session in a very different way to the way Andrew did, but once things got going, it was a fun day. Certainly the guys I ended up in the pub with were great. Bizarrely, the pub we had originally been going to end up in was closed and shuttered. On a Saturday night, in a fairly residential area. Anyway, the place we ended up in was OK, roomy enough, and the London Pride wasn't too dreadful.

I have been looking at Go on and off for a few months now - I didn't look at it immediately it was announced because of all the fanfare and hype, which was reminiscent of the fanboi-ism rampant in the Apple product arena and now spilling over into the Google product arena. However, once all the initial fuss had died down, I started taking a look at it.

Go is marketed as a general purpose, statically typed programming language, compiled to native code and suitable for systems programming. It is aimed at the place where C and C++ currently hold sway, whilst bringing to that area some of the "speedy development" aspects of Python, Java, etc.

I have to admit, as soon as I started using the language I liked it. I am not keen on the massive fascism regarding code layout and the use of Make, but the language is appealing enough functionally that I can get over this. I am not a fan of the logo, mascot or even the language name, but can live with these - mostly; trying to do Web searches for things associated with Go really is hard. (Experience from Groovy indicates that programming languages with twee names suffer a take-up barrier, especially with the suits running most companies.)

The most obvious thing when you first see any Go code is that the types are on the right of the names, Pascal style. Actually there is a really interesting backdrop to token ordering in Go: Go can be parsed without performing symbol table lookups. This enforces a left-to-right consistency of all tokens, making complex type specifications readable. Contrast this with C and C++, where some type specifications are literally incomprehensible unless you are an LL(k) parser.
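As a sketch of what that left-to-right readability buys you (the names here are mine, purely for illustration):

    package main

    import "fmt"

    // Declarations read left to right: the name first, then the type.
    var count int
    var scores []float64
    var lookup map[string][]int

    // A function taking a slice of int channels and returning a predicate.
    // The equivalent C declaration would need a typedef to stay readable.
    func dispatch(inputs []chan int) func(int) bool {
        return func(i int) bool { return i < len(inputs) }
    }

    func main() {
        fmt.Println(dispatch(make([]chan int, 3))(2)) // prints true
    }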

What I like most about Go is the goroutines - though again I don't think the name is anything other than a twee joke. Go recognizes that the Multicore Revolution is in full swing (unlike C++, which is only just getting to the "threads are part of the language" stage of development): Go has a high-level, very usable model of concurrency based on processes and channels - threads are infrastructure the programmer should never have to work with.
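A minimal sketch of that model, assuming nothing beyond the standard library: one goroutine generates values and sends them down a channel, and the consuming code simply ranges over the channel, with no explicit thread or lock anywhere:

    package main

    import "fmt"

    // generate sends the integers 0..n-1 down the channel, then closes it.
    func generate(n int, out chan int) {
        for i := 0; i < n; i++ {
            out <- i
        }
        close(out)
    }

    func main() {
        results := make(chan int)
        go generate(5, results) // runs concurrently with main
        for value := range results {
            fmt.Println(value) // prints 0 to 4
        }
    }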

Go's antecedents are the AT&T series of languages associated with the Plan 9 operating system. Process-and-channel-based thinking has been an integral part of that development from the outset: Newsqueak led to Alef, which led to Limbo, which led to Go. This lineage has been positively confirmed by Rob Pike on the Go users mailing list:

_ Russ's [Russ Cox] historical analysis is right, except that there were two CSPs. The original 1978 paper did not have channels; the later book did. Newsqueak was the first programming language I ever used that had channels, but that is not a claim of invention. I honestly have no memory of where idea for channels in Newsqueak came from, but I'm pretty sure I didn't see the CSP book until afterwards. I do have a vivid memory of getting the idea for the implementation of hyperthreading in Newsqueak by generalizing something I read in the HUB paper by O'Dell, which was based on much earlier work by Masamoto; that was the trigger to write the language. Winterbottom grafted Newsqueak's communication ideas onto a C-like systems language to create Alef. Limbo arrived when it was time to build a virtual machine for embedded processing. _

_ Neither Occam nor Erlang influenced those languages. I didn't learn much about Occam until after Newqueak, and about Erlang until after Limbo. It was a long time ago but I believe I ignored Occam primarily because of its lack of first-class channels; its roots in hardware did not enable that generality. _

So the process and channel approach of Go is a different thread of development compared to Erlang, Scala, Python-CSP, GroovyCSP, etc., yet it leads to exactly the same paradigm, a paradigm yet to have impact in the C, C++, Fortran and Java communities. This software architecture paradigm fits superbly with the massively multicore processors predicted for the medium and long term. Contrast this with the shared memory multi-threading paradigm, which is increasingly causing problems for C, C++, Fortran and Java code.

Most of yesterday's session was about Web applications - not really surprising given that the session was led by Google - and didn't really cover the package structuring, code testing and parallelism aspects of things, which for me are the unique selling points (USPs) of Go. I think Go will rapidly replace C for all software development that uses C simply because C is the default language of development. I suspect C++ will lose some ground to Go, but C++ has capabilities, and a fanatically loyal user base, that more or less guarantee it will still be used for many years to come. Also some applications currently using Python (and Ruby, Perl, etc.) will move to Go because it has many of the ease-of-development aspects of those languages but is compiled to native code and so very much faster in execution. Python (and Ruby, Perl, etc.) are of course dynamic languages and so have meta-object protocol capabilities that are impossible with compiled languages such as Go, C, C++ and Fortran. Though, like Java, Go has a reflection system.
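As a small illustration of that reflection support - a sketch only, using the reflect package to inspect a value's type and fields at run time (the struct here is made up for the example):

    package main

    import (
        "fmt"
        "reflect"
    )

    type point struct {
        X, Y float64
    }

    func main() {
        p := point{1.5, 2.5}
        t := reflect.TypeOf(p)
        v := reflect.ValueOf(p)
        fmt.Println("type:", t.Name()) // prints "type: point"
        for i := 0; i < t.NumField(); i++ {
            fmt.Println(t.Field(i).Name, "=", v.Field(i).Interface())
        }
    }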

"But Go is just a Google language whose purpose is to promote Google's monopoly over everything in everyone's lives", I hear you say. Well, it is true that Go is currently being developed by Google staff with an eye to Google's use, but it is an open source project in the full FSF sense of the term.

Go is still a very young language, but the libraries are being built at a fairly astonishing rate. It will very soon be ready for production code usage by all and sundry.

Malice vs. Rights

It seems that Google can remove applications from your Android phone without first asking your permission, though they do at least do you the courtesy of telling you after the deed has been done.

This facility that Google has is being marketed as something to benefit users and protect them against malicious software, and in some sense this is true. However, there is nothing to stop Google from deleting whatever application they choose, whenever they want. Are the applications supplied via the Android Market different to those supplied with the device on initial purchase? I have no doubt that anyone downloading an application from the Android Market has been forced to agree that Google has the right to delete the application whenever it wants. Perhaps more interesting is whether they have the same rights over applications supplied at the point of purchase: are the user's rights under the Sale of Goods Act abrogated by the Android use licence? I am not a lawyer, so cannot give a view. It will be interesting if the case ever arises!

The obvious question is: can Google be trusted?

It is well known that Apple has the same capability for their products, and Amazon for theirs, and indeed both facilities have been used. Can they be trusted?

There seems to be a bit of a trend here: manufacturers maintain control over the products you purchase. Are Google, Apple and Amazon just this century's incarnation of "Big Brother" (from George Orwell's book 1984)?

Where are the user's rights in all this? Nowhere. "Big Brother's" protection of you against malicious software clearly trumps any rights you thought you might have had over the products you thought you owned by paying money for them. It seems the very nature of sale and product is changing. But then I am not a lawyer, so cannot say. Interesting issues though.

The Debian Squeeze vs. Ubuntu Lucid Question

As I had a spare partition on my laptop, I loaded Debian Squeeze onto it (so the laptop can dual boot Debian Squeeze or Ubuntu Lucid) and have been using it. Having spent a while getting the bulk of the things I use loaded - only a little bit longer than with Ubuntu, since my set-up doesn't look very much like the default configuration of either - my Squeeze desktop looks identical to my Lucid one. Well, the obvious differences are the Debian rather than Ubuntu logo on the main menu entry and the different waste bin (aka trash can) icon on the panel. Actually this shouldn't really be much of a surprise to anyone: Ubuntu is built on Debian, so everything is basically the same. Also, of course, all my filestore is a mounted partition and so is the same whichever operating system I boot. So what are the real differences?

Perhaps the single biggest issue is that with Ubuntu the Flash plugin works on Galeon, Epiphany, Firefox and Chromium, whereas with Debian the Flash plugin isn't available and won't load. Why is this an issue? The free replacement for Flash shipped by Debian does not fully emulate Flash, and the BBC website explicitly checks the version of Flash being used before it allows you to do anything with any of its dynamic content -- such as cricket scoreboards. Cricket fans everywhere will appreciate that there is nothing more important in the universe than having the continuously updated scoreboard on your desktop and match commentary coming out of the speakers. OK, so the core of the problem is that the BBC is mandating Flash v10 or higher, but this really does mean that Ubuntu got it right and Debian got it wrong :-(

Of course, on the downside for Ubuntu, they have not only removed Galeon from main, they have ejected it from universe and even multiverse. Aggressive or what? I know Galeon is ancient and totally untrendy in these days of more and more flaming vulpes and shinier and shinier reflective metal stuff, but I like it. Actually what I really like is having four different browsers so that I can use different ones for different categories of browsing. Oh well, down to three now. Of course Squeeze has an analogous problem: Chromium hasn't been allowed into Squeeze from Sid yet, so isn't available. So 3-3 on that score.

A huge irritant with Squeeze is that Python is still 2.5 by default. This really, really sucks. Fortunately 2.6 is available, but you have to manually hack the symbolic links because Python is not part of the alternatives system. Hopefully 2.6 will become the default in Squeeze soon.

Apart from Python, most of the versions of things in Squeeze are a bit more up-to-date than in Lucid, and haven't been tinkered with to bias them towards the default Ubuntu look and feel. An example is the on-screen display, which in Squeeze obeys the GTK theme and is graphically rooted to the icon of the generating item, whereas on Lucid it is always black and always somewhere you don't expect it. Stupidly, it is trivia like this that ends up leading to decisions. For example:

  • pdfjam is v2 in Squeeze and v1 in Lucid. This causes me real pain: the scripts got changed such that all the options to the commands are different, so all my scripts now have to know whether they are running on Lucid or Squeeze :-(
  • The power applet in the notification area tells me what percentage full my battery is on Squeeze but not on Lucid. Of course Ubuntu is abandoning the notification area in favour of indicators. Lucid is a transition release for this, but when I tried using the indicator area rather than the notification area for the applications I use, the spacing of the icons was ridiculously large and unchangeable. The Ubuntu team have a strong model of the applications you will use and how you will use them, and are not entirely keen on worrying about the applications that some of us really use. Hopefully they don't get too Apple-like in their user interface fascism.

On the upside, setting up Squeeze with my home directory on a filestore shared with Lucid has fixed most of the Compiz problems I was having with Lucid! There are lots of bug reports on Launchpad about problems with configuration of Compiz and lack of storing of options from session to session. Well the solution is: load Squeeze.

In other news: Network Manager works much better on Squeeze than on Lucid, at least for me. When I plug in or unplug the Ethernet wire, the routing tables are properly updated: Ubuntu seems to assume you will either use wired or wireless Internet but not mix both in a single session.

So, apart from the Flash problem, Squeeze is the winner of the competition for which distribution I am going to use for the foreseeable future. But note that it is basically the trivial things that are the decision-making points. Also I might change my mind -- flexibility over dogma, I think.

Go back to using Debian or stay with Ubuntu?

Along with many others, I have been increasingly irritated by certain aspects of the way Ubuntu has evolved recently. There is increasing obsession, and indeed hype, around the look and feel that is presented by default. This is not in itself a problem, at least not for me: having an interface that looks appealing and is usable is very definitely a good thing. At the surface of my problem is the Apple-like assumption that there should be a single, essentially unchangeable look and feel for all instances of Ubuntu. It is not the attempt to create a default, harmonious look and feel that is the problem for me, it is the fact that Ubuntu users are basically being blocked from anything different from that which is being handed down by the Ubuntu designer elite. In pushing their vision for Ubuntu, Ubuntu is losing the ability to work reasonably with anything other than the officially sanctioned default look and feel.

I have my own (highly idiosyncratic) user interface that works for me. It is minimalistic, yet Gnomic. It is just not post-Jaunty Ubuntuish. I want to keep my look and feel rather than fall in with the emerging Ubuntu standard. More importantly though I want to use the applications I prefer with the functionality that fits what I want to do.

I moved from Debian to Ubuntu early in Ubuntu's life, 5.04 Hoary Hedgehog in fact. My rationale was three-fold: Ubuntu was based on Debian (to which I had moved from Red Hat 7.3 instead of going to Red Hat 8.0) and so was not that much of a change; having a corporation managing a Debian-based release on a rigid 6-monthly update cycle appealed; and the staff of the company I was running at the time told me to.

Until 8.10 Intrepid Ibex, I had been happy with my choice and was very much an Ubuntu advocate. There had been some problems with sound in Hardy and Intrepid which hit me but which I sorted, and which I assumed were a one-off. However, it turns out that this was the beginning of my disaffection with the way Ubuntu was developing. The upgrade to Jaunty Jackalope had various glitches associated with applications I think of as core and which were in the main repository (not in the universe or multiverse repositories). The response to the bugs I reported was null: the bugs were ignored and the packages were subsequently dropped from main in favour of other applications with less usability but more trendiness. It seems there was a turning point sometime around late 2008 after which the look and feel of Ubuntu became more important than the functionality. Much of the default application set dumbs down the functionality beyond my ability to live with it. The volume control is perhaps the most trivial and yet the most obvious instance of my irritations: the mixer is completely missing.

So I have a decision to make: do I retain allegiance to Ubuntu, or do I say "enough is enough" and, like many others have done, return to using Debian - probably Squeeze (aka Testing) but maybe Sid (aka Unstable)? I still like the Ubuntu ideal as of 2005-2009, so still feel an affinity to using Ubuntu, but the continuously updated pool of packages that Debian Testing and Unstable offer is increasingly appealing.

Given a decision of this enormity, I shall have to dither a while.

Developing Django Websites

I am just getting into constructing websites using Django. So I thought "buy some books, that will help". Well, the books I got, including the one by the Benevolent Dictators for Life of Django, tell you how Django works and how a final working site works, but they don't tell you how to construct a site, nor how to test a site - neither unit testing nor integration testing is covered. Contrast this with the Grails books, which heavily emphasize using test-driven development to create the sites, and have lots of material, and even index entries, about tests and testing. The Django books simply say "you should test, for information see the Django website".

So the inference seems to be that testing is central to the use of Grails whereas with Django it is something you are told you should do, but it isn't something that needs to be presented in the books. Definite case of mixed messages. Conclusion: Django book authors really need to get their priorities in order.

Amazon now owns all Social Networking Systems in the USA

The USPTO has just granted patent 7,739,139 to Amazon Technologies Inc for "Social Networking System" - grant date 2010-06-15, filing date 2008-05-27. This seems to imply that all social networking systems in the USA now have to start paying royalties to Amazon. Will Facebook, Linked-In, GitHub, Launchpad, etc., pay up or will someone fight this in the courts?

The obvious conspiracy theory here is that the lawyers in the USA are gaming the system so as to ensure maximal transfer of funds from clients to lawyers. Lawyers get paid for creating patents, whether or not they are sensible, and then get paid to battle out the subsequent litigation trying to decide if the patent is valid or even reasonable.

The other conspiracy theory is that USPTO is milking the system by allowing any and all applications to succeed and pocketing the money without doing any work, just waiting two years.

The only real conclusion here is that the USA patent system is broken. Let us hope that the UK and EU systems do not further descend into the same mess.

Dell joins the Evil Empires League, and . . .

Dell has entered the League of Evil Empires at Number 7. Dell claim to want to sell you a new computer where you choose whether to have Ubuntu or Windows installed as the operating system. However, they refuse to offer for sale any computers running Ubuntu; they all run Windows. This is Evil.

In the completely and totally unsurprising section, Apple retain their Number 1 position. They used to offer GNU Go to iPhone users in violation of the GPL, but instead of complying with the GPL, they removed the software. Apple only allow you to run software they approve of and for which you pay them; their licence is very restrictive and designed to create and maintain a monopoly over what you can do with equipment you supposedly own. This is Very Evil.

just::thread 1.4.0 is out

If you are a C++ programmer and you do anything with threads in C++ then you really need just::thread; it is a good implementation of the C++0x thread library. Version 1.4.0 just came out, as noted at

Shared memory multi-threading is anathema, futures are the future.

Copyright © 2017 Russel Winder -