Russel Winder's Website

Litigation Frenzy in the Mobile Arena

Unless you have been living on the other side of the universe for the last few months, you will be aware that Oracle America Inc (which was Sun Microsystems Inc, and now isn't) is suing Google Inc over software patents associated with the JVM that they claim are being used in Google's "clean room" implementation, Dalvik. It is not totally clear what the overall goal of the litigation is, except that Oracle intend to make a profit by it - even though they are annoying essentially all of the Android and Java community in the process. It is probable that they want a better future for Java ME in consumer mobile technology than is currently likely. Given that the competition is Windows Mobile, iOS, and Android, attacking Android seems like the easiest path at the moment.

In any event Google has made a response to the initial complaint (I am not a lawyer so cannot comment on how good or bad the response is in law, but it seems sort of OK(ish) from a technology perspective). Oracle has now (via the same lawyers who managed the SCO vs. IBM and SCO vs. Novell cases for SCO - hummm..., indeed) taken the "interesting" route of asking for significant chunks of Google's response to be ruled inadmissible and struck from the record. Most pundits seem to indicate that this might be a ruse to separate Google from the support of the massed ranks of the FOSS community. If anything though, it is likely to increase spending on Android phones and increase the overall Java doubt that is washing over the community just now. As Shakespeare so nicely put it: "O time, thou must untangle this, not I".

Of course Android is also getting it in the "neck" from Microsoft (vendor of the very, very weak in the market Windows Mobile). Microsoft are trying to use software patent related techniques to extort (*) money from makers of Android phones based in the Far East. One assumes this is not simply a mechanism for supporting US manufacturers against cheaper competition; one assumes this is a last-ditch attempt to save Windows Mobile from the dustbin of history. The strategy here is fairly obvious: either get an income stream from people selling Android phones, or force them to switch from Android to Windows Mobile and thereby get an income stream from straight licencing - as opposed to patent troll extortion. Sadly HTC has already caved in to the Microsoft "demanding money with menaces" (**) approach. This leaves the other manufacturers rather exposed. At least though HTC continues to make Android phones rather than switching completely to Windows Mobile. So there is some hope.

Defenders of the American Way (of Business) will of course say this is just capitalism working as it should. Sadly, in a sense, they would be more or less right. Capitalism is, after all, about exploiting whatever tools are to hand. Software patents are such a tool, and so we end up with the litigation frenzy we are seeing. Of course the only real winners are the lawyers. Consumers, and their needs and wants, generally come very low down on the list of concerns for all these corporate types. The danger of software patents though is that they can be more than just a tool for big corporates to slug it out. Software patents can be used to curtail any form of competition from the minnows, and indeed curtail any form of development by people and organizations not big enough to compete in the litigation slug-fest. Getting rid of software patents would stop this problem and allow small players to innovate. The big players would then find other tools with which to continue their slug-fest approaches.

An interesting side question is: how are Intel and Nokia viewing all this? The follow-up is: when is MeeGo going to be available on phones so we can see whether it is real competition to iOS and Android, or whether it is destined for the dustbin of history (as Windows Mobile seems to be) before it has even come to market? Another one for time to sort out, I guess.

(*) _ It seems almost ironic that states make extortion by individuals or organizations illegal, except in the case where the state gives a licence to an individual or organization to undertake extortion by issuing patents. _

(**) _ Demanding money with menaces is also illegal in most states, except where the state licences the act by issuing patents. _

Oracle America vs. Google - Software Patents on Trial?

In what seems like a normal day at the trough for large companies and their patent lawyers, Oracle America (nee Sun Microsystems, now a subsidiary of Oracle) has issued a lawsuit against Google for violation of seven patents and some copyrights (PDFs of the complaint document are available in many places on the Web). At its core, the fight is over Google's Dalvik virtual machine used in Android.

Whilst Dalvik implements the Java Virtual Machine (JVM), it is not a licenced product; it is a "clean room" implementation. Of course, clean room is not a defence against patents: if you use something on which there is a patent, you are liable to pay royalties to the patent holder, even if you didn't know about the patent. Oracle, via its purchase of Sun, now owns patents it thinks apply, and it wants to collect.

Somewhat predictably, the initial knee-jerk reaction of large swathes of the JVM-using community on the various mailing lists is one of being up in arms against Oracle, complaining that Oracle are attacking Java and that this is the beginning of the end for Java. Later on in the various threads, the voices of reason begin to appear. But this article is not about whether Oracle is trying to cause the demise of Java and the Java community; that is really rather unlikely given the importance of Java middleware to Oracle's core income stream. This article is about the instruments being used in this case.

Oracle has purchased a collection of patents, many of them software patents, as part of its purchase of Sun. Many of these patents relate to techniques used in the Sun implementation of the JVM: pure software patents. Patent documents, at least in the USA, often start "A method and apparatus to . . .". The interesting (!) thing about all software patents is that there is no apparatus, there is just a method realized as an algorithm encoded in source code. It is a pity that patents do not get rejected for lying when using the standard language.

I have skimmed over the PDFs of six of the seven patents that are the focus of this case (6,125,447; 6,192,476; 5,966,702; 7,426,720; RE38,104; 6,910,205; and 6,061,520), and it seems clear that these are all pure software patents revolving around various techniques used in the JVM, but which are actually so broad that they are techniques used in many other varieties of virtual machine. So this case could be seen as the beginning of a programme to extract royalties from any and all purveyors of virtual machines.

In the end, Oracle have purchased a company with assets and they are trying to create a return on investment from those assets. Natural business activity. The problem is the nature of some of those assets, in particular the software patents. Moreover it isn't just the seven patents listed in this case. Each of those seven makes reference to many other patents of similar type, all part of the portfolio. There is therefore a whole pool of patents in play here, and the "long play" may well be to validate all of these patents in court so that they can then be used as mechanisms for extracting more royalty revenues from more companies, and in the end create an effective monopoly on the whole concept of a virtual machine.

Thus one could imagine that Microsoft with its CLR, IBM with its implementation of the JVM, possibly VMWare and Parallels, indeed all hypervisors and virtual machines, are being lined up. Could they perhaps even go after the Python, Ruby and Perl virtual machines as well? Here lie the seeds of paranoia, but it indicates how dangerous a legally validated software patent might be, at least in the jurisdiction of that patent. Here we see another indication of why the UK and EU must not allow these sorts of patent. They destroy competition and innovation.

From a strategy perspective, it is interesting that Oracle have targeted a big player first, rather than starting with a small player, as is usual in these quasi-extortion rackets. Little players often have to fold for lack of resource, so there is a build-up of apparent validity to the patent caused by winning cases, albeit out of court. The problem though is that no actual case law has been created, so there is still the risk that when you finally start on the big players, you have to go the full distance anyway, having already paid out a lot to deal with the small fry. Perhaps then the strategy here is to have the big game first, go the whole distance to a judgement, and by doing so immediately create case law. All other players, big or little, then have to fall in line; it is then a simple sweeping up operation, money for old rope.

Over the next few weeks there will be a lot of mud thrown at Oracle, much of it unreasonable, some of it reasonable. Google will try and appear like the FOSS (free and open source software) world's white knight, when in fact they are just as grey a player as any other big company (after all, they tried to leverage the JVM without paying their dues). The real culprit here is the USA patent system and its penchant for issuing software patents willy-nilly, and indeed at all. Patents are tools for big players to stifle little players and, as noted above, to stifle competition and innovation. Any pretence that patents are tools for the little guy to get remuneration from their ideas has surely been seen through long ago.

There is though another count in the lawsuit which is potentially far more important in many ways than the software patent ones, and that is the breach of copyright claim (Count VIII). Currently it is not totally clear from the documents available what the claim really is. There are issues of timing and exact claim that introduce uncertainty. One interpretation is that Google simply reused in Dalvik Sun JVM code from a time prior to that code being released as open source under the GPL licence, even though they were claiming Dalvik to be a clean-room implementation. If so this would seem to be a straightforward violation of Sun's copyrights. Another interpretation is that Google have relicenced GPL licenced code under the ASL licence without permission - the ASL is a more permissive open source licence than the GPL, and in order to relicence GPL code under the ASL you have to have the permission of the copyright holder. Under this interpretation Oracle is the white knight and Google the enemy of FOSS. This is so against all (prejudiced?) expectation that the immediate reaction is to doubt it, yet it might be the case. If it turns out that this is, in fact, the case, then everyone currently saying Oracle are no longer to be trusted as the owners of Java and the JVM may have to reassess their position. On the other hand this is all speculation until more details are available: Paragraphs 37 to 46 of the complaint document are not specific enough to do anything other than speculate on the actual claim.

This case is going to be one to watch. Hopefully Groklaw will take this one up. There is _The Oracle-Google Mess: A Question - Are Any of the Patents Tied to a Specific Machine?_, so possibly yes.

In breaking news: Oracle have jumped into the Evil Empires League straight in at number 3, forcing Dell, The EU Patent Office, The USA Patent System, Amazon, and even Google, down one place. We conclude that there may be a direct relationship between position in the table and the extent to which bully-boy tactics and the patent system are used to extort monies.

Python and Clojure: Parallelism in Play

_ In a comment to my posting "Python Adapts to the Multicore Era", Sam Aaron asked about the arguably contradictory position of mentioning Clojure when arguing for a processes and channels model for Python. He also raised the question of the future of Python's GIL in a world dominated by parallelism. I thought I would respond with a full posting rather than just a comment. _

Clojure emphasizes the use of software transactional memory (STM), threads and agents as tools for handling concurrency and parallelism, rather than processes and channels as used in Scala, Go, Python-CSP, PyCSP, and GPars, which variously use CSP (Communicating Sequential Processes) or the Actor Model. For me, STM is really a bit of "sticking plaster" to make shared-memory multi-threading more viable than it is with explicit locks, monitors and semaphores. However, others think STM has a promising future.

_ In surveying the field of STM you may come across implementations that talk of storing values in databases. These are not implementations of STM; they are implementations of persistent data storage, which is something very different. Databases generally have transactional state, and the abstract concept of transaction is the same as in STM, but the realization is something very different - or should be. _
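To make the contrast concrete, here is a minimal, entirely hypothetical Python sketch of the two styles: shared state guarded by an explicit lock (the pain that STM tries to ease) versus a single owner of the state receiving messages down a channel. It uses only the standard threading and queue modules, and because it uses threads within one PVM it illustrates the programming model only, not any parallel speed-up.

    import threading
    try:
        import queue               # Python 3
    except ImportError:
        import Queue as queue      # Python 2

    # Style 1: shared state plus an explicit lock - the pain STM tries to ease.
    counter = 0
    counter_lock = threading.Lock()

    def locked_increments(n):
        global counter
        for _ in range(n):
            with counter_lock:     # every writer must remember to take the lock
                counter += 1

    # Style 2: no shared state; one owner of the state receives messages down a channel.
    def counting_server(channel):
        total = 0
        for increment in iter(channel.get, None):   # None is the shutdown sentinel
            total += increment
        print('channel total = %d' % total)

    def send_increments(channel, n):
        for _ in range(n):
            channel.put(1)

    if __name__ == '__main__':
        writers = [threading.Thread(target=locked_increments, args=(100000,)) for _ in range(4)]
        for w in writers:
            w.start()
        for w in writers:
            w.join()
        print('locked counter = %d' % counter)

        channel = queue.Queue()
        server = threading.Thread(target=counting_server, args=(channel,))
        server.start()
        senders = [threading.Thread(target=send_increments, args=(channel, 100000)) for _ in range(4)]
        for s in senders:
            s.start()
        for s in senders:
            s.join()
        channel.put(None)          # tell the server there is nothing more to come
        server.join()

Both halves arrive at the same total; the difference is in who has to think about the synchronization.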

The real questions that drive thinking about STM and threads vs. processes and channels are:

  1. Which computational model best allows programmers to express the parallelism in their application?
  2. Which computational model provides the smaller translation distance between the expression of the application in code and the execution model of the machine executing the application?

Of course, if the application is inherently and fundamentally sequential then none of this really matters, but then such applications will not get any faster in execution until there is a return to increasing the speed of individual processors. The point is that we should not try and parallelize fundamentally sequential applications just to try and make them faster.

So which is best: STM and threads, or processes and channels?

As stated so baldly, the question is probably answered by "choose whichever suits you"; there really isn't any other answer to such a general question with no context. We need to restrict the context so as to be comparing things a little better.

Clojure operates on the JVM, which promotes threads and a single global virtual machine viewpoint. The best comparison then is with Scala (and Akka) and GPars - which supports Groovy-based actors and CSP. Also there are STM implementations for Java and Scala which would help the comparison. To be honest though, no amount of philosophizing is going to result in any truly useful indicators. Data is what is needed. So there need to be experiments implementing a number of different problems using this set of languages and libraries, and then the following questions need addressing: which programs are the easiest to write; which programs are the easiest to comprehend, both for the author and for people other than the author; and which programs are the most efficient and speedy in execution. I have not yet done such experiments, and am unlikely to as I do not have the resources just now.

Of course there is also the question of whether Clojure is the version of Lisp that will finally allow Lisp to really make the big time, or whether Clojure will slide into relative obscurity as all other Lisps have. Lisp has the property of being one of the most distinctive approaches to programming whilst at the same time never really catching on. This should not, of course, affect the core "which of STM and threads vs. processes and channels is better for parallelism" debate.

What about native code? Well there is Go, which supports processes and channels, animated with its goroutines; there is C++0x with threads, futures and asynchronous function calls; there is Haskell, which supports STM. The problem is, of course, that Haskell has a totally different computational model to Go and C++, so would it be a valid comparison? In a sense, yes, since Haskell is claiming to compete against C, C++, etc. So probably worth doing. Of course there are STM implementations for C++ and even C - Intel have a C++ STM system, but it remains an experimental, not production, feature of the Intel compiler - so that should be added to the mix. Of course the same argument about data and experimentation applies here: no experimentation, no data, no conclusions, just unsubstantiated opinions.

What about Python? Well here there is the GIL (global interpreter lock), at least for the standard CPython implementation. This means that a single PVM (Python Virtual Machine) can be executing only a single thread of Python code at any one time. Thus, no immediate potential for parallelism. There are two solutions to this: write things as extensions in C++ or C so that the GIL can be released by a thread, or use multiple PVMs. Using C++ and C extensions is not a generally viable approach to parallelism in Python. It has its rightful place and is incredibly useful in that place, but it is not where most Python code is. So Python effectively mandates a process and channel approach to parallelism. Hence the multiprocessing package in the standard Python distribution (2.6 and later, with backports available for earlier versions), or Parallel Python. This means that Python naturally gravitates towards CSP and the Actor Model for concurrency, to the effective exclusion of STM. So if the GIL is to remain in CPython, multiple PVMs, processes, message passing, etc. are the way of structuring Python applications. This means CSP and actors will be core to the future of Python in the increasingly multicore, and hence parallel, world.
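By way of illustration, here is a minimal sketch of that process-and-channel structure using nothing but the standard multiprocessing package. The prime-counting work function is invented for the example; the point is that each worker runs in its own PVM with its own GIL, so the work really does proceed in parallel.

    from multiprocessing import Process, Queue

    def count_primes(limit):
        # Deliberately naive CPU-bound work: threads in one PVM could not run
        # this in parallel because of the GIL, but separate PVMs can.
        count = 0
        for n in range(2, limit):
            if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                count += 1
        return count

    def worker(tasks, results):
        # Each worker is a separate PVM; the queues are its channels.
        for limit in iter(tasks.get, None):      # None is the shutdown sentinel
            results.put((limit, count_primes(limit)))

    if __name__ == '__main__':
        tasks, results = Queue(), Queue()
        workers = [Process(target=worker, args=(tasks, results)) for _ in range(4)]
        for p in workers:
            p.start()
        jobs = [20000, 30000, 40000, 50000]
        for limit in jobs:
            tasks.put(limit)
        for _ in workers:
            tasks.put(None)                      # one sentinel per worker
        for _ in jobs:
            print(results.get())
        for p in workers:
            p.join()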

The real upshot of this posting is that there needs to be some experimentation organized, to move things away from pure argumentation and creating results by arguing loudest and longest. The STM and threads vs. processes and channels debate needs some work done, not just arguments made. Except with Python where, whilst the GIL is present, STM doesn't really have much of a place.

Go, D, the future of programming languages, and the future of operating systems

This posting is an evolution of an email I posted in reply to a posting by Andrei Alexandrescu on the golang-nuts mailing list.

I have liked the D programming language for a long while. It seems like a really nice evolution of many of the ideas in C++, but without as much baggage. However, until there is a stable 64-bit v2.0, with AMD64 installers, alongside the 386 v2.0, it is not really a viable language in the modern age of a mixed 64-bit and 32-bit economy.

I really like the Go programming language because, via its goroutines and channels, it harnesses the process/channel model of computation, which I think is the future of programming in the modern, multicore era. I have been a fan of CSP (Communicating Sequential Processes), and of the Actor Model, for over 25 years because they bring structure to programming concurrent systems. Go also brings some reflection capabilities to a statically compiled, native target language.

The Actor Model is getting a lot of promotion these days via Erlang and Scala (and the add-on library Akka). Software transactional memory (STM) is getting a lot of promotion via Haskell and Clojure. occam lives on as occam-pi in KRoC and JCSP (on which we have constructed GroovyCSP -- part of GPars, a Groovy library for handling concurrency and parallelism), and CSP is even getting airtime in Python (via Python-CSP and PyCSP). The overall goal here, which to a great extent strikes me as the goal of Go's goroutines and channels, is to commoditize the processor and turn it into a resource that is managed by the run-time system just as memory is. Applications should not have to worry about multicore directly, just about the abstract, algorithmic expression of parallelism in their application - though they do have to worry about communications distance between processors so as to avoid inappropriate assumptions about communications time and safety. This leads to having to manage locality and communications distance. As yet none of the CSP and Actor Model systems handle this properly, though there is good work happening in local and remote actors.

The problem for me with Go and D is that both languages give all the appearance of being backward looking - though this may just be conditioned by worrying about POSIX compliance. The lowest-level libraries have all the naming and parameter feel of 1980s C.

For me there are two questions that should be driving programming languages:

  1. What is the language for writing the next big operating system?
  2. Do PGAS languages have the edge for writing applications in the future?

Linux and Mach, like Windows, are now really in "maintenance mode": their architectures and fundamental capabilities are fixed and unchangeable. Future hardware architectures show all the signs of heading directly towards multiple, heterogeneous, multicore, NUMA (non-uniform memory architecture) systems with bus-level clustering, local clustering and wide-area clustering (if not more levels of communications), and operating systems and programming languages are not really ready to handle this. Languages like Chapel, X10, even Fortress are doing lots of interesting research in PGAS (partitioned global address space), but because they market themselves in the HPC (high performance computing) arena, they don't get taken as seriously as they should by a wider audience of programmers. Certainly though they are neither ready, nor possibly ever appropriate, for the leap to being languages with which to write operating systems.

So the question really is whether D and Go are just interesting sidelines in the interregnum between the era of network-connected uniprocessors and that of massively parallel, multi-level architecture systems. Or can one of them be the systems programming language of the next era of computing?

Go and its goroutines handle bus-level multicore processors quite nicely, but the next level up is the network, and there is no concept of layered clustering. C++0x gives us futures and asynchronous function calls to provide similar, albeit different, functionality - and restrictions. D currently has the problem that it only has a 32-bit realization; the 64-bit realization is not yet released (though it is due soon). Till then it is not possible to experiment with D to see if its Threads library can support the sort of programming abstractions that are required for highly parallel systems and applications: shared-memory multi-threading is not an appropriate tool for application programming, which may turn out to be the downfall of Chapel and X10 and the whole PGAS approach as it is now.
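For comparison only (and not suggesting this is what Go or D should look like), Python's multiprocessing.Pool already gives a rough analogue of the future and asynchronous-function-call idea: apply_async returns a handle whose get blocks until the result is ready. The slow_square_sum function below is invented for the example.

    from multiprocessing import Pool

    def slow_square_sum(x):
        # Stand-in for some CPU-bound work.
        total = 0
        for i in range(x):
            total += i * i
        return total

    if __name__ == '__main__':
        pool = Pool(processes=4)
        # Launch the calls asynchronously; each returns a future-like AsyncResult.
        futures = [pool.apply_async(slow_square_sum, (n,)) for n in (100000, 200000, 300000)]
        # ... do other work here, then rendezvous with each result ...
        for future in futures:
            print(future.get())
        pool.close()
        pool.join()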

Go is one of the few statically compiled, native target languages with a built-in garbage collection system (D is another). Such a thing is possible with C++, but at least until recently the idea of using garbage collection with C++ was frowned upon. I suspect though there will be a lot of prejudice against using a language with garbage collection for writing a new operating system, which casts doubt on whether Go will get used for that - despite all the splendid Plan 9 related work. So can D really step up and be a candidate? Or will people just descend to the argument "C is the only language to use because it is the only one with a low enough viewpoint to write an operating system"?

Then there is the question of what the architecture of the next operating system should be. Will it be an exokernel, a microkernel, a nanokernel or something radically new? Will people choose a modern language to implement them or will they just assume C? Will Google's might push Go (and possibly D) into the limelight? Is the JVM the future virtual machine of choice? What role do hypervisors have? What will VMWare, Parallels, VirtualBox do to perturb things?

To summarize: We live in interesting times.

Python Adapts to the Multicore Era

Thanks to Sarah Mount for pointing out Guido van Rossum's summary of his immediate impressions of EuroPython 2010, written on getting home and getting rested. Paragraph 2 is clearly the one that I am most interested in as I get a "name check". :-)

Guido's paragraph acts as an excellent executive summary of the message that I was trying to get across. Applications programming in the world of clusters of multicore machines should not be about shared-memory multithreading but should be about small, lightweight processes sending messages to each other. Dare I suggest that this is a return to the true spirit of object-orientation of the 1980s? cf. Smalltalk-80 and all the research languages that were under investigation in the late 1980s (*).

In a world where applications are developed as small interacting processes, either CSP-style or actor-style, or even dataflow-style (a model that didn't get much air time at EuroPython 2010 but is arguably equally important), CPython's GIL (global interpreter lock) is much less of an issue than ensuring that the multiprocessing package (and, perhaps bizarrely, the threading package) is as efficient as it is possible for man and machine to make it: multiprocessing (or something basically the same) is the foundation on which actor, dataflow and CSP based frameworks will be built, so it needs to be as fast as it is possible to make it. I have no doubt that the Python-CSP and PyCSP teams (or a merged team if that happens) will pick up on Guido's invitation to make proposals for evolving multiprocessing if it is seen to be needed.
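As an illustration of why that efficiency matters, here is a toy actor sketched directly on top of multiprocessing; every message an actor framework delivers ends up going through machinery like this. The Adder class is hypothetical and not taken from any existing framework.

    from multiprocessing import Process, Queue

    class Adder(Process):
        """A toy actor: private state, a mailbox, and a receive loop."""
        def __init__(self, replies):
            Process.__init__(self)
            self.mailbox = Queue()   # the actor's only point of contact
            self.replies = replies
            self.total = 0           # state private to this process, never shared

        def send(self, message):
            self.mailbox.put(message)

        def run(self):
            for message in iter(self.mailbox.get, 'stop'):
                self.total += message
            self.replies.put(self.total)

    if __name__ == '__main__':
        replies = Queue()
        adder = Adder(replies)
        adder.start()
        for i in range(10):
            adder.send(i)
        adder.send('stop')
        print(replies.get())         # 45
        adder.join()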

All of this raises the question of whether there are efforts to create Python packages presenting actor-based and dataflow-based models. I haven't seen anything relating to dataflow, and all the actor frameworks I have seen have been solely about handling concurrency in a single-PVM context; they have not been about parallelism in a multi-PVM context. So clearly there are two obvious projects here:

  1. Evolve one of the present actor model packages to foster parallelism as well as concurrency.
  2. Start a dataflow model package.

The only questions are who, where and how. Feel free to email me ideas. Hopefully out of this we can create dynamic and productive activities that will see Python programming evolve to meet head on the challenges that multicore computing brings.
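For the second of those projects there is, as far as I know, nothing to point at yet, so the following is a purely hypothetical sketch of what a dataflow node might look like when built on multiprocessing: a node fires only when every one of its input channels has delivered a value, then pushes its result downstream. All the names here (node, add, mul, the channels) are invented for the illustration.

    from multiprocessing import Process, Queue

    def node(function, inputs, output):
        # A dataflow node: block until every input channel has a value, fire, repeat.
        while True:
            arguments = [channel.get() for channel in inputs]
            if any(argument is None for argument in arguments):   # shutdown sentinel
                output.put(None)
                return
            output.put(function(*arguments))

    def add(x, y):
        return x + y

    def mul(x, y):
        return x * y

    if __name__ == '__main__':
        # Wire up  result = (a + b) * c  as two nodes connected by channel s.
        a, b, c, s, result = (Queue() for _ in range(5))
        nodes = [Process(target=node, args=(add, [a, b], s)),
                 Process(target=node, args=(mul, [s, c], result))]
        for n in nodes:
            n.start()
        a.put(3)
        b.put(4)
        c.put(10)
        print(result.get())          # 70
        for channel in (a, b, c):
            channel.put(None)        # propagate shutdown through the graph
        result.get()                 # the sentinel arrives at the end of the graph
        for n in nodes:
            n.join()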

Looking over at the JVM-based milieu, the GPars project (**) has brought actors, CSP and dataflow to the Groovy and Java world. Scala, Scalaz and Akka are bringing actors and data parallelism to the JVM. Clojure, with its STM and agents, has truly modernized Lisp. The energy and progress in these communities needs to be replicated in the Python community in order for Python to retain relevance in a world increasingly awash with processing cores. So Pythonistas, sign up, your language needs you.

_ (*) Including Solve, the language for which I led the design team. Solve was an active-object-based, parallel, object-oriented language with static strong typing and transactional state, which we developed as part of a large ESPRIT project (1986-1991). There is no point in reviving this work per se, but I must find and publish all the documents so they are available. _

_ (**) Yes, I am involved in GPars activity. _

EuroPython 2010 -- The Aftermath

EuroPython 2010 was great. It came in three parts:

  1. The tutorials: I ran a full-day tutorial on SCons. The group was small, but that made for an intimate atmosphere which I thought worked well. I thought the day went well, but you'll have to find members of the group and ask them for a less biased opinion. A couple of people said they would be switching from Autotools to SCons immediately on returning to work, as a consequence of the tutorial.
  2. The conference: I did the opening keynote: The Multicore Revolution: We've Only Just Begun. The original opening keynote person pulled out late in the conference planning and I was asked if I would step in. I had thought it was a 45min slot, but then realized it was 30min, only to have delays in starting leave me 14.5min. In the end I took 25min, but the conference organizers were fine with that. The talk was a (major) reworking of my keynote from ACCU 2010 to shorten it and aim it at a Python audience. The audience seemed to enjoy the presentation, as people kept coming up to me all the way through the conference saying how much they had enjoyed it. The slides are here but probably won't mean much unless you were there. There will be a video once Michael Sparks has encoded and uploaded all the material recorded by his video cameras. The single biggest outcome of the presentation was that Guido van Rossum collared me for an hour after the presentation to chat about parallelism and the CPython GIL. My take on this is that the fact that the CPython PVM (Python Virtual Machine) is basically a sequential interpreter really doesn't matter: the future is about lots of smallish processors with local (distributed) memory, so the natural structure is one PVM per processor with the PVMs sending messages to each other. This line of reasoning is obviously a segue into my second presentation.

  I also did a 45min presentation, Python Parallelism using CSP. CSP here is Communicating Sequential Processes, a model of concurrency and parallelism created by Tony Hoare in the late 1970s and early 1980s. The core idea is that having many sequential processes with no shared memory, but lots of channels down which messages can be passed, gives a way of constructing concurrent and parallel applications that means you don't get undebuggable deadlocks and livelocks (there is a minimal sketch of the channel idea after this list). The slides are here. There are two implementations of CSP for Python, PyCSP and Python-CSP. Currently there are some differences but many similarities between them; hopefully soon the two projects can merge to provide a single Pythonic way of managing parallelism using CSP. All the other sessions I went to were great. I was really pleased when Guido, in his keynote Q&A session, mentioned that he had been to all the CSP-related sessions and was going away with a lot to think about.

  3. The sprints: Sarah Mount organized a sprint to work on Python-CSP. Given that I am one of the team it was necessary to be there! We had a small but incredibly active group of people and achieved masses in the two days we had. It was great fun.
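Here is the minimal sketch of the channel idea promised above. It is emphatically not the PyCSP or Python-CSP API, just the shape of the thing using a raw multiprocessing Pipe as a stand-in for a channel: two sequential processes, no shared memory, all interaction down the channel. (A real CSP channel also synchronizes sender and receiver; a Pipe is buffered, so this is only an approximation.)

    from multiprocessing import Process, Pipe

    def producer(channel):
        for i in range(5):
            channel.send(i * i)      # write end of the channel
        channel.send(None)           # signal end of stream
        channel.close()

    def consumer(channel):
        for value in iter(channel.recv, None):   # read end of the channel
            print('received %d' % value)

    if __name__ == '__main__':
        read_end, write_end = Pipe(duplex=False)
        processes = [Process(target=producer, args=(write_end,)),
                     Process(target=consumer, args=(read_end,))]
        for p in processes:
            p.start()
        for p in processes:
            p.join()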

The only downside was that my server died during the conference and it has taken me a while to lash up something to cover the email and Web-based activities pending getting a new one.

Silicon Valley no longer the source of innovation?

People, including lawyers, have now had time to research and reflect on the US Supreme Court opinion "In Re Bilski". Having perused some of the analysis, it is interesting that there might be the beginnings of a fundamental shift in attitudes towards software patents -- well by the analysts and pundits, definitely not by the big companies and lawyers.

One really interesting piece is this one. It is good to see the argument that patents are tools for big, monied organizations being increasingly recognized. Also that startups are the point of innovation, not big organizations. Big organizations are where engineering happens: new ideas are harnessed into products, made smaller, faster, more efficient. Startups don't do this (at least not generally); they create the first implementations of new ideas. Some innovation is generated in big organizations, but nothing like the amount that happens via startups.

Ironically (perhaps), the big US software companies often imply that software patents are somehow the "American Way" and that they have the right to be able to destroy or buy out any potential competitor, because that is how capitalism works and capitalism is the "American Way". Of course another part of the "American Way" is opportunity for all: every startup should be allowed to try its hand without the fear of being hounded out of play - especially if that is by means of financial muscle. There seem to be two diametrically opposed "American Way"s here.

Big software organizations have, as reported in many places, been on a campaign to try and convince the EU to adopt the US patent approach. But if patents are a tool for big organizations to preserve their status, often near monopolies, how can the EU justify creating a patent system that stifles innovation, destroys competition, and promotes monopolies? Surely the EU should see through the gambit and cease all moves to allow any form of software patent. Of course it may be that "big money" is getting as powerful in the lobbies of the EU as it is in the lobbies in Washington. If the "big money" were European it would be more understandable, but it is all US.

Let us assume the US patent system stays in the US, and the rest of us have a more sane system. Then it is clear that software innovation cannot happen within US jurisdiction. Innovation is about having an idea and getting on with realizing it. Having to spend all your time trawling patent databases to see if a technique you have used violates a patent is a 100% killer of enthusiasm and thus of innovation. So I agree with Sawyer: it seems like the right move to have all software innovation happen outside the borders of the USA. This means the role of Silicon Valley is due for reassessment. Of course the USA has (arguably) the best infrastructure for supporting startups (and hence innovation) whilst at the same time having the biggest tool for supporting monopolies and the killing of innovation (its patent system).

I guess the interesting question is what happens about importing into the USA innovative software developed outside the USA which violates some US patent. Imports will be banned, and the USA will fall further and further behind in the technology race. China is already the centre of manufacturing; is it also destined to become the centre of all innovative software development? I suspect not, given the government's view on freedom of information, speech, etc. This is an ideal opportunity for Europe, indeed the UK, to become a serious power in software innovation. Perhaps the UK government should itself innovate and provide legal and financial infrastructures that support innovation and startups far, far better than the UK does at the moment.

Of course "Silicon Fen" and "Silicon Corridor" are the stomping grounds (literally in terms of stopping startups) of the big money organizations, so it is unlikely that they could be the replacements for "Silicon Valley". Where can be though, and what trendy name can we give it?

EuroPython 2010 approaches . . . rapidly

I have been asked to give the opening keynote! So be there . . . Adrian Boult Hall, Birmingham Conservatoire, 2010-07-19 09:15+01:00 - check the details here.

And of course I will be presenting a 1-day tutorial on SCons starting 2010-07-17 09:15+01:00 as part of the tutorials lead-up to the conference.

I also get to present my session "Pythonic Parallelism with CSP", 2010-07-20 17:45+01:00.

The only question left, apart from whether I can get all the material together in time, is whether to organize a sprint for after the conference. And if yes, on SCons or Python-CSP?

BCS - The EGM Saga Now Over

So the BCS EGM called by Len Keighly et al. is over and the resolutions all went against him and his group that chose to challenge the actions being taken by the BCS Trustee Board and the CEO. No doubt the Trustee Board and CEO will publicize this widely as a huge vindication, but is it really? Only 32% of the members voted, which means 68% didn't care about the result. So the whole thing was about as interesting to the electorate as a local government election.

What I did find mildly interesting was that whilst around 75% of votes cast were for the Trustee Board and CEO on all but one of the resolutions, for the one that really mattered (Resolution 3) only 62% of votes were for them. So although the resolution to terminate further spending on the transformation until full account was taken was defeated, there was clearly less support for the Board's position than on all the other resolutions.

Personally, I was in two minds about the whole thing:

  1. The BCS needed a revamp. Due to the success of ECDL and other ventures the BCS has clawed its way out of the quagmire of some years ago, when it was nigh on destitute, and is now relatively flush. Its image though remained the old one, which was looking tired. I am not a great fan of the brand, but I am pleased there is a new one.
  2. I had little confidence in the Trustee Board and CEO. The way the CEO and Trustee Board managed the whole revamping thing with the members showed significant condescension. Worse, the way the Trustee Board and CEO have handled their side of the EGM campaign has involved throwing members' money at a campaign of vitriol against the people who called the EGM. This really irked me. So I guess I am not unhappy about the actual outcome, as it means the evolution of the BCS will continue. I just hope that the Trustee Board and CEO will learn a lesson from this. The BCS is a membership organization, as well as being a business that generates income. Its members matter. For the last 15 years or so, the BCS has been managed as though the members didn't matter at all.

A particular hobby horse of mine is the members' magazine, which I think is very poor. Compare "IT Now" from the BCS with "Physics World" from the Institute of Physics. I flick through "Physics World" and generally read some of the articles. I flick through "IT Now" and immediately consign it to the recycling. Thus, apart from having some postnominals, what do I get for my membership fee? Recycling.

OK, so I could attend specialist group meetings and branch meetings, and very occasionally I do. However, far too few are actually interesting to me, and all too often they happen on a Thursday. I can't do Thursday evenings. Actually, somewhat like Arthur Dent, I still haven't got the hang of Thursdays.

Copyright © 2017 Russel Winder