Russel Winder, Concertant LLP
The organizing committee has announced that SC08 has 10,023 registered attendees, an all-time record for the conference. It seems then that more people than ever are interested in supercomputers – even though they are hardly everyday machines. Of course, what happens in high performance computing today will be what mainstream programmers are doing in two or three years' time, so there is some importance to the content of SC08!
Much of SC08 is about “Big Science”, and the performance of some of the latest hardware leaves traditional “Big Iron” far behind. It seems, though, that much of SC08 is tantamount to a bragging contest about who can do more things faster, who has the glossiest graphics, and so on – but it has to be admitted that the hardware and the applications on show are very, very impressive. Not everything is about more grunt or more glitz, though: some people at the conference are concerned about how “Joe Programmer” and “Stephanie Programmer” develop software for the new generation of computers comprising multiple, multicore processors. For example, a workshop organized by B. Scott Michel (The Aerospace Corporation) and Hans Zima (NASA Jet Propulsion Laboratory) entitled “Bridging Multicore's Programmability Gap” was held on Monday 2008-11-17. The workshop included presentations addressing various aspects of the “Programmability Gap”, a problem caused by today's hardware having evolved far faster than the programming languages and techniques used to develop software for modern multicore systems. All of the presenters explicitly or implicitly accepted that the core (!) problem is that this new generation of machines brings serious problems of programmability not addressed by current programming languages. Most of the presenters seemed to assume that C or C++ would be the only language of future development, though a few disagreed.
The HPCS programme, funded by DARPA, supported work on three programming languages as pieces in the jigsaw of harnessing parallelism: Chapel, X10 and Fortress. All of these were mentioned during the workshop, but only Chapel (from Cray) had a section of the meeting dedicated to it. X10 was represented by a “spin-off” project called Habanero (from Rice University). Fortress (from Sun Microsystems) did not have any representation. Nor was there any representation of the work on standard Java and C++ support for parallelism. There seems to be an inbuilt assumption that parallelism needs either a new language or just the use of C or C++ with MPI, or possibly OpenMP. There was an implicit assumption that Fortran was not important, and that C and C++ were the real languages of choice. Surprisingly, Java didn't really have any mind share compared to C and C++.
Perhaps then the workshop was too parochial, too oriented towards academia and government-funded projects. Certainly there did seem to be something of a common mindset problem. However, the really important thing is that there was explicit recognition of a programmability problem: the current programming languages and techniques are insufficient to the task of harnessing the power of today's multicore processors.
Interestingly, Haskell showed its face. Unlike imperative languages such as C, C++ and Java, and the new languages Chapel, X10 and Fortress, Haskell is a functional language, as is Erlang. Harnessing parallelism with a functional language is very different from (and very much easier than) doing so with an imperative language. Could Haskell (and/or Erlang) be a viable choice for programming in the future? Technically it certainly could. The problem, though, is the assumption in the mainstream computing community that C, C++ and Java are the only programming languages for mainstream development. It seems then that Haskell and Erlang, even if they would be better languages, are unlikely to take root widely. Fortunately, C++ and Java are taking on board more and more of the declarative idioms that come from functional programming languages. However, this is not enough: C++ and Java are still imperative languages, and so the assumptions that can be made in functional languages cannot be made.
Other interesting factors brought up in the workshop were the use of hardware accelerators, multi-dimensional communication structures in hardware, and the use of mathematical modelling and automated code generation to avoid the need to do all the programming by hand. There was an interesting tension here between heterogeneity and homogeneity. The hardware accelerator people and the automatic code generation people were advocating a focus on heterogeneity and coping with difference, whereas the multi-dimensional communication advocates were focusing on connecting homogeneous components to provide flexibility. At the heart of all this was a view that treating multicore systems in a component way was important. Implicitly, all of these people were attacking the problem of levels of abstraction and having the right tool for the job – with the implicit assumption that we do not currently have this. To do real justice to any of these topics requires more space than this article allows. Suffice it to say that abstraction is at the heart of programmability.
So, returning to the title of this article: “Is there a Programmability Gap?” Yes, definitely. Do we have an answer? Not really. Do we have a handle on the problem? Yes. Is there hope for the future? Definitely – but it requires mainstream computer-based systems developers to recognise this fact. What is the future? Well, probably a bit of all of the things seen at the workshop – new languages, new libraries and features for old languages, new architectures, amendments to old architectures. In the end though, dealing with the “Programmability Gap” means new abstractions and a lot of training of software developers.