This post is an evolution of an email I sent in reply to a message from Andrei Alexandrescu on the golang-nuts mailing list.
I have liked the D programming language for a long while. It seems like a really nice evolution of many ideas that emerged in C++, but without as much baggage. However, until there is a stable 64-bit v2.0, with AMD64 installers alongside the 386 v2.0 ones, it is not really a viable language in the modern age of a mixed 64-bit and 32-bit economy.
I really like the Go programming language because, via its goroutines and channels, it harnesses the process/channels model of computation, which I think is the future of programming in the modern, multicore era. I have been a fan of CSP (Communicating Sequential Processes) and the Actor Model for over 25 years because they bring structure to programming concurrent systems. Go also brings some reflection capabilities to a statically compiled, native-target language.
The Actor Model is getting a lot of promotion these days via Erlang and Scala (and the add-on library Akka). Software transactional memory (STM) is getting a lot of promotion via Haskell and Clojure. occam lives on as occam-pi in KRoC and in JCSP (on which we have constructed GroovyCSP, part of GPars, a Groovy library for handling concurrency and parallelism). CSP is even getting airtime in Python (via PythonCSP and PyCSP). The overall goal here, which to a great extent strikes me as the goal of Go's goroutines and channels, is to commoditize the processor and turn it into a resource that is managed by the runtime system just as memory is. Applications should not have to worry about multicore directly, only about the abstract, algorithmic expression of parallelism in the application, though they do have to worry about communications distance between processors so as to avoid inappropriate assumptions about communications time and safety. This leads to having to manage locality and communications distance. As yet none of the CSP and Actor Model systems handle this properly, though there is good work happening on local and remote actors.
The problem for me with Go and D is that both languages give every appearance of being backward looking, though this may just be conditioned by worrying about POSIX compliance. The lowest-level libraries have all the naming and parameter feel of 1980s C.
For me there are two questions that should be driving programming languages:
- What is the language for writing the next big operating system?
- Do PGAS languages have the edge for writing applications in the future?
Linux and Mach, like Windows, are now really in "maintenance mode": their architectures and fundamental capabilities are fixed and unchangeable. Future hardware architectures show all the signs of heading directly towards multiple, heterogeneous, multicore, NUMA (non-uniform memory architecture) systems with bus-level clustering, local clustering and wide-area clustering (if not more levels of communications), and operating systems and programming languages are not really ready to handle this. Languages like Chapel, X10, and even Fortress are doing lots of interesting research in PGAS (partitioned global address space), but because they market themselves in the HPC (high performance computing) arena, they do not get taken as seriously as they should by a wider audience of programmers. Certainly, though, they are neither ready, nor possibly ever appropriate, for the leap to being languages with which to write operating systems.
So the question really is whether D and Go are just interesting sidelines in the interregnum between the era of network-connected uniprocessors and that of massively parallel, multi-level architecture systems. Or can one of them be the systems programming language of the next era of computing?
Go and its goroutines handle bus-level multicore processors quite nicely, but then the next level is the network; there is no concept of layered clustering. C++0x gives us futures and asynchronous function calls to provide similar, albeit different, functionality and restrictions. D currently has the problem that it only has a 32-bit realization; the 64-bit realization is not yet released (though it is due soon). Until then it is not possible to experiment with D to see if its threads library can support the sort of programming abstractions required for highly parallel systems and applications: shared-memory multi-threading is not an appropriate tool for application programming, which may turn out to be the downfall of Chapel and X10 and the whole PGAS approach as it now stands.
Go is probably the first statically compiled, native-target language with a built-in garbage collection system. Such a thing is possible with C++, but at least until recently the idea of combining garbage collection and C++ was frowned upon. I suspect, though, that there will be a lot of prejudice against using a language with garbage collection for writing a new operating system, which casts doubt on whether Go will get used for that, despite all the splendid Plan 9 related work. So can D really step up and be a candidate? Or will people just descend to the argument that "C is the only language to use because it is the only one with a low enough viewpoint to write an operating system"?
Then there is the question of what the architecture of the next operating system should be. Will it be an exokernel, a microkernel, a nanokernel, or something radically new? Will people choose a modern language to implement it, or will they just assume C? Will Google's might push Go (and possibly D) into the limelight? Is the JVM the future virtual machine of choice? What role do hypervisors have? What will VMware, Parallels, and VirtualBox do to perturb things?
To summarize: We live in interesting times.