Earlier today I did a very rapid presentation in a somewhat confrontational style. The premise was that all the effort to make Python code execute fast is a waste of time and effort: Cython and Numba are misdirected in trying to treat a dynamic programming language as a static, native-code language. NumPy provides a C-implemented, supposedly opaque data structure to be treated as a black box and used via higher-order functions. Yet it too is relatively slow and constraining. The success of Pandas and SciPy, amongst others, in terms of traction and usage hides the fact that, whilst this is a perfect solution for some use cases and users, it is stopping many from achieving serious computations in a short time.

Because the presentation was a short one, the story line leading to Chapel had to be very truncated. I ran some sequential Python code and then some sequential Chapel code which ran about 100 times faster. I then showed a parallel Chapel code which ran 8 times faster still because there were 8 cores. The parallel Chapel code was so simple that I left it displayed and stopped the presentation. The codes were drawn from the Git repository on GitHub. The slides I uploaded to SpeakerDeck and SlideShare. They are also here.
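The actual demo codes live in the GitHub repository rather than in this post; purely as an illustrative assumption (the benchmark actually shown may well differ), a sequential Python kernel of the kind often used in such Python-versus-Chapel comparisons — estimating π by midpoint-rule quadrature — might look like this:

```python
import time


def pi_sequential(n: int) -> float:
    """Estimate pi by midpoint-rule quadrature of 4/(1 + x^2) over [0, 1]."""
    delta = 1.0 / n
    total = 0.0
    for i in range(1, n + 1):
        x = (i - 0.5) * delta          # midpoint of the i-th subinterval
        total += 1.0 / (1.0 + x * x)   # evaluate the integrand
    return 4.0 * delta * total


if __name__ == "__main__":
    start = time.time()
    estimate = pi_sequential(10_000_000)
    elapsed = time.time() - start
    print(f"pi ≈ {estimate:.10f}  ({elapsed:.2f}s)")
```

A pure-Python loop like this spends almost all its time in interpreter dispatch, which is exactly why a straightforward Chapel transliteration of the same loop, compiled to native code, can plausibly run on the order of 100 times faster, before any parallelism is applied.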
Update: the PyConUK media people have uploaded a video of this session here. I haven’t watched it so have no idea whether I like the session or not. Lots of people have told me they liked it, so it can’t be too bad.