On Twitter recently, someone asked me:
What was your XP picking up languages and paradigms?
How did you discover the value of new concepts you decided to learn?
I wrote an email in reply. It got a bit long for an email and began to look like a blog post, so I copied and pasted the text and worked on it until this blog post resulted.
I think the most important thing for me in learning programming was that I was a theoretical physicist and scientist, bringing a scientific-method approach to the whole activity. I first learned FORTRAN programming using punch cards sent off to a computer with a one-week turnaround. This forces you to execute your code carefully in your head. This is, I think, a skill fewer and fewer people practice.
I then moved on to having direct access to job submission, still using FORTRAN on cards, but now with a one-day turnaround. In effect no change: in development terms, both a week and a day leave you without continuity of thinking. I did, though, learn about abstract data types as a concept around then, and I mocked the idea up in FORTRAN for my code. OK, in hindsight I was hacking the language to achieve a goal it was not designed for, and yet my code was easier to evolve with those constructs for the uses I was making of them. Other people programming around me were still working with the standard FORTRAN model, doing much more debugging and getting wrong results they didn't know were wrong.
Around this time I did some work that involved:
Accessing mainframe code using an editor and job control, submitted for day-turnaround batch via paper tape. Yuk. I grumbled so much, and pointed so often at a minicomputer in the room, that this led to;
Using a rewrite of the FORTRAN code in BASIC on the aforementioned local minicomputer. Still paper tape, but the turnaround time was fabulous. I had to learn the magic skills of using very thin sticky tape to block up holes without jamming the fast tape reader, and of using a hand punch to put corrections in.
So then we are on to terminal access to medium and big local and remote computers. The crucial thing here is quasi-instant turnaround. It was actually still batch job submission, but it all happened on the machines, so you could track a job and thus schedule your time; furthermore, you could have one job waiting while preparing another. It was around this time I got obsessed with program development process and tooling, and with learning the subtleties of the only error message IBM machines ever give: 0C4.
Next stage was learning new languages. First stop: Algol 68. This was an eye-opener and a revelation; everything I had been tinkering with in FORTRAN suddenly became easy. Pascal was clearly a failure in comparison to Algol 68, but it had to be used because it had to be used. Except FORTRAN was still the tool, along with Schoonschip and Reduce. There were also a couple of assembly-language courses, to find out how computer hardware actually worked.
However, I then switched career and taught myself C. It was all about hardware, mostly because I was doing systems programming, but C was so much better than assembly language for this. And UNIX Sixth Edition on a PDP-11. Joyous. With the move to a VAX-11/750 things got interesting: octal → hexadecimal. It was still all about making hardware work, but using nice, elegant, understandable C code. I also got involved in some UI and quasi-IDE research work. This was right at the start of the graphical display revolution, so everything was actually quite simple but revolutionary. And very exciting. I also learned Prolog, which was an eye-opener: a script submitted to a compute engine, a whole new way of thinking.
Also worth noting that editing and executing code now had instant turnaround, no batch any more. Bugs instantly available. I stopped being careful about mentally executing my code. This was an error. I have tried harder since then, but the instant turnaround (which today is even more instant) made development take longer because of all the trivial errors.
Then came the academic phase, teaching in historical order: FORTRAN, Pascal, C++, Scheme, Miranda (which, with KRC, led to Haskell), Fortran, Prolog, Ada, and Java. The theme here was having to learn new paradigms of computation and teach them professionally to others. I was also researching the psychology of programming, UI, parallel programming, and programming languages, and I was part of teams inventing languages. Mostly OO, as I was in the OO rather than the functional camp during the great OO vs FP war of the 1980s.
Along with the plethora of languages and computational models, and always present, was my obsession with build automation, and with scientific method as an approach to debugging. 1979 was a revolutionary year for me, as I found out about and was able to use Make. Till then I had been tinkering, creating my own scripts for automating builds. Make changed my life.
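What Make automated, which hand-rolled build scripts of the era did not, is minimal rebuilding: each rule names a target, its dependencies, and the command to rebuild it, and the command runs only when a dependency's timestamp is newer than the target's. A minimal sketch, with invented file names:

```make
# Hypothetical project: main.c and stats.c sharing stats.h.
# Make re-runs a rule's command only when a dependency is
# newer than its target, so unchanged files are not recompiled.

CC = cc
CFLAGS = -O2 -Wall

analysis: main.o stats.o
	$(CC) -o analysis main.o stats.o

main.o: main.c stats.h
	$(CC) $(CFLAGS) -c main.c

stats.o: stats.c stats.h
	$(CC) $(CFLAGS) -c stats.c

clean:
	rm -f analysis main.o stats.o
```

Run `make` twice in a row and the second run does nothing at all; that timestamp-driven economy is the revolution referred to above.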
For the last 20 years it has just been "more of the same". The languages change (and mostly improve), IDEs and editors change (and mostly improve), but my mental structures for dealing with it all have not really changed. Yes, the facts and beliefs evolve, but the meta-facts and meta-beliefs have stayed fairly constant. Yes, I know that people cannot reliably reason about their own thinking, but I have not had any serious crisis of belief about programming and software development in the last 20 years.
The dimensions are simplicity (or at least understandability), symmetry (or at least an understandable lack of it), and performance (or an understanding of the lack of it). I am in the "OO failed and FP failed, but the amalgam of OO and FP has moved us forward massively" camp. For this we have to thank Scala for providing the bridge in the early 2000s, C++ for moving from OO to generic as its label, and Go and Rust for eschewing classes while still being object-based languages.
Having knowledge spanning processor architectures and execution up to high-level programming, together with knowledge of many different paradigms of software execution, is for me the core, along with a demand for scientific method and not guesswork.