Tuesday, February 17, 2009

Review - The Psychology of Computer Programming

I got a "new" book in the mail a few weeks ago and finally had time to crack it open, "The Psychology of Computer Programming" by Gerald M. Weinberg published in 1971. There is a Silver Anniversary edition that recently came out but I wanted a cheap copy in hardback so mine came from an online used book site. Why read a computer programming book that is almost as old as myself? Because it addresses the core issue of producing computer programs, the people who read, write, edit, maintain, or otherwise cuss at (other people's of course), code. People are the primary factor in all programs; before hardware, language, specifications, anything. If you don't understand how humans go about creating these abstract entities called programs, you will constantly be mystified by missed deadlines, frustrated by missing functionality, and stymied by endless scope creep. Enough of my pontificating, let's talk about the book.

Table of Contents


  1. Programming as Human Performance
    1. Reading Programs
    2. What Makes a Good Program?
    3. How Can We Study Programming?
  2. Programming as a Social Activity
    1. The Programming Group
    2. The Programming Team
    3. The Programming Project
  3. Programming as an Individual Activity
    1. Variations in the Programming Task
    2. Personality Factors
    3. Intelligence, or Problem-Solving Ability
    4. Motivation, Training, and Experience
  4. Programming Tools
    1. Programming Languages
    2. Some Principles for Programming Language Design
    3. Other Programming Tools
  5. Epilogue

"Computer programming is a human activity." A pretty bold thesis from the intro to Part I. Is there a mystique to programming? "Either you can program or you cannot. Some have it; some don't." Both quotes give you a good idea as to what is in this book, tackled expertly by Mr. Weinberg.

Chapter 1 - Reading Programs

"Some years ago, when COBOL was the great white programming hope, one heard much talk of the possibility of executives being able to read programs. With the perspective of time, we can see that this claim was merely intended to attract the funds of executives who hoped to free themselves from bondage to their programmers. Nobody can seriously have believed that executives could read programs. Why should they? Even programmers do not read programs."
I hear a similar story from when assemblers were introduced - as in, "With the development of assemblers, we won't need programmers anymore!" I believe similar statements have been propagated by hordes of 4th generation language and CASE salesmen.

"Programming is, among other things, a kind of writing."
This is not a very mainstream view in the programming world, even though we work with constructs that are literally called "languages".

"We shall need a method of approach for reading programs, for, unlike novels, the best way to read them is not always from beginning to end. They are not even like mysteries, where we can turn to the penultimate page for the best part -- or like sexy books, which we can let fall open to the most creased pages in order to find the warmest passages. No, the good parts of a program are not found in any necessary places -- although we will later see how we can discover crucial sections for such tasks as debugging and optimization. Instead, we might base our reading on a conceptual framework consisting of the origin of each part. In other words, as we look at each peice of code, we ask ourselves the questions, 'Why is this piece here?'"
The author begins by examining a section of PL/I code, showing how certain machine, language, and human limitations influence a program. A machine limitation: there is not enough memory to hold the entire problem set at once, forcing the use of two loops instead of one. A language limitation: there is no end-of-file indicator, so the operators must append a special character (or card, in the punch-card-centric world of the 1970s) and the programmer must account for that special character in code. A programmer limitation: not really understanding the array notation of the language, in this instance PL/I. As a program is modified over time, machines change, languages are updated, and programmers come and go.
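
To make those limitations concrete, here is a minimal sketch in Python; it is my own hypothetical reconstruction of the flavor of Weinberg's example, not a translation of his PL/I. The sentinel value stands in for the missing end-of-file indicator, and the two-pass processing stands in for the memory constraint.

```python
# Hypothetical sketch of the workarounds described above; not the book's code.

SENTINEL = "*EOF*"  # language limitation: no end-of-file indicator, so the
                    # operator appends a special marker record by hand

def read_until_sentinel(lines):
    """Collect records until the special marker appears."""
    records = []
    for line in lines:
        if line.strip() == SENTINEL:   # the code must special-case the marker
            break
        records.append(line.strip())
    return records

def process_in_two_passes(records):
    """Machine limitation (simulated): pretend memory holds only half the
    data, so the work is split into two loops instead of one."""
    half = len(records) // 2
    first_pass = [r.upper() for r in records[:half]]
    second_pass = [r.upper() for r in records[half:]]
    return first_pass + second_pass

if __name__ == "__main__":
    raw = ["alpha", "beta", "gamma", "*EOF*", "ignored after the sentinel"]
    print(process_in_two_passes(read_until_sentinel(raw)))
```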

"And so, some years later, a novice programmer who is given the job of modifying this program will congratulate himself for knowing more about PL/I than the person who originally wrote this program. Since that person is probably his supervisor, an unhealthy attitude may develop -- which, incidentally, is another psychological reality of programming life which we shall have to face eventually."
Some programs have inscrutable logic, like the use of special characters which are ordinarily invalid, or "magic numbers" used as interim states for some long-forgotten problem.
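
As a hypothetical illustration of that kind of patch (mine, not an example from the book), imagine an otherwise-invalid status value pressed into service as an interim state whose original purpose nobody remembers:

```python
# Hypothetical "magic number" patch; the reason for 99 is long forgotten.

STATUS_NEW, STATUS_DONE = 0, 1
STATUS_MYSTERY = 99  # ordinarily an invalid status; added during some ancient emergency

def normalize_status(status: int) -> int:
    """Fold the mystery interim state back into a known one."""
    if status == STATUS_MYSTERY:  # nobody remembers why, but removing it breaks things
        return STATUS_NEW
    return status

print(normalize_status(99))  # -> 0
```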

"Once the patch was made, it worked so well that everyone forgot about it -- more psychology -- and there it sat until unearthed many years later by two archeologist programmers."
Two different versions of the same PL/I program are compared: the first includes many of the limitations of which we've already spoken, the second is much improved by the removal of those limitations. Of the comparison,

"When we look at the difference between Figures 1-1 and 1-4, we might begin to believe that very little of the coding that is done in the world has much to do with the problems we are trying to solve"
Would we be any better off if we could use the latest code (Figure 1-4) as our spec?

"Specifications evolve together with programs and programmers. Writing a program is a process of learning -- both for the programmer and the person who commissions the program. Moreover, this learning takes place in the context of a particular machine, a particular programming language, a particular programmer or programming team in a particular working environment, and a particular set of historical events that determine not just the form of the code but also what the code does!

In a way, the most important reason for studying the process by which programs are written by people is not to make the programs more efficient, more compact, cheaper, or more easily understood. Instead, the most important gain is the prospect of getting from our programs what we really want -- rather than just whatever we can manage to produce in our fumbling, bumbling way"

Hopefully the reason to study a 30+ year-old book is apparent; those who fail to learn from history are bound to repeat it.

Let's study historical mistakes so we can make our own new mistakes instead of repeating the mistakes of programmers long retired.

The rest of the book is just as rich in lessons we can still take to heart. By approaching the book chapter by chapter, I'm hoping to improve my chances of getting all the way through, rather than trying to write a 50-page article all at once that would never get finished.

Here is the link for the next chapter: Chapter 2 - What Makes a Good Program

Tuesday, February 03, 2009

Does Anyone Know What Testing Is?

(With apologies to Chicago for the title.) I'm a big fan of good design, which should be no surprise to those who know me. It can irritate my wife when I analyze a building and then start describing all the different ways you can tell it was poorly designed. I do rave when I see great design, but those occasions are much rarer in these "I don't care if it's a half-baked piece of crap, we need it ready now!" times. I apologize for any management flashbacks that caused.

In any event, I'm also a big fan of testing. I see testing scenarios around me during the day; this story just happens to be one that is easy to tell. The men's room where I work has three sinks, each with a soap dispenser. One sink was more heavily used than the others, judging by the fact that there was never any soap in its dispenser. You'd push on the plunger a couple of times, and when nothing came out you'd shrug and move to another sink. I always figured it was just running out quickly, since I would see the building's cleaning crew in the bathroom a couple of times a week. Surely they were refilling the dispensers, right? After a few weeks of the "soapless sink shuffle" I mentioned the problem to the cleaning guy. I got his attention and showed him how no soap came out. He looked confused and said something to the effect of "it should work because it has plenty of soap." As I looked under the counter where he was examining the half-full bottle of soap, it was clear that a lack of soap wasn't the problem. I left him to his work, figuring that maybe the tube from the plunger was disconnected; plus, he didn't need me looking over his shoulder.

Upon reflection, I realized that his 'test' for being out of soap was looking under the counter to see if any of the containers were empty. Our 'test' for being out of soap was trying to get soap from the dispenser. So even though his test passed, it was still failing the end user, because he wasn't verifying the expected result; the assumption was that the only reason you wouldn't get soap was an empty container.

The bottom line is to always be aware of what your tests are testing. Just because your tests all pass does not guarantee everything is working, especially if you aren't testing the way the system or device is used in real life. So, when you look at your test plan, ask yourself: am I checking that we are out of soap, or am I checking whether I can get soap out of the dispenser? It might seem like a small difference, but it can be the difference between a cleanly running system and one that leaves you all wet.
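
Put into code, the difference between the two "tests" looks something like the following hypothetical sketch (the Dispenser class and its details are my own invention for illustration, not anything from a real system):

```python
# Hypothetical soap-dispenser model contrasting the two kinds of checks.

class Dispenser:
    def __init__(self, soap_ml: int, tube_connected: bool = True):
        self.soap_ml = soap_ml
        self.tube_connected = tube_connected

    def pump(self) -> bool:
        """What a user actually does: push the plunger and see if soap comes out."""
        if self.soap_ml > 0 and self.tube_connected:
            self.soap_ml -= 1
            return True
        return False

broken = Dispenser(soap_ml=500, tube_connected=False)  # plenty of soap, broken tube

# The cleaning crew's test: checks internal state only.
def test_has_soap(d: Dispenser) -> bool:
    return d.soap_ml > 0

# The user's test: checks the behavior the system exists to provide.
def test_dispenses_soap(d: Dispenser) -> bool:
    return d.pump()

print(test_has_soap(broken))        # True  -- passes, yet the user gets no soap
print(test_dispenses_soap(broken))  # False -- the check that actually matters
```

Only the second check exercises the dispenser the way it is used in real life, which is exactly why the first one kept "passing" while everyone shuffled to the next sink.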