Quote of the week – make it easier

Here’s how you could have made it easier on yourselves.

Matt Smith

A couple of years ago I took a class in improvisational theater from Matt Smith. If you are looking for a way to push yourself out of your comfort zone, I highly recommend improv – particularly since, in our case, the class ended with two public performances. If you’re in Seattle, take one of Matt’s classes – the man is a genius.

(If you have a fear of public speaking, improv might not cure you of that fear, but at least it will demonstrate that there is something worse than getting up on stage with a speech already prepared. Getting up on stage with nothing prepared is in a whole different category of scary.)

In 60 hours of class time over 4 weeks, Matt never once told us that we were doing something wrong. Instead he used the quote above – “here’s how you could have made it easier on yourselves” (there’s a whole series of blog posts waiting to be written about what a great way of giving feedback that phrase is).

I was reminded of Matt’s words when I read this list of techniques for robust software (the author is given as Nick P – I would love to be able to attribute this more precisely; if anyone knows who Nick P is, please post in the comments).

It’s a good list of techniques that have been known for many years (and yet I see many teams failing to implement them), but what struck me is how many of the techniques come down to “making it easier on yourselves”. It’s well worth reading the entire list, but here are a few highlights:

  • #2 “they used boring constructs that were easy to analyse”
  • #3 “each module must fit in your head”, “control graph that’s pretty predictable”
  • #6 “adopt languages that make robust development easier”
  • #11 “brilliant tooling that could reliably find race conditions, deadlocks, or livelocks”

Software development is hard; we need to take every opportunity to make it easier on ourselves.

(You should also check out Matt’s TEDx talk on The Failure Bow – a concept that came up over and over again in our class and would improve many workplace cultures).

Quote of the week – testing

Just in case the Twitter links stop working:

Code that isn't tested == code that is broken.

Corollary. Multi-threaded code that is tested != code that works.

Wayne Wooten

Ah, the joys of parallel programming.
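
Wooten’s corollary is easy to demonstrate. Here is a minimal sketch (my own hypothetical example, not from the tweet) of multi-threaded code that can pass a simple test and still be broken – two threads increment a shared counter with no synchronization:

```cpp
// Hypothetical example: a data race that a simple test will often miss.
// Build with: g++ -std=c++17 -pthread race.cpp
#include <iostream>
#include <thread>

int counter = 0;  // shared and unsynchronized -- this is the bug

void increment_many(int n) {
    for (int i = 0; i < n; ++i) {
        ++counter;  // read-modify-write is not atomic; updates can be lost
    }
}

int main() {
    const int n = 1'000'000;
    std::thread a(increment_many, n);
    std::thread b(increment_many, n);
    a.join();
    b.join();
    // On some runs this prints 2000000 and a test would go green;
    // on others it prints less. Tested != works.
    std::cout << "expected " << 2 * n << ", got " << counter << "\n";
}
```

The fix (std::atomic<int>, or a mutex) is easy here; the point is that a passing test run proves very little about code like this.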

I did my undergraduate degree at Manchester University in the 1980s. In my “Parallel Computing” class we had a lecture about the Manchester Dataflow Machine – a parallel computing architecture that was going to take over the world.

I did my master’s degree at the University of Washington in the 2000s. In my “Parallel Computing” class we had a lecture about the Manchester Dataflow Machine, and why it didn’t take over the world.*

In those 20 years we seem to have come up with a lot of ideas for parallel computing without ever hitting on the single idea that ties everything together. For serial computing we have the von Neumann architecture, and even though we now have caches and branch prediction and pipelining and more, we can still regard the basic machine as a von Neumann machine.

The most practical approach I have seen is the use of parallel programming patterns. Depending on who you talk to, and how they group the patterns, there are somewhere between 13 and 20 of them. I am partial to Dr. Michael McCool’s explanation of the patterns:

Parallel Programming Talk 82 – Michael McCool

Structured Parallel Programming with Deterministic Patterns
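
As a small taste of what these patterns look like in code, here is a minimal sketch of the simplest one – the “map” pattern, where a pure function is applied independently to every element – using the C++17 parallel algorithms (my illustration, not McCool’s code):

```cpp
// Minimal sketch of the "map" pattern with C++17 parallel algorithms.
// (GCC may need -ltbb; this is an illustration, not McCool's own code.)
#include <algorithm>
#include <execution>
#include <vector>

int main() {
    std::vector<float> in(1 << 20, 2.0f);
    std::vector<float> out(in.size());
    // Each output element depends only on the corresponding input element,
    // so the runtime is free to spread the work across cores.
    std::transform(std::execution::par, in.begin(), in.end(), out.begin(),
                   [](float x) { return x * x + 1.0f; });
}
```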


* I am being cruel for comic effect. It isn’t necessary for a machine (particularly not a research machine) to take over the world in order to be groundbreaking, useful and educational. The dataflow model is still a very relevant one – we used a dataflow architecture for hooking together graphs of image processing algorithms on the Adobe Image Foundation project. Search for “Photoshop oil paint filter” for examples produced by a dataflow graph of 13 image processing kernels.

Quote of the week – shortcuts

… too tight a schedule will inevitably lead to the temptation to take shortcuts. These shortcuts might succeed in getting the system working on time – but only if everything goes right, which it rarely does.

Gerald Weinberg The Psychology of Computer Programming Silver Anniversary Edition p68

The Psychology of Computer Programming was originally published in 1971 but remains a great read – still relevant even in a very different computing world. I have the silver anniversary edition, which reprints the original with additional commentary at the end of each chapter – what Weinberg calls “wisdom of hindsight remarks”. Weinberg’s comment on the quote above, 25 years on:

The wisdom of age has made me ashamed of this statement. “Rarely” is pussyfooting; something always goes wrong. Shilly-shallying like this by authors like me has perpetuated the myth of the “optimistic” estimate – how long a project would take if everything went right. Such “estimating” might just as well be based on the assumption that the Laws of Thermodynamics will be repealed during the project.

Gerald Weinberg The Psychology of Computer Programming Silver Anniversary Edition 5.i

Quote of the week – 6 tools

I imposed on [Jack] Real the requirement that he try to design the helicopter so that it could be serviced with six simple tools – any six of his choice. This was more a challenge than an arbitrary decision. I think most good designers want to keep things simple, but sometimes, for the sheer engineering delight of creating, things become unnecessarily complex and cumbersome.

Clarence L. “Kelly” Johnson Kelly: More than My Share of It All

Kelly Johnson was the head of Lockheed’s Skunk Works and a renowned aeronautical engineer. His Wikipedia article contains many more details, including his 14 Rules of Management.

Kelly: More than My Share of It All is also worth reading. It includes stories of Amelia Earhart, the SR-71 Blackbird and a 400lb lion chasing Althea Johnson (Kelly Johnson’s wife) around a factory.

I am not going to search for a software equivalent of “six simple tools” – that would be an analogy too far and, as Johnson goes on to say, the “six tools” were less a hard requirement than a way of setting an attitude. Maintenance matters. Simplicity matters. What use is the best helicopter in the world if you can’t keep it running?

I think there is a software parallel in this comment from Chris Jaynes (commenting on Tim Bray’s post On Duct Tape, itself a response to Joel Spolsky’s post The Duct Tape Programmer, which in turn was inspired by the chapter profiling Jamie Zawinski in the book Coders at Work):

Yes, shipping version 1 is a feature, but shipping version 2 is ALSO a feature!

To that I would add:

Being able to fix bugs is a feature.
Being able to maintain code is a feature.
Being able to add features is itself a feature.

Quote of the week – teaching

Not exactly a quote – I don’t have the exact wording and I am not certain who said it (I am separated from the original by at least two degrees) – but it makes an important enough point that I think it’s worth writing about.

A well designed linear algebra library should teach the underlying mathematics of linear algebra.

Attributed to Jim Blinn

Jim Blinn is a big name in the computer graphics world. I heard this quote in the context of computer graphics (which uses a lot of linear algebra), but you can substitute anything you’d write a library for – statistics, databases, containers, date and time calculations. I don’t know exactly what features of a library Jim Blinn had in mind, but that isn’t going to stop me speculating.

Why might it be important to follow the advice in the quote?

  • If you already know about the subject it will be easier to start using the library.
  • If you don’t know about the subject (but have to use the library) you’ll learn about the subject, and can also supplement that learning with other teaching material.
  • If you are writing a library, this advice gives you an additional design tool.

Moving on, what features of a library help teach the underlying theory? I can think of a few; I am sure there are many more – please add to this list in the comments.

  • Nomenclature. Naming comes up over and over again as an important topic. If there are already standard names for things, use them in your library.
  • Clarity of data. Is it obvious what data is required for a given function (or class or module) to do its job? I often find that there’s more data available than is required, and as a newcomer digging through the library it can take me some time to work out what is really needed. Simple functions that take exactly the data they require and return exactly the thing to be calculated are great. The STL algorithms do this – they take only the values that are necessary to do their job, and they are very specific about the types of iterators required.
  • No shortcuts. In a 3D graphics library it is very tempting to make points and vectors the same thing. They both have x, y, z values, they have a fair number of operations in common, and the operations that aren’t strictly in common usually have an obvious and reasonable meaning for both. However, points and vectors are not the same thing. This is an example where using the type system helps – see the sketch after this list.
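
As a sketch of that last point (my own illustration): give points and vectors distinct types and the compiler will reject operations that make no geometric sense, while the operations you do define teach the underlying mathematics – point minus point is a vector, point plus vector is a point, but point plus point means nothing.

```cpp
// Sketch: distinct Point and Vector types teach the underlying geometry.
struct Vector { float x, y, z; };
struct Point  { float x, y, z; };

// The difference of two points is a displacement -- a Vector.
Vector operator-(Point a, Point b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// A point translated by a vector is another point.
Point operator+(Point p, Vector v) { return {p.x + v.x, p.y + v.y, p.z + v.z}; }

// Vectors can be added and scaled; points cannot.
Vector operator+(Vector a, Vector b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vector operator*(float s, Vector v)  { return {s * v.x, s * v.y, s * v.z}; }

// There is deliberately no operator+(Point, Point): the compiler now
// enforces a distinction that a single shared "float3" type would blur.
```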

Quote of the week – testing

Testing by itself does not improve software quality. Test results are an indicator of quality, but in and of themselves, they don’t improve it. Trying to improve software quality by increasing the amount of testing is like trying to lose weight by weighing yourself more often. What you eat before you step onto the scale determines how much you will weigh, and the software development techniques you use determine how many errors testing will find. If you want to lose weight, don’t buy a new scale; change your diet. If you want to improve your software, don’t test more; develop better.

Steve McConnell Code Complete: A Practical Handbook of Software Construction

This is a great quote, and one that’s had a lot of impact on me. I have often seen teams complain about not having enough time (and I have made that complaint many times myself). When you ask what the teams would do given more time, the answer is usually “more testing”. As McConnell points out, more testing isn’t going to help if your software is badly written.

Over the past 15 years or so, two testing techniques have come into more common use – test-driven development and the agile practice of having fast unit tests that are run every few minutes. These straddle the border between testing and development techniques – to use the weight-loss analogy, it’s like being able to weigh yourself every few minutes throughout the day and get instant feedback on how the decisions you’re making (eat an apple vs. eat a cupcake; sit on the couch vs. go for a walk) will affect your weight (and yes, this is pushing the analogy too far).
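
For anyone who hasn’t seen the practice, a fast unit test is nothing exotic. Here is a bare-bones sketch (hypothetical function and checks, using plain assert; a real project would use a test framework):

```cpp
// Bare-bones sketch of a fast, deterministic unit test.
#include <cassert>

// The function under test (hypothetical example).
int clamp(int value, int low, int high) {
    return value < low ? low : (value > high ? high : value);
}

int main() {
    // Runs in microseconds, so it can run after every compile --
    // the "step on the scale every few minutes" part of the analogy.
    assert(clamp(5, 0, 10) == 5);
    assert(clamp(-3, 0, 10) == 0);
    assert(clamp(42, 0, 10) == 10);
    return 0;
}
```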

For more Steve McConnell goodness take a look at this interview from 2003 where he talks about his career, how he writes books and the challenges of running a business. There’s also a great post on his blog where he puts his estimating techniques to the test while building a fort for his children.

Quote of the week – Atomic Accidents

fissionable materials always seemed capable of finding a flaw in the best intentions.

James Mahaffey Atomic Accidents

I have just finished reading Atomic Accidents – it’s a fascinating and frightening book. Don’t read it if your sleep is easily disturbed (I mean it – I have started having dreams where critical quantities of uranium are about to come together and I cannot stop them).

This post is going to draw some analogies between nuclear engineering and software engineering. Respecting Martin Fowler’s quote about analogies (discussed here), I am going to try to extract questions from the comparison, not answers. In any case, many of the answers from nuclear engineering are unlikely to help us in the software field. For example (off-topic fact of the day), the shape of the vessel in which radioactive material is stored has a big impact on whether it will go critical or not. For safety you want as much surface area per unit volume as possible. Long, skinny tubes are safe; spheres and scaled-up tomato soup cans are not (food cans are designed to use as little tin as possible for the volume they hold). Given the contents of some office coffee machines, you might want to stop using those nice cylindrical coffee mugs.
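
A quick back-of-the-envelope comparison (my arithmetic, not the book’s) shows how large the difference is. For a fixed volume of one litre:

```latex
% Surface area for a fixed volume V = 1000 cm^3.
% A sphere minimises surface area per unit volume (A/V = 3/r);
% a long thin tube maximises it (A/V = 2/r, ends ignored).
\[
\text{sphere: } r = \Big(\tfrac{3V}{4\pi}\Big)^{1/3} \approx 6.2\ \text{cm},
\qquad A = 4\pi r^2 \approx 480\ \text{cm}^2
\]
\[
\text{tube of radius } 1\ \text{cm}: \; L = \tfrac{V}{\pi r^2} \approx 318\ \text{cm},
\qquad A \approx 2\pi r L \approx 2000\ \text{cm}^2
\]
```

Roughly four times the surface area for the same volume – which is why the long, skinny tube is the safer shape.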

There are a number of themes that run through the book. I have picked out a few for discussion:

  1. An impossible situation. Something that could not possibly happen did. In line with Murphy’s Law, any container of the wrong shape will inevitably end up containing fissionable material, regardless of how many safety precautions are in place. In one accident, the container in question was a cooking pot from the kitchen that was not physically connected to anything else in the system.
  2. Complex systems. Complexity is mentioned often in the book, and never in a good way. Complexity kills, quite literally in the case of nuclear engineering. The more complex the system the more difficult it is to predict how it will react.
  3. Things are (unknowingly) going wrong even in apparently normal operation. The system appears to be working fine, however an excess of material is collecting in an unexpected place, or some amount of coolant is being lost. None of this matters until the moment it does.
  4. A lack of visibility into what is going wrong. Many of the accidents were made worse because the operators did not have a clear picture of what was happening. Or they did have a clear picture but didn’t believe it or understand it.
  5. Lessons are not always learned.

All of these apply in the context of software:

  1. An impossible situation. I have often seen an engineer (sometimes me) state that something “cannot be happening”, despite the fact that it is quite plainly happening. My rule is that I get to say “that’s impossible” once, then I move on, deal with reality and fix the problem.
  2. Complex systems. We have talked about complexity on this blog before. As if complexity wasn’t bad enough on its own it makes every other problem far worse.
  3. Things are going wrong even in normal looking operation. Every time I dive into a system that I believe is working perfectly I discover something happening that I didn’t expect. I might be running a debugger and see that some pointer is NULL when it shouldn’t be, or looking at a log file showing that function B is called before function A despite that being incorrect, or studying profiler output demonstrating that some function is called 1,000 times when it should only be called once. In Writing Solid Code, Steve Maguire suggests stepping through all new code in the debugger to ensure that it is doing the right thing. There is a difference between “looking like it’s working” and “actually working”.
  4. A lack of visibility into what is going wrong. Modern software systems are complicated and have many moving parts. Imagine a video delivery system that runs over the internet with a web front end, a database back end and a selection of different codecs used for different browsers on different devices. There are many places for things to go wrong, and too often we lack the tools to drill down to the core of the problem.
  5. Lessons are not always learned. I could rant about this for a very long time. At some point I probably will but the full rant deserves a post of its own. I have seen far too many instances in this industry of willful ignorance about best practices and the accumulated knowledge of our predecessors and experts. We’re constantly re-inventing the wheel and we’re not even making it round.

I am quite happy that I do not work on safety critical systems, although I did once write software to control a robot that was 12′ tall, weighed 1,000 pounds and was holding a lit welding torch. I stood well back when we turned it on.

In conclusion, one more quote from the book:

A safety disk blew open, and sodium started oozing out a relief vent, hit the air in the reactor building, and made a ghastly mess.

No one was hurt. When such a complicated system is built using so many new ideas and mechanisms, there will be unexpected turns, and this was one of them. The reactor was in a double-hulled stainless steel container, and it and the entire sodium loop were encased in a domed metal building, designed to remain sealed if a 500-pound box of TNT were exploded on the main floor. It was honestly felt that Detroit was not in danger, no matter what happened.

Atomic Accidents. Read it, unless you’re of a nervous disposition.

Quote of the week – consistency

A good architecture is consistent in the sense that, given a partial knowledge of the system, one can predict the remainder.

Fred Brooks The Design of Design

Fred Brooks is well known for The Mythical Man-Month, but his newer book The Design of Design is also worth reading. He takes general design principles and applies them to computer software, computer hardware, buildings large and small, aeroplanes, bridges and more.

One of the main themes of The Mythical Man-Month is conceptual integrity. He carries this drumbeat over to The Design of Design and explores it some more. For my own part, there is a joy in being able to predict where something is in a system, or what it’s called, because of the consistency running through the system.

Quote of the week – planning

I have always found that plans are useless, but planning is indispensable.

Dwight Eisenhower

Eisenhower was making this comment in a military context, and it fits in well with Helmuth von Moltke’s much earlier quote:

No plan of battle ever survives contact with the enemy.

If we remove the quotes from their military context (although I have encountered plenty of people in my career who can legitimately lay claim to being “the enemy”), they both speak to the usefulness of the planning process itself. A good project plan is rarely “useless”, but the planning that went into it should have brought many different aspects of the problem into play, and should have led to a far greater understanding of the problem space. This understanding helps when the plan makes contact with “the enemy” and has to be rethought.

I refer back to this post where the answer to the question “what was the most useful information you recorded during those design meetings?” was “the reasons why we didn’t do something”.

The two quotes also show the difference between “traditional” design processes and Agile design processes. Up-front planning absolutely has its uses, but the plan inevitably has to change when you start implementing it.