Quote of the week – testing

Testing by itself does not improve software quality. Test results are an indicator of quality, but in and of themselves, they don’t improve it. Trying to improve software quality by increasing the amount of testing is like trying to lose weight by weighing yourself more often. What you eat before you step onto the scale determines how much you will weigh, and the software development techniques you use determine how many errors testing will find. If you want to lose weight, don’t buy a new scale; change your diet. If you want to improve your software, don’t test more; develop better.

Steve McConnell, Code Complete: A Practical Handbook of Software Construction

This is a great quote, and one that’s had a lot of impact on me. I have often seen teams complain about not having enough time (and I have made that complaint many times myself). When you ask what the teams would do given more time, the answer is usually “more testing”. As McConnell points out, more testing isn’t going to help if your software is badly written.

Over the past 15 years or so, two testing techniques have come into more common use – test-driven development and the agile practice of having fast unit tests that are run every few minutes. These straddle the border between testing and development techniques – to use the weight-loss analogy, it’s like being able to weigh yourself every few minutes throughout the day and get instant feedback on how the decisions you’re making (eat an apple vs. eat a cupcake; sit on the couch vs. go for a walk) will affect your weight (and yes, this is pushing the analogy too far).
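To make that concrete, here’s a minimal sketch of the kind of fast unit test these practices rely on. The function and tests are invented for illustration; pytest is just one common runner that will pick them up:

```python
# A toy function and its unit tests. With a fast runner such as pytest,
# a suite like this executes in milliseconds, so it can be re-run after
# every small change -- the "step on the scale every few minutes" loop.

def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

def test_apply_discount():
    assert apply_discount(100.0, 25.0) == 75.0

def test_apply_discount_rejects_bad_percent():
    try:
        apply_discount(100.0, 150.0)
    except ValueError:
        pass  # expected
    else:
        assert False, "expected ValueError"
```

The point isn’t the particular assertions; it’s that the feedback arrives while the code is still fresh in your head, which is what moves this from pure testing toward a development technique.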

For more Steve McConnell goodness, take a look at this interview from 2003 where he talks about his career, how he writes books, and the challenges of running a business. There’s also a great post on his blog where he puts his estimating techniques to the test while building a fort for his children.

3 thoughts on “Quote of the week – testing”

  1. That’s an interesting analogy. I agree that the act of executing more test effort doesn’t improve your code. Also, waiting to test is one of the biggest killers of deadlines. So what about WRITING tests? TDD, for instance, or simply getting them written as part of your “definition of done.” I would think writing tests is more like hiring a trainer, to use the weight-loss analogy. Not everyone knows how to lose weight; they need advice. Writing tests is akin to having a trainer tell you to flex your abs more, or straighten your back. Yes, I could flex or straighten on my own, but I might not have gotten the form quite right initially. I have the best intentions when writing a new piece of code, but tests help guide me to doing so with better form.

    • Bob says:

      I have the best intentions when writing a new piece of code, but tests help guide me to doing so with better form.

      I really like this sentence. I too have the best of intentions when writing code, but unfortunately the computer doesn’t take my intentions into account when running it – hence the need for good testing.

      The best project I ever worked on had a lot of testing. We spent time writing test infrastructure to make it easy to add tests, and then a lot of time writing the tests themselves (sometimes before the code, sometimes afterwards – we weren’t perfect [but one of the things I like about TDD is that it was useful even when we didn’t do it by the book]). I firmly believe that every minute we devoted to testing paid us back many times over.
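      For what it’s worth, the “infrastructure” often doesn’t need to be elaborate. A hypothetical sketch (my own illustration, not code from that project) of the sort of helper that makes new tests cheap to add:

```python
import unittest

class OrderTestCase(unittest.TestCase):
    # Hypothetical shared helper: it hides repetitive setup, so each
    # new test states only its intent instead of a page of boilerplate.
    def make_order(self, *items, discount=0.0):
        order = {"items": list(items)}
        order["total"] = sum(price for _, price in items) * (1 - discount)
        return order

class TestOrders(OrderTestCase):
    def test_discount_applies_to_total(self):
        order = self.make_order(("book", 10.0), ("pen", 2.0), discount=0.5)
        self.assertAlmostEqual(order["total"], 6.0)

if __name__ == "__main__":
    unittest.main()
```

      Once adding a test costs three lines instead of thirty, people actually add them.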

  2. John Payson says:

    One problem with testing is that while it’s possible to test a method to ensure that it meets a specification, that won’t ensure that the specification includes everything that’s actually required to meet real-world needs. Further, many bugs arise from failures to consider corner cases; a programmer who neglects a corner case when writing the code is likely to neglect it again when writing the tests.
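    As a toy illustration of that blind spot (a hypothetical example of my own, not code from this discussion): the author of the function below never considered an empty input, and the test inherits exactly the same omission, so it passes while the bug ships.

```python
def average(values):
    # The corner case the author never considered: an empty list.
    return sum(values) / len(values)   # raises ZeroDivisionError on []

def test_average():
    # The test mirrors the same blind spot, so it passes happily
    # even though average([]) crashes.
    assert average([2, 4, 6]) == 4
```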

    On the other hand, there can be considerable value in certain sorts of integration tests which monitor object internals. Ideally, the promises objects make in their documentation and consumers’ assumptions about how they work would coincide, but in reality they don’t. Sometimes documentation fails to make all the promises necessary for an object to be useful, and sometimes it makes promises which implementations can’t possibly meet, but which go beyond what consumers will actually require.

    This problem is hardly confined to software. Many electronic component data sheets, for example, will specify component behavior at 25C, -40C, and +85C, but won’t actually guarantee anything about intermediate temperatures. If a spec guarantees that a device will generate a signal at 4.00 MHz +/- 10% at all temperatures in the range -55C to 105C, but +/- 1% at 25C, would such a part be suitable in a product that needs to be accurate to within 4% when used in climate-controlled environments between 20C and 35C? In practice, almost any oscillator whose frequency is stable to within 10% at those temperature extremes would be stable to within 1-2% in a climate-controlled range, and would thus have no trouble satisfying a 4% accuracy requirement. On the other hand, nothing in the spec *guarantees* that. Consequently, it may be reasonable to test at least some assembled devices in each batch, checking that the oscillator frequency is within 3% at 0C and 55C, so that manufacturing variations haven’t produced parts which meet the spec but not the requirements.
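    A quick arithmetic sketch of that gap, using the numbers from the example above (the code itself is just my own illustration): the band the data sheet guarantees across -55C to 105C is strictly wider than the band the product requires between 20C and 35C, which is exactly why the spec alone can’t prove suitability and a tighter per-batch check makes sense.

```python
NOMINAL_MHZ = 4.00

# What the data sheet guarantees from -55C to 105C: +/- 10%.
spec_low, spec_high = NOMINAL_MHZ * 0.90, NOMINAL_MHZ * 1.10  # 3.60 .. 4.40 MHz

# What the product requires between 20C and 35C: +/- 4%.
req_low, req_high = NOMINAL_MHZ * 0.96, NOMINAL_MHZ * 1.04    # 3.84 .. 4.16 MHz

# The guaranteed band is wider than the required band, so the spec
# alone cannot prove the part is suitable; hence the per-batch check
# at a tighter +/- 3% threshold at 0C and 55C.
assert spec_low < req_low and spec_high > req_high
print(f"spec band: {spec_low:.2f}-{spec_high:.2f} MHz, "
      f"required: {req_low:.2f}-{req_high:.2f} MHz")
```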
