The value of minimal tests

According to legend, Master Dijkstra said: "Testing cannot show the absence of bugs, only their presence." This is true only in the narrowest interpretation: testing cannot, in general, show that there are no bugs. It can, however, show the absence of a huge class of bugs: all those which cause the program to fail on every input.
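To make that class concrete, here is a sketch in Python (the function and its bug are invented for illustration): a one-character typo that makes the code fail on every input, not just on tricky ones.

    def word_count(text):
        # Hypothetical always-fails bug: the parameter name is
        # misspelled in the body, so this raises NameError on
        # every call, regardless of the input.
        return len(test.split())  # should be: text.split()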

I submit that these are a significant fraction of all bugs. When you implement something, how often does it work on the first try? When you track down a bug in the wild, how often is the affected code so completely broken that it couldn't work for any input?

Of course the answers to these questions depend on whether those bugs get caught earlier, by manual testing, static checkers, or automated tests. I think much of the value of automated tests is that they are so good at catching those bugs. A single test case takes very little work, yet catches them quickly, reliably, and (unless the test overspecifies the result) with no false positives. That set of virtues is hard to approach by any other means.
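As a sketch of how cheap such a test is, a single case against the hypothetical word_count above suffices; any input at all would expose that bug.

    def test_word_count_smoke():
        # One arbitrary input. If word_count is broken for every
        # input, this fails immediately; asserting one easy-to-check
        # result avoids overspecifying.
        assert word_count("two words") == 2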

Other uses of automated tests are less impressive. Some people write laborious tests for boundary cases and other possible bugs. I'm not terribly enthusiastic about these tests, because each one normally catches only one bug. They still show the absence of specific bugs, but they don't have the power of the most basic tests.
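For contrast, a boundary-case test like this hypothetical one guards against exactly one suspected bug, so each such test buys far less than the smoke test above.

    def test_word_count_empty_string():
        # Targets a single potential bug: miscounting the empty
        # string. Unlike the smoke test, it shows the absence of
        # only this one specific bug.
        assert word_count("") == 0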
