I've been thinking lately about tests in software development. There are plenty of blogs, talks, and articles out there clamoring for more tests, for TDD (or against TDD), for a more complete test suite, and so on. I'm coming to the opinion that these are all detrimental approaches to testing. Tests are not inherently good, and more tests do not necessarily make your software better. Take, for example, a codebase with 100% branch coverage but a test suite that takes 14 days to run - are those tests effective? Do they even guarantee that there are no bugs in the code? (No - branch coverage doesn't guarantee proper integration with other systems, that you aren't missing necessary branches, or that the business logic is correct.) If you delivered 100,000 tests and no code, would the project succeed?
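To make that last point about coverage concrete, here's a minimal sketch. The shipping rule and the `shipping_cost` function are hypothetical, invented purely for illustration: the tests exercise every branch, so coverage reports 100%, yet the bug survives because the input where it lives is never checked.

```python
# Hypothetical business rule (assumed for this sketch):
# orders of $50 or more ship free.
def shipping_cost(order_total):
    # Bug: the boundary is wrong. A $50.00 order should ship free,
    # but `>` (instead of `>=`) charges it.
    if order_total > 50:
        return 0.0
    return 5.99

def test_shipping_cost():
    # Both branches are executed, so branch coverage is 100%...
    assert shipping_cost(60) == 0.0   # covers the free-shipping branch
    assert shipping_cost(20) == 5.99  # covers the paid branch
    # ...yet the $50 boundary case, where the bug lives, is never tested.
```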

Recently, I've started thinking about a metaphor relating tests to nails in the context of building a deck. When building a deck, nails are absolutely fundamental (let's just assume that nails = screws or whatever fastener you want to use for your deck). You can't have a stable, useful deck with too few nails. Likewise, having enough tests is fundamental to having stable, useful software. But with a deck, if you used too many nails (for example: a nail every square inch) you would actually begin to damage the boards, and the deck would become less stable with each nail. Which is to say: there is an optimal number of nails, and of tests. Too few tests won't detect regressions or prevent deploying broken code. Too many tests can ultimately damage the health of your codebase, either by making it impossible to run the test suite in a consistent manner or by making every change so painful that people start to ignore test failures (or perhaps in other, more subtle ways).

The other thought I draw out of this metaphor (I'm trying not to take it too far) is that nails and tests need to be put in the right places. Nails that just go through the decking without securing it to anything are detrimental - they don't count towards that optimal number. Likewise, tests that assert implementation instead of results ("we called these three methods" as opposed to "we got this return value") make refactoring difficult and do nothing to ensure correctness. Testing private methods, or the correctness of libraries (which have their own test suites), probably also falls into this bucket. We need to have good tests in the right places.
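Here's a sketch of that difference. Every name in it - `checkout`, the payment gateway, the tax calculator - is hypothetical, made up for illustration: the first test is coupled to how the work happens, the second to what the caller actually observes.

```python
import unittest
from unittest.mock import MagicMock

def checkout(cart, payment_gateway, tax_calculator):
    """Charge the cart's subtotal plus tax; return the charged total."""
    subtotal = sum(cart)
    total = subtotal + tax_calculator.tax_for(subtotal)
    payment_gateway.charge(total)
    return total

class ImplementationCoupledTest(unittest.TestCase):
    def test_calls_collaborators(self):
        gateway, taxes = MagicMock(), MagicMock()
        taxes.tax_for.return_value = 1.00
        checkout([10.00], gateway, taxes)
        # "We called these methods": breaks if checkout is refactored
        # to compute tax inline, even when the charged amount is identical.
        taxes.tax_for.assert_called_once_with(10.00)
        gateway.charge.assert_called_once_with(11.00)

class ResultFocusedTest(unittest.TestCase):
    def test_returns_charged_total(self):
        gateway, taxes = MagicMock(), MagicMock()
        taxes.tax_for.return_value = 1.00
        # "We got this return value": survives internal refactors
        # as long as the observable behavior is preserved.
        self.assertEqual(checkout([10.00], gateway, taxes), 11.00)

if __name__ == "__main__":
    unittest.main()
```

The first test fails under any refactor that changes how the total is computed; the second keeps passing as long as callers get the right answer - which is the thing we actually care about.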

This is all, ultimately, to point out that tests are not a goal in and of themselves. They are useful for supporting the stability of software and for preventing bugs - both initially and when making changes. When tests are created for the sake of 'having more tests' or for similar reasons untethered from business goals, they can become detrimental rather than useful. TL;DR: Tests need to support the real goal - working code that delivers business value - rather than being treated as valuable in and of themselves.