Test-driven development is great as long as you have proper tests. The problem is that it’s very hard to predict enough edge cases to cover the field of possible scenarios. Code coverage analysis helps developers make sure every code block is executed, but it does nothing to ensure an application correctly handles variations in data, user interaction, and failure scenarios, or how it behaves under different stress conditions.
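To make that concrete, here’s a toy example (the `parse_price` function is entirely hypothetical): the single test below executes every line of the function, so coverage reports 100%, yet the function still mishandles plenty of realistic inputs.

```python
def parse_price(text):
    """Parse a price string like '$19.99' into an integer number of cents."""
    dollars, cents = text.lstrip("$").split(".")
    return int(dollars) * 100 + int(cents)

def test_parse_price():
    # Every line of parse_price executes, so coverage reports 100%.
    assert parse_price("$19.99") == 1999

# And yet:
#   parse_price("$19")       -> ValueError (no '.' to split on)
#   parse_price("$1,299.00") -> ValueError (',' breaks int())
#   parse_price("$19.9")     -> 1909, silently wrong (should be 1990)
```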
Most developers are already conscious of the fact that tests are helpful but never complete. The danger is that better tests make worse developers! It’s very easy to lean too heavily on passing tests, wildly changing code until the light goes green without spending enough time thinking through the application’s logic.
I’m basically saying that, psychologically speaking, passing tests give us a false sense of security. They can be a distraction from carefully crafted, thought-through code. That’s why I advocate writing tests only for the purpose of regression testing. It should be a follow-up step, not an integral part of initial development.
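As an illustration of what I mean by a regression-only workflow, here’s a sketch (the `slugify` function and the bug report are made up): a bug is found and fixed, and then pinned with a test so it can never silently return.

```python
import re

def slugify(title):
    """Turn an article title into a URL slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify_consecutive_punctuation():
    # Regression test: consecutive punctuation once produced "--" in
    # slugs. The fix is already in place; this test documents it and
    # fails immediately if a later change reintroduces the bug.
    assert slugify("Hello, World!") == "hello-world"
```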
2 responses to “A False Sense of Security with Test-driven Development”
I find it best to have both: tests that define the behavior I expect before I’ve written the implementation, as well as regression tests to catch edge cases missed on earlier passes. The problem with having only regression tests is that refactoring becomes nearly impossible to get right. Unfortunately, a lot of developers leave out the “refactor” part of red-green-refactor and thus never see the true value of writing tests first.
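For what it’s worth, here’s a minimal sketch of the cycle (all names made up):

```python
# RED: write the test first; it fails because total() doesn't exist yet.
def test_total_applies_discount():
    assert total([100, 50], discount=0.5) == 75

# GREEN: the simplest implementation that makes the test pass.
def total(prices, discount=0.0):
    return sum(prices) * (1 - discount)

# REFACTOR: with the test green, the implementation can be reshaped
# (extract helpers, rename, restructure) while the test verifies that
# behavior is preserved. Skipping this step is what hides the payoff.
```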
Also, having a comprehensive set of tests doesn’t excuse developers from thinking.
I think that writing tests first is as much about API design and documentation as it is about a sense of security that your code works properly. Essentially, if you do it right, you can start programming with your interface before you have it. This gives you a gut feel for whether what you’re building will actually help you accomplish your goals. If it’s awkward to write a test for a particular design, it’s probably going to be difficult to be sure it’s working, whether you have tests or not. Driving the code through tests can help you identify cases where you might want to rethink your API so that it’s more loosely coupled, easier to debug, etc.
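As a rough sketch of what I mean (`RateLimiter` and everything about its interface is invented for the example), you write the test before any implementation exists and let it push the design around:

```python
import time
from collections import defaultdict, deque

def test_rate_limiter_blocks_after_burst():
    # Writing this first forces the API decisions: the policy goes in
    # the constructor, allow() is a plain boolean query, and state is
    # kept per client rather than in a global.
    limiter = RateLimiter(max_requests=2, per_seconds=60)
    assert limiter.allow("client-a")
    assert limiter.allow("client-a")
    assert not limiter.allow("client-a")  # third call inside the window
    assert limiter.allow("client-b")      # clients are independent

class RateLimiter:
    """Throwaway in-memory implementation, just to make the sketch run."""

    def __init__(self, max_requests, per_seconds):
        self.max_requests = max_requests
        self.per_seconds = per_seconds
        self._hits = defaultdict(deque)

    def allow(self, client_id):
        now = time.monotonic()
        hits = self._hits[client_id]
        # Drop timestamps outside the window, then check the budget.
        while hits and now - hits[0] > self.per_seconds:
            hits.popleft()
        if len(hits) < self.max_requests:
            hits.append(now)
            return True
        return False
```

If the test had been awkward to write (needing a real clock, shared globals, elaborate setup), that awkwardness would have surfaced before a single line of implementation existed.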
I agree with you that, as you start writing tests at higher levels of abstraction, it’s much more difficult to anticipate every possible combination of inputs because so many more components come into play. I think tests serve a different purpose at this level: having tests that run through your core use cases is very valuable. Like you said, this catches regressions and gives you some degree of confidence that the product works for the cases you care about. It doesn’t guarantee that you’ve properly composed all of the components to handle every case, and it doesn’t guarantee that it will work in production, but it gives you more confidence, and, like Anthony said, it doesn’t excuse a developer from thinking!