Summary: Some thoughts on Test-Driven Development (TDD).
A few weeks ago, Stack Overflow co-founders Joel Spolsky and Jeff Atwood criticized certain aspects of Test-Driven Development (TDD) and the SOLID Principles of Object-Oriented Design (OOD) [the latter advocated by Robert C. Martin, AKA Uncle Bob]. Check out the excerpts from the original podcast and the follow-up talk (you may also want to listen to Scott Hanselman's interview with Uncle Bob, which prompted Joel's rant):
- The Stack Overflow Podcast #38, excerpt (14:32) [see also podcast page and transcript]
- The Stack Overflow Podcast #39, excerpt (18:35) [see also podcast page and transcript]
I happen to lean more towards Jeff and Joel, for the following reasons (in addition to those they give in the podcasts).
First, Joel mentioned the issues related to GUI testing, but it's not just the GUI: the database layer poses another challenge. When unit testing database-driven apps, a common approach is to rely on mock objects (in which case you skip database testing entirely) or to use test-specific data in a simplified version of the database (in which case you have to figure out how to maintain this database and integrate it into your build process). Even with 100% code coverage of the business layer, if your unit tests skip the GUI and the database, roughly half of the application remains untested, and you'll have to close this gap with functional testing, which will overlap with the unit tests. So why spend time writing unit tests for code that will be tested functionally anyway?
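To make the mock-object approach concrete, here is a minimal sketch in Python. The `OrderService` class and its repository are hypothetical, purely for illustration: the business logic gets exercised, but the real database layer is never touched.

```python
from unittest.mock import Mock

# Hypothetical business-layer class: totals the prices of a customer's orders.
class OrderService:
    def __init__(self, repository):
        self.repository = repository  # normally a real database-backed repository

    def total_for_customer(self, customer_id):
        orders = self.repository.find_orders(customer_id)
        return sum(order["price"] for order in orders)

# The unit test replaces the repository with a mock, so no SQL ever runs.
repo = Mock()
repo.find_orders.return_value = [{"price": 10.0}, {"price": 5.5}]

service = OrderService(repo)
assert service.total_for_customer(42) == 15.5
repo.find_orders.assert_called_once_with(42)

# This test passes even if the real schema, queries, or connection code are
# broken -- which is exactly the coverage gap described above.
```

The test is fast and isolated, but everything database-specific (SQL, schema, transactions) sits outside it and still needs functional testing.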
Second, claims that TDD improves code quality are rarely, if ever, substantiated by data. You may hear statements like:
"Since I (we) started using TDD, my (our) code has become much better."

I'm not claiming that these statements are false, but without data they are just personal opinions. Even assuming the code did in fact improve, could the improvement be due to other factors, such as greater programming experience, better tools, team composition, or processes?
Analyzing the effects of TDD on code quality is extremely difficult. One of the better studies on the subject was conducted by Microsoft and IBM (see the 8-minute interview with one of the researchers and the original paper). The study found that TDD improved code quality, but also that it increased development time.
There is a trade-off: either use TDD and spend more time writing, maintaining, and running unit tests now, or skip TDD and spend time fixing bugs later. The question is: how much time? If TDD activities take 2 weeks out of an 8-week development cycle (a typical 25% TDD penalty) to prevent defects that could've been found and fixed in 3 days during functional testing, would TDD give you the best return on investment? By the way, it is worth noting that the team with the smallest test coverage (62%) in the study achieved the best results (a 90% drop in defects), while the team with the highest test coverage (95%) achieved the "worst" results relative to the other teams (a 40% drop in defects; still not bad compared to the control group). I would also emphasize that the use of TDD in the study was close to ideal: the projects were appropriate for TDD, and the teams were not pushed to achieve maximum code coverage.
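The back-of-the-envelope arithmetic above can be made explicit. This is just a sketch of the hypothetical scenario in the paragraph (the numbers are the example's assumptions, not data from the study):

```python
# Hypothetical schedule arithmetic for the TDD trade-off described above.
cycle_weeks = 8        # total development cycle
tdd_overhead = 0.25    # typical 25% TDD time penalty
fix_later_days = 3     # time to find and fix the same defects during functional testing

tdd_cost_days = cycle_weeks * tdd_overhead * 5   # 2 weeks = 10 working days
net_cost_days = tdd_cost_days - fix_later_days   # 7 days lost to TDD in this scenario

print(tdd_cost_days, net_cost_days)  # 10.0 7.0
```

In this (deliberately unfavorable) scenario TDD costs a net 7 working days; the real question is whether the defects prevented are worth that schedule hit, which only your own defect data can answer.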
The Microsoft-IBM study had a few problems, though. First, the number of participants was rather small. Second, and more importantly, it tried to compare similar teams and projects, but there were many differences between them, such as team and project sizes and project durations, which is expected in a study focusing on real-life projects. Studies performed in academic settings -- with undergraduate students at approximately the same level working on identical projects -- attempted to minimize some of these differences and make the experiments more controlled. According to the abstract of one such study:
"[T]est-first students on average wrote more tests and, in turn, students who wrote more tests tended to be more productive. We also observed that the minimum quality increased linearly with the number of programmer tests, independent of the development strategy employed."

How this sounds depends on how you interpret the study's findings. If you're a TDD supporter, you may join Phil Haack and announce that Research Supports The Effectiveness of TDD. Or you can look more closely at the results of the study and come to a different conclusion. For example, Jacob Proffitt suggested (see also the post comments) that:
"[W]ithout question, testing first leads to having more tests per functional unit. The question is if this is valuable. This study would seem to indicate that this is probably not the case, at least if quality is your intended gain. But then, I'm not that surprised that number of tests doesn't correspond to quality just as I'm not surprised that the number of lines of code doesn't correspond to productivity."

I would also point out that the code in these experiments was not written for database-driven apps with web-based GUIs, so again this was close to ideal usage of TDD. I would speculate that if either of these studies had focused on database-driven apps with web front ends, the results would have been even less encouraging.
I'm not an opponent of TDD and agree that it can be valuable when applied to certain types of projects. My gripe is that TDD is often sold as yet another silver bullet and forced into adoption indiscriminately, on projects where it offers minimal, if any, benefits.
I also resent the assumption that TDD always leads to good code. My project can have 100% code coverage and still suck. It will pass all unit tests, alright, but at the same time it can be unreadable, unnecessarily complex, and uncommented, resulting in an app with a bad GUI and unintuitive behavior, an app that both the users and the support teams hate. But don't you dare criticize my code: since it's 100% unit tested, it's good by definition. Usability testing? Never heard of it. And as far as everything else goes (comments, better design, and so on), do I have time for that when I spend my better hours working on unit tests? Something's gotta give.
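A contrived sketch of the point: the (hypothetical) function below gets 100% line coverage from its tests, yet it is needlessly cryptic when a plain `max()` or an `if` would do.

```python
# Hypothetical example: fully covered, still awful.
# Returns the larger of two numbers via a "clever" list-indexing trick.
def f(a, b):
    return [a, b][(a < b) * 1]

# These two tests execute every line of f -- 100% coverage achieved,
# zero readability gained.
assert f(3, 7) == 7
assert f(7, 3) == 7
```

Coverage tools would report this module as perfectly tested; nothing in that number says the code is clear, well-designed, or pleasant to maintain.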
Finally, what about all those great apps written by developers who did not use TDD or any other silver-bullet-like methodology? Take Google developers, who apparently do not follow any particular software development methodology, yet somehow manage to write apps that people love. If they can write great apps (which is, after all, one of the goals of good code), why would one want to impose TDD on them?
Now, Google developers are good, but what do you do if your developers are bad? [I'm not talking about developers who make occasional, sometimes serious, errors (that happens even to the best of us); I'm talking about developers who consistently write bad code.] Wouldn't TDD help them write better code? I wouldn't expect it to, but if you can share a success story, please leave a comment.
If your organization suffers from bad code, consider adopting TDD (when applied correctly, it may help), but more importantly, look at the other factors that affect code quality as well.

Related links:
- Hanselminutes Podcast 146 - Test Driven Development is Design - The Last Word on TDD
- Hanselminutes Podcast 31 - Test Driven Development
- TDD Tests are not Unit Tests by Stephen Walther
- Test-After Development is not Test-Driven Development by Stephen Walther