15 September 2011

How to write good software and why baby-step TDD is a scam

First off, I am obviously not going to tell you all about writing good software in a single blog post about TDD. Writing good software takes a lot of learning and a lot of practice. Countless books have been written on the subject, and since this post isn't a book list for software engineers either, I'll mention just one to give you an idea: Object-Oriented Software Construction by Bertrand Meyer.
The company I work at has quite a large software development department with quite good leadership. Our managers promote autonomy (developers choose the technologies and methods they think are best suited for the work) and learning on and off the job. For example, we have regular (voluntary) coding dojos (practice sessions) where a bunch of developers sit together to solve some simple problems with some new approaches. This is certainly an important part of writing good software.
Recently, we experimented with Test-Driven Development (TDD), which some people also read as Test-Driven Design. TDD, as my colleagues introduced it to the rest of us, is based on the following three rules:
  1. You are not allowed to write any production code unless it is to make a failing unit test pass.
  2. You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
  3. You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
(Something most proponents of TDD would add is a fourth step: refactor the code while the tests are green. But when TDD is introduced and defined, this step is usually not mentioned.)
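To see what these rules mean in practice, here is a hypothetical micro-cycle sketched in Python with the standard unittest module. The FizzBuzz-style problem and all names are my own illustration, not taken from any particular dojo:

```python
import unittest

def fizz(n):
    """Production code grown one failing test at a time.

    After the very first red test this function was literally
    `return "1"`; each subsequent failing test forced the
    smallest possible generalization.
    """
    if n % 3 == 0:
        return "Fizz"
    return str(n)

class FizzTest(unittest.TestCase):
    def test_plain_number(self):       # cycle 1: the first red test
        self.assertEqual(fizz(1), "1")

    def test_multiple_of_three(self):  # cycle 2: forces the if-branch
        self.assertEqual(fizz(3), "Fizz")
```

Run with `python -m unittest` after each tiny change: rule 2 forbids writing the second test before the first one passes, and rule 3 forbids writing the `if` branch before the second test fails.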
Our company dojos and some reflection upon them have taught me that this is plain bullshit, and here's why. Over the last two decades, the profession of software development has embraced methods like automated (unit and integration) testing, iterative development, early testing (also called "test first"), merciless refactoring, design patterns, automated builds and many more. All of these practices are great if done right. Now TDD comes along and claims to condense many of them into an integrated framework based on the above rules. Going back and forth between tests and code is obviously iterative. Tests obviously have to be automated. You obviously need refactoring, because otherwise TDD produces terrible code. So TDD dresses itself up as the natural evolution of agile development. But the truth is: TDD is a perversion of agile which over-applies agile principles to the point where they no longer make sense.
Somebody who claims to do TDD either doesn't follow the three rules above or is doing hopelessly bad development. TDD is a scam because it contributes nothing new to the set of agile practices. If someone using "TDD" succeeds in writing good code, it is due to the other agile practices, not due to the three rules above. Worse, TDD obscures and ignores a lot of other important methods. SCRUM, for example, tells us to define minimal features and implement them completely, including production code, automated tests, and everything needed to deploy and run the feature live. SCRUM offers a lot of advice on what a minimal feature is, how to split stories, and what's small enough not to need any further splitting. TDD, on the other hand, splits iterations far too finely, ignoring SCRUM's advice. Design by Contract tells us how to write minimal interfaces by considering the needs of both the client and the provider and describing the interface succinctly in code. TDD, on the other hand, says that interfaces should emerge, while in practice they drown in a plethora of special cases. Finally, testing methods teach us how to design good (and minimal) test cases, get good coverage, and test most where it is needed most. TDD, on the other hand, says nothing about where you start, how to continue, or when you are done. The tests are always green, but when do you have enough tests?
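For contrast, a Design-by-Contract style interface can be sketched even in plain Python, with assert statements standing in for pre- and postconditions. The account example is mine, chosen only for illustration:

```python
class Account:
    """Toy account whose interface is specified by contracts."""

    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        # Preconditions: the client's obligations.
        assert amount > 0, "amount must be positive"
        assert amount <= self.balance, "insufficient funds"
        old_balance = self.balance
        self.balance -= amount
        # Postcondition: the provider's guarantee.
        assert self.balance == old_balance - amount
        return amount
```

The contract states, in code and up front, what the client must ensure and what the provider promises; the interface does not have to "emerge" from a pile of special-case tests.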
Think about this: there have been countless demos of TDD on the internet, at conferences, and in practice sessions, but have you ever seen even a small program development finished with TDD? On the contrary, the only things I see are epic failures. (Thanks, Fred, for the great link!)
So, can we please forget about this exaggerated baby-step TDD, stick to established best practices, and move on to writing good software?

Addendum, months later: I saw a good example of TDD in Freeman & Pryce's book "Growing Object-Oriented Software". Their interpretation is much better than the baby-step TDD seen in blogs. The book starts by summarizing established best practices of OO design, their example study is much more elaborate, and the problem domain is actually related to the kind of software that professional Java developers write for money. If you want to know about the real thing, you have to take the time to read something longer than a couple of blog posts.


James said...

TDD helps my team achieve the qualities you mention (minimal coupling, good coverage, DBC) and others you did not (quick and early feedback on breakage). Are you actually suggesting we stop practicing TDD?

I agree with the notion that TDD is not an end in itself...the qualities you mention are among the desired results. But it REALLY sounds like you're saying that TDD shouldn't be practiced, period. Why?

If you're achieving those results without TDD then great. In my experience, TDD will provide those qualities more consistently and for less cost than writing the code first and imagining test cases later. It's worth at least TRYING it for some time, but really if you can get those benefits without it and TDD just doesn't work for you then don't do it.

Suggesting that others should NOT practice it is just as silly as me telling you that you SHOULD practice it without regard for understanding WHY you're practicing it.

Robert Jack Wild said...

Here's another episode about a failed TDD study project: http://www.objectmentor.com/resources/articles/xpepisode.htm
The authors, including the famous Uncle Bob, finish with a working piece of well-tested software which they view as a successful result. However, their program has several severe design flaws, such as overusing side effects, violating the SRP, and solving the same problem twice in different ways in two classes. (That solution duplication is hard to spot because the two solutions look so different. The problem is splitting a sequence of throws into a sequence of frames, and they solve that same problem in two entirely different ways.)
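For readers who don't know the study: the program scores ten-pin bowling, and the step in question is splitting the flat sequence of throws into ten frames. That problem only needs to be solved once, roughly like this (my own sketch in Python, not the code from the episode):

```python
def split_frames(throws):
    """Split a flat list of knocked-down pin counts into ten frames.

    A strike frame consumes one throw, any other frame two;
    the tenth frame keeps all remaining throws, including bonus balls.
    """
    frames, i = [], 0
    for _ in range(9):
        if throws[i] == 10:                # strike: a one-throw frame
            frames.append(throws[i:i + 1])
            i += 1
        else:                              # open frame or spare
            frames.append(throws[i:i + 2])
            i += 2
    frames.append(throws[i:])              # tenth frame with bonuses
    return frames
```

Once the frames exist explicitly, scoring each frame is a separate, simple concern; there is no reason for two classes to re-derive the frame boundaries in two different ways.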

I am currently solving the same study using best practices as described in my follow-up post: http://rethinktheworld.blogspot.de/2011/09/feature-driven-design-guided-and-tests.html

Will post the result as a separate blog entry!

Post a Comment