26 September 2011

Feature-driven, design-guided, and tests-in.


I just typed an answer to a new comment on my last post. Apparently it became too long to be submitted as a comment itself, so here I am turning it into a new post. It starts out with some thinking out loud but ends with some nice insights.

Hi James, thanks a lot for your comment. You're asking just the right questions, and those questions help me see more clearly what "my problem" with TDD is.

Now, ten days after I wrote that post, and thanks to your comment, I realize that there's actually a big gap between how TDD is summarized (especially Uncle Bob's version with the three laws) and how TDD is actually practiced successfully. I find that when the three rules are taken literally (we tried that in some dojos), the development and the actual design become cluttered with the details of each test case, and the work is less feature-driven as well as less pattern-guided than I would like. On the other hand, if I look at successful agile development with lots of unit tests, the three rules are just not visible in the process.
I think that maybe two social processes are at work here: on the one hand, good practices spread through pair programming and through people reading a lot of open-source code, but those practices often don't have catchy names. On the other hand, there's a very catchy concept called TDD with very simple "three rules", and people say that just by following those rules and refactoring, everything else will follow. For example, some people say that good design automatically follows from testability because only loosely coupled systems are easily testable.

So, the reason I wrote this blog post is that the simple, catchy way TDD is explained just won't work. It's also simply not true that TDD gives you an easy way to tell when you're done. I am currently working on a medium-complex system (roughly two years of development by a four-person team) with high unit and integration test coverage, and we repeatedly had incidents just because we forgot to add something here or there that didn't get caught by the tests. However, our code is simple enough that those missing parts would become obvious if we just had a final code review after every iteration in which we check all production and test code against a (longish) list describing the project's specific definition of done (which includes error handling, logging, monitoring, etc.). That review is what we now do regularly. Sometimes we find missing things in the tests, sometimes we find them in the source; in each case it's easily fixed before going live. So the seemingly obvious claims like "TDD always gives you 100% coverage" or "with TDD you always know when you're done" are just not relevant in practice.

My conclusion after working on a project with high unit-test coverage is that it's not the tests that should come first, but the design of very small parts (a method or a small class). That design is primarily guided by the user (caller) of the unit. Design is always about finding a sweet spot between a desired feature on the one hand and technical considerations such as available technologies, efficiency, and -of course- testability on the other hand. I don't think it matters whether you write the implementation (of a small unit) or its tests first, as long as you get all tests to pass before you tackle the next unit. (Personally I prefer implementing it first, because the implementation is often a more holistic description of the problem. Only for complex algorithms (which I find to be rather rare) does writing tests first seem to give a better start at properly understanding the problem.) By starting with the design (which most often is an interface specification), I find it much easier to think about the method or class in a holistic fashion and also to figure out a set of test cases that is small yet covers everything I need. Would you say that this process is still TDD?
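To make that concrete, here is a minimal sketch of what I mean by designing a small unit first and then finishing implementation and tests together before moving on. It's in Java (JUnit 4), the names and the discount rule are invented purely for illustration, and each type would live in its own file:

    // Design first: the caller-facing interface of one small unit.
    public interface DiscountCalculator {
        // Returns the price after the discount; never negative.
        double discountedPrice(double price, int customerYears);
    }

    // Implementation written against that interface...
    public class LoyaltyDiscountCalculator implements DiscountCalculator {
        public double discountedPrice(double price, int customerYears) {
            double rate = Math.min(customerYears * 0.01, 0.10); // 1% per year, capped at 10%
            return Math.max(price * (1.0 - rate), 0.0);
        }
    }

    // ...and a small set of tests derived from the interface, all passing
    // before the next unit is tackled.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class LoyaltyDiscountCalculatorTest {
        private final DiscountCalculator calc = new LoyaltyDiscountCalculator();

        @Test
        public void newCustomerPaysFullPrice() {
            assertEquals(100.0, calc.discountedPrice(100.0, 0), 1e-9);
        }

        @Test
        public void discountIsCappedAtTenPercent() {
            assertEquals(90.0, calc.discountedPrice(100.0, 25), 1e-9);
        }
    }

The point is that the interface, not an individual test case, is the thing I think about holistically; the tests then only have to cover that interface.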

Fast tests with high coverage are very important to me, not least because refactoring is very important to me. But I don't like the term "test-driven", because the driver of development is always some external (non-technical) need, such as a feature or some resource restriction ("make it faster"). Tests are just a technical tool (albeit an important one), and it's the design that creates interfaces which fulfill both customer needs and technical standards. I think of my development rather as "feature-driven", "design-guided", and, last but not least, "integrated-testing" (because tests are an integral part of the code). Maybe the term "tests-in" is catchier? As long as it isn't "driven"... After all, model-driven didn't work out that well either... ;-)

3 comments:

Robert Jack Wild said...

Good software design is mainly manifested in good interfaces, so a better motto for enlightened agile development might be "requirements-driven, interface-centered, and test-supported development".
This highlights the fact that tests are done early and are mostly done by the developers themselves, but it does not suggest that tests are the most important design tool. Automated tests do not replace real stakeholder involvement. And interfaces need explicit and continuous team attention. (You might not get them right on the first try, but you also can't rely on refactoring alone. A user-visible feature is a better iteration driver than a single test case.)

Pat said...

Interesting ideas. And: the terminology 'Feature-driven, design-guided, and tests-in' distinguishes very well between an external driver/stakeholder (->driven) and an internal impetus.

Robert Jack Wild said...

I recently realized that TDD does indeed have an influence on architecture. A very simple and pragmatic one: if you test from the start, you have good motivation and guidance to make your architecture easily testable. It's obviously easier to do this as you go, with direct feedback, than to add "test support" to a finished product.
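As one hedged illustration of that pragmatic influence (the names Clock and SessionTimeout are invented here, not taken from any real project): a unit that reads the system time directly is awkward to test, but if it is designed against a tiny injectable clock abstraction from the start, testability comes almost for free.

    // A tiny seam, introduced only so the unit stays testable.
    public interface Clock {
        long nowMillis();
    }

    // The unit depends on the Clock abstraction instead of calling
    // System.currentTimeMillis() directly.
    public class SessionTimeout {
        private final Clock clock;
        private final long timeoutMillis;

        public SessionTimeout(Clock clock, long timeoutMillis) {
            this.clock = clock;
            this.timeoutMillis = timeoutMillis;
        }

        public boolean isExpired(long lastActivityMillis) {
            return clock.nowMillis() - lastActivityMillis > timeoutMillis;
        }
    }

    // In a test (JUnit 4) the clock is simply fixed, so no sleeping or waiting is needed.
    import org.junit.Test;
    import static org.junit.Assert.*;

    public class SessionTimeoutTest {
        @Test
        public void expiresOnlyAfterTheTimeout() {
            Clock fixed = new Clock() {
                public long nowMillis() { return 10000L; }
            };
            SessionTimeout timeout = new SessionTimeout(fixed, 1000L);
            assertTrue(timeout.isExpired(8000L));   // 2000 ms of inactivity
            assertFalse(timeout.isExpired(9500L));  // only 500 ms of inactivity
        }
    }

In production the Clock would simply be backed by System.currentTimeMillis(); the point is that the testable shape was there from the beginning instead of being retrofitted.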
