Thursday, February 28, 2008

Learning to Love Being Test Driven

Modern coders will, no doubt, find this little monologue hopelessly passé and babyish.

On the other hand, if your roots lie in the 'traditional waterfall' approach to software development, then there's a refrain that sounds nonsensical when said out loud:
'Test before you code'
Of course, the waterfall approach has been well and truly superseded in favour of cyclical development and the Agile Manifesto (although some would disagree, sotto voce ;-).

So, what is weird about the concept? The first thing that occurs to you is that effect precedes cause: if you test first, then you test...what?

Actually, if you look at the standard mantra (Analyse, Design, Code, Test) you will see that there is plenty to test: by the time you are ready to code even a couple of lines, you should have an idea of what you are coding those lines to do. You will have some idea of how that code will be invoked. In short, you will have described an interface.
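For example (a hypothetical interface, invented here purely for illustration), even before a single line of implementation exists, the requirements might pin things down to something like this:

    def parse_date(text):
        """Parse a date string such as '2008-02-28' and return a
        (year, month, day) tuple of ints, or raise ValueError."""
        raise NotImplementedError  # no implementation yet, just the contract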

So, testing an interface sounds less ridiculous, doesn't it? Throw some parameters at it and see what happens. (Of course, this assumes you or your client have put some requirements together in the first place, so you know what it's meant to do. You have done that, haven't you? Of course you have!)

To encourage this desirable behaviour, a number of ancillary frameworks that support what is collectively referred to as 'unit testing' have become widely available. It's not hard to find one for your language. They provide a base test class from which you derive your own suite, and you define each test as a method of that class. The wonders of reflection allow the base class to invoke all methods that fit a pattern like 'test*' (even languages without reflection, like C++, can join in, although it requires a little more effort from the tester).
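Python's unittest module makes a handy concrete example. Sticking with the hypothetical parse_date interface sketched above (here assumed to live in a module called datelib, a name invented for this example), a suite looks like this:

    import unittest
    from datelib import parse_date  # hypothetical module holding the sketch above

    class TestParseDate(unittest.TestCase):
        # The framework uses reflection to find and run every method
        # whose name starts with 'test' -- no manual registration needed.
        def test_valid_date(self):
            self.assertEqual(parse_date('2008-02-28'), (2008, 2, 28))

        def test_rejects_rubbish(self):
            self.assertRaises(ValueError, parse_date, 'not a date')

    if __name__ == '__main__':
        unittest.main()

Run the file directly and unittest.main() collects the lot, runs them, and reports the carnage.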

So, the *very* first bit of code you should write is your test cases, in the cheerful expectation that each and every test will report abject failure (although statistically, some random stub is bound to work by sheer fluke. Never mind, nothing is perfect!)

Actually, if you do this, your code will probably explode spectacularly, since you haven't written the interface stubs yet! Never mind, most unit test frameworks (and compilers) will take this in their stride, and let you know *precisely* what they think of your coding abilities (and common sense!)
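With the hypothetical Python suite above, that spectacular explosion would look something like an import error:

    Traceback (most recent call last):
      File "test_datelib.py", line 2, in <module>
        from datelib import parse_date
    ImportError: No module named datelib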

OK. Take 2: you have now defined and written those interfaces, and the tests run, returning a dutiful set of 'F's. Abject failure is a given at this stage. That's good, because there's only one way things can go from here! Off you go, then, busily rewriting your code to bring each and every one of those big F's to a full stop.
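Continuing the invented parse_date example, the progression might look like this. (A stub that returns None, rather than raising, is a deliberate choice here: the framework then reports failures, 'F', rather than errors, 'E'.)

    # Take 2: a stub that deliberately does nothing yet.
    def parse_date(text):
        return None

    # Running the suite at this point prints something like:
    #   FF
    #   FAILED (failures=2)

    # Filling in the implementation turns those F's into a row of dots:
    def parse_date(text):
        year, month, day = text.split('-')  # raises ValueError on rubbish
        return (int(year), int(month), int(day))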

And, when your code is such that your test output is a green bar or a chain of dots... then you can rest from your labours, content that you have a set of reproducible results that prove your code meets the requirements, and that you will quickly know if any results subsequently become irreproducible as you start tinkering with the code to extend it and make it look and work better (a pastime known as 'refactoring').

Now, while this is all very well, there is something else you can take away from all this. As you write your tests, you will occasionally come across an interface that is difficult to test comprehensively. When this happens, you can put your head down and write hundreds of tests to cover every occasion. Or, you can rest your head in your hands, and have a think about why the interface is a tough one.

It may be that it's trying to do too much (so break it up). It may be that its effects are not immediately observable (so see how you can open things up). The point that may occur to you from this little piece of introspection is that your unit testing is actually encouraging you to think about your coding habits in a new and generally constructive way: the more open and modular your code is, the more readily testable it is. And well-tested code can be expected to be more robust code.
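A small, invented illustration of the 'trying to do too much' case (all names here are made up for the sketch):

    # Hard to test: the arithmetic is welded to file I/O, so every test
    # needs a real file on disk just to check a sum.
    def report_tangled(path):
        with open(path) as f:
            print('Total: %d' % sum(int(line) for line in f))

    # Easier: split out the pure logic. A test can now hand tally() any
    # iterable of strings and simply assert on the return value.
    def tally(lines):
        return sum(int(line) for line in lines)

    def report(path):
        with open(path) as f:
            print('Total: %d' % tally(f))

Nothing about the program's behaviour has changed, but the awkward-to-test part has shrunk to a thin I/O wrapper around a trivially testable core.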

Like any infrastructure, unit testing is always worth providing, and the most benefit is derived from installing it as early in the project as possible. And, as with most infrastructure, there will be some who perceive it as unproductive clutter and time-consuming overhead. "Never mind these silly tests! We want to see measurable progress!" (usually measured by lines of code written* or number of features implemented)

Allow me to introduce such people to the concept of 'throughput', which sets the upper bound on your productivity: the value of the goods/features/what-have-yous that have actually been passed on to the client.

The value of an untested feature, to a client, is... zero. So, it doesn't matter how many of these you have rattled off in the past week; your net throughput is effectively... zero.

Or, maybe that would be expressed as 'FFFFFFF Zero'?

When you think about it, zero Fs is the desired result.

* I would be profoundly depressed to hear that anyone is still using LOC as a serious measure of progress! But then, there is always something to be depressed about!