04 July 2007

Start with a working system

Imagine we've just been commissioned to build a system. We have 30-inch monitors, we have the Customer in the room, we've prioritised some stories, so now let's write a unit test! Well, maybe not yet. Our first test should be an attempt at something end-to-end that exercises a visibly useful feature (even if it's really, really tiny). To make this test pass, we'll need a Walking Skeleton: an absolutely minimal sliver of the whole system that actually works. To make the Walking Skeleton work, we'll have to think about the structure of the system, including build and deployment, which will help us flush out a whole Sackful[1] of project risk: our chosen messaging system doesn't work on our chosen platform, we need 4 weeks' notice and 8 signatures to make a change to production, the target PCs only have 640K of RAM, and so on.

As a rule, we only discover this sort of issue when we actually try it, which is why we like to start with a build and deploy script, and then figure out what to put in it. We also prefer deploying as close to production as possible, since each step in the end-to-end journey brings its own challenges. This approach means we find the blockages at the start of the project, when we still have time to fix them, rather than at the end. In practice, of course, sometimes these steps take so long to implement that we start with a shadow system to stand in for our current interpretation of the real thing. But this is only a stop-gap, a temporary patch to allow us to make progress; it's not an acceptable end solution.
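For instance, the very first end-to-end test might do no more than prove that a deployed service answers at all. A minimal sketch in Java (the service, the /status endpoint, and the in-process deployment here are all hypothetical stand-ins for a real build-and-deploy pipeline):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.*;
import java.net.*;

public class WalkingSkeleton {
    // Starts a trivial "whole system", exercises it end-to-end over HTTP,
    // and returns what the deployed service said.
    static String probe() throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/status", exchange -> {
            byte[] body = "OK".getBytes("UTF-8");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        try {
            URL url = new URL("http://localhost:"
                    + server.getAddress().getPort() + "/status");
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream(), "UTF-8"));
            return in.readLine();
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("end-to-end response: " + probe());
    }
}
```

The point is not the HTTP details but that the test crosses every layer the real system will cross, however thin each layer is.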

We need to make the point more strongly that unit-level testing (even using mocks!) should take place within the context of higher-level tests that set the direction for the current slice of development. The higher-level tests keep us focussed on what we need to implement next and keep us honest about what is working.

[1] That's a level-2 Sackful, as defined in the PMBOK.

What does "easy" really mean?

Steve's recent post about the perceived conflict between Testability and Design included this quote from a user of TypeMock:

The key benefit we get from TypeMock is having the ability to fully unit-test the code without impacting the API design. [...] For us, the API is part of the deliverable. We need to make it fairly easy to consume and can't have the architecture of the solution overshadow the usability of the API.

The crux of the issue is in the words "easy to consume". What does that mean? Easy to learn? Or easy to adapt to new, unanticipated situations?

For example, many developers find the java.io API complicated. This is how to open a file for reading:

Reader reader = new BufferedReader(new InputStreamReader(new FileInputStream("input.txt")));

The equivalents in C or Python are much shorter. In Python:

reader = open("input.txt")

The Java version does, however, have a point. Its use of the Decorator and Chain of Responsibility patterns makes it easy to apply in different situations and to adapt different underlying transports to the java.io stream model. In the C approach, different implementations are buried in the runtime, so you have to go to a different mechanism to try anything new.
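To see the pay-off, the same reading code can be pointed at completely different transports just by composing a different chain of decorators; nothing about the consumer changes. A sketch (the class and method names are ours, not from any library):

```java
import java.io.*;
import java.util.zip.*;

public class StreamComposition {
    // The consumer only knows about InputStream; it never finds out
    // what transport chain was composed underneath it.
    static String firstLine(InputStream in) throws IOException {
        BufferedReader reader =
                new BufferedReader(new InputStreamReader(in, "UTF-8"));
        return reader.readLine();
    }

    public static void main(String[] args) throws Exception {
        byte[] plain = "hello".getBytes("UTF-8");

        // An in-memory stream instead of a file: firstLine is unchanged.
        System.out.println(firstLine(new ByteArrayInputStream(plain)));

        // Compose a gzip layer into the chain: firstLine is still unchanged.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        OutputStream gz = new GZIPOutputStream(buf);
        gz.write(plain);
        gz.close();
        System.out.println(firstLine(
                new GZIPInputStream(new ByteArrayInputStream(buf.toByteArray()))));
    }
}
```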

TDD with mock objects drives an object-oriented design towards one like the java.io API. The design process focuses on discovering common patterns of communication between objects. The end-to-end system behaviour is defined by composing objects instead of writing algorithmic code. That makes code more malleable by experienced programmers but, arguably, makes it harder to learn for newcomers to the codebase or to object-oriented programming itself.

The problem can be addressed by layering expressive APIs that support common operations above the flexible, object-oriented core. A simple API is easy to learn but allows the programmer to drill down to the flexible core when new, unexpected situations arise.
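For example, nothing stops us putting a Python-style convenience on top of the java.io core: the one-liner covers the common case, and the constructors underneath stay available whenever we need a different chain. (The SimpleIO class and its open method below are hypothetical, purely to illustrate the layering.)

```java
import java.io.*;

public class SimpleIO {
    // The expressive layer: the common case in one call, built on the
    // same composable java.io core shown above.
    static BufferedReader open(String path) throws IOException {
        return new BufferedReader(
                new InputStreamReader(new FileInputStream(path), "UTF-8"));
    }

    public static void main(String[] args) throws Exception {
        File tmp = File.createTempFile("input", ".txt");
        Writer w = new OutputStreamWriter(new FileOutputStream(tmp), "UTF-8");
        w.write("first line\n");
        w.close();

        BufferedReader reader = open(tmp.getPath());
        System.out.println(reader.readLine());
        reader.close();
        tmp.delete();
    }
}
```

A caller who needs, say, a different character set or a gzip layer simply drops down and composes the core classes directly.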

jMock itself follows this model. The core is an object-oriented framework for representing and checking expectations. This framework is flexible and extensible: you can create and compose objects to represent and test all sorts of expectations. This level of code, however, is too fine-grained to express the intent of a test. It's like trying to figure out what's for dinner from reading the recipes. That's why we also wrote a high-level API that is closer to the programmer's problem domain. It makes it easy for us to write readable code to set up framework objects, we still have the extension points we need when we need a new feature, and we can test jMock without manipulating bytecodes.
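To illustrate the shape of that layering (this is a toy of our own invention, not jMock's actual code): a core of composable expectation objects, with a small expressive function on top that reads closer to the test's intent.

```java
import java.util.*;

public class Layering {
    // --- flexible core: fine-grained objects you can compose directly ---
    interface Expectation {
        boolean isSatisfiedBy(String call);
    }

    static class MethodCalled implements Expectation {
        private final String name;
        MethodCalled(String name) { this.name = name; }
        public boolean isSatisfiedBy(String call) { return call.equals(name); }
    }

    static class AnyOf implements Expectation {
        private final List<Expectation> parts;
        AnyOf(List<Expectation> parts) { this.parts = parts; }
        public boolean isSatisfiedBy(String call) {
            for (Expectation e : parts)
                if (e.isSatisfiedBy(call)) return true;
            return false;
        }
    }

    // --- expressive layer: one readable call for a common set-up ---
    static Expectation expectOneOf(String... names) {
        List<Expectation> parts = new ArrayList<Expectation>();
        for (String n : names) parts.add(new MethodCalled(n));
        return new AnyOf(parts);
    }

    public static void main(String[] args) {
        Expectation e = expectOneOf("save", "update");
        System.out.println(e.isSatisfiedBy("save"));
        System.out.println(e.isSatisfiedBy("delete"));
    }
}
```

A test normally stays in the expressive layer; when it needs something the layer doesn't offer, it composes the core objects itself instead of waiting for a framework change.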