
I have a more contentious example of working with objects that are hard to replace: logging. Take a look at these two lines of code:
log.error("Lost touch with Reality after " + timeout + "seconds);
log.trace("Distance travelled in the wilderness: " + distance);
These are two separate features that happen to share an implementation. Let me explain.
Support logging (errors and info) is part of the user interface of the application. These messages are intended to be tracked by support staff, perhaps system administrators and operators, to diagnose a failure or monitor the progress of the running system.
Diagnostic logging (debug and trace) is infrastructure for programmers. These messages should not be turned on in production because they're intended to help the programmers understand what's going on whilst they're developing the system.
Given this distinction, I should consider using different techniques to develop these two types of logging. Support logging should be Test-Driven from someone's requirements, such as auditing or responsiveness to failure. The tests will make sure that I've thought about what each message is for and that it works. The tests will also protect me from breaking any tools and scripts that other people write to analyse these log messages. Diagnostic logging, on the other hand, is driven by the programmers' need for fine-grained tracking of what's happening in the system. It's scaffolding, so it probably doesn't need to be Test-Driven and the messages might not need to be as consistent as those for support logs. These messages are not to be used in production, right?
Support, not logging
To get back to the point of the series, writing unit tests against static global objects, including loggers, is noisy. Either I have to read from the file system or I have to manage an extra testing Appender. I have to remember to clean up afterwards so that tests don't interfere with each other, and to set the right level on the right Logger.
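To make the noise concrete, here's a sketch of the kind of test I mean. It isn't from the post: it assumes log4j 1.x (hence the Appender), JUnit 4, and a hypothetical RealityMonitor class whose checkContact() method emits the error message from the first example.

import java.util.ArrayList;
import java.util.List;

import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.spi.LoggingEvent;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class RealityMonitorLoggingTest {
    private final List<String> messages = new ArrayList<String>();

    // A throwaway Appender whose only job is to capture what gets logged.
    private final AppenderSkeleton capturingAppender = new AppenderSkeleton() {
        protected void append(LoggingEvent event) { messages.add(event.getRenderedMessage()); }
        public boolean requiresLayout() { return false; }
        public void close() { }
    };

    private final Logger logger = Logger.getLogger(RealityMonitor.class); // hypothetical class under test

    @Before public void attachAppender() {
        logger.setLevel(Level.ERROR);             // set the right level on the right Logger
        logger.addAppender(capturingAppender);
    }

    @After public void detachAppender() {
        logger.removeAppender(capturingAppender); // clean up so other tests don't see my appender
    }

    @Test public void reportsLossOfContactWithReality() {
        new RealityMonitor().checkContact(30);    // hypothetical trigger for the error
        assertEquals("Lost touch with Reality after 30 seconds", messages.get(0));
    }
}

All of that machinery exists just to get at one string.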
The noise in the test reminds me that my code is working at two levels: my domain and the logging infrastructure. Here's a common example of code with logging:
Location location = tracker.getCurrentLocation();
for (Filter filter : filters) {
    filter.selectFor(location);
    if (logger.isInfoEnabled()) {
        logger.info("Filter " + filter.getName() + ", " + filter.getDate()
                + " selected for " + location.getName()
                + ", is current: " + tracker.isCurrent(location));
    }
}
Notice the shift in vocabulary and style between the functional part of the loop and the logging part (the logger calls)? At a micro level, the code is doing two things at once: something to do with locations, and rendering support information. That breaks the Single Responsibility Principle. Maybe I could do this instead:
Location location = tracker.getCurrentLocation();
for (Filter filter : filters) {
    filter.selectFor(location);
    support.notifyFiltering(tracker, location, filter);
}
where the support object might be implemented by a logger, a message bus, pop-up windows, or whatever's appropriate; the detail is not relevant to the code at this level. This code is also easier to test. We, rather than the logging framework, own the support object, so we can pass in a mock implementation at our convenience, perhaps in the constructor, and keep it local to the test case. The other simplification is that now we're testing for objects rather than the formatted contents of a string. Of course, we will still need to write an implementation of support and some focussed integration tests to go with it; Jeff Langr's Agile Java is one source of advice on how to do that.
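As a sketch of what such a test might look like: the Support interface below is my own guess at its shape, the LocationFiltering class that owns the loop above and its selectFilters() method are hypothetical names of mine, and I've used jMock 2 here, though any mocking framework would do. I'm also assuming Tracker, Location and Filter are interfaces (otherwise a class imposteriser would be needed).

// Support.java -- the role the production code talks to
public interface Support {
    void notifyFiltering(Tracker tracker, Location location, Filter filter);
}

// LocationFilteringTest.java
import static java.util.Arrays.asList;

import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.Test;

public class LocationFilteringTest {
    private final Mockery context = new Mockery();
    private final Support support = context.mock(Support.class);
    private final Tracker tracker = context.mock(Tracker.class);
    private final Location location = context.mock(Location.class);
    private final Filter filter = context.mock(Filter.class);

    // The support object is passed in through the constructor, so it stays local to the test.
    private final LocationFiltering filtering =
            new LocationFiltering(tracker, asList(filter), support);

    @Test public void notifiesSupportWhenAFilterIsSelected() {
        context.checking(new Expectations() {{
            allowing(tracker).getCurrentLocation(); will(returnValue(location));
            oneOf(filter).selectFor(location);
            oneOf(support).notifyFiltering(tracker, location, filter); // objects, not a formatted string
        }});

        filtering.selectFilters();
        context.assertIsSatisfied();
    }
}

The expectation names the objects involved rather than asserting against a rendered message, which is the simplification described above.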
But that's crazy talk…
The idea of encapsulating support reporting sounds like over-design, but it's worth thinking about for a moment. It means I'm writing code in terms of my intent (helping the support people) rather than the implementation (logging), so it's more expressive. All the support reporting is handled in a few known places, so it's easier to be consistent about how things are reported and to encourage reuse. It can also help me structure and control my reporting in terms of the application domain, rather than in terms of Java packages. Finally, the act of writing a test for each report helps me avoid the "I don't know what to do with this exception, so I'll log it and carry on" syndrome, which leads to log-bloat and production failures because I haven't handled obscure error conditions.
One objection is "I can't pass in a logger for testing because I've got logging all over my domain objects. I'd have to pass one around everywhere". I think this is a test smell that is telling me that I haven't clarified my design enough. Perhaps some of my support logging should really be diagnostic logging, or I'm logging more than I need to because I left some in that I wrote when I hadn't yet understood the behaviour. Most likely, there's still too much duplication in my domain code and I haven't yet found the "choke points" where most of the production logging should go.
So what about diagnostic logging? Is it disposable scaffolding that should be taken down once the job is done, or essential infrastructure that should be tested and maintained? That depends on the system, but once I've made the distinction I have more freedom to think about using different techniques for support and diagnostic logging. I might even decide that in-line code is the wrong technique to implement diagnostic logging because it interferes with the readability of the production code that matters. Perhaps I could weave in some Aspects instead (since that's the canonical example of their use). Perhaps not, but at least I now have a choice.
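If I did go down that route, a trace aspect might look something like the following sketch. This is AspectJ's annotation style; the package in the pointcut is invented for the example, and it's only meant to show the shape of the idea, not to recommend it.

import java.util.Arrays;

import org.apache.log4j.Logger;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class DiagnosticTracing {
    private static final Logger log = Logger.getLogger(DiagnosticTracing.class);

    // Trace entry into every method in the (invented) tracking package,
    // keeping the trace calls out of the production code itself.
    @Before("execution(* com.example.tracking..*(..))")
    public void traceEntry(JoinPoint joinPoint) {
        if (log.isTraceEnabled()) {
            log.trace("entering " + joinPoint.getSignature()
                    + " with " + Arrays.toString(joinPoint.getArgs()));
        }
    }
}

The production code stays readable and the decision about what to trace lives in one place; whether that's worth the extra machinery depends on the system.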
One final data point. A colleague of mine once worked on a system where so much pointless content was written to the logs that it had to be rolled off the disks after a week. This made maintenance very difficult, as the relevant logs had usually gone by the time a bug was assigned to be fixed. If they'd not logged at all, they could have made the system go faster with no loss of useful information.