…so after many days of furious coding the final feature is completed. The quality assurance team have okayed the change and a gold release candidate is ready to hit the shops. In the meantime some team members tidy up the code, and the final master disk is burnt and sent to fabrication plants in some far-off land. A few days later, all is good in the world: you’re sitting on a beach somewhere hot with a Cuban cigar and a mojito when a mustachioed waiter trots up to your sun lounger carrying an inordinately shiny silver platter with a strangely out-of-place bakelite phone gently ringing. “For you sir…”
“There is a problem: we’ve pressed 100,000 DVDs and all of the characters in the game can no longer rotate to the left…”
…snapping out of your reverie you realise the reality is much more mundane and less sunny: you’re being informed of impending doom in an office somewhere in Surrey. The long and the short of it is that some subset of the required features was removed or broken, and due to the complexity of the product it wasn’t discovered until the code was in the wild. This is considered by some to be a Bad Thing™… so you need to apply a coding band-aid and try to make sure that this. Can. Never. Happen. Again.
To prevent a similar problem we want to automatically run a set of tests which will validate the code and give us the earliest possible warning of any runtime-breaking change committed to our codebase. The tests can either be integration level, where we test large-scale features, or unit level, where we verify the smallest units of code. Rather than retrofitting tests to code that has already been written, you can create your tests before a single line has been typed into your favourite editor… this allows you to generate a set of Use Cases which will validate what you’re about to write, and is where Test Driven Development (TDD) comes in.
Test Driven Development 101
Test Driven Development is a process by which a programmer creates simple, automated tests which capture application logic and allow sections of the code to be continuously validated, reducing the chance of bugs being introduced. It originally sprang from the Extreme Programming movement but should really be considered the coding analogue of the Scientific Method. Each test breaks down along these lines:
- Create a hypothesis that you want to test.
- Create a test which captures the hypothesis; this should fail as there is no code yet to fulfil it.
- Create a “method” containing code which you think will validate your test.
- Collate the results of your handiwork. Does your test pass/fail?
- If it fails, draw a conclusion based on the results you’ve seen: either modify the hypothesis so it correctly captures your problem, or modify the method so that you can better test your hypothesis.
- Repeat until the test passes and you’re satisfied that it captures the Use Case or hypothesis that you’ve created.
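As a concrete sketch of that cycle, here is a minimal red/green example using Python’s `unittest` (one of the many xUnit frameworks; the `rotate_left` function and the test names are my own invention, chosen to echo the anecdote above). In a real TDD cycle the test class would be written first and fail, and the implementation would then be filled in to make it pass:

```python
import unittest

# Hypothetical unit under test. Written AFTER the failing test below,
# this minimal implementation is just enough to make the test pass.
def rotate_left(facing):
    """Rotate a compass facing 90 degrees anti-clockwise."""
    order = ["N", "W", "S", "E"]  # anti-clockwise order
    return order[(order.index(facing) + 1) % 4]

class RotateLeftTest(unittest.TestCase):
    def test_rotate_left_from_north(self):
        # Hypothesis: rotating left from North should face West.
        self.assertEqual(rotate_left("N"), "W")

    def test_four_left_rotations_return_to_start(self):
        # Hypothesis: four left rotations are the identity.
        facing = "E"
        for _ in range(4):
            facing = rotate_left(facing)
        self.assertEqual(facing, "E")
```

Each test method captures one hypothesis; if a later change breaks rotation, the failing test names point straight at the broken behaviour.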
There are a few easy-to-remember rules when creating tests:
- A test should be quick to execute. You want to fail fast if something is wrong and may be running a large number of tests before each code commit.
- A test should be easy to understand. The tests may be followed by any number of people, and the code contained in the test should be as simple as possible to ensure that the largest number of team members can follow the logic and fix any errors which may arise.
- A test should be runtime invariant. Whether a test passes or fails shouldn’t depend on the order of tests run, or how many tests were run.
- A test should use real data as much as possible. We want tests to capture failures which may be seen in our runtime system; ideally they should use exactly the state our end users will see.
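The runtime-invariance rule is worth a quick sketch (again using Python’s `unittest` as a stand-in xUnit framework; the inventory fixture is hypothetical). Building a fresh fixture in `setUp` before every test keeps each outcome independent of how many tests ran before it, and in what order:

```python
import unittest

class InventoryTest(unittest.TestCase):
    def setUp(self):
        # A fresh fixture is built before EVERY test, so no state
        # leaks between tests and the run order doesn't matter.
        self.items = []

    def test_add_item(self):
        self.items.append("sword")
        self.assertEqual(len(self.items), 1)

    def test_starts_empty(self):
        # Passes whether or not test_add_item ran first, because
        # setUp rebuilt self.items from scratch.
        self.assertEqual(self.items, [])
```

Had `items` been a shared module-level list instead, `test_starts_empty` would pass or fail depending on ordering, which is exactly the flakiness the rule forbids.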
What kind of benefits can we expect to see if we create test cases using TDD?
- Bugs are caught very early in the development process. It costs much more to find bugs later on; by generating tests up front you make it more likely that you’ll capture issues sooner rather than later.
- The cost of refactoring goes down. Changing the underlying implementation becomes much cheaper as you can quickly ensure that the new implementation conforms to your tests.
- You reduce the chance that features are inadvertently removed by accidental “over cleaning…” Most modern languages have coding idioms which rely, for example, on dynamic/runtime registration of components. Cleaned code can still compile but may now have key features deleted, which will not be obvious until the code is fully regression-tested.
- Thinking about your tests/use cases up front changes coding habits; I’ve found that over time it makes people much more concise and precise. Fellow team members can review the tests and point out any inconsistencies, which makes for better programmers as they get used to capturing only what is needed to solve the problem at hand.
- Tests become the best documentation for the codebase. One of the problems with documenting code is that as soon as a document is written, it is already out of date. There are tools such as Doxygen which can rip documentation from the code, however even these rely on judicious commenting which can quickly become stale. Test cases are a single point of call which can be read by developers and will be kept up to date as the system evolves, capturing the usage expectations for the system.
Tests can even be created post-hoc to enable you to annotate your learnings about new systems as you explore functionality. This has allowed fellow coders I have worked with to make significant changes to large live integrated systems and ensure that any modifications did not break functionality for current users.
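The “over cleaning” risk mentioned in the benefits list can be made concrete. In the Python sketch below (the registry and handler names are hypothetical), a component is reachable only through runtime registration; nothing references the class by name elsewhere, so an over-zealous cleanup could delete it, the module would still import cleanly, and only a test exercising the dispatch path would catch the loss:

```python
# A registry populated as a side effect of class definition. Deleting
# the "apparently unused" SaveHandler class would not cause an import
# or compile error -- but dispatch("save") would fail at runtime.
HANDLERS = {}

def register(name):
    """Class decorator that records a handler under a string key."""
    def wrap(cls):
        HANDLERS[name] = cls
        return cls
    return wrap

@register("save")
class SaveHandler:
    def handle(self):
        return "game saved"

def dispatch(name):
    # Look up and invoke a handler purely by its runtime-registered key.
    return HANDLERS[name]().handle()
```

A one-line unit test asserting that `dispatch("save")` works pins the feature down, so the cleanup is caught at test time rather than on 100,000 pressed DVDs.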
How Can I Try This Out?
The simplest way of playing with this is to create unit tests which cover the smallest elements of functionality, using one of the many xUnit frameworks for your chosen language. In my next blog post I’ll cover how to do this in more detail; for the moment, as an overview, I’ve enumerated the frameworks I’ve used in the past, each of which I can recommend:
- For C# have a look at NUnit. I’ve written a significant number of tests using this framework and it integrates really well into Visual Studio using Resharper.
- For Java I’ve used JUnit and can strongly recommend it.
- For C++ I’ve used CppUnitLite and Unit++. I’d recommend CppUnitLite as a very clean unit testing framework; it was written by one of the guys from the original CppUnit project. Unit++ is a bit more C++/template-y, so people who are mainly C-with-Classes programmers rather than C++ developers may find CppUnitLite has a less steep learning curve.
Once you’ve chosen and installed your framework you should be able to create tests, check how many fail, and generate XML which encapsulates the results using one of the test runners that comes with the framework. The XML output can be integrated into your build system to give you a heads-up about the state of your code. For example, Bamboo and Quickbuild consume xUnit/JUnit-format XML and will allow you to continuously test committed code.
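To illustrate the shape of the report those CI servers pick up, here is a rough sketch of JUnit-style result XML built with just the Python standard library (the function name and sample data are my own; real test runners emit much richer reports, with timings, stack traces and so on):

```python
import xml.etree.ElementTree as ET

def results_to_junit_xml(suite_name, results):
    """Build minimal JUnit-style XML from a {test name: failure message
    or None} mapping -- roughly the shape a CI server consumes."""
    failed = sum(1 for msg in results.values() if msg)
    suite = ET.Element("testsuite", name=suite_name,
                       tests=str(len(results)), failures=str(failed))
    for name, msg in results.items():
        case = ET.SubElement(suite, "testcase", name=name)
        if msg:
            # A failed test carries a <failure> child with the message.
            ET.SubElement(case, "failure", message=msg)
    return ET.tostring(suite, encoding="unicode")

report = results_to_junit_xml(
    "RotationTests",
    {"test_rotate_left": None, "test_rotate_right": "expected E, got W"})
```

The build server parses the `tests`/`failures` counts and the per-test `<failure>` entries, which is how a red build can name the exact regression that was committed.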
Next time: a detailed look at unit testing and how the frameworks can be used for integration testing.