However, I have developed a different view. Unit tests work well when the developer understands the "true" requirements. The problems I perceive with unit tests are twofold: (1) most business users cannot understand JUnit output, so they cannot identify where the wrong functionality was tested, and (2) if you write 2-5x the LOC of the original functionality in tests, how do you know the logic of the tests themselves is accurate?
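To make point (2) concrete, here is a contrived sketch (the discount rule, class, and method names are all hypothetical): when the test author shares the implementer's misreading of a requirement, the suite passes while the behavior is still wrong.

```java
public class DiscountTest {
    // Production code under test. The (hypothetical) spec said 10% off
    // orders strictly OVER $100 -- this uses >= by mistake.
    static double discountedTotal(double total) {
        return total >= 100 ? total * 0.9 : total;
    }

    public static void main(String[] args) {
        // The test author made the same off-by-one reading as the
        // implementer, so the suite passes while the behavior is wrong:
        // the spec wanted no discount at exactly $100.
        assert discountedTotal(100.0) == 90.0;
        assert discountedTotal(150.0) == 135.0;
        assert discountedTotal(50.0) == 50.0;
        System.out.println("all tests passed");
    }
}
```

A green bar here proves only that the code and the tests agree with each other, not that either agrees with the business.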
Over the last couple of years I have focused my projects on functional testing using tools such as FitNesse and WebTest, static code analysis using tools such as PMD, FindBugs, and Checkstyle, and finally code coverage to make certain we have fully examined the high-risk code. So far this strategy has been successful at identifying defects and risk areas long before QA encounters them, and at a much lower cost than a pure JUnit approach would have incurred. The best part of this strategy is that I can take the functional test output and review it with business stakeholders to gain their acceptance.
At the close of the Columbus NFJS BOF session, almost everyone agreed that functional testing strategies such as the one described above have value. But most believed that such a strategy also needs to include a strong dose of unit testing.
The real problem, mentioned I believe by Scott, is that only 10% of all projects have unit testing, and that attendees at NFJS shows are special because they are more likely to be in that 10% group. So my closing comment for this blog entry is as follows. If you are not already happily using a unit testing strategy in your projects, and are looking for a lower-cost way to improve project quality, then follow these steps in order:
- Set up continuous builds using any of the fine tools out there (Hudson, CruiseControl, Anthill, etc.).
- Include static code analysis tools such as PMD, Checkstyle, and FindBugs in your build process. Schedule resources to resolve all of the serious problems these tools identify.
- Begin writing functional tests using tools such as FitNesse and WebTest (in a later blog posting I will describe these two tools in more detail). You can set up both of these tools in a test-first or test-last approach. To make the most of your time, review your functional tests with the project BAs and other interested stakeholders. In a test-first approach, you may find that after a stakeholder has spent some time reviewing a functional workflow, they identify workflow problems they did not catch while writing stories/use cases. Such discoveries will save you a lot of aggravation later in the project.
- Set up a code coverage tool (Cobertura, Emma, Clover, etc.) to identify which parts of the application are exercised by the tests created in step 3. You don't have to achieve 100% code coverage, but you do need to spend some time reviewing the code that has no functional test coverage; thus, a higher coverage rate means less code you have to review manually. I often find that code which is not hit by the functional test cases is either dead code or gold-plated code. Either way, such a review provides a great opportunity to remove extra code. Since code is a liability just waiting to become a bug, finding dead code is a great opportunity to remove future bugs.
- For complex code that is still not adequately covered by the functional tests, take a look at BDD tools such as easyB to create behavior tests for your code. easyB lets you test at the unit level while producing artifacts that a BA can read, review, and agree validate the intended functionality. As a developer, I also find that the easyB BDD style leads me to think about my code in a much more structured way than traditional JUnit-like tools do.
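To give a flavor of step 3: a real FitNesse fixture extends classes from the FitNesse jars and is driven from a wiki table, but the shape is the same as this stdlib-only sketch. The shipping rules, rates, and table rows here are all hypothetical, invented for illustration.

```java
public class ShippingFunctionalTest {
    // The production rule under test (assumed for this sketch):
    // flat $5.00 under 10 lbs, $0.75 per lb at 10 lbs and over.
    static double shippingCost(double pounds) {
        return pounds < 10 ? 5.00 : pounds * 0.75;
    }

    public static void main(String[] args) {
        // Each row mirrors a row a stakeholder would fill in on a wiki
        // page: {weight in lbs, expected cost}.
        double[][] table = { {2, 5.00}, {10, 7.50}, {40, 30.00} };
        for (double[] row : table) {
            double actual = shippingCost(row[0]);
            String verdict = actual == row[1]
                    ? "right"
                    : "WRONG (got " + actual + ")";
            System.out.printf("weight %.0f lb -> expect $%.2f : %s%n",
                    row[0], row[1], verdict);
        }
    }
}
```

The point is the artifact: a table of inputs and expected outputs that a BA can read and sign off on, executed against the real code.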
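And a flavor of the step 4 review: the coverage report itself is tool-specific, but the kind of finding it surfaces is easy to sketch. The class and method names below are hypothetical.

```java
import java.util.Locale;

public class CurrencyFormatter {
    // Hit by every functional test that renders a price.
    public static String format(double amount) {
        return String.format(Locale.US, "$%.2f", amount);
    }

    // Shows 0% coverage in the functional run: no story or workflow ever
    // asked for accounting-style negatives. Reviewing the report, the
    // cheapest fix is usually to delete this, not to write a test for it.
    public static String formatAccountingStyle(double amount) {
        return amount < 0 ? "(" + format(-amount) + ")" : format(amount);
    }

    public static void main(String[] args) {
        System.out.println(format(19.99));
    }
}
```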
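Finally, real easyB stories are written in a Groovy DSL, not Java, but the given/when/then shape translates directly. This plain-Java sketch (story, account, and amounts all hypothetical) shows the structure and the BA-readable output it aims for.

```java
import java.util.function.BooleanSupplier;

public class WithdrawalStory {
    static int balance;

    static void given(String desc, Runnable step) {
        step.run();
        System.out.println("given " + desc);
    }

    static void when(String desc, Runnable step) {
        step.run();
        System.out.println(" when " + desc);
    }

    static void then(String desc, BooleanSupplier check) {
        System.out.println(" then " + desc
                + (check.getAsBoolean() ? "   [ok]" : "   [FAILED]"));
    }

    // Runs the whole story and returns the final balance.
    public static int runStory() {
        given("an account with a balance of 100", () -> balance = 100);
        when("the customer withdraws 30",         () -> balance -= 30);
        then("the balance should be 70",          () -> balance == 70);
        return balance;
    }

    public static void main(String[] args) {
        runStory();
    }
}
```

The descriptions, not the assertions, are the artifact here: a BA can read the transcript and confirm the behavior being checked is the behavior they asked for.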
So what do the readers of this blog posting think? I would love to hear your comments.