I (and the rest of my team) do a lot of code review. And when “code” is mentioned, unit tests are included.
I have mentioned before that one of the things that makes unit testing hard is the change in problem domain it involves, leaving aside the fact that very few programming courses (if any) place any emphasis on how to properly test code, let alone how to test software as a whole.
The simplest way one can think of a test is as some sort of program that executes your code and tells you whether the output you expected is what the execution actually produced. Basically it is a “yay/nay”, which in testing-framework jargon is an assertion.
When everything is green (in reference to the color most frameworks use to mark successful tests) we are all happy, but when things go red (failing tests), a bare yes/no is less than optimal. We want to know exactly what happened, and we do not need (nor want) to spend any time chasing elusive errors with a debugger, which is one of the (several) reasons we write tests in the first place.
Thus, context is needed for failing tests to drive our fixing efforts in the right direction.
One way of adding that precious context to failing tests is using the capability most testing frameworks offer of adding a textual message to assertions. That way, when that assertion fails, the message gets displayed. Nifty.
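For instance, in NUnit the message is just an extra argument to the assertion (the order and discount here are made up for illustration):

```csharp
// The message is displayed only when the assertion fails
Assert.IsTrue(order.Discount > 0, "returning customers should get a discount");
```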
Or is it? Think about it twice. This is basically another incarnation of comments in code, and we have all experienced how those go stale or blatantly lie sooner rather than later: members change names, logic gets refactored and the expectation changes but not the message… The same old annoying story all over again.
Another (and safer) way is letting the testing framework help us. There is a reason there are several frameworks out there, each one with a different set of features, but all of them share a common goal: helping you write more effective tests. To that end, all of them bring along a set of custom assertions that (hopefully) will serve the purpose of expressing what one wants from the code being tested. The more advanced frameworks will even let you extend the framework itself with assertions that better suit your scenarios. In all cases, the assertions will either pass or fail, but they will tell you what happened with a certain level of detail.
Why did I start the post writing about reviews? Well, it happens that sometimes people tend to forget the context of failing tests, be it because they are rushing to get a passing (green) test or because they are not familiar enough with the framework they are using.
Look at this silly code contained in a test:
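Something along these lines, using a hypothetical Person class purely to keep the example self-contained:

```csharp
using NUnit.Framework;

[TestFixture]
public class PersonTests
{
    // Hypothetical class under test, here only for illustration
    class Person
    {
        public string Name { get; set; }
    }

    [Test]
    public void Person_has_the_expected_name()
    {
        var person = new Person { Name = "Jon" };

        // A bare boolean assertion: all it can ever report is "true" or "false"
        Assert.IsTrue(person.Name == "John");
    }
}
```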
It is NUnit code, and when run it will give output along these lines:
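```
  Expected: True
  But was:  False
```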
Absolutely no context at all. Have tens of failing tests like this one and I assure you that you will be giving up the unit testing practice really soon and filing it under the “waste of my time” practices.
How about changing the test slightly to:
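Keeping the same hypothetical Person, only the assertion changes:

```csharp
[Test]
public void Person_has_the_expected_name()
{
    var person = new Person { Name = "Jon" };

    // The equality constraint carries both the expected and the actual value
    Assert.That(person.Name, Is.EqualTo("John"));
}
```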
That will give an output similar to:
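```
  Expected: "John"
  But was:  "Jon"
```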
Better context. At least we know what the expected and actual values are, making it easier for the poor guy fixing the test (it could be you in three months' time) to locate the error. Even better would be:
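For example, using NUnit's property constraint on the same hypothetical Person:

```csharp
[Test]
public void Person_has_the_expected_name()
{
    var person = new Person { Name = "Jon" };

    // The constraint names the property being checked, so the failure message can too
    Assert.That(person, Has.Property("Name").EqualTo("John"));
}
```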
With output along the lines of:
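```
  Expected: property Name equal to "John"
  But was:  "Jon"
```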
This even tells you which property is failing. Of course, since NUnit gives you extension points, you could write a custom assertion that uses static reflection to find out the name of the property. Those extensions take five minutes to write and the return on the investment is almost instantaneous.
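A minimal sketch of that idea, assuming NUnit 3; the Have helper and its names are made up for illustration, only Has.Property and the constraint syntax come from NUnit itself:

```csharp
using System;
using System.Linq.Expressions;
using NUnit.Framework;
using NUnit.Framework.Constraints;

// Hypothetical helper: derives the property name from an expression instead of a
// string literal, so a rename refactoring can never leave the assertion lying.
public static class Have
{
    public static ResolvableConstraintExpression Property<T, TValue>(
        Expression<Func<T, TValue>> property)
    {
        // Assumes a simple property access such as p => p.Name
        var member = (MemberExpression)property.Body;
        return Has.Property(member.Member.Name);
    }
}
```

With it, the assertion above becomes Assert.That(person, Have.Property((Person p) => p.Name).EqualTo("John")), and renaming Name updates the assertion (and its failure message) automatically.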
Take advantage of the framework and take the time to learn it, as much time as you devote to other frameworks (ORMs, web or caching frameworks). After all, tests are first-class citizens, as important as the rest of your code.
NOTE: A simple trick I use to figure out whether a test is good enough is making it fail and observing its output with a critical eye, asking “Would anyone looking at this error message know how to fix the test without resorting to the debugger to find out what happened?”