Implementing Progressive Inner Closure Pattern in C#. 4. Improve Testing Feedback


As I showed at the end of the last post, it is feasible to test classes that depend on IUrlWriter using a behaviour-testing style, but getting a NullReferenceException when the test fails is far from ideal feedback.

To establish a baseline for improvement I have to put myself in the shoes of the person staring at the console output and wonder what would make my life easier. What would that be? Well, I’d like to see the chain of methods that made the test fail. And I’d like to see the arguments of the expectations. And, if I may ask, I’d like to see the actual values that the dependency received.
Uhm, that looks like a pretty decent list of requirements. I’d better bring the big guns.

First thought. Dead-end and Discarded

At first I thought that using LINQ expressions was the way to go: we are dealing with delegates, we definitely need some compile-time information, and that information could be extracted by analysing the expression.

That compile-time information (the expectations) and its corresponding runtime information (the actual values) could be pushed into some sort of data structure representing the chain of calls. Whenever an expectation on a method was fulfilled, that structure could be manipulated so that we had information about the remaining expectations.

Not a bad idea, but I found a pretty serious problem: expressions do not allow easy interrogation of runtime information, and my attempts at compiling the expression into a delegate and extracting the values from it were pretty unsuccessful. Besides, I struggled big time trying to make the Do() handler from Rhino Mocks work correctly so that the data structure could be manipulated on each expectation success. So badly did I struggle that I discarded the idea.

Second thoughts. On the way to success

Maybe I was being too optimistic in my idea of traversing a complex chain of method calls. Know what? I was. Let’s make the syntax for the test author just a little clumsier but make the life of the developer of the testing double way easier.
Instead of passing the whole chain of methods, we could pass a single method-call expectation at a time and chain those somehow. Sounds good. Something like…

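A sketch of that syntax, with names of my own invention (MockingClosure and its Arrange() method are illustrative, as are the Trim() and Entity() members I am assuming on the inner interface):

```csharp
// One single-call expectation per Arrange(), chained fluently.
// MockingClosure, Trim() and Entity() are illustrative names.
new MockingClosure()
    .Arrange(m => m.Trim())
    .Arrange(m => m.Entity("customer"));
```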
…is not that terrible.

We still have the foul behaviour of stubs returning null whenever expectations are not met, with the subsequent exception. That means we have to wrap the execution of the SUT with something that catches and turns that exception into something more useful. Something like…

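Sticking with the same illustrative names, the wrapper could simply close the chain (the SUT and its Write() method are hypothetical):

```csharp
// Arrange the expectations, then let Act() run the SUT and translate
// any NullReferenceException into a meaningful failure message.
new MockingClosure()
    .Arrange(m => m.Trim())
    .Arrange(m => m.Entity("customer"))
    .Act(() => sut.Write(url));
```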
…is still not too annoying, and naming methods after the AAA style of testing communicates our intent pretty well.

Second thoughts++. Expectation information

Ok, so we have a candidate for the syntax. But the problem with extracting parameter information from both the arrange and act chapters still needs to be solved.
Who might want to do the same thing we want to do and is way smarter than I am?
Mocking frameworks authors. They sure have access to that information.
And what do they use? Castle Dynamic Proxy.

How about having yet another proxy of our inner IMockClosureMembers interface? That way we could execute the delegate on that proxy, and a very extroverted and gossipy interceptor could tell us everything about the expectation.

What would that interceptor look like?

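Something along these lines; the ExpectationSpy name is mine, while IInterceptor and IInvocation are the real Castle types, qualified with the castle extern alias:

```csharp
extern alias castle; // alias for the stand-alone Castle assemblies

using System.Reflection;

// A very extroverted and gossipy interceptor: it records everything
// about the call made on the proxy of the inner interface.
public class ExpectationSpy : castle::Castle.Core.Interceptor.IInterceptor
{
    public MethodInfo Method { get; private set; }
    public object[] Arguments { get; private set; }

    public void Intercept(castle::Castle.Core.Interceptor.IInvocation invocation)
    {
        Method = invocation.Method;
        Arguments = invocation.Arguments;
    }
}
```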
Pretty slick.
Do you notice something weird? Look again. What is that castle::Castle.Core.Interceptor.IInvocation? It turns out that Rhino ships its own version of Dynamic Proxy, but with different type visibility. Since I referenced different versions of the DynamicProxy and Castle.Core assemblies, I had to add an alias to those references so that types in those assemblies are referred to by that alias (castle, in my case).

But we still need to create that proxy and use the spy. Patience. Here it goes:

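A sketch of that plumbing; MockingClosure, ExpectationSpy and Invocation are illustrative names of mine, while ProxyGenerator and CreateInterfaceProxyWithoutTarget() are the real Castle DynamicProxy API:

```csharp
extern alias castle;

using System;
using System.Collections.Generic;
using castle::Castle.DynamicProxy;

public class MockingClosure
{
    private readonly IMockClosureMembers _proxy;
    private readonly ExpectationSpy _spy = new ExpectationSpy();
    private readonly List<Invocation> _invocations = new List<Invocation>();

    public MockingClosure()
    {
        // a proxy of the inner interface whose calls are intercepted by the spy
        _proxy = new ProxyGenerator()
            .CreateInterfaceProxyWithoutTarget<IMockClosureMembers>(_spy);
    }

    public MockingClosure Arrange(Action<IMockClosureMembers> expectation)
    {
        // executing the delegate on the proxy makes the spy record
        // the method and the expected arguments
        expectation(_proxy);
        _invocations.Add(new Invocation(_spy.Method, _spy.Arguments));
        return this;
    }
}
```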
What are we doing here? In the constructor, we create a proxy of our inner interface and intercept its calls with the gossipy interceptor. In the Arrange() method, we create an Invocation object that will contain the information about the method and the expected arguments.

Actual information

We are getting very close to a solution. A missing piece is getting the runtime information of the arguments, that is, the arguments the method is actually called with when exercising the SUT.
Of course we need that information for each one of the pieces we are expecting, and the shrewd reader will have noticed the actual missing member in the previous snippet.

Let’s think about it. Who is the one that will know the real arguments?
Correct answer is: the stub for the inner interface.
Uhm… that makes me wonder… Would we be getting help from Rhino Mocks when dealing with stubs?
Of course. Rhino provides a GetArgumentsForCallsMadeOn() extension method that receives a delegate and returns the actual arguments for the method specified as its parameter (in a funny format). But, alas, that information is only available once the stub has been used by the SUT, that is, in the Act chapter. And we need that information in the Arrange chapter. We need to somehow delay the execution of the method.
Wait! Did I say delay? As in delayed/deferred execution?
Oh, I can use a delegate for that! And I will:
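A sketch of the deferred lookup; GetArgumentsForCallsMadeOn() is the real Rhino Mocks extension method, while the _stub field (the Rhino stub of the inner interface that the SUT will exercise) and the Actuals member on Invocation are assumptions of mine:

```csharp
public MockingClosure Arrange(Action<IMockClosureMembers> expectation)
{
    expectation(_proxy);
    _invocations.Add(new Invocation(_spy.Method, _spy.Arguments)
    {
        // deferred execution: the stub only knows the actual arguments
        // once the SUT has exercised it, i.e. after the Act chapter
        Actuals = () => _stub.GetArgumentsForCallsMadeOn(expectation)
    });
    return this;
}
```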

Exceptions into Exceptions

The last remaining bit is implementing the Act() method, which merely wraps the call that exercises the SUT, catching NullReferenceException and building a Rhino Mocks-specific exception with a meaningful message, all with the help of that Invocation class:
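A sketch of that wrapper; ExpectationViolationException is the real Rhino Mocks exception type, while BuildMessage() stands in for whatever walks the recorded Invocation objects and formats expected versus actual arguments:

```csharp
using System;
using Rhino.Mocks.Exceptions;

public void Act(Action exerciseSut)
{
    try
    {
        exerciseSut();
    }
    catch (NullReferenceException)
    {
        // turn the useless NRE into a Rhino Mocks exception carrying
        // the expected and (when available) actual arguments
        throw new ExpectationViolationException(BuildMessage());
    }
}
```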

Make the double earn its wages

Last, but not least, I will show you that these changes are indeed improvements by writing some failing tests and examining their output.

This failing test…

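By way of illustration (the SUT and all names are hypothetical), a test in which the SUT calls Trim() but never reaches Entity():

```csharp
[Test]
public void Sut_that_never_calls_Entity()
{
    new MockingClosure()
        .Arrange(m => m.Trim())
        .Arrange(m => m.Entity("customer"))
        // hypothetical SUT that calls Trim() but stops before Entity()
        .Act(() => sut.WriteTrimmedUrl());
}
```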
…will give the following output:
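The exact formatting is up to the message-building code, but the gist of it would be along these lines (illustrative, not a verbatim Rhino Mocks message):

```
Rhino.Mocks.Exceptions.ExpectationViolationException :
Unmet expectations on IMockClosureMembers:
  Trim() -- satisfied
  Entity("customer") -- actual arguments unknown, method never executed
```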

Note that we are not able to figure out the actual argument of the Entity() method as it was never executed.

Another failing test…

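Another illustrative test, this time with a hypothetical SUT that does call Entity(), but with an argument that does not match the expectation:

```csharp
[Test]
public void Sut_that_calls_Entity_with_the_wrong_argument()
{
    new MockingClosure()
        .Arrange(m => m.Trim())
        .Arrange(m => m.Entity("customer"))
        // hypothetical SUT that ends up calling Entity("client") instead
        .Act(() => sut.WriteUrlFor("client"));
}
```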
…will return a “friendly”:
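Again with illustrative formatting, this time the actual argument is known and can be shown next to the expected one:

```
Rhino.Mocks.Exceptions.ExpectationViolationException :
Unmet expectations on IMockClosureMembers:
  Trim() -- satisfied
  Entity(expected: "customer", actual: "client")
```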

Wrapping up the series

Time to say goodbye to a really fun and interesting series in which we have seen:

  • How to create fancy and useful APIs with the progressive inner closure style
  • How the cost associated with their creation is paid off by reliability and sheer number of usages
  • How to test the implementation of those APIs
  • How to test consumers of that API, overcoming the fact that anonymous methods are difficult to stub
  • How to improve the feedback of a failing test using Dynamic Proxy

The ton of code can be accessed on Github. Enjoy and be gentle with the criticism :-p