Pissed by a function


Lately, I have been going through a rough coding patch.
Rough as in getting familiar with a huge codebase that has been cooking for a year and a half and that, to put it mildly, could have received a bit more care along the way.

Luckily for me, I am getting familiar with it in one of the best ways one can get familiar with a codebase: writing those "unit" tests it so badly needs.


It is no news that I am a big proponent of automated tests, and I am not afraid to say that I seldom do TDD. I practice it every now and then, but I am more comfortable with test-near: that is, writing the tests right after writing some code.
This works very well for me: I still get very good feedback on my design while the code is "fresh", and I can jump back and forth between test and production code easily.

Unfortunately for me, I was writing tests for functionality written months ago by someone who had little regard for good design, let alone testability.
Long story short, I am now waist-deep in some ridiculously complex logic, written in such a manner that I cannot fit the whole problem in my head at once. My brain is to blame, but that is no use, because the code still needs to be tested and refactored.

With such a task at hand, my usually low threshold of pain gets even lower and the most innocent piece of code triggers deep, dark reactions. In my defence, this little guy deserves the beating.

But why?

Let's dissect this little fucker:
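The snippet itself did not make it into this text, so to follow along, here is a minimal reconstruction based purely on the complaints below. The helper's name, the method shape and the lookup are my assumption; `someAttributeDefinition`, `someName`, the 50 and the -1 come straight from the rest of the post:

```csharp
// Hypothetical reconstruction -- the original snippet is not reproduced here.
public static class SomeHelper
{
    // returns 50 + the attribute's order, or -1 when there is no match
    public static int GetOrder(string someName)
    {
        AttributeDefinition someAttributeDefinition = FindByName(someName); // may be null

        // this is the "line 18" the rest of the post keeps pointing at
        return 50 + someAttributeDefinition?.Order ?? -1;
    }

    private static AttributeDefinition FindByName(string someName)
    {
        return null; // lookup elided in this sketch
    }
}

public class AttributeDefinition
{
    public int Order { get; set; }
}
```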

I know what you are thinking: "come on! nagging about 20 lines of code? seriously?"
I hear you. Let me start:

It is (yet another) helper

Helpers are sort of bad. They place logic and behaviour away from the data and the entity that owns it, breaking encapsulation and cohesion and blahblahblah... And honestly, when 90% of the logic I have seen in the last few weeks lives in either a static helper or inside a so-called service, one more makes my temples pulsate and my right eye twitch.
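By way of contrast, the boring fix looks something like this (illustrative types and names, not the actual codebase): the behaviour sits next to the data that owns it.

```csharp
// Illustrative only: the entity answers questions about its own state,
// instead of a static helper reaching into it from the outside.
public class AttributeDefinition
{
    public int Order { get; set; }

    // No helper needed, and the method is trivially reachable from a unit test.
    public int OrderWithOffset(int offset) => offset + Order;
}
```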

It relies on implicit operators

You have heard me complain about these before, so here we go again.

Check line 18. It is right there, before your eyes. someAttributeDefinition.Order returns an int. someAttributeDefinition?.Order, on the other hand, uses the null-conditional (null propagation) operator, which means it returns Nullable&lt;int&gt; instead. It can be null when someAttributeDefinition is null, and how much is 50 plus null? 50? 0? null? It is actually null, but you either had to learn that by heart (kudos), the hard way (pity), or have to run a snippet somewhere to figure out what the value is.

That is not cool. It is magic and we, developers, do not like it.
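If you would rather not learn it by heart, here is the lifted arithmetic in isolation (variable names are mine):

```csharp
using System;

class NullableArithmeticDemo
{
    static void Main()
    {
        // What someAttributeDefinition?.Order gives you when the receiver is null:
        int? maybeOrder = null;

        // + is "lifted" over Nullable<int>: int + int? yields int?,
        // and when either operand is null, the whole sum is null -- not 50, not 0.
        int? sum = 50 + maybeOrder;
        Console.WriteLine(sum.HasValue);     // False

        maybeOrder = 7;
        Console.WriteLine(50 + maybeOrder);  // 57 once there is an actual value
    }
}
```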

Better learn operator precedence

Check line 18 again. It is the real gem: we have an implicit conversion to nullable, an addition to perform the sum and a null-coalescing operator.

Better learn their precedence, as the author surely does, without a hint for us mere mortals: is it 50 + (nullable ?? -1) or (50 + nullable) ?? -1? When nullable is null, would it be 49 or -1? Place your bets and fire up your REPLs.
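To settle the bet: in C#, + binds tighter than ??, so without parentheses the expression reads as (50 + nullable) ?? -1, and because 50 + null is null, the whole thing collapses to -1. Checked in isolation:

```csharp
using System;

class PrecedenceDemo
{
    static void Main()
    {
        int? nullable = null;

        // No parentheses: + wins over ??, so this parses as (50 + nullable) ?? -1
        Console.WriteLine(50 + nullable ?? -1);    // -1

        // The two explicit readings, side by side:
        Console.WriteLine((50 + nullable) ?? -1);  // -1: the sum is null, so coalesce
        Console.WriteLine(50 + (nullable ?? -1));  // 49: coalesce first, then add
    }
}
```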

-1 is the magic number (tune it)

Line 18 again. Man... So the idea is that some number is returned if someName is known, or some number otherwise. Wait, what?

A negative number is the magic value returned when there is no match at all. I can tell you that because I could read the calling code. You, on the other hand, cannot infer shit from the signature of the method.
Do you want to know the perfect type for a number that, in some cases, might not be a number at all, in which case the caller has to act differently? No, it is not a negative number. No, the developer does not have to create a new type. It is Nullable&lt;int&gt;, or int? for the uber-lazy. Shocking. Or one could read the comments...
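A sketch of that refactor (the method shape and names are assumed; only the int? return type is the point): "no match" now lives in the signature instead of in a magic -1.

```csharp
using System;

public class AttributeDefinition
{
    public int Order { get; set; }
}

public static class OrderLookup
{
    // Hypothetical refactor: the absence of a match is encoded in the
    // return type, not in a sentinel value only the calling code knows about.
    public static int? GetOrder(AttributeDefinition someAttributeDefinition)
        => 50 + someAttributeDefinition?.Order;
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(OrderLookup.GetOrder(new AttributeDefinition { Order = 7 }));  // 57
        Console.WriteLine(OrderLookup.GetOrder(null).HasValue);  // False: the caller must decide
    }
}
```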

Comments instead of tests

Isn't it ironic that the only comment is also worth picking on? Funny enough, after I refactored the method to return int? instead of the magic number, a commit was merged into my development branch that added some more cases to the function.
One change in particular affected the infamous line 18: the base number to add upon went from 50 to 70. Unfortunately, the comment was not changed (insert bitter facepalm).

An innocent mistake, but one that proves, again, the point that comments tend to lie, growing more untrue as time passes. Do you want to know what does not lie? A test. If the value was the important bit, there would be a test making the change dead obvious. If the value itself is irrelevant, there would be a test stating that certain values of someName rank higher or lower than others.
In any case, the test would not be deceitful. It would either pass or fail, and the poor reader (a.k.a. me) would know where to stand.
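For instance, an xUnit-style test (the test names and the minimal stand-in types are mine) would have failed loudly on the 50-to-70 change instead of leaving a stale comment around:

```csharp
using Xunit;

// Minimal stand-ins so the tests compile on their own; the real types live in the codebase.
public class AttributeDefinition { public int Order { get; set; } }

public static class OrderLookup
{
    public static int? GetOrder(AttributeDefinition definition) => 70 + definition?.Order;
}

public class OrderTests
{
    [Fact]
    public void Known_definition_gets_the_base_number_plus_its_order()
    {
        // If 70 ever changes again, this fails loudly -- unlike the comment.
        Assert.Equal(77, OrderLookup.GetOrder(new AttributeDefinition { Order = 7 }));
    }

    [Fact]
    public void Missing_definition_yields_no_order_at_all()
    {
        Assert.Null(OrderLookup.GetOrder(null));
    }
}
```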

And now, on to writing that damned test on the caller method, the one with a [cyclomatic complexity](https://en.wikipedia.org/wiki/Cyclomatic_complexity) of 19!
No bueno.