We’ve Come a Long Way

on Sunday, 4 October 2015

In a previous rant post I stated my discontent with Microsoft’s Live Account recovery system, but it all stemmed from the fact that I did not own a laptop other than the one I use for work.

No need, really

After my really old Toshiba laptop died quite some years ago, my girlfriend relied on our 2010 iMac for her research work and our computing odds and ends (video encoding, photo management, …). I have said it before, but I will say it again: it is a very nice machine. Pricey. But still going strong enough.

The iMac is not portable, though; so when she has needed to compute on the road she has had to settle for the Post-PC experience.
Yes, I said settle. Because for her usage patterns (and most likely 90% of everyone else’s), Post-PC professional computing was (and is) an exercise in compromise.

But after the renewal of my work laptop I asked for a quote for the old one, and we were able to get a pretty powerful machine for a very decent price (knowing that the previous owner took very good care of it).

It’s Not That Cold Out There

She needs to do Office (generic) work: spreadsheets, documents and presentations. And neither she nor I were in the mood to cough up an extra thousand-plus kroner, or get into the subscription trap, for the mere convenience of Microsoft Office when I know she will be able to use free, legal alternatives such as (the poorly named, IMHO) LibreOffice.

Turning it One More Notch

Now that we are going down the free road, I felt the need to check the state of the Linux desktop. Because…
I’ll confess a secret. I work day in and day out with Microsoft technology, but I have a thing for Linux. Ever since that very early Red Hat distro (waaaaay before Fedora) that I managed to install on my Pentium I 100 MHz, I have been suggesting Linux as an alternative to average Joes (and Janes) in the hope that its quality would win over the convenience of the known, despite the here-and-there rough edges.
But there has always been that something: that driver that does not work, that program that lacks that vital feature, the “what do I need to do to update? does it have to be so difficult to install?” that prevented Linux from sticking with the recipients of my advice.

Smooth as Silk

Even so, I went on to prove to my girlfriend that she can do everything she needs without spending thousands of kroner and without being annoyed one bit. And I installed Ubuntu on my old work Lenovo laptop.

And I was blown away.

  • Absolutely everything works the first time: the dual display, WiFi and LAN adapters, Bluetooth, integrated camera, even the keyboard and its little lamp worked. I could even use the Fn extended key shortcuts!
  • Installation (with a plethora of software) takes a fraction of the time it takes to install a barebones Windows. And the process can be as simple as on any other OS out there (and infinitely more complicated if one wants it to be).
  • Boot time is as fast as (or faster than) Windows 10.
  • There is tons of documentation online to set up everything that does not come out of the box, explained in such a way that someone detached from the Linux-sphere for many years could follow it without too much fear (but maybe still too scary for someone not in the business).
  • Everything is crazy snappy. Alright, it is still a powerful machine, but I have no doubt that as time passes it will still perform beautifully.

All of those geeky bullet points would be preaching in the desert if the SAF (Spouse Acceptance Factor) score were low. But, to my joy, it is not.
She was productive from minute 2 (after the “where is my start menu/dock?”, “my Office-oid?” and “my browser?” questions) and I have not heard swearing or blaming ever since. Great success!


…and I just can’t hide it

The Linux desktop has come a long, long way towards being usable for the casual user. If it works, it works. And it has worked on my machine. But I am afraid that if it hadn’t… I would have had to resort much more to command-line mojo.
Like when I formatted a partition (from Windows) from FAT32 to NTFS and prevented the computer from booting properly until I edited my fstab in a recovery console.

One has to give Windows some respect for supporting such a variety of devices, and be a little less ecstatic about OS X bringing UNIX stability to a mere handful of hardware components.
But if you are in doubt about giving Linux a shot on the desktop (or even if you are not), I have only one thing to say: Go For It!

Security by Absurdity

on Monday, 28 September 2015

I am a complainer. If you have read my blog before you know I complain. A lot. Not that I like it. Not that I think it is unfounded (ha, who ever thinks that?).


We do (did) not own a laptop in our household (I do have my work laptop, but I use it every day for... you know... working).

My girlfriend needed a device that could do Office for a course she had to attend in August, and I seem to be the official IT support of our household (and beyond).

Patching it

We do own an iPad Air (first gen), so I gleefully installed "Office" (Word, Excel and PowerPoint) for iOS for her to use alongside our Bluetooth Microsoft Wedge Keyboard.

I seem to recall that, once installed, one had to provide a Microsoft Live Account (it does not seem to be mandatory nowadays, and it might not have been then either) to get things going.
She does not have one, so I created one for her, thinking she might use OneDrive to get to some of her data afterwards. I created the account from the iPad, dutifully noting down the temporary password alongside her newly created Outlook email, using her usual Gmail account as an alternate address.

Fast Forward One Month

  1. Moving locations to another city.
  2. Another course with Office needs.
  3. A document previously created (and saved in OneDrive) is needed.
    Trying to open it requires the password of that Live Account, created ad hoc by me at midnight one month earlier, whose credentials were stored on a very secure piece of paper.
  4. First try. Fail.
  5. Second try. Fail. Panic.
  6. All of a sudden, someone else forgetting her password becomes my issue. Helpful as I am, I tried my battery of usual stupid temporary passwords.
  7. None of them work, and They think I am trying to hack the account, so They challenge me with their Captcha (or whatever they call it). I succeed in the challenge part (I can only imagine how difficult it must be for a bot to guess those).
  8. “Sorry, dear, I can't remember your password”. Of course it is long forgotten by both of us (and so is the ultra-secure post-it on which it was written).

Fixing it. Not!

Alright, let’s recover your password and then we’ll have dinner.

Ha! How much fun would this post be then? Some, surely. But not as much fun as the process.

Could They not simply have a form in which you punch in the ID of the account whose password you want to recover, send an email to the alternate email address and let you reset it from there?
They could even be fancy and require a mobile phone to receive a stupid code before letting you reset the damn password (an attacker would have to have gained access to the alternate email –possible– but also to the mobile phone –less likely–).

But no. They prompt you with a whole recovery form in which you have to punch in “as much info as you can” in order to “get back into your account”.

  1. First and Last Name. Yeah, I know that.
  2. Birth Date. I had better know that unless I want to be physically hurt.
  3. Country/region/postal code. Uhm, did I enter the country/region where she was born or the one in which we lived when I created the account? It had better be the latter, as I sure don’t remember the postal code where she was born. Or maybe it was the one we have just moved to. Hell, I will try them all.

That’ll do. Double ha! That information is not enough for the form to be sent. They ask for more:

  1. Other passwords you've used for this account. Sorry, can’t do that; it is the first and only password this account has ever had. Next.
  2. Subjects of your recently sent emails. Sorry, can’t do that either. I have never sent an email with that account. I created it from the iPad app just to sneak a few GB off your cloudy hard disks.
  3. Names of any folders you've created, other than default folders like Junk, Drafts, or Sent. Nah, I did not even create folders in OneDrive.
  4. Email addresses of contacts you've recently sent emails to. Did I mention that the account was not created with email purposes in mind?
  5. Last five digits of your Xbox Live prepaid card number. Wat? I do not even know what that is and I doubt you can get them in Denmark.
  6. Name on credit card and expiration date. I signed up for a free service and sure as hell I did not enter any credit card information.

Desperate measures…

After being denied the recovery once (I was kind of expecting that, having invented some of the data just to be able to send the form) I felt like I was inside one of those Sci-Fi movies in which Humans feel corralled by the definitely superior AI of the Machines, but are desperate because a joke, a wink or casual flirting won’t soften their neural networks enough to open a possible exit from the loophole.
So I decided to phone the entity known as Microsoft Support.

It definitely has to be someone physical (and/or very paranoid), because no one is more careful about giving away their contact information than “Support”. Three or four levels of browsing won’t get you their number. But Search Engines know better, and I could get some phone numbers. I was definitely onto something. Those fellow humans would surely understand and laugh with me at the silliness of their recovery system while handing me some extra swag to keep my trust levels below average.

I only needed to find a part of the world in which people would still be enslaved at work at 21:00 (images of my Spanish origins came back vividly to me) or… I could use my human brain to realize that clocks in the North-American Multiverse are way behind the reality of Europe. Microsoft Support USA I will call, despite the long-distance call costs.

…are taken by desperate people…

After the longest, most ridiculous phone menu ever (that is clearly an over-exaggeration, I’ve seen much, much, much worse) I got to the point at which they were about to connect me to a human that deals with account information, only to be reminded that no security or retrieval matters can be discussed due to security concerns.

… just to become enraged.

Damn you to hell, positronic brains! I won’t ever find a way to charm a human into giving me access to an account I genuinely created.

Seriously now…

…and I mean it.

Someone at Microsoft has to do something to end the madness of that recovery process.

It is seriously flawed (at least one user can’t recover her account, and I doubt she is alone in this) and it is preventing customers from retrieving files they own, punishing them for having human brains.

NMoneys and MongoDB

on Thursday, 3 September 2015

MongoDB is a public favorite of mine. It was the first document database I worked with and, despite its quirks (fewer and fewer as time passes), I can strongly recommend having a look at it when you are searching for a solid, server-based document database.

Half-broken by Default

Unfortunately (and we can blame NMoneys for it), MongoDB does not get on well, by default, with Money instances.
Yes, they can be stored (not “pretty”, { "CurrencyCode" : 208, "Amount" : "123.45" }, alas), but what can’t be done by default is retrieving them, and that seriously compromises my definition of working “out of the box”.

Fortunately enough, MongoDB offers pretty extensive customization of the serialization/deserialization process.
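To give an idea of what that customization looks like, here is a minimal sketch of a custom serializer for the v1 driver. It is not the actual NMoneys.Serialization code, the member names and document layout are illustrative, and a real implementation needs more care (null handling, representation options):

    using System;
    using MongoDB.Bson.IO;
    using MongoDB.Bson.Serialization;
    using MongoDB.Bson.Serialization.Serializers;
    using NMoneys;

    // stores a Money roughly as { "Currency" : "DKK", "Amount" : 123.45 } and reads it back
    public class MoneySerializer : BsonBaseSerializer
    {
        public override void Serialize(BsonWriter bsonWriter, Type nominalType, object value, IBsonSerializationOptions options)
        {
            var money = (Money)value;
            bsonWriter.WriteStartDocument();
            bsonWriter.WriteString("Currency", money.CurrencyCode.ToString());
            bsonWriter.WriteDouble("Amount", (double)money.Amount);
            bsonWriter.WriteEndDocument();
        }

        public override object Deserialize(BsonReader bsonReader, Type nominalType, Type actualType, IBsonSerializationOptions options)
        {
            bsonReader.ReadStartDocument();
            bsonReader.ReadName();
            var currency = (CurrencyIsoCode)Enum.Parse(typeof(CurrencyIsoCode), bsonReader.ReadString());
            bsonReader.ReadName();
            var amount = (decimal)bsonReader.ReadDouble();
            bsonReader.ReadEndDocument();
            return new Money(amount, currency);
        }
    }

    // registered once, at application start-up
    public static class MongoSetup
    {
        public static void Register()
        {
            BsonSerializer.RegisterSerializer(typeof(Money), new MoneySerializer());
        }
    }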

Looking Back

There is a big gap between version 1 and version 2 of the MongoDB driver, meaning a single source file cannot target both versions.

Initially, NMoneys will support custom serialization for version 1 and, when the time comes, a second package will be created to support v2.

Picking up the package


Documentation is available here.

Compacting JSON

on Wednesday, 26 August 2015

JSON has become one of the most widely accepted data interchange/representation formats. Not only for Web data interchange, but also as the data representation of multiple document databases.

Justified dumbness

One might question why one has to write JSON at all, since there are libraries for that, right?
The only justified scenario I can find (and the one that led me to write this entry) is testing.

Writing JSON literals using string formatting (or concatenation) would make a top-5 of “What on Earth are you doing there?” in a code review. But it makes sense when one is verifying that JSON is properly generated.

Pain in The Neck

The thing is, names in JSON need to be written between double quotes, and so do string values. Since I write tons of C#, I have to put up with escaping double quotes in string literals (\" for normal literals or "" for verbatim ones). In any case, it is annoying. And when I get annoyed too much, I do something about it.

Working Inspiration

I was working on some MongoDB support for NMoneys (more on this in future posts) when, while looking at some of their unit tests, I saw a test like this:
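It went along these lines (my reconstruction of the kind of test, not the literal MongoDB code):

    using MongoDB.Bson;   // brings in the ToJson() extension method
    using NUnit.Framework;

    [TestFixture]
    public class SerializationTests
    {
        [Test]
        public void Serializes_as_expected()
        {
            var subject = new { x = 1 };
            // the trick: write the literal with single quotes and turn it into
            // valid JSON by replacing them with double quotes
            var expected = "{ 'x' : 1 }".Replace("'", "\"");
            Assert.AreEqual(expected, subject.ToJson());
        }
    }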

But of course!!!! That is genius! Since I did not come up with it myself, at least I had to do something with it. So I abstracted away the logic to do the replacements, guessed a couple of clever names and added some syntactic sugar. And so, JsonString became a new member of the Testing.Commons family.

Triple clarification: I am not claiming ownership of the idea; I saw it in the MongoDB guys’ code, and they may or may not have invented “the trick” themselves.

Compact JSON Vs. Expansion

I love names, despite naming being one of the two hardest things in Computer Science.

So how would I name a JSON literal that uses ' instead of escaped "? I would name it Compact JSON.
And the proper JSON that results from Compact JSON? Expanded JSON.

Using Compact JSON

There are several ways in which this technique can be used.

Instantiating the class

One can pass the Compact JSON literal to the JsonString constructor. From then on, one can get the expanded JSON either by invoking .ToString() or by passing the instance wherever a string is expected, via implicit conversion.
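A minimal usage sketch (member names as described above; check the package documentation for the exact shape of the API):

    using Testing.Commons;

    public class JsonStringSample
    {
        public void Expanding()
        {
            var json = new JsonString("{'str' : 'value', 'number' : 42}");

            // .ToString() returns the expanded JSON, e.g. {"str" : "value", "number" : 42}
            string expanded = json.ToString();

            // the implicit conversion does the same wherever a string is expected
            string alsoExpanded = json;
        }
    }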

Extension method

Invoke the .Jsonify() extension method on the Compact JSON literal to get the expanded JSON as a string.
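Which, assuming the extension method lives alongside JsonString, boils down to a one-liner:

    // same compact literal, expanded in place
    string expanded = "{'str' : 'value', 'number' : 42}".Jsonify();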

Asserting Compact JSON

I mentioned that one should avoid writing JSON literals (except when testing). In that case, one is more likely to be using those literals while asserting the result of one’s code.
It makes sense, then, that Testing.Commons.NUnit contains a JsonEqualConstraint.

Using the Constraint Directly
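For example (the constraint takes the Compact JSON literal as its expected value; the exact constructor shape is my assumption):

    // actualJson is a hypothetical variable holding the output of the code under test
    Assert.That(actualJson, new JsonEqualConstraint("{'str' : 'value', 'number' : 42}"));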

Using the Must Extensibility Model
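With the Must extensibility model the assertion would read something like this (the exact entry point name is my guess, not gospel):

    // actualJson is, again, a hypothetical variable
    Assert.That(actualJson, Must.Be.Json("{'str' : 'value', 'number' : 42}"));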

Modifying EqualConstraint

Our last option would be to continue using the EqualConstraint, but change how it behaves via a custom IEqualityComparer<> encapsulated in the .AsJson() extension method.
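Roughly like this, assuming the extension hangs off NUnit's EqualConstraint as described:

    // the custom comparer makes the equality JSON-aware, so the expected literal stays compact
    Assert.That(actualJson, Is.EqualTo("{'str' : 'value', 'number' : 42}").AsJson());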

OSS Summer Refresh

on Wednesday, 22 July 2015

One would not believe it given the weather here in Denmark, but it is summer. And what happens in summer? Amongst other interesting things, here comes an update of some OSS projects.


Amendment 160 of the ISO 4217 standard has been applied, containing only minor changes to currency denominations.

But the majority of the changes happen behind the visible artifacts:

  • All tools and libraries have been updated (NuGet, NUnit and psake)
  • The whole build process has been refactored heavily to support...
  • ...continuous integration and remote deployments via AppVeyor

Not So Continuous Deployment

Now it is easier for me to push a new version locally (with the new \build\Deploy.ps1 script), but it is also possible to deploy using AppVeyor. It is a nice-to-have, but it opens new scenarios for quick fixes in which there is no need for a development computer:

  1. Make a small fix using GitHub's text editor
  2. Make a commit
  3. Mend documentation in the Wiki
  4. Code is built, tests are run and all artifacts created
  5. Packages are pushed to NuGet
  6. GitHub Release is created

And all of that can be done away from my development machine! A nice-to-have, but definitely pretty sweet.

Let the bits flow downward

As usual you can get the packages from NuGet or the binaries from the Release.


Believe it (or not), it has been almost 3 years since the last release of Testing.Commons, and more than one year since the last releases of Testing.Commons.NUnit and Testing.Commons.ServiceStack.

One can see those projects as "stable" or "nearly abandoned", depending on how one looks at a non-empty bottle. It is pretty obvious that they are not as popular as NMoneys, but they are still used in all my professional projects.
But being quiet (dormant) is not a reason for not getting a facelift and new features.

  • Testing.Commons offers a new way to organize your builders.
  • Testing.Commons.NUnit got an update on its dependencies: ExpectedObjects, NUnit and Testing.Commons itself
  • Testing.Commons.ServiceStack got a new .UriFor() method to calculate the URL of a request DTO decorated (or not) with RouteAttribute, and got its namespace changed to Testing.Commons.Service_Stack to prevent silly namespace conflicts.
  • All tools and libraries have been updated (NuGet, NUnit and psake)

Time for an update

Get the latest version of all three packages from NuGet.


The youngest of the family (but used in all my OSS projects) got a new version for the latest stable release of NUnit.

No fanciness in the build here, just the NuGet package.

Some Order in (Not) Chaos

on Wednesday, 15 July 2015

It has been a long while (2013) since I promised to write about Psake and using PowerShell for build scripts.

But sometimes something unexpected has to happen for old things to become current again. That something was my curiosity about using AppVeyor as an automated deployment platform.

Hosted CI++

It is not that I have just now “discovered” AppVeyor (I created a working barebones NMoneys build a long time ago, when Google Code was still alive), but it is only lately that I became interested in its potential for automating deployments.
I am not going to analyze AppVeyor here: go ahead, browse their magnificent documentation and don’t take my word for how easy it is to get going. Do try it.

AppVeyor is a hosted Continuous Integration/Delivery (CI/CD) service for the Windows platform, so you don’t have to have your own server or go through the process of setting one up.
Best of all, it is free for OSS projects.

CI/CD 101

The basic idea of CI is “listening” to your source code repository and, when new commits are made, performing some actions (usually compiling the source code, running unit tests, performing some sort of static analysis and some more stuff).
The basic idea of CD is to build upon CI in order to automatically (or with minimal human intervention) push your software to the environments where it needs to be deployed for people to use it.

It can get deeper and more complicated than this, but it is basically automation performed somewhere other than your development machine.

NMoneys Build History

In the beginning of time, NMoneys was built using nothing but MSBuild scripts. I hated it, so when I had to extend it (automating API documentation) I migrated and enhanced the script, using Psake as the task framework and PowerShell as the language instead of XML. The main purpose of this build was (and is) to generate the artifacts (the various NuGet packages) whenever a new release is due.

A new release was a matter of executing the build script to get the freshly built NuGet packages after making sure everything was OK (tests). That is, not every build had a corresponding deployment, and a bunch of manual steps existed, like pushing the packages to NuGet and creating a binary release with unsigned and signed assemblies.


  • Clean: clean solution and recreate the /release folder, that will contain everything we are likely to deploy
  • Compile: invoke msbuild.exe to compile the solution
  • Sign: use IL trickery to sign an assembly once built
  • Document: use ImmDoc.Net and the MS HTML Help Compiler to generate a .chm with the API documentation
  • Test: invoke nunit-console.exe to run all tests of the solution and create visual reports on their execution
  • Copy Artifacts: copy binaries, source files, test reports, documentation and package manifest to the /release folder
  • Build Artifacts: invoke nuget.exe to pack the different packages and zip.exe to generate the bin and signed artifacts to be released

Could we do better? I would not be writing this if we couldn’t.

CI with AppVeyor

Translating the process to AppVeyor was not difficult at all, but extracting a common set of script functionality that can be invoked both by the local Psake build and the remote engine took a bit longer.
All I can say is that the scripts are so much cleaner and easier to follow than before, and because of that alone the process was worth it.

AppVeyor has a rich and complex build pipeline, and its default build and test stages make the local Compile and Test tasks redundant, as well as Clean, because for every build that is started a brand new machine is provisioned, so there is nothing to clean. Of their complete pipeline we use the following steps:


The equivalency between the two is roughly like this:

  • Init + Build → Compile + Sign + Document
  • Test → Test + CopyArtifacts + BuildArtifacts

CI Build?

Isn’t it weird to have a Deploy stage in a CI pipeline?
Only if the deploy has the meaning of “pushing components to another environment for their execution”.

AppVeyor has two types of deployment: Inline (the last stage of the pipeline) and Environment. Whereas the Inline deployment is executed for each build, the Environment deployment is asynchronously triggered by “human intervention”.
The Deploy stage we refer to in our graphic is, thus, an inline deployment, and in our case the “only” thing we do is “promote” artifacts from the /release folder to AppVeyor artifacts, for them to be deployed later on in an Environment deployment.
The manual promotion (as opposed to defining the artifacts in the build configuration) is required by the similarity in package naming, which would make it impossible to deploy only one package to a given environment.

Details, please

All these changes have turned the old NMoneys build process into a very powerful, lean and streamlined deployment pipeline.

Details matter. Details are fun. I have learned a ton of PowerShell in the refactoring process, so… why not share it? But this post is already long, so there will be another one with those beloved details.

Switching Fun

on Wednesday, 8 July 2015

In my last post about Feature Toggles I mentioned that, for my simple scenario, I chose FeatureSwitcher amongst the plethora of packages (254 results for the “feature toggle” search; I never went beyond page 1).

Some Whys

I needed configuration support, and it is supported by means of another package. Check for solving my problem.

I also liked their object model for dealing with configuration in code. Likeability check.

I dislike magic strings. Check for being based on types.

How did I use it?

I cannot show the real code, but I can show bits of a sample project.
The project is an OWIN web site that uses Nancy and Razor to render an application with an optional feature.

As Martin Fowler suggests (and we do not always agree):

Toggle tests should only appear at the minimum amount of points to ensure the new feature is properly hidden.

That is, disabling the menu entry that gives access to the feature is the bare minimum needed to hide it.


What is it that is actually needed to accomplish this bare minimum?

  1. Install the NuGet package FeatureSwitcher.Configuration. That will add references to two assemblies to our solution.
  2. Create a feature, that is, a type that implements the IFeature marker interface.
  3. Set up configuration by adding the configuration section group. In this case, the actual features are located in a separate file.
  4. Tell FeatureSwitcher that configuration is to be used and that features are named after the type that implements the interface.
  5. Check whether the feature is enabled and render the markup if it is (see the sketch after this list).
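A rough sketch of those moving parts, assuming FeatureSwitcher’s fluent API as I recall it (the feature and helper class names are mine, and the exact calls may differ slightly from the package):

    using FeatureSwitcher;
    using FeatureSwitcher.Configuration;

    // 2. a feature is just a marker type
    public class OptionalFeature : IFeature { }

    public static class FeatureBootstrap
    {
        // 4. run once at start-up: features come from configuration and are
        //    named after the type that implements IFeature
        public static void Setup()
        {
            Features.Are.NamedBy.TypeName().And.ConfiguredBy.AppConfig();
        }
    }

    public static class Menu
    {
        // 5. the single toggle test that decides whether the menu entry is rendered
        public static bool ShowOptionalFeature()
        {
            return Feature<OptionalFeature>.Is().Enabled;
        }
    }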

Pushing through the last mile

When I mentioned that I like having types instead of just strings, it is not only for type safety (aka IntelliSense dependency), but because we can easily attach behavior to types. So now a feature is not only able to tell whether it is enabled or not; methods can be added to it to perform the activation steps, for instance.

If we didn’t do anything else, the feature could be “guessed” and accessed through the URL (no, it is not as crazy as it sounds; I have had clients that demanded routes be scrambled to avoid guessing by users and/or competitors. True story). Now we can have the activation code right inside the feature and interrogate it for its status:
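An illustrative Nancy module to that effect (the route and view names are made up; the real sample surely differs):

    using FeatureSwitcher;
    using Nancy;

    public class OptionalFeatureModule : NancyModule
    {
        public OptionalFeatureModule()
        {
            Get["/optional"] = _ =>
            {
                // the feature is interrogated for its own status before serving anything
                if (!Feature<OptionalFeature>.Is().Enabled)
                {
                    return HttpStatusCode.NotFound;
                }
                return View["optional"];
            };
        }
    }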

So if someone gets hold of the route, he will be greeted by the hilarious 404 status code.