NMoneys Moves Forward…

on Tuesday, 21 April 2015

…by looking into the past.

Because that is what it must look like when a project stops supporting a version that is 7 years, 5 months and 2 days old in favor of one that is a mere 5 years and 9 days old.

.NET 4.0

That is the age difference between the previously supported .NET 3.5 and the newly required .NET 4.0, according to Wikipedia. So I guess very few people will complain about the minimum requirements imposed by this new version of NMoneys.


Well, someone complained about a concurrency problem and even though I have never, ever experienced such a race condition, fixing an issue always feels right.

I could have done things differently. I could have back-ported just ConcurrentDictionary, or used someone else’s back-port; but, while at it, I suffered from the “falling into the rabbit-hole” syndrome and foolishly holding onto the ancient version felt like the wrong-thing-to-do™.

And so I decided that this will be the last net35-compatible release.

Home, Sweet (and New) Home

As the reader might have noticed, Google Code is shutting down. That means that the project needs to get a new home and GitHub seems a sensible place to host an Open Source project.

Binary Releases

NuGet packages will remain the preferred and main distribution method, but binaries will continue to be offered.

Since the de-facto closure of Google Code’s download service, binaries had been provided by Bintray. Well, it’s not that they do not offer a good service but, given the whopping grand total of 4 binary downloads, I think I can go with the simpler, more integrated and less-featured GitHub Releases.
As a result, binaries will be available in the Releases area of the project home from now on.

Continuous Integration

Moving away from a dying code repository opens up a world of possibilities. Amongst them is Continuous Integration using AppVeyor.

Right now it is an out-of-the-box build (compile + run tests) but I will be investigating further automation in the near future.

Oh, and the badge is nice :)

And a Small Change

A colleague of mine pointed out that the good people of Sweden changed the way they prefer to write big numeric quantities, as documented in this Wiki page (in Swedish). And so, I obliged.

From now on, by default, a big monetary quantity in SEK is not formatted using . as a group separator. But, of course, that default behavior can be overridden:
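Something along these lines (a sketch only: the amount is made up, and it assumes Money honors a custom NumberFormatInfo through its IFormattable implementation — the exact overloads may differ):

```csharp
using System;
using System.Globalization;
using NMoneys;

class SekFormatting
{
	static void Main()
	{
		var big = new Money(123456.78m, CurrencyIsoCode.SEK);

		// default: the new Swedish preference, without '.' grouping
		Console.WriteLine(big.ToString());

		// overriding the default with a format provider that
		// does use '.' as the group separator
		var dotted = (NumberFormatInfo)CultureInfo.GetCultureInfo("sv-SE").NumberFormat.Clone();
		dotted.CurrencyGroupSeparator = ".";
		Console.WriteLine(big.ToString(dotted));
	}
}
```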

1s and 0s

Handling Multiple RavenDB Databases

on Monday, 23 March 2015

I confess: it was a bit backwards to write about handling indexes in development with multiple databases without first writing about handling multiple databases. It’s never too late.


In RavenDB, databases are a way to isolate not only data, but also configuration and the location of the “physical” data. Oh, and they are fully supported in server mode.
For the amounts of data my applications handle, we are good to go with a single database, but sometimes I have found the need for separate databases. Hint: it has to do with it being faster to delete a complete database and start from scratch than to delete lots of documents inside an existing one.


Once you have made the choice of having multiple databases, you face the challenge of consuming those databases in code.

How I use RavenDB abstractions

The two main entry points for RavenDB are the IDocumentStore and the IDocumentSession abstractions (I have not had the chance to use the asynchronous session yet), which are injected by the IoC container of choice. IDocumentStore is expensive to create and I usually register it as a singleton. As for IDocumentSession, I usually have one instance per request, which is disposed when the request ends.
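Outside of any particular IoC container, those two lifetimes boil down to this sketch (server URL and database name are made up):

```csharp
using Raven.Client;
using Raven.Client.Document;

// singleton: expensive to create, initialized once per application
IDocumentStore store = new DocumentStore
{
	Url = "http://localhost:8080",    // hypothetical server
	DefaultDatabase = "MyApplication" // hypothetical database
}.Initialize();

// per-request: cheap to create, disposed when the request ends
using (IDocumentSession session = store.OpenSession())
{
	// load, query, store…
	session.SaveChanges();
}
```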

The session is heavily tied to a database so, how does one handle having multiple instances around?

The database-specific session

It is easy enough to have a marker interface that extends IDocumentSession that represents an open session to a given database and use that dependency whenever a connection to a database is needed.

Question is, how is that abstraction implemented?

Some IOCs have the ability to create types on the fly, thus obviating the need of a concrete class that implements the specific session, but I have found those too tightly coupled to the IOC and pretty code-navigation unfriendly.

It would be silly, though, to create a type that implements the interface for each database: after all, one would be implementing IDocumentSession again and again and again. Instead, a decorator of the IDocumentSession interface can do the trick very well. The only thing one has to keep in mind when instantiating the decorator is passing on the session to the correct database.
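A sketch of the idea (ICatalogSession and the “Catalog” database name are illustrative, and the delegation is abridged — a real decorator delegates every single member of IDocumentSession before it compiles):

```csharp
using Raven.Client;

// marker interface: "an open session against the Catalog database"
public interface ICatalogSession : IDocumentSession { }

// decorator: implements the marker interface by delegating everything
// to a session opened against the right database
public class CatalogSession : ICatalogSession
{
	private readonly IDocumentSession _inner;

	public CatalogSession(IDocumentStore store)
	{
		// the one thing to get right: point the session at the correct database
		_inner = store.OpenSession("Catalog");
	}

	public T Load<T>(string id) { return _inner.Load<T>(id); }
	public void Store(object entity) { _inner.Store(entity); }
	public void SaveChanges() { _inner.SaveChanges(); }
	public void Dispose() { _inner.Dispose(); }

	// …and so on: the remaining twenty-something members of IDocumentSession
	// delegate to _inner in exactly the same fashion
}
```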

Small help offered

rvn-izr can help with this approach by providing the “implementation” for that decorator/adapter: the DbSessionAdapter, that is.

The adapter implements IDocumentSession but, in my experience, this interface is kind of volatile (it has changed between versions). That kind of instability made me shy away from binary dependencies for this part of rvn-izr and go the source-code route instead. With this approach, the adapters are integrated into the source code of the importer and use whichever version of RavenDB the importer is using.

Does that mean that the developer has to manually implement IDocumentSession? I am afraid so. Those twenty-something methods need to be implemented. Delegated, more properly speaking.
And you know what? It’s not a big deal. And with the right tools, one can achieve it in around 3 seconds, even being a clumsy typist and using the mouse. Here is how I do it with ReSharper:

Implement DbSessionAdapter R#

Closing the circle

And that is all. At least for sessions; for other database-specific artifacts, head on to rvn-izr’s documentation and find out.

RavenDB Indexes While Developing

on Friday, 27 February 2015

At work, my team and I have been working with RavenDB for quite some time.

Despite some people’s objections, the team usually shares a single development database. There is still the occasional “I made this breaking change… Guys! Get latest!!”, but it happens quite rarely.

Besides documents, there is another data artifact that is very necessary for RavenDB to work: indexes (and transformers). Favoring explicit indexes (instead of automatically created ones) means we end up with a bunch of indexes defined in code.

From Code to Database

RavenDB offers the IndexCreator high-level API to easily create indexes. Question is: when will the code that creates database indexes from their code definitions be executed?

When application starts

A strategy we followed in the past was placing the index-creation code right after the DocumentStore initialization code. Since that initialization is recommended to happen once per application, we ended up re-creating the indexes every time the app started.

An app in development restarts many times, and the problem that I pointed out in the first paragraph (developers who do not have the latest version of the code running) worsens seriously: an application with an outdated index definition will override the just-updated index.

So, all in all, that approach was not such a good idea at all.

On demand

Currently we use a small console program (or LINQPad script) that updates the indexes when needed.
Such an approach can even be used to automatically deploy index definitions from environment to environment (given we have access to that environment’s database, of course).
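Such a console program can be as small as this sketch (it assumes the RavenDB client’s IndexCreation.CreateIndexes() overload that takes an assembly and a store; server URL and database name come in as arguments so the same tool works against any environment):

```csharp
using Raven.Client;
using Raven.Client.Document;
using Raven.Client.Indexes;

class UpdateIndexes
{
	// usage: UpdateIndexes.exe http://ravendb-server:8080 DatabaseName
	static void Main(string[] args)
	{
		using (IDocumentStore store = new DocumentStore
		{
			Url = args[0],
			DefaultDatabase = args[1]
		}.Initialize())
		{
			// creates every index (and transformer) defined in this assembly,
			// overwriting outdated definitions on the server
			IndexCreation.CreateIndexes(typeof(UpdateIndexes).Assembly, store);
		}
	}
}
```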

The multiple database challenge

What happens when multiple databases are accessed in the application? Out of the box, the simpler-to-use IndexCreator.CreateIndexes() takes an assembly, which means that if all the indexes for all the databases live in the same assembly (and we tend to favor single –or few– assembly solutions), all the databases will contain all the definitions in the assembly.

Not all databases contain the same collections, so some indexes are empty (consuming little to no resources); and transformers won’t be used in queries against that database. But having artifacts that should not be there is still a small annoyance.

OSS to the Rescue

To solve this, and some other problems, rvn-izr has been created.


Installing the NuGet package in the project that contains the indexes gives access to the CreateInAttribute, which can be used to decorate indexes and transformers with the name of the database they correspond to.
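Decorating an index might then look something like this sketch (the “Catalog” database name, the Product type and the index definition are made up; the attribute’s exact constructor is an assumption):

```csharp
using System.Linq;
using Raven.Client.Indexes;

// the attribute ties the artifact to the database it belongs to
[CreateIn("Catalog")]
public class Products_ByName : AbstractIndexCreationTask<Product>
{
	public Products_ByName()
	{
		Map = products => from p in products
		                  select new { p.Name };
	}
}
```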

Filter by database name

Using the Indexes fluent interface, we can get the ExportProvider instance that contains the artifacts corresponding to the specified database and invoke the corresponding IndexCreator.CreateIndexes() overload to stop creating artifacts where they do not belong.

Safety first

Another interesting feature Indexes offers is the ability to return artifacts not decorated with the attribute. That opens the chance to create an automated test that verifies, for instance, that we are not forgetting to decorate a given artifact and, thus, not creating it.


Play with it, use and abuse it and, if you fancy more features, PR your way through.

Order of Things used to be Wrong

on Tuesday, 17 February 2015

Well, not everywhere. Only in my project, when I was querying RavenDB. Here is how I solved it.

The catalog Model

Our catalog, like many other product catalogs in the world, is made up of products and categories. A product must belong to at least one standard category and can belong to zero or more non-standard categories, although the most usual case is a product “living inside” one standard and one non-standard category.



Each category object contains its globally unique identifier and a value that indicates the relative order of the product inside that category.
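A minimal sketch of such a model (all names are illustrative):

```csharp
// a product belongs to one or more categories; each membership carries
// the product's relative order inside that category
public class Product
{
	public string Id { get; set; }
	public string Name { get; set; }
	public ProductCategory[] Categories { get; set; }
}

public class ProductCategory
{
	// globally unique identifier of the category
	public string Id { get; set; }
	// relative order of the product inside this category
	public int Order { get; set; }
	public bool IsStandard { get; set; }
}
```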

Querying the Catalog

When displaying the products that belong to a given category (standard or otherwise) the products collection is queried, filtering by the desired category id and sorting by the relative order of the product inside that category.

Raven-izing the query

RavenDB goes as far as requiring an index for each query you do to your data (or an ad-hoc index will be created the first time the query is performed). In that index, we will include the fields that are part of the filter (the category ids) as well as the fields that are used for sorting.
But… we do not have a specialized sorting field for each category in our products. No, we don’t. But we can ask Raven to create one while indexing:
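A sketch of such an index, using RavenDB’s CreateField() to emit one dynamic field per category (the index, type and field names are illustrative):

```csharp
using System.Linq;
using Raven.Client.Indexes;

public class Products_ByCategory : AbstractIndexCreationTask<Product>
{
	public Products_ByCategory()
	{
		Map = products => from p in products
		                  select new
		                  {
		                      // filter field: the ids of the categories the product belongs to
		                      CategoryIds = p.Categories.Select(c => c.Id),
		                      // one dynamic sorting field per category, named after its id
		                      _ = p.Categories.Select(c => CreateField("Order_" + c.Id, c.Order))
		                  };
	}
}
```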

New entries in the index will contain a dynamic field that is named after the category and valued after the relative order. With those fields in the index we can query our catalog with the following query:
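For illustration, using the low-level LuceneQuery API (index and field names are made up, matching nothing in particular):

```csharp
// the version that only *seemed* to work: plain string-based ordering
var products = session.Advanced.LuceneQuery<Product>("Products/ByCategory")
	.WhereEquals("CategoryIds", categoryId)
	.OrderBy("Order_" + categoryId)
	.ToList();
```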

It seemed to work. The feature was demoed several times and approved by several people: change the order of a category –> products are displayed in that order. End of story? Hardly.
It did not pass the test that every feature has to pass in order to be considered a success: the reality test.

Real-World problems

We started to get bug reports in production:

— customer: “it does not work. Products are not in the order specified in the category”.
— developer: Head scratching. “Documents have the right information”. More scratching. “It works in staging, you saw it with your own eyes, look”
— “not in production, fix it”
— “what is what you are doing exactly?”
— “I change category A, then category B, then…”
— Vicious scratching. “let me look into it… once more”

And indeed, as usual, the customer was right. When changing more than one category at a time, the order of products was less than predictable.

Why? Oh why?

After asking around, a colleague pointed me to the solution: do not use the .OrderBy() extension; instead, use .AddOrder(), which allows specifying the type of the field we are sorting on.
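The fixed sorting might look like this sketch (same caveat: index and field names are illustrative):

```csharp
// .AddOrder() takes the field type, so Raven sorts numerically
// instead of lexicographically
var products = session.Advanced.LuceneQuery<Product>("Products/ByCategory")
	.WhereEquals("CategoryIds", categoryId)
	.AddOrder("Order_" + categoryId, false, typeof(int))
	.ToList();
```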

And the customer has not complained since.


on Thursday, 8 January 2015

With a new year, comes a new release of NMoneys.


Amendment number 159 of ISO 4217 has been applied. Amongst the changes are:

  • LTL has been deprecated in favor of EUR
  • Changes to CVE information


Issue #28 has been fixed and now, one has full control over what Money.Parse() does. It is still a pretty weak method (and that is hardly fixable, but ideas and contributions are welcome) and its usage is still not recommended.


Use your favorite package manager (or NuGet :P) to reference it:

If you get your kicks from downloading random zip files and manually referencing assemblies, follow the badge:

Dynamic Documents

on Saturday, 22 November 2014
More things coming out from my recent session about MongoDB.

Documents and Types

MongoDB is a document database. Such documents are stored as JSON (well, kind of). JSON has a very simple, limited type system.
.NET is an object-oriented environment that promotes types, and its type system is pretty rich.
Are MongoDB’s documents and .NET Framework types doomed to miscommunicate? They would be, if not for the serialization bridge the driver provides.

Types Everywhere

Since .NET is type-happy, the majority of the time we are working in our happy strongly-typed world.
Believe it or not, there are developers out there that are very happy (and productive) in a less strongly-typed world.
I will measure my words carefully, trying to prevent them from being understood as heresy:
There are certain types of applications (or parts of all applications) where strong typing brings little benefit.
Wow. There, I said it.
Imagine this scenario: data is carefully crafted, validated and taken care of at the time of writing it to the persistence storage. Whenever it needs to be presented to the user, there is little (if any) manipulation of it, because those details were taken care of beforehand, at a time when computing cost matters less, as write operations tend to be fewer than read operations. For what it’s worth, we could be storing textual representations of the data and no one would care if that was the case.
In those scenarios we do not need types. The structured data only travels through the wire to be presented on a screen.

Type-wannabe: the Document

The serialization part of the MongoDB driver for C# offers a straightforward type to solve the problem: BsonDocument. It is a nicely designed and useful type that allows reading information from MongoDB without much fuss about the underlying type.
The “downside”? C# does not help in keeping the syntax clean.
Let’s take this simple document:

And this is how to access when I started snowboarding:
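For illustration (document contents and field names are made up; the original post’s exact snippets were lost with its images):

```csharp
using System;
using MongoDB.Bson;

// a simple document
var me = new BsonDocument
{
	{ "name", "a keen snowboarder" },
	{ "sports", new BsonDocument
		{
			{ "snowboarding", new BsonDocument { { "since", 2004 } } }
		}
	}
};

// a strongly-typed model would read:  me.Sports.Snowboarding.Since
// with BsonDocument, indexers (plus a cast) are needed instead:
int since = me["sports"]["snowboarding"]["since"].AsInt32;
Console.WriteLine(since);
```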

From 33 to 42 characters. It does not look like much, but it’s a whopping 27% increase in typing.
I do not know about you, but I would gladly accept a 27% increase in my paycheck, thank you very much.

Lipstick on a Type

In .NET 4, dynamics were introduced as a way to marry two different worlds, strongly and loosely typed, with a more palatable syntax. I suspect it was just a trick to prevent developers doing Interop from quitting en masse.
That sounds almost useful and applicable to our small problem at hand.
Unfortunately, the driver still lives in a pre-.NET 4 era, so it cannot provide out-of-the-box support for dynamics. There are some attempts out there, but I was not happy with taking a dependency on another (arguably more capable) JSON parsing library (JSON.Net) just to do that.
Since it was for educational purposes, one is allowed to simply waste some scarce free time in search of a solution that no one cares about :)

Dynamic document

It turns out it is really easy to provide support for accessing instances of BsonDocument as dynamic objects, benefiting from the usual dot notation, thanks to DynamicObject.
By simply wrapping an instance of BsonDocument inside an inheritor of DynamicObject and overriding .TryGetMember() to wrap nested documents, we are able to happily use dot notation (with a slight overhead):
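A minimal sketch of such a wrapper (the class and extension-method names are made up; BsonTypeMapper converts leaf values to plain .NET objects):

```csharp
using System.Dynamic;
using MongoDB.Bson;

// wrap a BsonDocument and resolve member access dynamically,
// wrapping nested documents so that dot notation keeps working
public class DynamicBsonDocument : DynamicObject
{
	private readonly BsonDocument _document;

	public DynamicBsonDocument(BsonDocument document)
	{
		_document = document;
	}

	public override bool TryGetMember(GetMemberBinder binder, out object result)
	{
		BsonValue value;
		if (!_document.TryGetValue(binder.Name, out value))
		{
			result = null;
			return false;
		}
		// nested documents are wrapped again; leaves become plain .NET values
		result = value.IsBsonDocument
			? (object)new DynamicBsonDocument(value.AsBsonDocument)
			: BsonTypeMapper.MapToDotNetValue(value);
		return true;
	}
}

public static class BsonDocumentExtensions
{
	public static dynamic AsDynamic(this BsonDocument document)
	{
		return new DynamicBsonDocument(document);
	}
}
```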

To use it, just invoke .AsDynamic() on an instance of BsonDocument:
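For example (document shape and field names are illustrative):

```csharp
// 'document' is any BsonDocument instance
dynamic me = document.AsDynamic();
var since = me.sports.snowboarding.since; // dot notation is back
```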

With this minimal baggage, one is able to, for example, write Razor views that display data coming from an instance of such a DynamicBsonDocument.
As for getting dynamic objects into BsonDocument instances… Well, that is a whole different story. One that I am not ready or willing to tell… for the moment ;)

Easier projections in MongoDB

I have already written about the talk I gave in the local user group about MongoDB.
I always try to push the limits of what I want to show by adding a couple of more advanced (yet useful) scenarios. The one I want to describe here is document projections using the official .NET driver.

What is a projection anyway?

The term refers to a result of a query that is a subset of the source of information.
In relational databases, it is a subset of the complete list of columns being queried.
For MongoDB, it is a subset of the fields returned for all matched documents.
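With the legacy 1.x C# driver, a projection is specified with SetFields() on the cursor; a sketch (collection, field names and connection details are made up):

```csharp
using System;
using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Driver.Builders;

class ProjectionSample
{
	static void Main()
	{
		var client = new MongoClient("mongodb://localhost");
		MongoDatabase database = client.GetServer().GetDatabase("test");
		MongoCollection<BsonDocument> people = database.GetCollection<BsonDocument>("people");

		// return only 'name' and 'sports' for each matched document
		MongoCursor<BsonDocument> projected = people
			.Find(Query.EQ("country", "se"))
			.SetFields(Fields.Include("name", "sports").Exclude("_id"));

		foreach (BsonDocument doc in projected)
		{
			Console.WriteLine(doc);
		}
	}
}
```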