Migrating libraries to .NET Core. Post-mortem 2


I have already gone through some of the choices I considered when porting my open source libraries to support the .NET Core platform.

Let's dive into details of each of them, as each has its own peculiarities.

NMoneys, Testing.Commons

For both libraries there were a bunch of unsupported features in .NET Core (netcore), mainly related to serialization, but also some touching globalization and XML handling.

For the main libraries (NMoneys, NMoneys.Exchange, Testing.Commons and Testing.Commons.NUnit), .NET Standard 1.3 (netstandard) projects were created, using the new .csproj format (not the old and more likeable project.json), in the same folder that contains the .NET Framework (net) project. Since all files residing in the same folder as the .csproj and its subfolders are part of the project by default… Boom! Done, next... Weeeeell, not really.


Caveat one:
If you, like me, are wary of automatic package restore for .NET Framework projects, you are going to confuse the hell out of the tooling if you have the two projects in the same folder. To avoid confusion, change your classic net .csproj file to include this piece of XML: <ResolveNuGetPackages>false</ResolveNuGetPackages>
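For context, that property goes inside a PropertyGroup of the classic project file; a minimal sketch (the surrounding elements are standard MSBuild, not copied from the actual projects):

```xml
<!-- Classic .NET Framework .csproj (sketch): opt out of automatic NuGet package restore -->
<PropertyGroup>
  <ResolveNuGetPackages>false</ResolveNuGetPackages>
</PropertyGroup>
```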

With the netstandard project in place, there is code that will just not compile when targeting netstandard.
That net-specific code will be moved to *.net.cs files (fortunately, NMoneys classes were already spread out in different files per feature, making the partition much easier).
For instance, ICloneable implementation lives in Money.Cloning.net.cs; and Testing.Commons binary roundtrip serialization lives in BinaryRoundtripSerializer.net.cs.


Once net-only code is segregated, such files can be easily excluded from the netstandard project by adding this piece of XML to the new .csproj file:

	<Compile Remove="**\*.net.cs" />

That way, all unsupported code will not be included in the netstandard project and, thus, not compiled. Mission accomplished with zero conditional compilation directives.
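In context, that Compile item sits inside an ItemGroup of the netstandard project file; a minimal sketch:

```xml
<!-- netstandard .csproj (sketch): exclude net-only source files from compilation -->
<ItemGroup>
  <Compile Remove="**\*.net.cs" />
</ItemGroup>
```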

With the non-portable code solved, next in line are APIs that are different in netstandard from the ones in net. For instance, there is no CultureInfo cache, so instances are “newed up” (or else a third-party package needs to be installed), and Stream does not have a .Close() method.
For those cases, we have to create adapters that can be consumed by both net and netstandard projects. Those adapters are placed in *.polyfill.cs files and use compilation directives for each target. That is the only place where I allow myself the visual noise of conditional compilation directives.
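As an illustration of the shape of such an adapter (the class and file names here are hypothetical, not the actual NMoneys code), a CultureInfo polyfill could look like this:

```csharp
// CultureInfoPolyfill.polyfill.cs (hypothetical): one API surface for both targets
using System.Globalization;

internal static class CultureInfoPolyfill
{
	public static CultureInfo Get(string name)
	{
#if NETSTANDARD1_3
		// netstandard 1.3 has no cache: instances are "newed up"
		return new CultureInfo(name);
#else
		// net can use the cached, read-only instances
		return CultureInfo.GetCultureInfo(name);
#endif
	}
}
```

Consuming code in both projects calls CultureInfoPolyfill.Get(...) and never sees a conditional directive.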

This dual, in-place project structure works fine for this type of project, but I found out it has some minor downsides that were not obvious at the time of migrating, but exposed themselves when working with the codebase:

  • it is really easy, while navigating code, to end up in a file within the context of the netstandard project. The fact goes unnoticed at first, but shows up when red squiggly lines appear under an API that SHOULD be there, but is not. At least not in the netstandard context you are seeing in the editor
  • it is equally easy to add a file, do some work and then find out something is not compiling because the artifact you just added is not available. That happens when the file gets added to the netstandard project (only), meaning that it will not be compiled as part of the old project, as new files need to be explicitly added there

Minor annoyances, outweighed by the flexibility and power of this way of architecting the projects.


“Production” code would be pretty much ready with these simple steps. But, what about tests? Are they problematic in some way?
Bad news is that they kind of are.
Not code-wise, though. Following the aforementioned techniques to “duplicate” the test project and isolate net-specific code, the projects compile without major problems. But… what good is a test that cannot be run? And hence the challenge.

Let’s put aside for a moment running the tests as part of the usual workflow while making changes to the codebase. Those can be run using tools inside your IDE. But when it comes to running tests as part of the build (local or CI), tests are often run using a console runner. Both NMoneys and Testing.Commons tests are authored and run with NUnit.
Unfortunately, the console runner only supports the classic .NET Framework.
Would it be acceptable to not run tests on the netstandard code and hope they work because, hey, “if it compiles, it must work” ®? I don’t think so either.

Woefully, at the time of the migration, NUnit did not have a “dotnet test”-compatible runner for netcore environments. At that time, the “solution” was to turn your test project into a netcoreapp console project and use NUnitLite to run your tests.
It works relatively well, but I guess it is kind of a temporary hack until we can run the tests using the dotnet CLI.
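The NUnitLite workaround boils down to giving the test project an entry point that hands the command-line arguments over to NUnitLite; a minimal sketch:

```csharp
// Program.cs in the netcoreapp test project: self-hosted NUnitLite runner
using NUnitLite;

public static class Program
{
	// AutoRun discovers and executes the NUnit tests in this assembly,
	// returning the number of failed tests as the process exit code
	public static int Main(string[] args) => new AutoRun().Execute(args);
}
```

With that in place, running the compiled console project runs the test suite.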


SharpRomans

For the less loved SharpRomans a different approach was taken, which I already wrote about.

Basically, since the library is simpler and was already a PCL project, a new netstandard .csproj project substituted the old one and I called it a day.

I also took advantage of the fact that some APIs were not available in netstandard 1.1 to remove them, as they did not make a lot of sense in the first place (mostly the IConvertible implementation, when better-named methods already exist).


As I mentioned in the post about the update, I chose to migrate my tests to xUnit.net, which already supports a dotnet CLI runner, so the CLI is used for everything build-related: compiling, running tests and creating/publishing packages.

Even though the “production” library only supports netstandard1.1, I went on to multi-target the test project, to verify that the code really does work with “classic” net46, as well as with netcore runtimes:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>net46;netcoreapp1.1</TargetFrameworks>
  </PropertyGroup>
  <ItemGroup>
    <ProjectReference Include="..\SharpRomans\SharpRomans.csproj" />
  </ItemGroup>
</Project>


Vertica.Utilities

This is the "new" project released (under the identity of my company) which I recently wrote about.

The project was originally a net45 project which has been migrated to a new .csproj supporting netstandard1.5. However, support for net45 has been maintained via multi-targeting because, API-wise, there was no reason to leave net45 behind.
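Multi-targeting amounts to a one-line property in the project file; a sketch (the element is standard MSBuild, the exact project file contents are assumed):

```xml
<!-- .csproj (sketch): build both net45 and netstandard1.5 assemblies from one project -->
<PropertyGroup>
  <TargetFrameworks>net45;netstandard1.5</TargetFrameworks>
</PropertyGroup>
```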


For testing, I kept NUnit (after the mild disappointment of the xUnit.net experience). Since some time had passed between the NMoneys migration and the release of Vertica.Utilities, a native dotnet CLI runner had been released.
I can say I nearly regret having used it, because it is so damn slow. I understand it is a very beta version and I am sure it will get better, but still.
Were it not that such slowness is only suffered by the CI build server and the occasional local build, I would have resorted to the NUnitLite workaround again.

Worth it?

Well, times are changing and it would be weird not to jump on the wagon (late enough, mind you) to try what the future tastes like. It was an interesting, frustrating learning experience.
I explored different strategies and I am happy with the result for each one of them. That is not to say I would not change my mind in the future.

But it's almost hypnotic to watch tests run (and fail!) on an Ubuntu box…


… or a Mac.