<p><strong>Yet another layer of indirection</strong>: On technology and geekiness... Mostly .Net development. By Daniel González.</p>

<h1>Dotnetifying Testing.Commons</h1>
<p><em>2023-09-12</em></p>
<p>It should be no secret that I have been very absent from both my OSS libraries and my writing.<br><br>
Maybe that will get better, maybe not. But one thing is for sure: the .NET scene has moved and I have not moved along with it. Time to slowly amend that.</p>
<a name='more'></a>
<h2 id="revisiting-the-testing.commons-family">Revisiting the <em>Testing.Commons</em> family</h2>
<p>The <em>Testing.Commons</em> family of projects (<em>Commons</em>, <em>NUnit</em> and <em>ServiceStack</em>) was written (distilled, more like) quite a few years ago as a way to reuse some of the techniques that my teams and I had used while testing the code we were writing.</p>
<p>It came in three flavors:</p>
<ul>
<li>
<p><em>Commons</em>: a collection of helpers that enabled writing slightly better tests in whichever testing framework was being used. It included extensions and helpers to create dates, GUIDs and strings with a more expressive API (a flavor of which is sketched right after this list), a non-committal way of writing object builders, some scaffolding to deal with testing logic involving localization, configuration and serialization, as well as a framework-independent way of defining testing constraints.</p>
</li>
<li>
<p><em>NUnit</em>: enhancing <a href="https://docs.nunit.org/articles/nunit/intro.html">NUnit</a> with <a href="https://docs.nunit.org/articles/nunit/extending-nunit/Custom-Constraints.html">Custom Constraints</a> that reflected my way of using (and abusing) NUnit’s <a href="https://docs.nunit.org/articles/nunit/writing-tests/assertions/assertion-models/constraint.html">Constraint Model</a> to make tests more expressive and robust: dates and collection assertions, composability of assertions, events and serialization.</p>
</li>
<li>
<p><em>ServiceStack</em>: at the time, I wrote a fair amount of services using that framework, and being able to write integration tests against them was quite a big thing, so I packed up some of the plumbing to make it easier.</p>
</li>
</ul>
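<p>To give a flavor of that expressiveness, here is the kind of helper the <em>Commons</em> flavor revolved around. A minimal sketch: the names below are made up for illustration and are not necessarily the actual <em>Testing.Commons</em> API.</p>
<pre class="line-numbers"><code>using System;

// hypothetical date-building extension in the expressive style described above
public static class DateBuilding
{
    // 11.March(2019) reads closer to prose than new DateTime(2019, 3, 11)
    public static DateTime March(this int day, int year)
    {
        return new DateTime(year, 3, day);
    }
}

public class OrderTests
{
    public void Expressive_test_data()
    {
        DateTime deadline = 11.March(2019);
        // ...assert against 'deadline' with whichever testing framework is in use
    }
}</code></pre>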
<p>They were (past tense) <a href="https://learn.microsoft.com/en-us/dotnet/standard/glossary#net-framework">.NET Framework</a> projects, even though I kind of got a glimpse of the future by dual-targeting <a href="https://learn.microsoft.com/en-us/dotnet/standard/net-standard?tabs=net-standard-1-0">.NET Standard</a>.</p>
<h2 id="and-then-it-happened">And then IT happened</h2>
<p>And by <strong>it</strong> I mean:</p>
<ul>
<li>me not writing as much .NET code (or any at all)</li>
<li><em>.NET Framework</em> stopping being a thing, pretty much like <em>.NET Standard</em></li>
<li><a href="https://learn.microsoft.com/en-us/dotnet/standard/glossary#net-core">.NET Core</a> or <em>netcore</em> or <em>dotnet</em> or <a href="https://learn.microsoft.com/en-us/dotnet/standard/glossary#net-5-and-later-versions">.NET</a> or… taking over</li>
<li>multi-platform for the masses: living in a (not completely) Windows-less world in which “sane” developers do not need to install Visual Studio to write C# and few remember <a href="https://www.iis.net/">IIS</a></li>
<li>me being older by the second</li>
</ul>
<p>Oh, and the natural decay process that software projects suffer when more of life takes their place, the user base is slim and the contributor base is slimmer.</p>
<h2 id="and-now">And now?</h2>
<p><em>Testing.Commons.ServiceStack</em> does not make sense anymore (for me, anyway) and we can move on by seriously deprecating it.</p>
<p>The rest of the projects are no longer .NET Framework friendly. They target <em>net6.0</em> <strong>only</strong>.</p>
<p>Quite a few features have been deprecated. Refer to the project <a href="https://github.com/dgg/testing-commons/wiki">Wiki</a> for details.</p>
<p>The whole build process has been revamped and we see the CLI approach shine for automation.</p>
<p>C# muscle-memory has been regained and a crap-load of knowledge gained to catch up with the times.</p>
<h2 id="the-result">The result?</h2>
<p>Major versions (with loads of breaking changes) for <a href="https://www.nuget.org/packages/Testing.Commons/3.0.0">Testing.Commons</a> and <a href="https://www.nuget.org/packages/Testing.Commons.NUnit/5.0.0">Testing.Commons.NUnit</a>.</p>
<p><img src="https://img.shields.io/badge/Testing.Commons-v3.0.0-blue?logo=nuget&link=https%3A%2F%2Fwww.nuget.org%2Fpackages%2FTesting.Commons%2F3.0.0" alt="Static Badge"></p>
<p><img src="https://img.shields.io/badge/Testing.Commons.NUnit-v5.0.0-blue?logo=nuget&link=https%3A%2F%2Fwww.nuget.org%2Fpackages%2FTesting.Commons.NUnit%2F5.0.0" alt="Static Badge"></p>
<p>I will be writing about tooling and process in other posts.</p>
<p>Until then, keep up the joy.</p>
<h1>Docker: "Saving Lives" for occasional instructors</h1>
<p><em>2019-08-26</em></p>
<p>Some months ago, I was <a href="https://github.com/dgg/intro-to-databases">introducing different types of databases</a> to an internal audience.<br/>
Such a task would have meant tedious environment preparation.<br/>
<a href="https://www.docker.com/">Docker</a>, however, has (also) changed this scenario dramatically.</p>
<a name='more'></a>
<h2 id="toc_1">One's Computer as a Host</h2>
<p>That is what I would probably have done: install the different database servers on my machine and have them running as I went through the different paradigms.</p>
<p>Probably, some installation would have left some rubbish behind after uninstalling the server.<br/>
But my computer gets restored every so often, so this option would not have been the biggest of deals.</p>
<p>Likely, some eager server would have tried to take over all my RAM, and performance would have been a little more sluggish than it could have been.<br/>
But, again, nothing one can't live with and blame on hardware.</p>
<p>More annoyingly, I would not have been able to change startup toggles, and most definitely I would have bumped into a server that does not treat Windows (the OS of my work computer) as a first-class citizen.<br/>
Which leads to...</p>
<h2 id="toc_2">The Virtual Machine Dance</h2>
<p>Some of the servers I showcased are *nix dwellers, so I could have provisioned some Virtual Machines using <a href="https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/about/">Hyper-V</a> and manually installed the services needed and started them on demand as I changed subjects.</p>
<p>I could even have automated the process with some <a href="https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/try-hyper-v-powershell">Powershell</a> scripting, or streamlined provisioning further using (the somewhat on-its-way-out) <a href="https://www.vagrantup.com/">Vagrant</a>.</p>
<p>But VMs, though convenient, are still somewhat heavyweight and slow to kickstart (we are talking tens of seconds, even minutes).</p>
<h2 id="toc_3">Docker as a game-changer</h2>
<p>Docker was totally a game changer for my requirements.</p>
<p>I could easily run some simple <a href="https://docs.docker.com/engine/reference/commandline/cli/">docker commands</a> to provision each database server and have it running in a matter of seconds.</p>
<p>For example:</p>
<pre class="line-numbers"><code>docker run --name some-postgres -e POSTGRES_PASSWORD=1234 -p 5432:5432 -d postgres:10.5-alpine</code></pre>
<p>or</p>
<pre class="line-numbers"><code>docker run --name some-mongo -d -p 27017:27017 mongo:4.0.1-xenial --smallfiles</code></pre>
<p>No installers (beyond a working Docker installation in Windows 10), no leftovers, blazing fast to start and pretty lightweight.</p>
<p>When the course finished, I could easily prune the system to remove containers, images and such, freeing the few resources those small images took.<br/>
A true instructor's dream.</p>
<h1>Anatomy of a Breaking Change</h1>
<p><em>2018-01-04</em></p>
<p>I <a href="https://dgondotnet.blogspot.dk/2017/12/a-new-year-new-version-of-nmoneys.html" target="_blank">mentioned</a> that the latest release of <em>NMoneys</em> included an "unusual" breaking change.</p>
<p>Let's do some forensics on the internals of such a breaking change and how easy it is to solve.</p> <a name='more'></a> <h2>Consuming a library</h2>
<p>In order to remove noise, I will exemplify the issue with types other than <em>NMoneys</em>' own: minimal types focused on revealing the peculiar breaking change.</p>
<p>Let's have a library that exposes a method defined in a type inside the main namespace, for the sake of discoverability.<br>
The method takes an argument and returns a type. Those types, however, live in a child namespace because they are closely related and it just seems tidy not to have them in the main namespace, since they are focused on a specific scenario.</p>
<p>Such method would look like this in its initial version:</p>
<script src="https://gist.github.com/dgg/e00e1bd8f501685ac2de64f10c144a84.js?file=Library.v1.cs"></script>
<p>The layout of the files (that mimics the logical namespace layout) looks like this:</p>
<script src="https://gist.github.com/dgg/e00e1bd8f501685ac2de64f10c144a84.js?file=Library.v1.layout"></script>
<p>A client that consumes the library by referencing the library assembly (good old "add reference" or "nuget reference", it does not matter) and calls such a method would look like this:</p>
<script src="https://gist.github.com/dgg/e00e1bd8f501685ac2de64f10c144a84.js?file=Client.cs"></script>
<p>Note how the client needs to use both namespaces (root and child), since arguments and return values are defined in the child namespace whereas the method "lives" in the root namespace. But the program works:</p>
<p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_lFlSmW7GEhj91VQHAIMw3gBlWjn2zFoefVZl_hmRdfmVl2zdXgqY1I4KAeMuGmiQiXhqfRsgG3IFtq8G-kInewOJdTdoA6jDm0KpKcT157gqv8GUEzX77JoyXxGjtriorwchXxjQaWaW/s1600-h/v1%255B4%255D"><img width="90" height="44" title="v1" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="v1" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNMtaB6SncG0QLam3chaoF6gt88ky_Ix0JQWJQVpkdt2ZEryMCnW9OWtx2hskMIKHbEdV1ay0a0vITl8pLSFTzU8YN371uKDuk3BuF2NTQvCViMtKyWkA4ouWsmEPfjcgwJwH_KSiaV65O/?imgmax=800" border="0"></a></p>
<h2>A compatible change</h2>
<p>Let's imagine the library releases a new version, let's say 1.1 for the sake of the argument and to follow the always sensible <a href="https://semver.org/" target="_blank">semantic versioning</a>.</p>
<script src="https://gist.github.com/dgg/e00e1bd8f501685ac2de64f10c144a84.js?file=Library.v1_1.cs"></script>
<p>If we were to drop the new version of the library assembly in a location where it could be loaded by the client and run it, we would see that it just works without further fiddling:</p>
<p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8lcmOHOnwfRP7GIyt4nptZPNpkqn9riL0J0ruRbcOFUWwKkrFoaOx5Uhnc0bB92PnbzCPkvuXbFXxJjPlrPCzEHm3PZBTXf8HlIxr07gnuTX16IDTcLs5vQbS30qSk46EDeOO1h0f1ufz/s1600-h/v1_1%255B3%255D"><img width="90" height="44" title="v1_1" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="v1_1" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqRVdOxI3gSVyVUHWLlvoCzbULzh3ZL9cQumnhJFJYtWtB8BT8etXDh0qlXZXU1ywbsyPsTxPi77ChYtiL6JNz2rezmKmZEUER9bNtEtH4SlEBsZBupgLH1XE40uo8AQLPATwkKVG5kdpD/?imgmax=800" border="0"></a></p>
<h2>A not-so-compatible change</h2>
<p>Examining the library, we can see that the type uses the child namespace and the child namespace uses information defined in the root namespace.<br>
This was pointed out by <a href="https://www.ndepend.com/" target="_blank">NDepend</a> as something to avoid, since it resembles a circular dependency; only that, since we only have one assembly, the compiler is kept happy.
One way to break such a dependency would be transforming the method into an extension method and placing that extension method in the child namespace, alongside its arguments and return type.<br>
Doing so, the root namespace remains ignorant of the child namespace and the cycle does not exist anymore.</p>
<script src="https://gist.github.com/dgg/e00e1bd8f501685ac2de64f10c144a84.js?file=Library.v2.cs"></script>
<script src="https://gist.github.com/dgg/e00e1bd8f501685ac2de64f10c144a84.js?file=Library.v2.layout"></script>
<h3>Breaking badly</h3>
<p>If we were to do the same as before (dropping the assembly where the client could load it) and run the client again, we would be greeted with an exception.</p>
<p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtNM2GIDpKTqvC1Num2MO7qpS8_3Izsu_RYnaMzoFo8ki9YNTi4ucd1IMFplqBKW9uA0tSQqLHmZQd_X9vlP1-q-KQLWLrHMj5IZnTOvQZvtnCl6IpigswQwpydzIvimr_hjuVqIjWme6L/s1600-h/v2_noRecompile%255B4%255D"><img width="403" height="66" title="v2_noRecompile" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="v2_noRecompile" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXUfyDtgGa8R0BlSKF3wtnH_mQ_DN8XQIoQsHzNXBHNY3sh6pRgo-srym9XL7e6mFJ6HGLX4O6729iadOfdTbnTRNs1fzBu0JiimJ1d0MO8KVkIoX7leIsjH1qmAeNu6TaQljOo6YrFd28/?imgmax=800" border="0"></a></p>
<p>The explanation is obvious: the method is gone from the type. Extension methods are just syntactic sorcery to make them appear like they are defined in a type, but they are not.</p>
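<p>For those who prefer code to prose, here is a minimal sketch of the mechanism with hypothetical names (the gists above contain the post's actual code):</p>
<pre class="line-numbers"><code>namespace Lib
{
    public class Greeter
    {
        // v1 declared the method here:
        //   public string Greet(string name) { return "Hello, " + name; }
        // so a v1 client's IL contains: callvirt Lib.Greeter::Greet
    }
}

namespace Lib.Extensions
{
    public static class GreeterExtensions
    {
        // v2 moved it out as an extension method. The client *source* still
        // reads greeter.Greet("a"), but it now compiles to a *static* call:
        //   call Lib.Extensions.GreeterExtensions::Greet
        // Run a client compiled against v1 on top of this assembly and the
        // old callvirt finds no instance method: MissingMethodException.
        public static string Greet(this Lib.Greeter greeter, string name)
        {
            return "Hello, " + name;
        }
    }
}</code></pre>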
<h3>Solving it</h3>
<p>To solve the issue we just need to recompile the client against the new version of the library.</p><p>Not a single change needed.</p>
<h2>How bad is it?</h2>
<p>As one can see, the client is only broken when the dependency is replaced <strong>without</strong> recompiling the client.</p>
<p>I might have been doing it all wrong all these years, but I have never ever <em>x-copied</em> a dependency without deploying a recompiled version of the client. Not even for "compatible" releases.</p>
<p>So if you are like me and always recompile before deploying, this kind of breaking change is not so pernicious.</p>

<h1>A New Year, A New Version of NMoneys</h1>
<p><em>2017-12-31</em></p>
<p>It is not a coincidence that there is a new release of <em>NMoneys</em> around the New Year.</p>
<p>Countries usually take advantage of this hard date to perform drastic changes in their monetary systems, and so it is up to NMoneys to catch up with reality.</p>
<p>This year, however, there is so much more...</p>
<a name='more'></a>
<h2 id="toc_1">The usual End-Of-Year changes</h2>
<p>This year's changes in currencies came from amendments 164: introducing <code>STN</code> instead of <code>STD</code> for the people of Sao Tome and Principe; and 165: introducing <code>MRU</code> instead of <code>MRO</code> for the people of Mauritania.</p>
<h2 id="toc_2">The unusual breaking change</h2>
<p>I mentioned in my <a href="https://dgondotnet.blogspot.dk/2017/12/a-heavy-weight-champion.html">last post</a> that I ran a copy of <em>NDepend</em> on some of my codebases.<br/>
<em>NMoneys</em> is one of those codebases.</p>
<p>It came with some interesting suggestions, which I implemented, but the vast majority of them were internal.<br/>
There is one, however, that is not.</p>
<p>The suggestion came from the fact that there was a "mutual dependency" between members of the namespace <code>NMoneys</code> and the members of the namespace <code>NMoneys.Allocations</code>. That dependency comes from the fact that the <code>.Allocate()</code> methods (declared in <code>NMoneys</code>) take a list of arguments declared in <code>NMoneys.Allocations</code> and those types make use of members of the <code>NMoneys</code> namespace.</p>
<p>This can be seen as a circular dependency. It would, in fact, become one if the namespaces turned into assemblies (definitely no plans on making that change). And I took the decision to break such a dependency at the cost of a breaking change.</p>
<p>The steps to unbreak are simple: <strong>recompile</strong> your code against the new version if you use any of the <code>.Allocate()</code> methods and you are good to go. I plan a new post going deeper into why that is necessary.</p>
<h3 id="toc_3">The very minor</h3>
<p>There is a new extension method over <code>CurrencyIsoCode</code> enumerations to efficiently compare against another enumeration.</p>
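<p>For illustration, this is the general shape of such a helper; a sketch only, the signature actually shipped in <em>NMoneys</em> may differ:</p>
<pre class="line-numbers"><code>using NMoneys;

public static class CurrencyIsoCodeComparison
{
    // comparing enums through the inherited, non-generic Equals(object)
    // boxes both values; a typed extension compiles down to a plain
    // integer comparison instead
    public static bool Is(this CurrencyIsoCode code, CurrencyIsoCode other)
    {
        return code == other; // no boxing, unlike code.Equals((object)other)
    }
}</code></pre>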
<h2 id="toc_4">Impactful internal housekeeping</h2>
<p>There is a project within <em>NMoneys</em> that is used internally: <code>NMoneys.Tools</code>.</p>
<p>This project provides a command line interface to inspect and compare NMoneys's implementation against two of the "authoritative" sources: the <em>iso.org</em> website and the <em>System.Globalization</em> namespace.</p>
<p>The code for the tool kind of worked but did not receive a lot of love and it was becoming an annoyance. So I decided to give it a facelift.</p>
<p>The consequence of the facelift is that, from now on, it will be easier to analyse the changes to be made to NMoneys to improve its accuracy.<br/>
As a matter of fact, it has made reconciling a lot of information with <em>System.Globalization</em> much easier, since they have done a great job supporting many more cultures and currencies.</p>
<p>The complete list is in the <a href="https://github.com/dgg/nmoneys/wiki/Changelog#6000">changelog</a>.</p>
<h2 id="toc_5">Words are cheap</h2>
<p>And they usually don't compile.</p>
<p>So head up to <a href="https://www.nuget.org/packages/NMoneys/6.0.0">Nuget</a> to fetch the latest version and enjoy the upgrade!</p>
<h1>A heavy-weight champion</h1>
<p><em>2017-12-15</em></p>
<p>Some weeks (or rather months) ago, I received a free license for <a href="https://www.ndepend.com/" target="_blank">NDepend</a> and I was kindly asked to use it and write about it.<br/>
Well, I have used it to some extent so I can write something about it.</p>
<a name='more'></a>
<h2>Second Impressions</h2>
<p>I have known the tool since it was a Beta and have been a fan of the original author (<a href="https://blog.ndepend.com/author/psmacchia/" target="_blank">Patrick Smacchia</a>) since the old <a href="http://codebetter.com/patricksmacchia/" target="_blank">CodeBetter</a> days.</p>
<p>I even evaluated the tool back when it was a beta and I actually had the time to toy around with beta products. My first impression was feeling overwhelmed. The tool had a lot of features and measured a lot of things that I did not actually understand: cyclomatic complexity, afferent coupling,...<br/>
Not that I <em>fully</em> understand them now, but the tool (and his blog entries) served me as a gateway to some advanced design concepts that have come in handy throughout the years.</p>
<p>Fast-forward some years and the tool is definitely waaaay more polished from the UI perspective, but it still offers waaay too much for me to easily grasp. But that will not deter me from trying again.</p>
<h3>The UI</h3>
<p>The tool comes in several flavors:</p>
<ul>
<li>a stand-alone application that is zip-deployable</li>
<li>a console application to be run when automation is required</li>
<li>a <em>Visual Studio Add-On</em> for getting feedback when authoring code and when tight integration is wanted</li>
</ul>
<p>I have only been able to try the stand-alone application, but I guess the VS version should be as good-looking (although a bit too <em>professional</em> if there is such a thing).</p>
<p>I had pointed out to Patrick that, when I first tried the tool on my old coding laptop, I had issues with the pop-ups getting in the way; but now that I have spent more time with it, I have not had much trouble (and I am not fond of overlays anyway, so they have been disabled).</p>
<p>Word of advice: have <strong>plenty</strong> of screen real estate for the tool.<br/>
I usually use my tools in split screen on my 27" monitor, but with <em>NDepend</em> I had to make an exception to be able to visualize the huge amounts of information it provides.</p>
<h2>Quickstarting</h2>
<p>The documentation is pretty good at keeping you up and running and getting your first results.</p>
<p>However, as happens with most (if not all) code analysis tools, results on a medium-sized project are totally overwhelming. Be prepared to be greeted with a lot of warning lights and even red flags. Do not freak out, although I myself have had a hard time not to.</p>
<p>Oh, and if you are committed to the tool and ready to check in the <code>.ndproj</code>, get ready to push almost half-a-Megabyte of XML. And that is without any customization. 😮</p>
<h2>Being ruled</h2>
<p>To be completely honest, I have only scratched the surface of what the tool can offer by running the report on two of my OSS projects: <a href="https://github.com/dgg/SharpRomans/" target="_blank">SharpRomans</a> and <a href="https://github.com/dgg/nmoneys" target="_blank">NMoneys</a> and focusing on the results from the out-of-the-box rules.</p>
<p><em>NDepend</em> has an edge over other static analysis tools that I know of, in the sense that all rules can be tweaked and customized by editing queries in <em>CQLinq</em> (a custom query language focused on analyzing code) with syntax highlighting and auto-completion.<br/>
Very, very cool, but I have not had the time (nor the energy) to get to that point.</p>
<p>Code analysis tools tend to be picky: warning you about a minor thing that might lead you to solve a major problem you were not aware of or, most of the time, one that means nothing and is easily dismissable.<br/>
The challenge is committing enough time to ponder whether the hint makes any sense in the first place, then whether it applies to your scenario, and then whether it even makes sense to do something about it. This is not anything against <em>NDepend</em> specifically, but applies to all static analysis tools I know of: a <strong>skilled human</strong> initially needs to intervene on the feedback.</p>
<h3>Some wins</h3>
<p>The tool does a good job of suggesting easy fixes and I strongly believe that after acting upon some of the warnings, the design of my projects is slightly better.</p>
<p>I fixed potential issues with member visibility and immutability, turned reference types into value types and made some other "minor" fixes. And the tool does a wonderful job at telling you the exact point to apply the fix, the cause, and providing meaningful potential solutions.</p>
<h3>Some dead ends</h3>
<p>Of course, not everything is rainbows, unicorns and free time well invested...</p>
<ul>
<li>There was a case in which there was a warning that method overrides should call <code>base.</code>, but I could not find an instance of such a violation in this code:</li>
</ul>
<script src="https://gist.github.com/dgg/9ade1c7208b1d4fb5b7aeaff3992a03b.js?file=not-calling-base-method.cs"></script>
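<p>For context, this is the kind of code the rule normally flags; a made-up illustration, unrelated to the gist above:</p>
<pre class="line-numbers"><code>public class Base
{
    public virtual void Save() { /* bookkeeping every override should preserve */ }
}

public class Flagged : Base
{
    // base.Save() is never invoked: the rule fires
    public override void Save() { }
}

public class Compliant : Base
{
    public override void Save()
    {
        base.Save(); // base behavior preserved: the rule is satisfied
        // ...extra behavior
    }
}</code></pre>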
<ul>
<li><p>Also, a rule suggested nesting a class into a parent type; but since that class contained extension methods, doing so would lead to compilation errors, because extension methods cannot be declared in nested classes.</p>
</li>
<li><p>A rule suggested turning a class into a struct, but the class had custom logic in the default constructor, which would be close to impossible to achieve if it were a value type.</p>
</li>
<li><p>There were a lot of boxing warnings, but the majority of those boxing/unboxing issues cannot be fixed, since there is no generic API to be used. There were a ton of warnings for calls to <code>string.Format()</code> and the canonical implementation of <code>.Equals()</code> in value types (more on this in the sketch after these bullet points).</p>
</li>
<li><p>This one can be a tough and debatable one (I imagine): a rule suggested not using members marked as <code>Obsolete</code> (great advice), but it failed to grasp that the single usage was wrapped in <code>#pragma</code> directives, meaning that it was OK to ignore the warning from the developer perspective.</p>
</li>
<li><p>Another violation that got me scratching my head was the advice against marker interfaces (interfaces without members), which can be useful. However in my case, it was a "lesser marker" interface because, even though it did not have members on its own, it did aggregate two different interfaces, which, in my opinion, validates its existence.</p>
</li>
</ul>
<script src="https://gist.github.com/dgg/9ade1c7208b1d4fb5b7aeaff3992a03b.js?file=lesser-marker-interface.cs"></script>
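<p>Regarding the boxing warnings on the canonical <code>.Equals()</code> mentioned a few bullets up, this made-up value type sketches both why the warning fires and the partial mitigation available:</p>
<pre class="line-numbers"><code>using System;

public struct Quantity : IEquatable&lt;Quantity&gt;
{
    public decimal Amount { get; }

    public Quantity(decimal amount)
    {
        Amount = amount;
    }

    // typed comparison: no boxing involved
    public bool Equals(Quantity other)
    {
        return Amount == other.Amount;
    }

    // the object-based entry point boxes its argument by design,
    // which is what the analysis keeps warning about
    public override bool Equals(object obj)
    {
        if (obj is Quantity) return Equals((Quantity)obj);
        return false;
    }

    public override int GetHashCode()
    {
        return Amount.GetHashCode();
    }
}</code></pre>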
<p>I am pretty confident the majority of these rules could have been tweaked for my projects but, again, it is neither a task for a first-timer, nor a trivial one.</p>
<p>The end result is: one of my projects is red-flagged, and the things that are pointed out as show-stoppers (quality gate conditions) are, in my opinion, definitely not red flags in this particular case.</p>
<h3>Final words</h3>
<p>Do not let my "dead ends" with the analysis rules fool you. It IS a very good tool, but boy isn't it huge!</p>
<p>I have dipped my pinkies in it by checking the results from the rule execution, but you can easily get waist-deep: dependency graphs, dependency matrices, all sorts of heat-maps, code diffs, trend analysis, execution from the build process,...</p>
<p>A powerful monster it is (wrapped in an effective UI), but definitely not one to be left in the hands of unskilled developers.<br/>
Nor is it a monster to be left alone to roam your coding lands: without human supervision and dedication, it will become more of a trouble than a problem revealer.</p>
<h1>Pissed by a function</h1>
<p><em>2017-10-21</em></p>
<p>Lately, I have been going through a rough coding patch.<br/>Rough as in getting familiar with a huge codebase that has been cooking for a year and a half and that, putting it mildly, could have received a little bit more care along the way.</p>
<p>Luckily for me I am getting familiar with it through one of the best ways one can get familiar with a codebase: writing those "unit" tests that are so well needed.</p>
<a name='more'></a>
<h2 id="toc_0">Test-Last</h2>
<p>It is no news that I am a big proponent of automated tests in code, and I am not afraid to say that I seldom TDD it. I practice it every now and then, but I am more comfortable with test-near: that is, writing the tests right after writing some code.<br/>
This works very well for me: I still get very good feedback on my design while the code is "fresh" and I can jump to and from test and production code easily.</p>
<p>Unfortunately for my case, I was writing tests for functionality written months ago by someone who had little regard for good design, let alone testability.<br/>
Long story short, I am now waist-deep in some ridiculously complex logic, written in such a manner that it prevents me from getting the whole problem in my head at once. My brain is to blame, but that is of no use, because the code still needs to be tested and refactored.</p>
<p>With such a task in hand my usually low threshold of pain gets even lower and the most innocent piece of code triggers deep, dark reactions. In my defence, this little guy deserves the beating.</p>
<h3 id="toc_1">But why?</h3>
<p>Let's dissect this little fucker:</p>
<script src="https://gist.github.com/dgg/378f325ba4bb2513d3f2bb66c138b681.js"></script>
<p>I know what you are thinking: "<em>come on! nagging about 20 lines of code? seriously?</em>"<br/>
I hear you; let me start:</p>
<h4 id="toc_2">It is a (yet another) helper</h4>
<p>Helpers are sort of bad. They place logic and behaviour away from the data and the entity that owns it, breaking encapsulation and cohesiveness and blahblahblah... And, honestly, when 90% of the logic I have been seeing in the last few weeks lives in either a static helper or inside a so-called service, one more makes my temples pulsate and my right eye twitch.</p>
<h4 id="toc_3">It relies on implicit operators</h4>
<p>You have heard me complain about these <a href="https://dgondotnet.blogspot.dk/2009/05/i-hate-magic.html">before</a>, so here we go again.</p>
<p>Check line 18. It is right there, before your eyes. <code>someAttributeDefinition.Order</code> returns an <code>int</code>. <code>someAttributeDefinition?.Order</code>, on the other hand, uses the null-conditional (null-propagating) operator, which means it returns <code>Nullable&lt;int&gt;</code> instead. It can be <code>null</code> when <code>someAttributeDefinition</code> is <code>null</code>. And how much is <em>50</em> plus <code>null</code>? <em>50</em>? <em>0</em>? <code>null</code>? It is actually <code>null</code>, but you either had to learn it by heart (kudos), learn it the hard way (pity) or have to run a snippet somewhere to actually figure out what the value is.</p>
<p>That is not cool. It is magic and we, developers, do not like it.</p>
<h4 id="toc_4">Better learn operator precedence</h4>
<p>Check line 18 again. It is the real gem: we have an implicit conversion to a nullable, an addition operator to perform the sum and a null-coalescing operator.</p>
<p>Better learn their <a href="https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/operators/">precedence</a>, as the author surely does, without providing a hint to us mere mortals: is it <code>50 + (nullable ?? -1)</code> or <code>(50 + nullable) ?? -1</code>? When <em>nullable</em> is <code>null</code>, would it be <code>49</code> or <code>-1</code>? Place your bets and fire up your REPLs.</p>
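<p>Spoilers for the bets, in the shape of a tiny program; this is plain C# semantics, paste it into any REPL:</p>
<pre class="line-numbers"><code>using System;

class NullableMath
{
    static void Main()
    {
        int? order = null;

        int? lifted = 50 + order;         // lifted addition: null (not 50, not 0)
        int paren   = 50 + (order ?? -1); // coalesce first, then add: 49
        int asRead  = (50 + order) ?? -1; // add (yielding null), then coalesce: -1

        // without parentheses, + binds tighter than ??,
        // so this means the same as 'asRead' above
        int bet = 50 + order ?? -1;       // -1

        Console.WriteLine("{0} {1} {2} {3}",
            lifted == null ? "null" : lifted.ToString(),
            paren, asRead, bet); // prints: null 49 -1 -1
    }
}</code></pre>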
<h4 id="toc_5">-1 is the magic number (tune it)</h4>
<p>Line 18 again. Man... So the idea is that some number is returned if <code>someName</code> is known, or some other number otherwise. Wait, what?</p>
<p>A negative number is the magic value to be returned when there is no match at all. I can tell you because I could read the calling code. You, on the other hand, cannot infer shit from the signature of the method.<br>
Do you want to know the perfect type for a number that, in some cases, might not be a number at all, in which case the caller has to act differently? No, it is not a negative number. No, the developer does not have to create a new type. It is <code>Nullable&lt;int&gt;</code>, or <code>int?</code> for the uber-lazy. Shocking. Or one could read the comments...</p>
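<p>A made-up before/after of what that signature change looks like:</p>
<pre class="line-numbers"><code>public static class Before
{
    // -1 is a magic value: nothing in the signature reveals it
    public static int GetOrder(string someName)
    {
        return someName == "known" ? 50 : -1;
    }
}

public static class After
{
    // int? makes "no match" explicit and forces callers to handle it
    public static int? GetOrder(string someName)
    {
        if (someName == "known") return 50;
        return null;
    }
}</code></pre>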
<h4 id="toc_6">Comments instead of tests</h4>
<p>Isn't it ironic that the only comment is also one to be picked on? Funny enough, after I refactored the method to return <code>int?</code> instead of the magic number, a commit that I merged into my developer branch added some more cases to the function.<br/>
One change in particular affected the infamous line 18: the base number to add upon changed from <code>50</code> to <code>70</code>. Unfortunately, the comment was not changed (insert bitter face-palm).</p>
<p>An innocent mistake. But one that proves, again, the point that comments tend to lie. More untrue as time passes. Do you want to know what does not lie? A test. If the value was the important bit, there would be a test making the change dead obvious. If the value itself is irrelevant, there would be a test stating that certain values of <code>someName</code> are greater or lower than others.<br/>
In any case the test would not be deceipful. It would either pass or fail. And the poor reader (aka. me) would know where to stand.</p>
<p>And now, write that damned test on the caller method, the one that has a <a href="https://en.wikipedia.org/wiki/Cyclomatic_complexity">cyclomatic complexity</a> of 19!!!<br/>
<em>No bueno</em>.</p>

<h1>A tale of two trips</h1>
<p><em>2017-09-28</em></p>
<p>Some days ago I came back from a visit to a friend who lives abroad.</p>
<p>While canned in the plane back, I started comparing the whole travelling experience to what it was not so many years ago.</p>
<a name='more'></a>
<h2>Not so long ago...</h2>
<p>...or so we all like to think. Let me tell you what my experience used to be.</p>
<p>One would have to physically go to a place to buy plane tickets: usually the local travel agency (that is, one close to where you happen to live) or one you know consistently offers cheaper prices.<br/>
Once there, one would tell a human being where one would like to travel and when. The broker would type on a terminal (sometimes not even a computer, but an actual dumb terminal) and tell you about the different options and prices. Once decided, you would usually give real money to the broker, and he or she would give you a real plane ticket printed on some sort of cardboard.</p>
<p>On the day of travel one would take some sort of transport to the airport, being careful not to forget the cardboard ticket that needed to be shown to the airline ground staff, who would ask where you wanted to sit and try to satisfy your requirements of not being too far back, or getting a window seat because there was a chance of flying over a pretty city.<br/>
That staff would tear off part of your cardboard ticket and provide you with a paper boarding pass with your seat on it.</p>
<p>In-flight one could probably push one's seat back and "enjoy" some low-quality meal and the worst coffee ever brewed by man or beast, but one would drink it because a) you were offered it and b) it is complimentary (that is, you already paid for it).</p>
<p>After landing, one would have to find one's way to the ground transport into the city. Since this happened years ago, that would be the cheapest means of transport: bus, train or a shared cab ride.<br/>
If one went for public transport, one would have to queue to get a paper ticket and struggle to make oneself understood, then ride amongst doubts of whether the paper ticket one carried was valid at all or a theatrical "no comprendo" would be needed.</p>
<p>In the city, one would have to queue again, struggle with communication again, and purchase another ticket for the transport that would hopefully take one closer to the final destination.</p>
<h2>Last weekend</h2>
<p>One still has to physically move to get one's ass in front of the computer (if old-school like myself) or hunch over one's phone to do the job of searching for the cheapest fare. In any case, there is no human contact: just fill in some input controls and a myriad of mostly useless results appears in front of you. Pick the least annoying amongst the cheapest, punch in your credit card number, and you receive an email confirmation that everything is alright and you will be travelling on the desired date.</p>
<p>On the day of travel no one is going to ask you which seat you prefer, because you have previously checked in online and your boarding pass (a humble scannable code) has been delivered to either your email or your airline app.<br/>
Furthermore, the seat has been allocated randomly (that is, in the worst possible spot) to bump up the chances of you coughing up some more money for an equally terrible seat inside the plane.</p>
<p>Once in-flight your seat won't recline, nor would you want it to, because that would mean even more pain in your already pressed kneecaps. You are likely not to receive anything to eat or drink unless you pay five-fork prices for plane food. At least there is no inner pressure to drink terrible coffee.</p>
<p>No need to queue for ground transportation, because you got another ticket delivered to your phone and, in the case of my destination, I only had to top up a contactless transport card, or I could have used my contactless credit card to pay my local fares.<br/>
Either go public or book some cheaper private transportation that will take you from A to B; you do not even have to carry any cash on you.</p>
<h2>Is it any better?</h2>
<p>Nostalgia aside, no doubt about it.</p>
<p>It is one of those silent evolutions that improve your experience one order of magnitude at a time:</p>
<ul>
<li>e-tickets instead of cardboard</li>
<li>price comparison "virtual" mega-portals instead of visiting a "real" business and trusting the operator</li>
<li>printable boarding passes and then paperless QR codes</li>
<li>digital train tickets and contactless transport payments</li>
<li>credit cards everywhere instead of exchanging paper money and spending the remains in the airport on useless souvenirs</li>
</ul>
<p>The only thing I miss about the old days is complaining about the terrible coffee, because I am too cheap to pay for the right to complain gratuitously.</p>

<h1>.NET Core SDK 2 on Unsupported OS</h1>
<p><em>2017-09-23</em></p>
<p>I have been without a “proper” development laptop for a while. But that fact did not prevent me from hacking my way around on my old iMac.</p>
<p>Do you know what would have prevented my hacking? The fact that the newer <em>.NET SDK 2.0</em> would not install on my non-Sierra system.<br />
But it didn't. Want to know how?</p>
<a name='more'></a>
<h2 id="toc_1">Living in the Past</h2>
<p>I might be considered a "Windows guy", since that is what I run the most. However, my hate for labels and love for different things make me do things like having an iMac as a home computer and a dual-booting (Windows 10 whatever-fancy-name-of-the-version-is and Ubuntu Mate Xenial) second-hand Lenovo laptop as a work laptop for the rest of the family.<br />
The iMac sits on its own desk in a common area of the house, so I tend to use it a lot lately.</p>
<p>I was upgrading some training material to the current times, and few things are more present in the .NET arena than .NET Core; since 2.0 is still fresh and warm out of the oven, in my mind it made sense to go with it.</p>
<p>I set off to install the <em>.NET Core SDK 2.0</em> (<em>SDK2</em> from now on) on my mid-2010 iMac, when I found out that Yosemite is not supported and, therefore, one cannot install <em>SDK2</em> on anything older than Sierra on a Mac.</p>
<p>As it happens, I run Yosemite because anything newer than 10.10 makes my veteran computer crawl and makes me want to murder people while waiting.<br/>
Trust me, I tried newer versions and had to undergo the stupid pain of going back to a previous MacOS version, because murder is still uncool.<br/>
So Yosemite will endure until the computer either gives in or gets sold.</p>
<h2 id="toc_2">Looking at the Future</h2>
<p>Would I just fold and run to my laptop, install <em>SDK2</em> and roll with it admitting defeat? Hell, no!</p>
<p>I made a simple observation: <em>SDK2</em> is "just" a bunch of command line utils.<br/>
With that observation in the back of my head I wondered: is there an easy and performant-enough way to run those utils outside my unsupported box, while file editing remains local?<br/>
I'll give you a hint: the answer I chose involves thinking <strong>inside</strong> a <em>box</em>.</p>
<h3 id="toc_3">Joining the craze</h3>
<p>And by <em>box</em>, I mean container.</p>
<p><a href="https://www.microsoft.com/net/core#dockercmd">One</a> of the means to install <em>SDK2</em> does not involve "installing" it in your system. One can run "something" that has the <em>SDK2</em> inside so you can open a shell to it and still be able to access your filesystem while using your favorite editor in your box. That "something" is a container.</p>
<p>By running <em>Docker</em> on your box, you can download an image with <em>SDK2</em> (or, basically, whatever) and run <code>dotnet ...</code> commands in a remote shell, bypassing the fact that the underlying system (my oldie-but-goldie iMac) does not support such software.<br/>
And all that without dealing with virtual machines and virtual hard-drives gigabytes in size.</p>
<p>What is not to like?</p>
<h4 id="toc_4">Revealing the (command) lines</h4>
<p>The first thing one has to do is pull the right image from the <a href="https://hub.docker.com/">Docker Hub</a>. From the wealth of <a href="https://hub.docker.com/r/microsoft/dotnet/">dotnet images</a> available and since we are building software, we need one labeled <strong>sdk</strong> on a newer <em>stretch</em> Linux, since I really do not need to target the classic <em>.NET Framework</em>.<br/>
For my own sake, I like tagging the image with a shorter name, so that short name can be used in subsequent commands.</p>
<script src="https://gist.github.com/dgg/f80922d66ab9ee857584597cd6fe4812.js?file=pull.sh"></script>
<p>Pulling the image takes some time that depends on your connection speed and your computing power.</p>
<p>With the image in our system, we need to run a container based on it, interactively, with a terminal. If such a command is run from the project folder, that folder will be mounted and available from the running container.<br/>
The container is also named, for convenience.</p>
<script src="https://gist.github.com/dgg/f80922d66ab9ee857584597cd6fe4812.js?file=run.sh"></script>
<p>Once in that terminal, <code>dotnet</code> commands can be run happily. When done running the commands, the session can be finished with <code>exit</code>, which will close the container terminal, stop the container and get us back to our shell.<br/>
When more commands need to be run, just attach to the container once it is started:</p>
<script src="https://gist.github.com/dgg/f80922d66ab9ee857584597cd6fe4812.js?file=reconnect.sh"></script>
<p>One less excuse to embrace the command line.</p>

<h1>Multi-targeting source code packages with Nuget</h1>
<p><em>2017-07-03</em></p>
<p>Time goes on and change is constant. What we thought worked may not work now; sometimes for a good reason, sometimes not.</p>
<p>This time I'll dig into what I had to do to attach code to a solution, whether that solution targets the classic <em>.NET Framework</em> (<em>net</em>) or the newer <em>.NET Core Framework</em> (<em>netcore</em>). Because that is what I had to do myself.</p>
<a name='more'></a>
<h2>Because I had to</h2>
<p>I recently <a href="https://dgondotnet.blogspot.dk/2017/06/nmoneys-serialization-gets-lifting.html" target="_blank">wrote</a> about the lifting that <a href="https://github.com/dgg/nmoneys" target="_blank">NMoneys</a>' third-party serialization packages got.<br>
The lifting was motivated by the will to check whether such serialization code worked with newer versions of the serialization libraries and, furthermore, on the new .NET Core Framework (<em>netcore</em>).<br>
Long story short: the code kind of worked, but the package did not, which kind of diminishes the "working" moniker of the code because, what good is working code that cannot be executed?</p>
<h2>Why didn't it?</h2>
<p>Third party <a href="https://www.nuget.org/packages?q=%22nmoneys+serialization%22" target="_blank">serialization packages</a> are deployed as "source code packages": that is, a <em>NuGet</em> package that does not include any binary but, instead, packs some source code files written in C# that, when the package is installed, get added to your solution and compiled along with it.</p>
<p>When authoring such packages, one would include some files under a <code>/content/*</code> folder by convention and declare them in the <code>&lt;files&gt;</code> element.</p>
<p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjnJaQxJEBkZF22ykvJD4ONx85SXY74DUrcTwpIvhuE7JfXHjduBpTzd67IOlrN-fgppekxrFD9JCTn4DMT_oTT1SCQGwyPOvgQOOXKDMphB7t9XKfkhpLvBp3oVSulTzpJ-3kme_2c1x_M/s1600-h/old_source_package%255B4%255D"><img width="574" height="141" title="old_source_package" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="old_source_package" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhUvsVwSHEjyLPpoURaKDihkVcIm7MYOoE_EviygfPhXwGh9eE8OxEe1xjwNFT3WkMmsrQfg0cTSLtrWNgZidmEkqG9fmhQoHnV310K61q33HPkYoqvIDt-PcbadNS3KjGouDSfTMmW6A83/?imgmax=800" border="0"></a></p>
<p>When I installed such a package on a <em>netcore</em> solution, no matter which method of adding the package:</p>
<ul>
<li><code>dotnet add package ...</code> from command line</li>
<li>Add it from the <em>Manage Project Packages</em> dialog</li>
<li><code>Install-Package ...</code> from the <em>Package Manager Console</em></li>
</ul>
<p>It did-not-work. I could neither see the code as part of my solution nor use the classes after compilation.</p>
<p>I was kind of puzzled until I found this <a href="http://blog.nuget.org/20160126/nuget-contentFiles-demystified.html" target="_blank">post</a> from NuGet's own blog. The post is kind of "old", but I had not had the need to dig into the differences between <em>NuGet 2.x</em> and <em>3.x</em> because: a) I barely worked with <em>netcore</em> solutions, and b) it just worked for me in classic <em>net</em> solutions (as it should).<br>
What the post says is that, from <em>NuGet 3.0</em> on (and <em>NuGet3</em> is the one that supports <em>netcore</em>), language-specific files need to be located under folders following the convention <code>/contentFiles/{codeLanguage}/{target_moniker}/*</code> and declared in the <code>&lt;metadata&gt;/&lt;contentFiles&gt;</code> element, but <u>also</u> in <code>&lt;files&gt;</code>, which is not that obvious from reading the post or the specific <a href="https://docs.microsoft.com/en-us/nuget/schema/nuspec#including-content-files" target="_blank">documentation</a> on the subject.</p>
<h3>Out with the old</h3>
<p>Putting the good old trial-and-error algorithm to work, I finally found a way to get my code compiled and usable from <em>netcore</em> solutions. Not without caveats, mind you. But I will rant about those later.</p>
<p>However, when following the new way of doing things, people installing the package from an old <em>net</em> solution would not get the code added to their solution.<br>
There had to be a way to make both work but, since I could not find anything on the web, I am writing this post.</p>
<h2>Multi-targeting source files</h2>
<p>The key is to do both things:</p>
<ul>
<li>for the old <em>net</em> projects: place the files under <em>/content/*</em> and declare them in <code>&lt;files&gt;&lt;file src="content\..."&gt;&lt;/files&gt;</code></li>
<li>for the new <em>netcore</em> projects: place the files under <em>/contentFiles/cs/any/*</em> and declare them in <code>&lt;metadata&gt;/&lt;contentFiles&gt;</code> and in <code>&lt;files&gt;&lt;file src="contentFiles\..."&gt;&lt;/files&gt;</code></li>
</ul>
<p>You can check the <em>.nuspec</em> file for <a href="https://www.nuget.org/packages/NMoneys.Serialization.Json_NET"><img alt="NuGet" src="https://img.shields.io/badge/nuget-NMoneys.Serialization.Json__NET-blue.svg?style=flat-square&colorB=60a7cb"></a>:</p>
<script src="https://gist.github.com/dgg/21e3b42e45f7bd8b8e38243785cae7bb.js?file=NMoneys.Serialization.Json_NET.nuspec"></script>
<p>And this is how the files look when packing the .nuspec file:</p>
<script src="https://gist.github.com/dgg/21e3b42e45f7bd8b8e38243785cae7bb.js?file=file_structure.sh"></script>
<p>And this is what the package looks like:</p>
<p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhg4XrROEEKqSgGPUREV7wVXZFyBWqCxvcafrLibOi-iXnihRlFjtBff4ah6n2sWgV5BOUlLa8PIZuc94pkNZmMVQVGkHUxtnXeVzJycnQV2DxFZ9zSF7mRZzyMswch-1CU_HdHQO7JJLWb/s1600-h/new_source_package%255B4%255D"><img width="574" height="200" title="new_source_package" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="new_source_package" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEho8yg_DRyNayAm0mCGCKUXCY292ph-lzjCWig_mS23Vrp_cM_HosbbUMYC0spbLt_fl23gTesKQW4w_pdIwDQmFo70px3Nrnlz89OF03DYr-kHI61oVyAKJACwJWHCZ80BYFhFkdZZGxi1/?imgmax=800" border="0"></a></p>
<p>When the package is created this way and installed to a <em>net</em> project, the files under <code>/content</code> will be added to the solution; and when installed to a <em>netcore</em> project, the files under <code>/contentFiles/cs/any</code> will be "referenced". In both cases, source code will be compiled along with your project and the classes can be used.</p>
<p>Is that it? For the "uncurious", yes. But if you are reading this, obviously, you are not. Things get more complicated.</p>
<h3>Caveats, please</h3>
<p>I mentioned that the same package can be installed and the code compiled from both <em>net</em> and <em>netcore</em> projects; what else could I possibly have to tell?</p>
<p>When the package is installed to a <em>net</em> project, source code files are copied "physically" to a subfolder within the project.<br>One can see the code and change it at will (upgrading packages with content has always been a sensitive topic, to the point of suggesting a non-enforced folder convention that locates the source files under <code>/content/App_Packages/{package_id}.{package_version}</code> to tackle the problem; a nasty workaround, if you ask me).</p>
<p>When the package is installed to a <em>netcore</em> project, source code files are not copied to the project folder. They are deflated to the local <em>NuGet</em> package cache and "magically" become available for consumption by your solution. Magic is relative, though.<br>
If Visual Studio is used to install the package, nothing seems to happen. However, when the project is opened again, the source files will "appear" as linked files. I believe this is a bug and it might be solved in future releases.<br>
But, for the moment, the end-user experience is terrible: install a package and nothing happens. The first thing a user will think is "broken package, uninstalling", and that is really unfair to the package creator.<br>
I, for one, will be adding a warning to my description and updating the project documentation, but I strongly believe the team has not solved a bad scenario (package upgrade) that happened only under certain circumstances (when source code was mutated and the package upgraded), only to introduce a terrible scenario that occurs <em>every</em> time a source-code package is installed.</p>
<p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6yGxTaAODurrgwFkNcoYN5awrmLC-TLoHFxv_AY8P4cRe02ATm76bTMzvuvx965Wf0EhbH05M48i2xUR-iVmtsuOum9yDOsAGsoWIlb778GGSFIk3dqNcR3wtxDRU-3j90StAitSVTuOh/s1600-h/no_ui_feedback%255B4%255D"><img width="753" height="720" title="no_ui_feedback" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="no_ui_feedback" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZyPlddmFd_lOSOSKTTcEFAteIvWclpwWOTfasScxgy3lM0RJ5NjgSfw8C2h-QBMKSfXA15_tib24sARSjoBnlxizJVnpdlqKLG4ugOKAae4TH4RwirQkRyCO6e8nKjqFwryfHRMMR3iA0/?imgmax=800" border="0"></a></p><p>After closing and re-opening the solution:</p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHvPs6hvxfHwHEOlXvWVnMDD0QgtOhUL98oXSwg7mSPKKhqNnrOjzjiRAXTVnz3gPdw9ED-ALXoJ3J1z-1D3xpqrTpU184XfKXUfNu7uBHlLPtzF3dvow48BPbVx1loR2-IJGKl4TsN2O_/s1600-h/ui_hint_after_reload%255B4%255D"><img width="753" height="720" title="ui_hint_after_reload" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="ui_hint_after_reload" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSpyNwytVO5XeabBm-LVaxOrAmkhamSqHptwkWLql7pJJTgbHSNQ6hiBNRltiwb5BMNuWF39crm1Niu6u1VBr8co9huG_OC-4OH2t8nJG5D-QjbQcqRPXg1KZS1G-KE6W5Ahz5Ql5cerN6/?imgmax=800" border="0"></a></p><p>If <em>Visual Studio</em> is not used (for example, <a href="https://code.visualstudio.com/" target="_blank">VSCode</a>), there is absolutely no hint that those files are available for the solution and they do not appear in the .csproj file.</p>
<p>Another caveat is the fact that those source code files are not meant to be changed (hence the static-files nickname). However... nothing prevents you from navigating to their content and changing them. Any guess at what will happen? Yes, they will be changed for <em>every</em> project that links to them. Talk about surprises.</p>
<h2>Working for the moment</h2>
<p>I still believe source code files can play a role in code distribution. And I got mine working in a multi-targeting manner. But I think the caveats are serious enough to get me thinking about moving to a binary distribution instead.</p>

<h1>NMoneys serialization gets a lifting</h1>
<p><em>2017-06-30</em></p>
<p>After migrating <a href="https://www.nuget.org/packages/NMoneys/" target="_blank">NMoneys</a> (and <a href="https://www.nuget.org/packages/NMoneys.Exchange/" target="_blank">NMoneys.Exchange</a>) to support <em>.NET Core</em>, it is time to lend a little love to the support for serialization via third-party serialization libraries.</p>
<p>This support operates in a very different way from the usual flow (1. install package, 2. some library gets referenced, 3. time to code against that library): it comes in the shape of source code files that are added to your solution and compiled as part of it.</p>
<a name='more'></a>
<h2>How it was done</h2>
<p>When I <a href="https://dgondotnet.blogspot.dk/2014/08/nmoneysserialization.html" target="_blank">started</a> supporting third-party serialization I made a misguided choice.<br>
Yeah, I admit I was wrong when I removed the dependency on the supported serialization libraries. In my brain, taking a dependency meant: "whenever the package gets updated, I will need to update mine". That really is <u>not</u> how it works <strong>at all</strong>.<br>
Taking a dependency means: "<em>in order for my code to work, I need to have <u>at least</u> this version of the other library in place</em>". If a newer version comes up, the default reaction is "alright, my library should continue working with that newer version". That is, as a package author, one does not have to upgrade package dependencies with every new version of the dependency.<br>
The only case I can think of that breaks this "leave-version-as-it-was" happiness is when a newer version of the dependency actually breaks your code. One can ship an optional release with the old code and a maximum dependency version but, from then on, a newer version of your package needs to be released pointing to the version of the dependency that broke your code, <em>thankyouverymuch</em>.</p>
<p>Besides not taking a dependency I delivered my code as source code. That is, <em>my</em> code is part of <em>your</em> solution and you are happy to change it or delete some of it.</p>
<p>One of the things I usually did every now and then was to install newer versions of the libraries and check whether the tests still passed.<br>
That was a totally manual and transient process: install the new version of the dependency in the test project, run the tests and roll back the changes when everything was green (or get rid of the branch, if I had bothered to create one in the first place).</p>
<h2>Upgrading is always fun</h2>
<p>The big change came with the <em>.NET Core</em> support, so I thought I would test whether the serialization artifacts worked in <em>netcore</em> projects.<br>
I figured that starting with <a href="http://www.newtonsoft.com/json" target="_blank">Json.NET</a> made sense, since I know for sure they support <em>netcore</em>.</p>
<p>As the serialization test project is not a <em>netcoreapp</em> project, I created a new <em>netcoreapp</em> project on the command line and proceeded to install the packages for <em>Json.NET</em> and my <a href="https://www.nuget.org/packages/NMoneys.Serialization.Json_NET/3.0.0" target="_blank">old</a> serialization package. And then, my newly created project did not compile.<br>
So I went and copied the serialization code into the newly created <em>netcoreapp</em>, compiled and... money! It compiles. How odd is that?<br>
Stupid CLI... let's fire up <em>Visual Studio</em> and do like real men do: create a new “<em>.NET Core Console App”</em>, right-click “<em>Manage Nuget Packages</em>”, install... wait a minute! Where is my serialization code?<br>Use the <em>Package Manager Console</em>, a comfy middle ground… Still not there.</p><p>When things get serious, time to hit the Internet...<br>
And that is when I <a href="http://blog.nuget.org/20160126/nuget-contentFiles-demystified.html" target="_blank">found out</a> that, since the inception of <em>.NET Core</em> and its ripple effect on <em>Nuget</em>, files placed under <code>/content</code> mean nothing to <em>netcore</em> projects.</p>
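<p>The fix, sketched below on a made-up file name, is to lay out the same source file twice in the package: under <code>/content</code> for old-style <em>packages.config</em> projects, and under <code>/contentFiles</code> (plus the corresponding metadata) for <em>netcore</em> projects:</p>
<pre><code class="language-xml"><package>
  <metadata>
    <!-- id, version, etc. -->
    <contentFiles>
      <!-- tells PackageReference projects to compile the shipped source -->
      <files include="cs/any/MoneySerialization.cs" buildAction="Compile" />
    </contentFiles>
  </metadata>
  <files>
    <!-- honored by packages.config projects only -->
    <file src="MoneySerialization.cs" target="content" />
    <!-- honored by netcore projects: contentFiles/{language}/{target framework} -->
    <file src="MoneySerialization.cs" target="contentFiles\cs\any" />
  </files>
</package>
</code></pre>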
<p>What I thought was a routine check turned out, once again, to be a rabbit hole. A medium-shallow one, mind you.</p>
<h3>While you are down there, love...</h3>
<p>So, since I was to rewrite how serialization packages are created, I might as well do the right thing and take a dependency on the serialization framework.</p>
<p>Oh, and since the initial idea was to support newer platforms, why should I leave the old platforms behind? Let's support <a href="https://docs.microsoft.com/en-us/nuget/create-packages/supporting-multiple-target-frameworks" target="_blank">multiple frameworks</a> and take the minimum dependency that works for each platform.</p>
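<p>In the new project format, that combination (multi-targeting plus a per-target minimum dependency) can be sketched like this; the frameworks and versions are illustrative, not the actual ones shipped:</p>
<pre><code class="language-xml"><PropertyGroup>
  <TargetFrameworks>net40;netstandard1.3</TargetFrameworks>
</PropertyGroup>
<!-- each target takes the lowest version of the dependency that works for it -->
<ItemGroup Condition="'$(TargetFramework)' == 'net40'">
  <PackageReference Include="Newtonsoft.Json" Version="6.0.1" />
</ItemGroup>
<ItemGroup Condition="'$(TargetFramework)' == 'netstandard1.3'">
  <PackageReference Include="Newtonsoft.Json" Version="9.0.1" />
</ItemGroup>
</code></pre>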
<p>Oh, and while you are at it, let's clean the code a bit so that nullable instances are handled properly.</p>
<p>Oh, and, by the way, your serialization code is broken for newer versions of <em>MongoDb</em>. Time to do it <a href="https://docs.mongodb.com/v3.4/tutorial/model-monetary-data/#using-the-decimal-bson-type" target="_blank">properly</a>.</p>
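<p>The gist of doing it properly is persisting amounts with the <code>decimal128</code> BSON type instead of doubles or strings. A minimal sketch with the official C# driver follows; <code>PriceDocument</code> is a made-up document type for illustration, not the actual NMoneys serializer code:</p>
<pre><code class="language-csharp">using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;

public class PriceDocument
{
    // stored as the decimal128 BSON type (MongoDB 3.4+), keeping exact decimal semantics
    [BsonRepresentation(BsonType.Decimal128)]
    public decimal Amount { get; set; }

    public string Currency { get; set; }
}
</code></pre>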
<p>Well, I can repeat myself here, but you might as well read it from <a href="https://github.com/dgg/nmoneys/wiki/Changelog#serialization-json_net-40-raven_db-40-mongo_db_legacy-20-mongo_db-20" target="_blank">the horse's mouth</a>.</p>
<h2>End result ≅ Happiness</h2>
<p>In this "routine check" I learned quite a few things:</p>
<ul>
<li>how to create "modern", multi-platform <em>Nuget</em> packages that contain source code. And I will write further about it</li>
<li>a new <a href="https://docs.mongodb.com/v3.4/reference/bson-types/" target="_blank">decimal</a> BSON datatype supported in newer versions of <em>MongoDB</em></li>
<li>my serialization code still worked (for the most part) with newer versions of the serialization libraries.</li>
</ul>
<p>Go ahead and get any of the updated versions for third party serialization packages: <a href="https://www.nuget.org/packages/NMoneys.Serialization.Json_NET"><img alt="NuGet" src="https://img.shields.io/badge/nuget-NMoneys.Serialization.Json__NET-blue.svg?style=flat-square&colorB=60a7cb"></a> <a href="https://www.nuget.org/packages/NMoneys.Serialization.Raven_DB"><img alt="NuGet" src="https://img.shields.io/badge/nuget-NMoneys.Serialization.Raven_DB-blue.svg?style=flat-square&colorB=60a7cb"></a> <a href="https://www.nuget.org/packages/NMoneys.Serialization.Mongo_DB.mongocsharpdriver"><img alt="NuGet" src="https://img.shields.io/badge/nuget-NMoneys.Serialization.Mongo_DB.mongocsharpdriver-blue.svg?style=flat-square&colorB=60a7cb"></a> <a href="https://www.nuget.org/packages/NMoneys.Serialization.Mongo_DB"><img alt="NuGet" src="https://img.shields.io/badge/nuget-NMoneys.Serialization.Mongo_DB-blue.svg?style=flat-square&colorB=60a7cb"></a></p>Daniel Gonzálezhttp://www.blogger.com/profile/13468563783321963413noreply@blogger.com0tag:blogger.com,1999:blog-2779955313707490982.post-92215494228440667052017-06-29T10:48:00.001+02:002017-06-29T10:48:47.868+02:00Migrating libraries to .NET Core. Post-mortem 2
<p>I have already <a href="https://dgondotnet.blogspot.dk/2017/05/migrating-libraries-to-net-core-post.html" target="_blank">gone through</a> some of the choices I considered when porting my Open Source Libraries to support the <em>.NET Core</em> platform.</p>
<p>Let's dive into details of each of them, as each has its own peculiarities.</p>
<a name='more'></a>
<h2>NMoneys, Testing.Commons</h2>
<p>For both libraries there were a bunch of unsupported features (in .NET Core –<em>netcore</em>-), mainly related to serialization, but also some touching globalization and XML handling.</p>
<p>For the main libraries: <a href="https://www.nuget.org/packages/NMoneys/" target="_blank">NMoneys</a>, <a href="https://www.nuget.org/packages/NMoneys.Exchange/" target="_blank">NMoneys.Exchange</a>, <a href="https://www.nuget.org/packages/Testing.Commons/" target="_blank">Testing.Commons</a> and <a href="https://www.nuget.org/packages/Testing.Commons.NUnit/" target="_blank">Testing.Commons.NUnit</a> <em>.NET Standard 1.3</em> –<em>netstandard</em>- (the <u>new</u> <strong><em>.csproj</em></strong> format, <u>not</u> the old and more likeable <em>project.json</em>) projects were created in the same folder that contains the .NET Framework (<em>net</em>) project. Since all files residing in the same folder as the <em>.csproj</em> and its subfolders are part of the project by default… Boom! Done, next... Weeeeell, not really.</p>
<p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlgChTFULY6Yk33gogVWWNAkgbh1ItBgERPX09HAmXHF4YKEIZMoRsJ5TWBg1I379eXXivvzBoYahXYEll_guQWSR65cyXyT2xX8mzrmoEFmjM8lZU-P1gt6_I6kApPCwub1tD1HpPezr1/s1600-h/nmoneys_projects%255B4%255D"><img width="331" height="172" title="nmoneys_projects" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="nmoneys_projects" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjk6TtJzU_HNfzZFCIvfInb5PloO5itAuOIOJ7Pe-T1Cn5d_ptEDI9Mu7naYsfvsgEojTDZ-7c1T2f2nKHcXWwVLEoW8FxpQx_lxVPyi7UrCs760sNPVaEjSBAK58BLz1ImLRh8HDYVPIzA/?imgmax=800" border="0"></a></p>
<blockquote>
Caveat one: <br>
If you, like me, are wary of automatic package restore for .NET Framework projects, you are going to confuse the hell out of the tooling if you have the two projects in the same folder. To avoid confusion, change your classic <em>net</em> .csproj file to include this piece of XML:
<code><ResolveNuGetPackages>false</ResolveNuGetPackages></code>
</blockquote>
<p>With the <em>netstandard</em> project in place, there is code that will just not compile when targeting <em>netstandard</em>.<br>
That <em>net</em>-specific code will be moved to <strong><em>*.net.cs</em></strong> files (fortunately, <em>NMoneys</em> classes were already spread out in different files per feature, making the partition much easier).<br>
For instance, <code>ICloneable</code> implementation lives in <em>Money.Cloning.net.cs;</em> and <em>Testing.Common</em>s binary roundtrip serialization lives in <em>BinaryRoundtripSerializer.net.cs</em>.</p>
<p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8EwVv6qAYzKVOuVlPL05OILYn-WidAgG2hP1BBOPQZFHDxjolA9J8OisGrPMDXkgOqWjYzf8Oe28MBQRK5gKLL4KOdDSJRO2dgsTB6UZoEw2VZaf8Mqg_9SrLGVqhOX_mEXDeWTXVAlMY/s1600-h/binaryserializer%255B4%255D"><img width="330" height="109" title="binaryserializer" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="binaryserializer" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNYrMz7ZKarZfJLPVHlWsE23y9TjkKhuHt-cqEpWM8UAHFTNWYfulXNbthV1cKUhVLiceBjk_VPVpuh0SWLj3d4qgba30nIl4UcY-G0tMboqrsDEE3xPNvnCwaMZ0fpCS3eAI3R3jog9QU/?imgmax=800" border="0"></a></p>
<p>
Once net-only code is segregated, such files can be easily excluded from the <em>netstandard</em> project by adding this piece of XML to the new <em>.csproj</em> file:</p>
<pre><code class="language-xml"><ItemGroup>
<Compile Remove="**\\*.net.cs" />
</ItemGroup>
</code></pre>
<p>That way, all unsupported code will not be included in the <em>netstandard</em> project and, thus, not compiled. Mission accomplished with zero conditional compilation directives.</p><p>
Non-portable code solved, next in line are APIs that are different in <em>netstandard</em> from the ones in <em>net</em>. For instance, there is no <code>CultureInfo</code> cache, instances are “newed up” (or else a third party package needs to be installed) or <code>Stream</code> does not have a <code>.Close()</code> method.<br>
For those cases, we will have to create adapters that can be consumed by both <em>net</em> and <em>netstandard</em> projects. Those adapters are placed in <em><strong>*.polyfill.cs</strong></em> files and use compilation directives for each target. That is the <strong>only</strong> place that I allow myself the visual noise of conditional compilation directives.</p>
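<p>A minimal sketch of what such a <em>*.polyfill.cs</em> adapter can look like; the extension method name is made up, and the compilation symbol depends on the constants each project defines:</p>
<pre><code class="language-csharp">using System.IO;

// the single place where #if noise is tolerated
internal static class StreamPolyfill
{
    // hypothetical helper: netstandard1.3 has no Stream.Close(), but Dispose() flushes and releases
    public static void CloseSafely(this Stream stream)
    {
#if NET45
        stream.Close();
#else
        stream.Dispose();
#endif
    }
}
</code></pre>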
<p>This dual, in-place project structure works fine for this type of project, but I found out it has some minor downsides that were not obvious at the time of migrating, but exposed themselves when working with the codebase:</p>
<ul><li>it is really easy, while navigating code, to end up in a file within the context of the <em>netstandard</em> project. The switch goes unnoticed at first, but shows up when red squiggly lines appear under an API that SHOULD be there, but is not. At least not in the <em>netstandard</em> context you are seeing in the editor</li>
<li>it is equally easy to add a file, do some work and then find out something is not compiling because the artifact you just added is not available. That happens when the file gets added to the <em>netstandard</em> project (only), meaning that it will not be compiled as part of the old project, as new files need to be explicitly added there</li></ul>
<p>Minor annoyances, outweighed by the flexibility and power of this way of architecting the projects.</p>
<h3>Tests</h3>
<p>“Production” code would be pretty much ready with these simple steps. But, what about tests? Are they problematic in some way?<br>Bad news is that they kind of are.<br>Not code-wise, though. Following the aforementioned techniques to “duplicate” the test project and isolate net-specific code, projects compile without major problems. But… what good would a test be if it cannot be run? And hence the challenge.</p>
<p>Let’s put aside for a moment running the tests as part of the usual workflow while making changes to the codebase. Those can be run using tools inside your IDE. But when it comes to running tests as part of the build (local or as part of CI), tests are often run using a console runner. Both <em>NMoneys</em> and <em>Testing.Commons</em> tests are authored and run with <a href="https://github.com/nunit/nunit" target="_blank">NUnit</a>.<br>Unfortunately, the <a href="https://github.com/nunit/docs/wiki/Console-Runner" target="_blank">console runner</a> only supports the classic .NET Framework.<br>Would it be acceptable to not run tests on the <em>netstandard</em> code and hope they work because, hey, “if it compiles, it must work” ®? I don’t think so either.</p>
<p>Woefully, at the time of the migration, <em>NUnit</em> did not have a “dotnet test” compatible runner for <em>netcore</em> environments. At that time, the “solution” was to turn your test project into a <em>netcoreapp</em> console project and use <a href="https://www.nuget.org/packages/NUnitLite/" target="_blank">NUnitLite</a> to run your tests.<br>It does work relatively well, but I guess it is kind of a temporary hack until we can run the tests using the <a href="https://docs.microsoft.com/en-us/dotnet/core/tools/" target="_blank"><em>dotnet</em> CLI</a>.</p>
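<p>For the record, the <em>NUnitLite</em> workaround boils down to giving the test project an entry point that runs the tests contained in its own assembly, roughly like this:</p>
<pre><code class="language-csharp">using System.Reflection;
using NUnitLite;

public class Program
{
    // the test project becomes a console app; running the app runs the tests
    public static int Main(string[] args)
    {
        return new AutoRun(typeof(Program).GetTypeInfo().Assembly).Execute(args);
    }
}
</code></pre>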
<h2>SharpRomans</h2>
<p>For the less loved <a href="https://github.com/dgg/SharpRomans" target="_blank">SharpRomans</a> a different approach was taken, which I <a href="https://dgondotnet.blogspot.dk/2017/05/for-fun-of-it.html" target="_blank">already</a> wrote about.</p>
<p>Basically, since the library is simpler and was already a PCL project, a new <em>netstandard</em> <em>.csproj</em> project substituted the old one and called it a day.</p>
<p>I also took advantage of the fact that some APIs were not available in <em>netstandard 1.1</em> to remove them, as they did not make a lot of sense in the first place (mostly the <code>IConvertible</code> implementation, when better named methods already exist).</p>
<h3>Tests</h3>
<p>As I mentioned in the post about the update, I chose to migrate my tests to <a href="https://xunit.github.io/" target="_blank">xUnit.net</a>, which already supports a <em>dotnet</em> CLI runner, so the CLI is used for everything build-related: compiling, running tests and creating/publishing packages.</p>
<p>Even though the “production” library only supports <em>netstandard1.1</em>, I went on to multi-target the test project, to verify that the code really does work with “classic” <em>net46</em>, as well as with <em>netcore</em> runtimes:</p>
<pre><code class="language-xml">
<Project Sdk="Microsoft.NET.Sdk">
<ItemGroup>
<ProjectReference Include="..\SharpRomans\SharpRomans.csproj" />
</ItemGroup>
<PropertyGroup>
<TargetFrameworks>net46; netcoreapp1.1</TargetFrameworks>
</PropertyGroup>
...
</Project></code></pre>
<h2>Vertica.Utilities</h2>
<p>This is the "new" project released (under the identity of my company) which I recently <a href="https://dgondotnet.blogspot.dk/2017/05/utility-openness.html" target="_blank">wrote</a> about.</p>
<p>The project was originally a <em>net45</em> project which has been migrated to a new <em>.csproj</em> supporting <em>netstandard1.5</em>. However, support for <em>net45</em> projects has been maintained via multi-targeting because, API-wise, there was no reason to leave <em>net45</em> behind.</p><h3>Testing</h3>
<p>For testing, I kept <em>NUnit</em> (after the mild disappointment of the <em>xUnit.net</em> experience). Since some time had passed between the migration of <em>NMoneys</em> and the release of <em>Vertica.Utilities</em>, a native <em>dotnet</em> CLI runner had been released.<br>I can say I nearly regret having used it, because it is so damn slow. I understand it is a very beta version and I am sure it will get better, but still.<br>Had it not been that such slowness is only suffered by the CI build server and the occasional local build, I would have resorted to the <em>NUnitLite</em> workaround again.</p>
<h2>Worth it?</h2>
<p>Well, times are changing and it would be weird not to jump on the wagon (late enough, mind you) to get a taste of the future. It was an interesting, frustrating learning experience.<br>I explored different strategies and I am happy with the result for each one of them. That is not to say I would not change my mind in the future.</p><p>But it's almost hypnotic to watch tests run (and fail!) on an Ubuntu box…</p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrR_rJDxDagCuyGI8zDGDsPKA6u2PzPjRPNWHCxSancWY7P4F4_KxOE4Kv3Qbl8q-ATp1Q5vCND4sQ2j0KttAAjlk7m6MoudiLpqGhSauEG13osyZl9Yt5kefxHY_t3EwNj6p1bzAeZyKC/s1600-h/nmoneys_linux%255B4%255D"><img width="591" height="365" title="nmoneys_linux" alt="nmoneys_linux" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDHBnHPg45L1gTapzIsex1rijA2pMyybYksbIkDwKmhgMi0Q0mVZbfmakESApVPbKqdXmKx9QklK2gvryjdnXBPPfck9krBXNfK_1_LiycdL54D-eoCt0yVbRppey6e16VMklNzPbTzMnY/?imgmax=800"></a></p><p>… or a Mac.</p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiR3AoQPWcLhQcUZXlzFsC1EeJhnESB7lk8iZZKiuW09JeJBGVMdSbRbv7YmBQ73TzfltEwivzyyR7hr0VGJcImv7h5SZFfqfnUNgfPvhnxQdQFer5YANv200AJNlZFN2oTLuC7gazIqvIs/s1600-h/testing_commons_mac%255B4%255D"><img title="testing_commons_mac" alt="testing_commons_mac" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUifMNisv7sDajYCl-lVMik0na-gruq433r34LDt4upZ3cyDEKh7KJEEggE5abHOrQ1SMGIPMO8W-V4NrHxzaLEHt7RxUaRm9NVwv-5n9vkSx2UZVwiwFXAkgb1YrNA-5d2qfTMyI9TYOe/?imgmax=800"></a></p>Daniel Gonzálezhttp://www.blogger.com/profile/13468563783321963413noreply@blogger.com0tag:blogger.com,1999:blog-2779955313707490982.post-66582975226338175912017-06-15T13:24:00.001+02:002017-06-15T13:56:20.846+02:00Got change?<p>This is an honest question for which, fortunately, there is an answer now.<br>I am sure that there was one before, but sure as well that <a href="https://github.com/dgg/nmoneys/" target="_blank">NMoneys</a> could not provide an answer easily.</p><p>Until now.</p>
<a name='more'></a>
<h2>NMoneys 5.1.0.0</h2><p>New features imply a new version of the library, and that can be obtained from <a href="https://www.nuget.org/packages/NMoneys/5.1.0" target="_blank">Nuget</a> itself.</p><p>From the <a href="https://github.com/dgg/nmoneys/wiki/Changelog#5100" target="_blank">changelog</a> one can see that two “major” features have been added: some denomination changes corresponding to the <a href="https://www.currency-iso.org/dam/downloads/dl_currency_iso_amendment_163.docx" target="_blank">Amendment 163</a> for the ISO Standard 4217 and a bunch of methods in the <em>NMoneys.Change</em> namespace.</p><p>Those are the real meat of the release. And if you want to know how they work, I encourage you to visit the project <a href="https://github.com/dgg/nmoneys/wiki/DeveloperQuickStart#make-change" target="_blank">Wiki</a> for an overview and the <a href="https://github.com/dgg/nmoneys/tree/master/src/NMoneys.Tests/Change" target="_blank">tests</a> for deeper learning.</p><h2>The complimentary background story</h2><p>It’s funny how features become features in the first place. In this case, I received a <strike>spam email</strike> summary notification for some LinkedIn group I seem to be subscribed to, and one thing caught the corner of my eye: <a href="http://www.csharpstar.com/csharp-coin-change-problem-greedy-algorithm/">C# – Coin change problem : Greedy algorithm</a>. The code is not particularly glorious but it sparked my curiosity. This is an answer to a problem that <em>NMoneys</em> could help answering.</p><p>So I did what any sane person with spare time and an OSS project could do: create a branch and TDD the feature.<br>I have to admit the process did not go particularly well. TDD can be painful for totally alien problems, but in the end, I had a working feature. And then I read (I should have totally started by reading about the problem, I know) and found out about canonical denomination systems, recursion and dynamic programming.</p><p>And then I was totally hooked. Dynamic programming is something that I have not come close to using since I was in the university, many moons ago.</p><h3>Copy and Paste from the Interwebs with Care</h3><p>A <a href="http://www.algorithmist.com/index.php/Min-Coin_Change" target="_blank">description</a> of the problem suggested that my greedy approach could not actually give the best possible answer. It also points to a very inefficient recursive solution. Could there be some working code out there that I could adapt to be included in <em>NMoneys</em>?<br>The key here is <strong>working</strong>. I was surprised to find that the majority of code that claimed to solve the problem… does not. At least not for the cases I had tests for. Meaning, that, sorry to say, it does not work at all. So I had to <a href="http://algorithms.tutorialhorizon.com/dynamic-programming-minimum-coin-change-problem/" target="_blank">search</a> <a href="http://www.geeksforgeeks.org/find-minimum-number-of-coins-that-make-a-change/" target="_blank">hard</a> to find something <a href="http://interactivepython.org/courselib/static/pythonds/Recursion/DynamicProgramming.html" target="_blank">decent</a> and then “<em>trans-compile</em>” it to C# and then tweak it. And then refactor it to not use just numbers and arrays. And then…</p>
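<p>To give an idea of what I ended up with conceptually (on plain ints and arrays, not the richer types NMoneys actually exposes), a minimal sketch of the dynamic programming solution:</p>
<pre><code class="language-csharp">using System;

public static class MinChange
{
    // dynamic programming "minimum coin change": best[t] holds the fewest
    // coins needed to change the sub-amount t, built bottom-up
    public static int MinCoins(int amount, int[] denominations)
    {
        int[] best = new int[amount + 1];
        for (int t = 1; t <= amount; t++)
        {
            best[t] = int.MaxValue;
            foreach (int coin in denominations)
            {
                if (coin <= t && best[t - coin] != int.MaxValue)
                {
                    best[t] = Math.Min(best[t], best[t - coin] + 1);
                }
            }
        }
        return best[amount]; // int.MaxValue when no change can be made
    }
}
</code></pre>
<p>For a non-canonical denomination system such as {1, 3, 4}, changing 6 shows why greedy is not enough: greedy picks 4+1+1 (three coins), while the table above finds 3+3 (two).</p>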
<h3>Get Over It Already</h3><p>But it was fun. And frustrating. I felt so bad that I was not able to come up with those clever algorithms myself. I felt I should be able to do it. I have been trained for it. It turns out I could not. A lot of people would not admit it, but I will.<br>I suck at algorithmia. Big-freaking-time. I felt envious of the guys discussing improved data structures, mathematical proofs of something not working, but I also felt so useless. Terribly and utterly useless. Total disgrace of a programmer.</p><p>It was a humbling experience to realize how badly you ignore things so close to what you are paid for.<br>The blast of getting it done in the end does not quite erase that feeling. The feeling of “it should not have been that hard”.</p><h2>The future</h2><p>In the end I solved a couple of simple problems, but there is so much more to explore in that area. I would really, really like the community to help with this one. We have a base around which to grow features. Young programmers, students,… could give a hand with something that relates to their academic background, making their contribution very valuable.</p><p>I will create a bunch of <a href="https://github.com/dgg/nmoneys/labels/help%20wanted" target="_blank">issues</a> with new features that I hope someone can help with.<br>If you know someone brainy in algorithmia with a desire to help, please let them know there is something cool to do.</p>Daniel Gonzálezhttp://www.blogger.com/profile/13468563783321963413noreply@blogger.com0tag:blogger.com,1999:blog-2779955313707490982.post-32721913032119580492017-05-29T10:04:00.001+02:002017-05-29T10:05:48.776+02:00Utility Openness<p>Under this very weird title comes an important piece of news: the <a href="https://vertica.dk/">company I work for</a> (proxying through me -or the other way around-) has released a new Open Source library: <a href="https://github.com/vertica-as/Vertica.Utilities">Vertica.Utilities</a>.</p><p>And it is worth checking.<br></p><a name='more'></a><br><br><p></p><h2>A bit of background</h2><p>This is not a new library. Not at all. It is something that has accompanied me and most of the projects within Vertica for a number of years.</p><p>The majority of developers have a toolbelt of utilities that they are comfortable with, that improves their productivity and follows their style. I do too, and I was happy to maintain it and share it with all my colleagues. But now it is time to share it with everyone else.</p><p>The seed for the library comes from way before I joined Vertica, where I used some helpers to create valid <a href="https://en.wikipedia.org/wiki/Query_string">query strings</a>.</p><p>At that time I was heavy into learning about unit testing and a library of utilities served me perfectly to sharpen my skills, and it grew and grew as I found pieces of reusable code within my day to day work and my sessions of Internet browsing and sample hoarding. I became sort of a magpie, fetching interesting code that I could improve and add tests to.</p><p>All of a sudden it became a useful, interesting collection of extensions, pattern implementations, utility classes and value objects that, continuously integrated and released to our internal NuGet repository, could be used by all developers within our company.</p><p>After a couple of conversations in the past, this year seemed like the best chance to open source it. And there it goes.</p><h2>Anatomy of a treasure hoard</h2><p><em>Vertica.Utilities</em> was born as a .NET Framework 3.5 class library.
It went through a major make-up operation when I migrated it to a .NET Framework 4.0 library and now it is a .NET Standard library that targets both .NET Framework 4.5 and .NET Standard 1.5 with minimal dependencies (none “external” as in not from System.*).</p><p>It is a single, multi-targeting project developed in Visual Studio 2017 but perfectly usable with the .NET Core SDK toolset. The fact that it also targets a “classic” Framework for compatibility reasons makes it unfit for multi-platform building. One can compile it and use it, of course, on any platform, but, in order to create the package, one has to have a Windows machine.</p><p>It is hosted on <a href="https://github.com/vertica-as/Vertica.Utilities">GitHub</a> (what isn’t nowadays?), uses <a href="http://www.nunit.org/">NUnit</a> to be tested and leans on <code>dotnet</code> for all its (minimal) build scripting, and it is continuously built by <a href="https://ci.appveyor.com/project/VerticaAS/vertica-utilities">Appveyor</a>, its test coverage published to <a href="https://codecov.io/gh/vertica-as/Vertica.Utilities">Codecov</a> and deployed as a <a href="https://www.nuget.org/packages/Vertica.Utilities/">Nuget Package</a>.</p><h2>Brighter future</h2><p>Utility libraries are a dime a dozen and plentiful. But this one has served me well, reflects my style and I hope it can help someone else, even if only as a learning tool.</p><p>It would be a blast if we could get contributions. So there goes the dare. Time to pick up the glove.</p>Daniel Gonzálezhttp://www.blogger.com/profile/13468563783321963413noreply@blogger.com0tag:blogger.com,1999:blog-2779955313707490982.post-83604453876959037132017-05-16T11:36:00.001+02:002017-05-16T11:36:41.080+02:00Migrating libraries to .NET Core. Post-mortem 1
<p>I recently went about the task of migrating most of my OSS libraries to the "new" .NET Core platform.</p>
<p>I would like to share with others some of the choices I made and the motivations I had. Besides, I also want to reflect about the pains and gains of doing so.</p>
<a name='more'></a>
<p>There are many worthy articles of people that have done it before me, at much earlier stages of the platform. But I'll write one myself because:</p>
<ol type="a">
<li>Some of those valuable resources are from brave people that worked when the JSON project format was a thing, whereas I have done it in the bleaker days of the XML project format</li>
<li>I have not-so-unique scenarios that are useful to me and I did not find anywhere else</li>
<li>It's my blog and there is no such thing as a "find duplicated topics and delete" bot in the Internet (yet)</li>
</ol>
<h2>Obligatory mention to tooling</h2>
<p>I have to admit this is my second round to porting those libraries to support <em>.NET Core</em>.<br>
The first round was with Beta versions of the tooling (but RTM versions of the runtime) and it was anything but pleasant. I naively supposed that with RTM tooling it was going to be much easier, but it was not.</p>
<p>I gave up on the otherwise wonderful <em>Visual Studio Code</em> to make the newer <em>Visual Studio 2017</em> my weapon of choice.<br>
It is waaay better than the experience I had, but still not the same as "classic" .NET. Half the blame has to lie on <em>Resharper</em> acting super weird in a mixed environment of full .NET Framework and .NET Core.</p>
<p>Other than <em>Visual Studio 2017</em>, or more accurately, inside of, a tool to run before embarking the migration is the <a href="https://marketplace.visualstudio.com/items?itemName=ConnieYau.NETPortabilityAnalyzer" target="_blank">.NET Portability Analyzer</a>. This <a href="https://github.com/Microsoft/dotnet-apiport/blob/master/docs/VSExtension/README.md" target="_blank">Visual Studio extension</a> (there is a <a href="https://github.com/Microsoft/dotnet-apiport/blob/master/docs/Console/README.md" target="_blank">console version</a> for those that do not dig Visual Studio anymore) would produce a report about how portable your code is to several platforms in terms of API usage and would suggest you possible workarounds.<br>
It is basically a prettier output than changing the target of your project, hit compile and fall into despair.</p>
<p>The other must-have "tool" is, ironically, one that you cannot <em>have</em>, but the most useful website you can imagine for the migration task: <a href="http://packagesearch.azurewebsites.net/" target="_blank">a reverse package search</a>.<br>
Instead of finding packages, one types an API (a class, interface, method,...) and it will display a list of packages that contain that member.<br>
Imagine you get a compilation error in the library that is being ported, stating that <code>IConvertible</code> is not found. That means that member is not part of the core library, but may come from a package. Type the name of the member in the site and you'll know that from netstandard1.3 and upwards, if you reference the <em>System.Runtime</em> package you are going to be able to use that API.</p>
<p><img title="ReversePackageSearch" alt="ReversePackageSearch" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGkxqRrG120PIgAMv3p6ZIGavnpUWbZ3DZqN7WheAN9RaUCfy3wf4nnTCfekv9unvIVn4J3O55RmBZjKbBMzPzwYwAIHsW_AGeGOnZseiKTvsY57bVS3dbjbU915mQ-aqottdgleVOFpk2/?imgmax=800"></p>
<p>Other than those tools, it is "<em>just</em>" old boring compile, fix and repeat kind of work. But before it comes to that, there are some hard choices to be made.</p>
<h2>A choice is a future murderer</h2>
<p>There is a very <a href="https://docs.microsoft.com/en-us/dotnet/articles/core/porting/project-structure" target="_blank">interesting document</a> from Microsoft themselves that explains some of the various possibilities that you can take when you decide to support the full .NET Framework on top of the new .NET Standard/Core.<br>
From the document, I ended up taking the project replacement route in one of my projects but I took another approach not mentioned in the document for two of the others.</p>
<h3>First choice. .NET Core or .NET Standard?</h3>
<p>Do you want your library to have further reach by supporting the most platforms?</p>
<p>Sounds like a good idea, but many platforms mean a lower API surface, since not all platforms are equally capable.</p>
<p>In my case, this was the easiest decision: <strong>.NET Standard</strong> will be. Let's give other platforms the ability to use my code.</p>
<h3>Second choice. Ready to part with functionality?</h3>
<p>In any case, <em>.NET Core</em> or <em>.NET Standard</em> offer a trimmed API surface. It may be impossible to retain all your features, because the API might simply not be there.<br>
Keeping as much as possible is also feasible once you come to terms with the fact that you are going to bring a lot of <em>NuGet</em> packages with your library.<br>
Another case is depending on a project that does not have a portable version. And there are many.</p>
<p>If one is happy to give features up for the sake of portability, replacing the project and then either rewriting the incompatible code or deleting it is a way to go.<br>
It is easy to maintain a single project and the build process becomes way easier.</p>
<p>If giving up features makes you feel a little sick, then multi-targeting (that is, different versions of your library for multiple platforms) is your way to go.<br>
Once multi-targeting, one needs to decide what to do with the unsupported features:</p>
<ul>
<li>have a base package that targets as many platforms as possible and release the extra features as add-ons of some sort that target only the supported platforms. That is not always feasible, but when it is, it sounds like a good trade off to justify the extra project.</li>
<li>have a single package with two versions of the library: the .NET Framework assembly gets all the goodies and the portable assembly gets a watered-down set of features</li>
</ul>
<p>But multi-targeting can be achieved in several different ways, each with its unique trade-offs.</p>
<h4>Single-project multi-targeting</h4>
<p>You are likely to migrate your project to the new format and then, perform some conditional compilation tricks to remove the unsupported parts from the un-supportive platforms, while keeping them in the supported ones.<br>
Conditional tricks include:</p>
<ul>
<li>conditional compilation directives. Meaning sprinkling your code with <code>#if</code>, <code>#else</code> in your source code files. Definitely feasible, but noisy and on the verge of becoming a maintainability liability.</li>
<li>isolate the un-supported parts in their own files (partial classes are good tools when applicable) and use project conditional excludes to not compile those files on certain platforms, as sketched right after this list.</li>
</ul>
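<p>The second flavor looks roughly like this in the new project format (the <em>*.net.cs</em> naming pattern and the target framework moniker are illustrative):</p>
<pre><code class="language-xml"><!-- compile framework-only files everywhere except the netstandard target -->
<ItemGroup Condition="'$(TargetFramework)' == 'netstandard1.3'">
  <Compile Remove="**\*.net.cs" />
</ItemGroup>
</code></pre>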
<h4>Multi-project multi-targeting</h4>
<p>The old project stays the same and there is an extra project in order to target the newer platforms.</p>
<p>The aforementioned document points out this option: having a new project in another folder. One can do file linking to bring source code to the new project but it is definitely weird. And we still need to do the conditional tricks to bring the right files.</p>
<p>What the document does not point out is that you can have the projects in the same folder. That way, less file inclusion magic needs to be done and tooling other than Visual Studio can be used.</p>
<h2>But What did I do?</h2>
<p>In the end, I ended up doing different things for every project.</p>
<p>I'll go through the details in another post as I feel this is already getting too long.<br>
Keep an eye on it.</p>Daniel Gonzálezhttp://www.blogger.com/profile/13468563783321963413noreply@blogger.com0tag:blogger.com,1999:blog-2779955313707490982.post-48277764820189990852017-05-11T10:37:00.001+02:002017-05-11T10:37:00.902+02:00Matching Standard<p>Behind this somewhat catchy title lies a new version (4.1) of <a href="https://www.nuget.org/packages/Testing.Commons.NUnit/" target="_blank">Testing.Commons.NUnit</a> and no golden rule about how to discard potential dates in the digital era (would be rubbish advice anyway).</p><p>It is not a major version, but I believe it is an important one since there is a little history behind it.</p>
<a name='more'></a>
<h2>On the shoulders of middle-sized giants</h2>
<p>Quite a few years ago, I <a href="https://dgondotnet.blogspot.dk/2011/10/testing-with-expectedobjects.html" target="_blank">wrote</a> about how <a href="https://github.com/derekgreer/expectedObjects" target="_blank">ExpectedObjecs</a> can be used to aid writing assertions that check just the important part of a complex object or hierarchy.<br>Not so long ago, I also wrote about the <a href="https://dgondotnet.blogspot.dk/2017/04/stepping-forward.html" target="_blank">migration of some of my OSS projects</a> to support .NET Standard.</p><p>I briefly mentioned about the <a href="https://github.com/dgg/testing-commons/wiki/Changelog#2000-4000" target="_blank">sacrifices</a> (not many, fortunately enough) needed to get a standard version out of the door.<br>In this particular case, the sacrifice for users not targeting the full .NET framework, was not being able to use <code>MatchingConstraint</code> because this piece of functionality relies on the aforementioned <em>ExpectedObjects</em> library and that library <strong>was</strong> not targeting others than the full .NET framework and, therefore, I could not depend on it for the <em>netstandard</em> version of my library.</p><h3>Missing no more</h3><p>But, since Derek worked the <a href="https://github.com/derekgreer/expectedObjects/issues/10" target="_blank">issue</a> I got the needed green light to re-include such feature and painlessly release another version of the <em>Testing.Commons.NUnit</em> so that everyone can match objects at their fully earned will, regardless of the platform they are running into.</p><p>Match with care, folks.</p>Daniel Gonzálezhttp://www.blogger.com/profile/13468563783321963413noreply@blogger.com0tag:blogger.com,1999:blog-2779955313707490982.post-650140883393075032017-05-05T14:32:00.001+02:002017-05-05T14:32:59.545+02:00For the fun of it<p>Because there is little justification besides learning to spend the time upgrading SharpRomans to .NET core.</p><p>But learning is fun and quite enough for me.</p> <a name='more'></a> <h2>Curious?</h2><p>If you ever had the need of working with roman numerals and enjoy the type system, you can go to <a href="https://www.nuget.org/packages/SharpRomans/" target="_blank">NuGet</a> and get it:</p><p><img src="https://github.com/dgg/SharpRomans/wiki/img/SharpRomans.nuget.png"></p>
<p>The package now targets <strong>.NET Standard 1.1</strong> (before it was a portable library) so people in challenged runtimes (Silverlight people in all their decadent flavors) won’t be able to get it (I don’t think they complain anymore <img class="wlEmoticon wlEmoticon-smilewithtongueout" alt="Smile with tongue out" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNW_IUh_2zvY5R0zZC9xJw_nhsnjrQwwYNtOXqm3e0ny4PADtfTKE_X3y5n3cgSEhnqhgmikjh5IkF139tQlNbvg_n1LeskGE8fTpN5LtUee0HmZvhW7jUMn2W1r3domnpQWLzBGhfVZIN/?imgmax=800">).</p><p>That means that the <code>IConvertible</code> features are out of the door to reach as many platforms as possible. And, besides, I do not think it ever made much sense anyway.</p><h2>Behind the scenes</h2><p>For such a small change (and a breaking one no less) there was a ton of soul breaking work to be done, mainly having to do with unsupported tooling.</p><h3>Native Taste</h3><p>Whereas in other OSS of mine (<a href="https://github.com/dgg/nmoneys" target="_blank">NMoneys</a> and <a href="https://github.com/dgg/testing-commons" target="_blank">Testing.Commons</a>) I took a dual-project approach to maintain the features for the users targeting the older frameworks without having to raise the minimum framework version for little reason, for <em>SharpRomans</em> (harsh as it sounds) I felt no remorse about leaving very few users behind. So, instead, I maintain a single project targeting the <em>netstandard</em> world.</p><p>That means leaving behind developers (if any) that have not jumped on the wagon of <em>Visual Studio 2017</em> or the <em>.NET Core SDK</em> world. But again, zero remorse for the even smaller user base.</p><p>Going native has a nicer side-effect: all scripting done in the build script is performed via <code>dotnet</code> commands and, let me tell you, that cut the size of the build script from 98 lines to 57. Those are huge gains for such a simple script (considering that the newer one is more full featured than the older).</p><h3>Vintage scent</h3><p>The fact that I went all “netcore native” and <em>NUnit</em> still did not support the <code>dotnet test</code> command (it seems it does <a href="http://www.alteridem.net/2017/05/04/test-net-core-nunit-vs2017/">now</a> <img class="wlEmoticon wlEmoticon-sadsmile" alt="Sad smile" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgojiJJZp7vd-lziqg6xIh9eDzmiLBYoRpkGxXJ9GE746ABmymK0mm-lmS3fhwT7fbQy2VS27Tq6mW9XDiAlmf9PKgVlHeWDqeB0suKYmbiNJ4ARgZ0RtI5iUrTPFh3N0eYjrxsgFpmtTLc/?imgmax=800">) made me look away from <em>NUnit</em> towards the only other testing framework that was supported (I refuse to use <a href="https://docs.microsoft.com/en-us/dotnet/articles/core/testing/unit-testing-with-mstest" target="_blank">MSTest</a>, call me a bigot): <a href="https://github.com/xunit/xunit" target="_blank">xUnit</a>.</p><p>A lot of people like the simplicity of the framework. For me, removing a handful of attributes while having an awful, terribly dated default assertion library was not a step forward at all.<br>I know that one can run whichever assertion library one feels like (<a href="https://github.com/shouldly/shouldly" target="_blank">Shouldly</a>, <a href="https://github.com/fluentassertions/fluentassertions" target="_blank">Fluent Assertions</a>,…). Hell, I could have even used NUnit’s sweet <a href="https://github.com/nunit/docs/wiki/Constraint-Model" target="_blank">constraint model</a> for the assertions only.
But I did not feel like bringing another dependency.</p><p>What I liked the most, however, was not its simplicity, but the somewhat complex <a href="https://xunit.github.io/docs/shared-context.html#collection-fixture">collection fixtures</a> feature that allowed me to run across-fixtures initialization code before any test was run.<br>That and the extensible trait model that helped not only to categorize tests, but also enabled some interesting reflection scenarios for documentation generation.</p>
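<p>For those unfamiliar with <em>xUnit</em>, a minimal sketch of a collection fixture; all type and collection names are made up for illustration:</p>
<pre><code class="language-csharp">using Xunit;

// instantiated once before any test in the collection runs, disposed after the last one
public class SharedContext
{
    public SharedContext() { /* expensive, across-fixtures initialization */ }
}

[CollectionDefinition("shared context")]
public class SharedContextCollection : ICollectionFixture<SharedContext> { }

[Collection("shared context")]
public class RomanFigureTests
{
    private readonly SharedContext _context;

    public RomanFigureTests(SharedContext context)
    {
        _context = context; // the same instance is shared by every test class in the collection
    }
}
</code></pre>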
<h3>BDD refresh</h3><p>I have written about BDD <a href="https://dgondotnet.blogspot.dk/2014/09/bdd-mini-match-up.html" target="_blank">before</a>. And then, I mentioned <a href="https://github.com/TestStack/TestStack.BDDfy" target="_blank">BDDfy</a>, which happens to support .NET Core, unlike the hardly supported <a href="https://www.nuget.org/packages/StoryQ/" target="_blank">StoryQ</a>.</p><p>It turns out that migrating all the scenarios from one framework to the other was not at all an easy task. It required a crapload of boring text search&replace. It now dawns on me that I might have made it more interesting by using <em>Roslyn</em>, but I do not think it would have saved me any time (but maybe the pain might have been spared).</p><p>I am cool enough with the framework and thrilled about the out-of-the-box markdown support and how easy it is to extend. I took advantage of that fact when I created a processor that, instead of writing one big markdown file with all the stories, is able to write one file per story, so that I can easily change the <a href="https://github.com/dgg/SharpRomans/wiki/Specifications" target="_blank">specification</a> that lives in the project’s Wiki.</p><p>Two things I would change (or even better, contribute to the project):</p><ol><li>fix the documentation site. A pity that a well documented project seems like it is not anymore because of broken links and missing images</li><li>implement a better way to inject step arguments in the description of the step. Right <a href="http://www.mehdi-khalili.com/bddify-in-action/fluent-api-input-parameters" target="_blank">now</a> one has to rewrite the step narrative (injecting the parameter placeholders). This way works fine, except that one has to repeat oneself: name the method and rewrite the narrative. A better approach was taken by <em>StoryQ</em>, which performed a parameter substitution of tokens in the method name.</li></ol><h3>Another way of scripting</h3><p>I recently learnt about <a href="https://github.com/nightroman/Invoke-Build" target="_blank">Invoke-Build</a> as an alternative to the venerable <a href="https://github.com/psake/psake" target="_blank">PSake</a>. So I thought I would give it a shot since I was changing the majority of the build script anyway.</p><p>And to be perfectly honest, I did not even use the feature I thought I was going to like more: detection of the correct version of <a href="https://github.com/nightroman/Invoke-Build/wiki/Resolve-MSBuild" target="_blank">MSBuild</a> (because I am using <code>dotnet</code> instead and expect it to be in the path). So, besides some nifty <a href="https://github.com/nightroman/Invoke-Build/wiki/Script-Tutorial#jobs-are-references-and-actions" target="_blank">composition of tasks</a>, I do not think I am getting anything from it that I would not get from PSake, given the complexity of my scripts anyway.</p><h2>Was it worth it?</h2><p>Surely it was. And very educative for <em>.NET Core</em> concepts such as multi-targeting and learning about <em>xUnit</em>.</p><p>But to be honest, I would not like to go through the hell of migrating all BDD scenarios again.</p>Daniel Gonzálezhttp://www.blogger.com/profile/13468563783321963413noreply@blogger.com0tag:blogger.com,1999:blog-2779955313707490982.post-11445352365042460222017-04-25T09:28:00.001+02:002017-04-25T09:28:42.587+02:00Packaged Madness
<p>I recently went through the recurring software refresh of my development laptop. I usually do a full cleanup (OS included) and this time was even better thanks to the "<a href="https://www.howtogeek.com/132428/everything-you-need-to-know-about-refreshing-and-resetting-your-windows-8-pc/" target="_blank">Reset this PC</a>" feature.</p>
<a name='more'></a>
<p>I wanted to do it better this time in terms of installing all those programs you always carry over your next installation and set my eyes on using <a href="https://chocolatey.org/" target="_blank">Chocolatey</a> for installing and maintaining those.</p>
<p>However, this post is not about how easy it is, or about collecting "about time, dude"s, or about ranting (it kind of is). It is about choices. And not the lack of them. Quite the contrary. It is (<a href="https://dgondotnet.blogspot.dk/2017/02/not-last-console-application-you-would.html" target="_blank">again</a>) about the sprouting of similar solutions to a problem that should have been a solved problem ages ago.</p>
<h2>I just want to install some stuff. Where do I start?</h2>
<p>If you are in the Windows arena you have been laughed at many times by those Linux gurus (and, painfully, noobs too) praising the glory of <a href="https://wiki.debian.org/Apt" target="_blank">APT</a> (Debian/Ubuntu), <a href="http://yum.baseurl.org/" target="_blank">Yum</a> (Red Hat/CentOS), <a href="https://wiki.gentoo.org/wiki/Portage" target="_blank">Portage</a> (Gentoo), <a href="https://en.opensuse.org/YaST" target="_blank">YaST</a> (SUSE/openSUSE) or whichever the hell distribution they happen to be running that day.</p>
<p>You might also have experienced a smirk from the MacOS guys when they proudly present you with <a href="https://brew.sh/" target="_blank">HomeBrew</a> and go on firing up a fullscreen terminal with a three-line-long prompt in full color glory.</p>
<p>You smile politely at their beards (those are inevitably worn by users of such platforms, regardless of their age or gender), fire up the "Programs and Features" GUI and mumble about how <a href="https://technet.microsoft.com/en-us/library/cc978328.aspx" target="_blank">MSI</a> does all that and more (or so you've been promised) and why you do not need a stinking package repository when you have the <em>InterWebs</em> at the tip of your cursor.</p>
<p>If you are resilient to mockery you can fire back with that monochrome <em>CMD</em> instance and type <code>choco</code> (ignore the chuckles, since... well, beer trumps chocolate any time and typing <code>chocolatey</code> would only make things worse) and look pretty proud about how well it does the job (ignore the chuckles again).<br>
You can even stretch your suspenders (because you are wearing those, right?) and boast: "Microsoft even went one step further and presented to the world a package manager manager” (or <a href="https://github.com/OneGet/oneget#what-is-packagemanagement-oneget" target="_blank"><em>unified interface to package management</em></a> if you are in a sales pitch). “You'll be hearing from <a href="https://github.com/OneGet/oneget#what-is-packagemanagement-oneget" target="_blank">PackageManagement</a> a lot in the future”. Probably they won't.</p>
<p>But, but... Installing I can handle. Keeping up to date is the hard one.<br>
Sure, because I forgot to mention that regardless of whether a program was installed through a package or not, modern software tends to update itself. Some of it.<br>
And those tend to do it in their own unique and "superior" way:</p>
<ul>
<li><a href="https://www.techopedia.com/definition/31094/evergreen-browser" target="_blank">ever-green browsers</a> do it their way</li>
<li><a href="https://en.wikipedia.org/wiki/ClickOnce" target="_blank">ClickOnce</a> applications (whoever still uses them)</li>
<li><a href="https://github.com/Squirrel" target="_blank">Squirrel</a> applications ("ClickOnce done right")</li>
<li>store apps: Windows, Mac, Android, Apple Store,... do their thing as well</li>
</ul>
<h2>I am a developer, already too many options</h2>
<p>If you are a developer of some sort, chances are you are already using some sort of repository to bring down dependencies to your software. Most likely those repositories use some sort of command line client because, well, who doesn't like the sound of pounding keys or the feel of scripting?</p>
<p>Chances are those repositories are focused on your language/platform (my shallow knowledge of every platform out there is limited, so pardon my ignorance and use the comment sections for my and other people's enlightenment):</p>
<ul>
<li><a href="https://maven.apache.org/" target="_blank">Maven</a> for Java developers</li>
<li><a href="https://rubygems.org/" target="_blank">RubyGems</a> for Ruby stuff</li>
<li><a href="https://pip.pypa.io/" target="_blank">Pip</a> for Python people</li>
<li><a href="https://www.nuget.org/" target="_blank">NuGet</a> for .NET (Microsoft development platform) thingies</li>
<li><a href="https://www.npmjs.com/" target="_blank">Npm</a> for the Javascript minded (and not only for the server-side, event-looping loving ones it seems)</li>
<li><a href="https://bower.io/" target="_blank">Bower</a> for The Web (although it seems in its way out)</li>
<li><em>Powershell</em> humans (let's tag them as developers as well, because they kind of are, in their own <em>scripty</em> way) now have <a href="https://github.com/PowerShell/PowerShellGet" target="_blank">PowerShellGet</a> to pull down useful modules</li>
</ul>
<p>I am sure there is one for every major (and medium and possibly minor) developer ecosystem, but forgive me for not listing every one of them.
I will solely point out those that I have some sort of experience with.</p>
<p>And things get even more confusing because some of the package managers host programs and utilities that are useful in their own right, besides libraries, modules, etc... For example, a lot of interesting console apps are distributed via NPM or as gems.<br>
To include a bit of recursion and nesting, since some managers target different scenarios, it is not uncommon to see package managers being installed from other package managers.</p>
<h2>Already too much</h2>
<p>Barely madness.<br>
I am sure a human can take a bit more, so, let's talk about newcomers that are likely to gain more and more popularity:</p>
<ul>
<li><em>Bower</em>'s light might be fading, but here comes <a href="https://yarnpkg.com/" target="_blank">yarn</a> to save the web or <a href="http://duojs.org/" target="_blank">Duo</a> to re-save it</li>
<li><em>NuGet</em> has big guns behind, but some people swear by <a href="https://fsprojects.github.io/Paket/" target="_blank">Paket</a></li>
<li><em>Chocolatey</em> has come a long way, but <a href="http://scoop.sh/" target="_blank">Scoop</a> is here to claim its share</li>
<li>...</li>
</ul>
<p>Let's face it: there are a lot of different managers and tools. Too many. And they interbreed. And each one of them "believes" it is better than the rest of their competition.</p>
<h3>As a consumer</h3>
<p>Some consumers have a challenge: install and maintain software and libraries.<br>
They may not even be able to, because of company policies whose sole purpose is to prevent productivity (non-professionals won't be protected from themselves anyway).<br>
But before those lucky ones that are able to can even start, they need to pull down three or four package managers that enable them to install <strong>some</strong> of the software they need and feel comfortable with. And learn about how they work.</p>
<p>But to pick that handful one has to suffer the anguish of picking right:<br>
am I going to miss a key application if I choose Scoop over Chocolatey? <br>
In which way is Paket better than NuGet? I have never suffered any of the terrible damages the latter seems to inflict on its users.<br>
Some months ago NPM was the shit, but now? <a href="https://docs.npmjs.com/how-npm-works/npm3-nondet" target="_blank">Non-determinism</a> does not sound appealing to me.</p>
<p>And then some programs will update themselves, overtaking the package management system, and when an upgrade of the package is performed, nasty versioning conflicts scare the hell out of the one thinking he or she made a smart move embracing the console.</p>
<p>It is a messy and painful experience and I am starting to believe package managers have not made a consumer's life noticeably easier.<br>
Alright, I am exaggerating, because at least for developers it makes a notable difference to have one instead of zero. But having multiple? Breeding more is not helping.</p>
<h3>As an author</h3>
<p>Consumers are screwed, but authors that want to have happy consumers are no better off. They will have to deal with several sizable tasks:</p>
<ul>
<li>creating kick-ass software</li>
<li>picking the flavors of the month to get their software reaching as many people as possible</li>
<li>learning about the inner workings of all of them because, after all, you do not want your consumers to have a bad experience and take the blame</li>
<li>helping volunteers that create packages for their creations out of kindness or delivery platform promotion</li>
<li>coping with those that need one more package format or else they will spend their money and/or time somewhere else</li>
</ul>
<p>We are nice people but the world is against us :-p</p>
<h2>Light at the end of the tunnel?</h2>
<p>Honestly, I cannot see one, but quite a few competing ones and I am scared of them all.</p>
<p>Will a super directory help? Shall a superior authority dictate which one is the lucky survivor? Shall we let <a href="http://knowyourmeme.com/memes/life-uh-finds-a-way" target="_blank">life, uh... find a way</a>?</p>
<p>I am not super-smart, but I have this itch that the Cambrian tooling explosion is not as helpful as it may seem. Or maybe I am just old and grumpy and angry for finding it hard to keep up.</p>
<p>Now, if you excuse me, I need to look for yet another "package manager done right", because it is time to update some programs, just in case one of them is selling my soul to foxy <a href="https://theinfosphere.org/Scammer_Aliens" target="_blank">alien scammers</a>.</p>Daniel Gonzálezhttp://www.blogger.com/profile/13468563783321963413noreply@blogger.com0tag:blogger.com,1999:blog-2779955313707490982.post-71731472180929577322017-04-24T10:31:00.001+02:002017-04-24T10:31:54.076+02:00Stepping Forward<p>People had been <a href="https://github.com/dgg/nmoneys/issues/44">asking</a> for it and now it has finally come: .NET Core support for <a href="https://github.com/dgg/nmoneys">NMoneys</a> and <a href="https://github.com/dgg/testing-commons">Testing.Commons</a>.</p><p>After some disappointing trials with beta tooling and after the <a href="https://en.wikipedia.org/wiki/Microsoft_Visual_Studio#2017">release</a> of Visual Studio 2017, I have been able to provide support for those of you that are way ahead of the curve and need currencies and help with tests.</p>
<a name='more'></a>
<h2>Testing.Commons</h2><p>Being released slightly earlier, due to <em>NMoneys</em> taking a dependency on it, I was able to provide a .NET Standard (<em>netstandard</em>) library for both <a href="https://github.com/dgg/testing-commons#testingcommons">Testing.Commons</a> and <a href="https://github.com/dgg/testing-commons#testingcommonsnunit">Testing.Commons.NUnit</a>.</p><p>Support for other platforms means some <a href="https://github.com/dgg/testing-commons/wiki/Changelog#2000-4000">sacrifices</a> in terms of functionality due to the smaller API surface of other platforms, but I feel that will not be a big problem.<br>Besides, it is not that the features are gone for everyone. It is just that those being “forced” to use the <em>netstandard</em> version won’t be able to see those features, whereas those using the full .NET Framework (<em>net</em>) will continue seeing those features.</p><p>Go grab them from NuGet and let me know if you find any problems (as I do not use <em>netcore</em> very much myself yet).</p><p><img alt="Testing.Commons" src="https://github.com/dgg/testing-commons/wiki/img/Testing.Commons.NuGet.png">
<img alt="Testing.Commons.NUnit" src="https://github.com/dgg/testing-commons/wiki/img/Testing.Commons.NUnit.NuGet.png"></p>
<h2>NMoneys</h2><p>Much like Testing.Commons, I have included a netstandard-compatible binary for both <a href="https://github.com/dgg/nmoneys">NMoneys</a> and <a href="https://github.com/dgg/nmoneys/wiki/Exchange">NMoneys.Exchange</a>, with some of their siblings maybe coming later.</p><p>Likewise, there is some <a href="https://github.com/dgg/nmoneys/wiki/Changelog#5000-and-exchange-4000">stuff</a> missing from the <em>netstandard</em> binary, but I believe it is not something that will bother many people. And again, those features are not gone forever (yet), but just removed for those not using the full framework.</p>
<p>Fetch the packages from <em>Nuget</em>:</p>
<p>
<img alt="NMoneys" src="https://github.com/dgg/nmoneys/wiki/img/NMoneys_NuGet.png">
<img alt="NMoneys.Exchange" src="https://github.com/dgg/nmoneys/wiki/img/NMoneysExchange_NuGet.png"></p>Daniel Gonzálezhttp://www.blogger.com/profile/13468563783321963413noreply@blogger.com0tag:blogger.com,1999:blog-2779955313707490982.post-88766985324493324682017-04-05T14:25:00.001+02:002017-04-05T14:25:00.316+02:00Unluckiest man on Gadget-Earth
<p>I admit I have been moderately lucky with gadgets and tech gear through all my life. Taking good care of them seems to help.<br/>
I have only dropped a phone and I have only serviced a laptop (in the 90s) and a phone throughout all these years.</p>
<a name='more'></a>
<p>But there is always this exception. This single item for which you should not take a beating because you have a record to back you up.</p>
<h2>Fitness bands</h2>
<p>I am no fitness freak myself and since I am also on the cheap side I started with the cheapest band one can get from a known brand: the first version of the <a href="http://www.mi.com/en/miband/" target="_blank">Xiaomi MiBand</a> to check whether I had any use for such a device.</p>
<p>It turns out I did. It nagged me to get off my ass more often, it encouraged me to get my daily quota of steps and I had an "excuse" for a nap when I showed my nagging partner that I had "only" slept 6 hours, but they were of poor quality.<br/>
It broke within a year-ish, but one failure is not enough to call oneself unlucky (and I am not cheap enough to bother disputing a Chinese warranty over 14 bucks).</p>
<p>That device "proved" I have a "use" for a fitness band. Note the profusion of quotes.<br/>
And then I laid my eyes on a much more expensive and feature-rich device. The <a href="https://en.wikipedia.org/wiki/Microsoft_Band_2" target="_blank">Microsoft Band 2</a>.</p>
<p>After much review reading I took advantage of a visit to London to get myself £200 worth of sensors and access to a nice mobile app and health portal.<br/>
Despite the awkward design (that bulgy battery) I have been a really happy user, my favorite features being the GPS tracking when cycling and, above all, the guided workouts.<br/></p>
<h2>Gone badly</h2>
<p>Device #1. Three months in, I discovered the sturdy rubber band was splitting close to the screen. It turns out <a href="http://forums.windowscentral.com/microsoft-band-2/407353-rubber-strap-splitting-band2.html" target="_blank">I was not the only one</a>.</p>
<p>Device #2. Next device's band also split. Different place, though, this time closer to the clasp.</p>
<p>Device #3. The screen started acting weird, with a horizontal white line, and then stopped responding altogether.</p>
<p>Device #4. The cover of the Galvanic Skin Response sensors fell off. This time I got a lot of attention from Customer Care, as I got a nasty rash without the covers.</p>
<p>Device #5. Battery life shortened (it could not register 5+ hours of exercise -snowboarding-) and then it stopped charging altogether.</p>
<p>I do not think I am forgetting any other failing device.</p>
<h2>An opportunity to excel</h2>
<p>For (almost) each device -more on this later-, the process of repair/return was as painless as I have ever experienced:<br/></p>
<ol>
<li>initiate a chat with a really pleasant support "technician" and explain the problem and provide return address information</li>
<li>get a free return label from UPS</li>
<li>pack the band and stick the return label</li>
<li>schedule a pick-up from UPS (or take it to a drop-center). I have done both, in Denmark and in Spain, and it works flawlessly</li>
<li>wait around one week for a UPS delivery guy to hand over a new band ready to fail again</li>
</ol>
<p>The process took around 10 days and was repeated 4 times. Only four, as the band has been discontinued, that is, not manufactured any more.<br/>
That means that my last broken device can't be replaced as there is no more stock.<br/>
Which, in turn, means I will be receiving a refund for my device.</p>
<p>Hats off to Microsoft. They manufactured a device with terrific potential and horrific construction. I would have run away from <em>any</em> Microsoft hardware device given the dreadful result with this one.<br/>
Instead, I am relatively happy with Microsoft and totally disappointed with the device. And all that because of the outstanding customer service. On a device that was only sold in a handful (literal) of countries.</p>
<p>One year plus worth of devices, basically for free. No wonder Microsoft is shutting down the product line.</p>Daniel Gonzálezhttp://www.blogger.com/profile/13468563783321963413noreply@blogger.com0tag:blogger.com,1999:blog-2779955313707490982.post-72015789472167993782017-04-04T09:53:00.001+02:002017-04-04T13:25:05.848+02:00SOLID principles are not WRONG<p>I am aware of the imposed necessity of grabbing attention with titles (especially for talks), but one also needs to be aware that titles are, sometimes, the one and only thing that sticks.<br>
And therein lies the need for them not to be untrue.</p>
<a name='more'></a>
<p>Some weeks ago I read a tweet from Rob Conery recommending a talk by Dan North that he attended at a conference.</p>
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">The best talk I saw at NDC London: <a href="https://t.co/TQJ6fF7CRM">https://t.co/TQJ6fF7CRM</a> from <a href="https://twitter.com/tastapod">@tastapod</a>. No video as it was <a href="https://twitter.com/PubConf">@pubconf</a>! This is why we need <a href="https://twitter.com/PubConf">@PubConf</a></p>— Rob Conery (@robconery) <a href="https://twitter.com/robconery/status/841368621217595393">March 13, 2017</a></blockquote>
<script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
<p>I have the utmost respect for these two individuals. They are respected professionals and, in my opinion, both brilliant speakers.<br>
And so I gave the <a href="https://speakerdeck.com/tastapod/why-every-element-of-solid-is-wrong" target="_blank">slide-deck</a> a good read.<br>
The reader might as well do the same. I'll be waiting.</p>
<h2>Is it the context?</h2>
<p>I have not been able to watch any videos of the talk, so there is a chance that I am getting it all wrong, but, you see, if one writes something and then says the exact opposite... what would be the use of a slide-deck that does not reflect what the author meant?<br>
So, from now on, I will be assuming the slide-deck is the foundation and the conveyor of the message of that talk.</p>
<h2>My bias</h2>
<p>I am a software engineer myself that takes software design very seriously, as I believe that design accounts for a big share of the success of a software system.<br>
As I have spent the majority of my career in a world of Objects, Object-Oriented Design is both part of my day-to-day job and one of the topics in computing that I enjoy the most.</p>
<p>I have also dug a bit deeper into the realms of what lies behind the <a href="https://en.wikipedia.org/wiki/SOLID_(object-oriented_design)" target="_blank">SOLID</a> acronym, so I consider myself mildly knowledgeable about the topic and, as a side note, a bit of a fan of such principles, having seen some of the benefits they bring to systems built following their ideas.</p>
<h2>Wrong They are NOT</h2>
<p>When I first read the title (and not the content) I was honestly expecting Dan to have seen the proverbial <em>absolute truth</em> of <a href="https://en.wikipedia.org/wiki/Functional_programming" target="_blank">Functional Programming</a> that a lot of software engineers seem to have seen lately, and I was expecting him to bash the principles for how useless they are in the functional paradigm and how superior the functional world is compared to the rancid world of objects, in which SOLID has some sort of relevance.<br>
Far from it, there is no mention in the slides to functional programming, so my first impression was utterly wrong.</p>
<p>After reading the whole content (twice) I honestly thought it was some sort of joke or sarcasm. Kind of twisted, but kind of funny at the same time.<br>
However, some of the points make really good sense, so I doubt all of it is a joke or sarcasm.</p>
<p>From the content of the slide-deck I will try to analyze some of the claims made for each of the principles:</p>
<h3>Single Responsibility Principle</h3>
<p>Dan calls this principle <em>pointlessly vague</em>, maybe due to the, let's admit it, somewhat vague definition of <em>responsibility</em>.</p>
<p>He questions our ability to predict what is going to change, because one definition of responsibility is "reason to change" and, since we do not know what might change, we cannot possibly keep that unknown count at one (or, at least, low).<br>
However, there are indeed ways to know, with higher certainty than guessing, what is likely to change. Two come to mind from my experience:</p>
<ul>
<li>whatever has changed in the past</li>
<li>whatever the client says won't change</li>
</ul>
<p>The first is used (or, at least, was) in very high-performance systems, such as micro-processor caches, so I guess it might be of some use.<br>
The second sounds more like a joke, but I have found it to be scarily accurate. Imagine your client tells you there is a hard invariant in the system you are building. It is the cornerstone of their business and strategy and it has to be maintained at all costs. It can't possibly change. No matter how hard you press your client to open up the possibility of that key invariant changing... It just won't. 100% guaranteed. That moment is your moment to nod politely and make a note to pass to your boss: "a fancy dinner says invariant X changes before phase 2. u r on?".</p>
<p>There are places in the system in which you are likely to sniff future changes. The closer to the eyes (visually), the higher the probability. Do we want to recklessly mix very volatile elements with some that are not? Likely not. Regardless of their simplicity or their low complexity.</p>
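<p>A made-up sketch of that separation (my illustration, nothing from Dan's deck): the volatile element, formatting, which sits close to the eyes, gets its own home away from the more stable querying:</p>
<pre><code>using System.Collections.Generic;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// stable-ish responsibility: how orders are fetched
public interface IOrderReader
{
    IEnumerable<Order> ReadPending();
}

// volatile responsibility: how orders are shown; it changes often,
// so it gets a reason to change of its own
public class OrderFormatter
{
    public string Format(Order order)
    {
        return string.Format("{0}: {1:C}", order.Id, order.Total);
    }
}
</code></pre>
<p>When the client asks for yet another way of displaying orders, only <code>OrderFormatter</code> moves.</p>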
<h3>Open-Closed Principle</h3>
<p>Dan writes off this principle for allowing unneeded code to exist in anticipation of a hypothetical future change. I cannot say he is wrong, but I still believe this principle is very useful. Let me tell you more.</p>
<p>Remember the heuristic I gave you about things bound to be changed? Well, one can take a gamble when designing the system, preparing it to be extended in an easier way when reality changes (and experience tells you it will likely change). It is not a particularly risky gamble, mind you, and one does not have to add a lot of scaffolding just in case. But getting ready for when the change comes is advisable.<br>
A more specific example would be having a chain of responsibility instead of a list of <em>if</em>s, as sketched below. Certainly, with two links (a single branch) it seems overkill. But knowing that it is two now and will grow to 10+, a lot of pain and rework in the future will be spared. <br>
And that is because of this principle. It is a principle and needs to be used, not abused. However, I am certain that <em>write simple code and replace it when it is wrong</em> can receive more abuse than OCP.</p>
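<p>Here is that gamble sketched (made-up names, of course): the tenth case becomes a new link in the chain instead of a tenth branch in a method everybody is scared of touching:</p>
<pre><code>using System;

public abstract class Handler
{
    private readonly Handler _next;
    protected Handler(Handler next)
    {
        _next = next;
    }

    public void Handle(string request)
    {
        if (CanHandle(request))
        {
            DoHandle(request);
        }
        else if (_next != null)
        {
            _next.Handle(request);
        }
    }

    protected abstract bool CanHandle(string request);
    protected abstract void DoHandle(string request);
}

// extending means adding a link, not editing existing dispatching code
public class EchoHandler : Handler
{
    private readonly string _name;
    public EchoHandler(string name, Handler next) : base(next)
    {
        _name = name;
    }
    protected override bool CanHandle(string request)
    {
        return request == _name;
    }
    protected override void DoHandle(string request)
    {
        Console.WriteLine("handled " + request);
    }
}
</code></pre>
<p>Today the chain is <code>new EchoHandler("a", new EchoHandler("b", null))</code>; when cases 3 to 10 arrive, they are new classes wired into the chain, and the code that walks it stays closed.</p>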
<h3>Liskov Substitution Principle</h3>
<p>Dan calls this useless based on the premise that inheritance should be avoided at all costs.<br>
Well, he is definitely right in that composition should be <strong>preferred</strong>. But that is not to say that inheritance does not have a place at all in designing Object-Oriented systems. And when inheritance is involved, one is in a much better position if LSP is followed.</p>
<p>And there are times in which inheritance cannot be avoided. Some frameworks are meant to be used via inheritance (exceptions, graphic libraries,...). One can argue that is not an optimally designed system, but when there are few feasible options to replace it, biting the bullet is no sin. Once bitten, one is better off knowing about LSP to prevent mischiefs such as downcasting.<br>
Would it be nice if a concrete exception threw when trying to display its message? Probably not. Knowing about LSP would make you think twice about doing such a thing.</p>
<p>Oh, and I strongly disagree that object hierarchies should be avoided altogether. They should be pondered heavily but, on occasion, hierarchies are the simplest solution. Wasn't <em>simple</em> the way to go?</p>
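<p>To make the exception example concrete, a contrived sketch (again, my illustration) of the kind of subtype LSP warns against:</p>
<pre><code>using System;

// a subtype that breaks the base class contract: every client is
// entitled to read an exception's Message, and this one blows up
public class OpaqueException : Exception
{
    public override string Message
    {
        get { throw new InvalidOperationException("no message for you"); }
    }
}

// any generic handler that logs ex.Message is now broken by a mere
// subtype, which is precisely the substitutability LSP is about
</code></pre>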
<h3>Interface Segregation Principle</h3>
<p>For this principle, I think Dan is more accurate than with any other. Writing simple role-based code leads to ISP.</p>
<p>But that is not the same as saying ISP is wrong. Small, focused components. Sounds easy and yet, one has to be reminded.</p>
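<p>A made-up sketch of what those small, focused components end up looking like: clients declare the role they actually use instead of depending on a fat interface:</p>
<pre><code>// two narrow roles instead of one IStore interface that does everything
public interface IReadStore
{
    string Read(string key);
}

public interface IWriteStore
{
    void Write(string key, string value);
}

// this component only ever reads, so that is all it asks for
public class Reporter
{
    private readonly IReadStore _store;
    public Reporter(IReadStore store)
    {
        _store = store;
    }
    public string Report(string key)
    {
        return "value: " + _store.Read(key);
    }
}
</code></pre>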
<h3>Dependency Inversion Principle</h3>
<p>Dan calls it the <em>wrong goal principle</em>. I have to agree that the design of small components is a better goal than reuse, but I strongly disagree on DIP leading to dependency on DI frameworks. If anything, well applied, DIP leads to depending even less on DI frameworks (just use one in the composition root and off you go).</p>
<p>One can go very far with <code>new</code>. But only so far. Good luck imposing a certain behavior, when testing the system, on external systems (tickers, sensors,...) that you new-ed up. Not everything needs to be abstracted but, in certain situations, abstracted dependencies are the way to go.</p>
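<p>A sketch of that last point (made-up types, no DI framework in sight): the ticker is abstracted precisely because tests need to impose behavior on it, and the one concrete choice happens in the composition root:</p>
<pre><code>using System;

public interface ITicker
{
    decimal CurrentPrice(string symbol);
}

public class Alerter
{
    private readonly ITicker _ticker;
    public Alerter(ITicker ticker)
    {
        _ticker = ticker;
    }
    public bool ShouldAlert(string symbol, decimal threshold)
    {
        return _ticker.CurrentPrice(symbol) > threshold;
    }
}

public static class Program
{
    public static void Main()
    {
        // composition root: the only place that knows the concrete type
        var alerter = new Alerter(new FixedTicker());
        Console.WriteLine(alerter.ShouldAlert("XYZ", 100m));
    }

    // in a test, a fake like this imposes whatever behavior the scenario
    // needs, something a new-ed up concrete ticker cannot offer
    private class FixedTicker : ITicker
    {
        public decimal CurrentPrice(string symbol)
        {
            return 123m;
        }
    }
}
</code></pre>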
<h2>Old Man's babble</h2>
<p>Let me tell you a story...</p>
<p>I was a consultant. We were building a new portal and there was this integration to an existing datasource. Same behavior, same data, it was already working. The estimation time for that task was set to the minimum. What could possibly go wrong? <br>
The datasource system was indeed simple. Unfortunately, simple as it was, it was doing way too much and in the wrong places. The system was a stored procedure, retrieving some data, performing some simple transformations and ... wait for it... generating the HTML that was displayed. Yes, HTML. Tables. With embedded styles. In a stored procedure. At that point, the minimal estimation time was already not achievable and I was the bad news bearer to the client. The very same client that paid big bucks for that system to the same company my company was co-working with: "sorry, can't do in allocated time because, with all due respect, the system is shite".</p>
<p>It was a simple system, easy enough to reason about at the time of writing.<br>
I am sure it fitted inside the head of the perpetrator at that time.<br>
Definitely the code was changed "<em>easily</em>" and deployed even more easily (cowboy edit to the live database and boom! magic is unveiled) when changes were requested.<br>
There was neither inheritance nor composition. Simple, I told you.<br>
The client was dependent on only a small surface: make a query and assemble the strings. No need to invert simplicity.</p>
<p>Despite ticking all the simplicity boxes they managed to crap over SOLID principles big time.<br>
The system had too many responsibilities of very different natures: querying, a bit of logic and presentation. SRP off the board.<br>
Whenever any change was needed, the heart (and brain and kidneys and...) of the system had to be changed. Not open under any definition of openness. OCP violated.</p>
<p>Would the end result of the system be better had the developer read about SOLID? Probably.<br>
It could have been worse too. Over-complicated, obscure code. <br>
SOLID is not a recipe one follows to get consistently tasty meals. It is a set of principles with catchy mnemonics that remind us of desirable characteristics of well-designed software. The cook still matters.</p>
<h2>Responsible talking</h2>
<p>SOLID principles are heuristics, reminders of a nicer world. Not all of them need to be applied and certainly not all at the same time.<br>
Dismissing them and providing an uber-vague heuristic (<em>just write simple code</em>) places us in no better position.<br>
<em>Write simple code</em> does less to help write better code than what SOLID conveys.<br>
The goal is the same, but one is a subset of the other. SOLID code should strive for simplicity.<br>
When guidance is offered, specific is to be preferred.</p>
<p>Worse yet. Imagine the reaction of someone who is not too experienced in (or too lazy about) software design reading just the headline (like the majority of human beings would do): SOLID principles are wrong, therefore I won't even bother reading more about them. Who would ever bother reading about something that is wrong?<br>
Such a person would miss a great opportunity to read about cohesion, coupling, viscosity, rigidity,... and many other concepts to be discovered when reading further about SOLID. I am afraid those concepts would not appear as easily when tied to <em>simple</em>.</p>
<p>Calling them <strong>wrong</strong> is a fantastic disservice to an already challenged area of knowledge.<br>
Poorly designed systems vastly outnumber carefully designed ones. I refuse to think that is because SOLID principles are wrong.<br>
I would venture to say that, in a poorly designed system, the perpetrator is less likely to have heard of or applied SOLID than in a better designed one, even if such principles have not been followed "by the book" (hint: there is <em>no book</em>).</p>
<p>We all fancy a witty comment. But one should be careful with the message that remains. And I believe <em>SOLID is wrong</em> is a terrible message to leave behind.</p>Daniel Gonzálezhttp://www.blogger.com/profile/13468563783321963413noreply@blogger.com0tag:blogger.com,1999:blog-2779955313707490982.post-30551581122897053532017-03-30T11:18:00.001+02:002017-03-30T11:18:48.532+02:00Not the last console application. Wrapping the series up<p>I am sure there are plenty more, but I feel I need to wrap up and move on to something different.</p> <p>But not without reflecting upon what we've seen in the series and mentioning some of the frameworks that did not make it into the review.</p> <a name='more'></a> <h2>(Dis)-Honorable mentions</h2> <p>We have seen plenty, but some, unfortunately (or not), were not up for my fight with them. <br>That is not to say they are bad frameworks/libraries, but for one reason or another, I was not able to make my sample work with them.</p> <h3>Argu</h3> <p>This is a project that looks very interesting with its sexy DSL-like syntax... <br>... but I am no F# programmer (not for the lack of effort, I blame my OWB (Object-Wired Brain<sup>tm</sup>) for that) and could not make it work in a C# project. So, out of the window it went.</p> <h3>Docopt.net</h3> <p>This looks interesting on paper as it has a totally different approach: instead of decorating objects or using some sort of object model to build console-specific features (help, aliases,...) it takes textual input for the parser and builds some sort of dictionary-like structure for the parsed arguments from the user.</p> <p>But interesting or different does not mean I am not allowed to give up after giving it a fair try at fitting that weird syntax and model to my sample.<br>Not for me, I suppose.</p> <h3>GetOpt.Net</h3> <p>I did not have high hopes for this one as the "documentation" was pointing to a single example in the repository, but since there is a <a href="https://www.nuget.org/packages/GetOptNet/" target="_blank">NuGet package</a> I gave it a shot despite the fact that the concept of <em>command</em> is not supported.</p> <p>Unfortunately, after implementing my sample application I was only able to display the help message and was terribly upset to get a <code>ProgrammingErrorException</code> because I was using a single dash instead of doubling it up (a fact I found out by looking at the source code).
<br>If such a simple mistake by the user results in an exception thrown in his face (or forced to be handled by the un-aided programmer) I am afraid I am not going much further with that library.</p> <h2>Anything worthy then?</h2> <p>Yeah, plenty.</p> <p>Even though I believe that <em>Powershell</em> is the way to go I can highlight:</p> <ul> <li>the simplicity and straightforwardness of <a href="https://dgondotnet.blogspot.dk/2017/02/not-last-console-application-gocommando.html" target="_blank">GoCommando</a> <li>the power, feature-set and polish of <a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application-powerargs.html" target="_blank">PowerArgs</a> <li>the object model approach of <a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application-command.html" target="_blank">Command Line Utils</a> but without its confusing state <li>the "get it running in a breeze" approach of <a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application-clap.html" target="_blank">CLAP</a> </li></ul> <h2>Famous Last Words</h2> <p>I really believed that such an old problem (parsing a command line) would have fewer "let's do this again for the heck of it" solutions, but I am not complaining about the extra effort of trying.</p> <p>OSS is a free country where everyone can direct their efforts the way they deem best, and the command line is a non-trivial-not-so-difficult problem with a low entry barrier for domain knowledge for someone that wants to start in the .NET OSS world.<br>Having said that, I think that we, programmers, are better off placing our efforts on something different. If not for ourselves, think of the poor souls trying to find just one framework. Let's not drown them in even more choices.</p> <p>I propose a banner in GitHub: ".NET has its share of command line parsers. Newcomers beware". Or maybe I can write one framework myself with everything that I like from the other frameworks...<br>Obligatory reference to <a href="https://xkcd.com/927/" target="_blank">xkcd</a>:</p><img src="https://imgs.xkcd.com/comics/standards.png"> <cite> <h3>Last Console Series</h3> <ol> <li><a href="https://dgondotnet.blogspot.dk/2017/02/not-last-console-application-you-would.html">The beginning</a> <li><a href="https://dgondotnet.blogspot.dk/2017/02/not-last-console-application-gocommando.html">GoCommando</a> <li><a href="https://dgondotnet.blogspot.dk/2017/02/not-last-console-command-line-parser.html" target="_blank">CommandLineParser</a> <li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application-powerargs.html">PowerArgs</a> <li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application.html">LBi.Cli.Arguments</a> <li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application-command.html">Command Line Utils</a> <li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application-clap.html">CLAP</a> <li>Wrap-up (this) </li></ol></cite>Daniel Gonzálezhttp://www.blogger.com/profile/13468563783321963413noreply@blogger.com0tag:blogger.com,1999:blog-2779955313707490982.post-79388443670772334722017-03-23T10:09:00.001+01:002017-03-30T11:20:09.468+02:00Not the last console application. CLAP<p><a href="https://github.com/adrianaisemberg/CLAP">CLAP</a> is the framework of today.
A venerable project with a <a href="https://www.nuget.org/packages/CLAP/">NuGet package</a> and nice <a href="http://adrianaisemberg.github.io/CLAP/">documentation</a>.</p> <a name='more'></a> <h2>Experience as an asset</h2> <p><em>CLAP</em> presents itself as a parser for the command line and has been around for quite a few years. We could say it has reached stability (despite having some open issues).<br/> It roots for simplicity and gets you going very, very fast. Despite some mistakes in their documentation (fixed in their code documentation), it is very good to find a project that offers nice, detailed documentation.</p> <p>This attribute-based framework takes a slightly different approach as it is totally command-centered, thanks to its procedure of exposing methods as commands (verbs) and method arguments as command line options.<br/> One can get going by simply having a static method decorated with the <code>[Verb]</code> attribute and using a static method to dispatch that method :-O. That is a very low entry bar.</p> <h2>Implementing our example</h2> <p>Implementing our example was really simple, just a class with two methods and decorated arguments:</p> <script src="https://gist.github.com/dgg/0c65baf4abcb455dc68d6328eaf3e169.js?file=implementation.cs"></script> <p>Dispatching can be done statically (<code>Parser.Run<TheApp>(args);</code>), but I chose to use an instance of <code>Parser</code> to be able to display help and not surface exceptions to the end user, and to pass a constructed instance of the class that contains all the methods to be exposed.</p> <script src="https://gist.github.com/dgg/0c65baf4abcb455dc68d6328eaf3e169.js?file=Main.cs"></script> <p>It does indeed work nicely and the framework offers a lot of interesting features (validation, default commands, multiple ways to plug in an IOC container, interceptions, complex type handling,...) that are not used in this sample, but could come in handy in a real project.</p> <h2>The Challenges</h2> <h3>Mandatory Arguments</h3> <p>Mandatory arguments are not the default, but decorating the argument property with <code>[Required]</code> does the job:</p> <script src="https://gist.github.com/dgg/0c65baf4abcb455dc68d6328eaf3e169.js?file=mandatory.sh"></script> <p>Extra, undefined arguments will make an error appear.</p> <h3>Non-Textual arguments</h3> <p>Declaring the right type for the argument seems to work just fine.</p> <script src="https://gist.github.com/dgg/0c65baf4abcb455dc68d6328eaf3e169.js?file=non-textual.sh"></script> <p>Default values are provided as objects and booleans are flags.</p> <h3>Multi-arguments</h3> <p>Declaring the argument as a collection and passing comma-separated values works beautifully.</p> <script src="https://gist.github.com/dgg/0c65baf4abcb455dc68d6328eaf3e169.js?file=multi.sh"></script> <h3>Showing Help</h3> <p>In order to provide auto-generated help, one needs to register a help handler (and an empty help handler for those that run tools recklessly). But that is a simple and well-documented pre-requisite to get a complete help message that shows verbs, options, shortcuts, required arguments, types and default values.</p> <script src="https://gist.github.com/dgg/0c65baf4abcb455dc68d6328eaf3e169.js?file=help.sh"></script> <p>There does not seem to be a way to display help for single commands, because everything is displayed for the general help.
<br/> That might be a problem for big applications with lots of commands, but those kinds of applications might have found other challenges with the framework before thinking about help.</p> <h3>Command Dispatching</h3> <p>Commands are methods decorated with <code>[Verb]</code> and arguments are options. We can dispatch static methods, instance methods or we can let the framework instantiate objects, provide instances ourselves or delegate creation to a <code>TargetResolver</code>.<br/> On top of that there are multiple hooks to inject behaviors, so I believe we are good with this feature.</p> <p>We also have the ability to have a default operation for those single-purpose applications.</p> <h2>Conclusion</h2> <p>I did not know about <em>CLAP</em> (or many of the frameworks I have looked into in this series) and the idea of decorating method arguments did not make me the happiest camper in the woods, but I have to admit that the framework has won me over. Their choices are sensible, it is well architected, feature rich and well documented.</p> <p>Don't be fooled by its age, it is a very useful framework and one I could easily see myself using if I ever give up on <em>Powershell</em>.</p> <cite> <h3>Last Console Series</h3> <ol> <li><a href="https://dgondotnet.blogspot.dk/2017/02/not-last-console-application-you-would.html">The beginning</a> </li> <li><a href="https://dgondotnet.blogspot.dk/2017/02/not-last-console-application-gocommando.html">GoCommando</a> </li> <li><a href="https://dgondotnet.blogspot.dk/2017/02/not-last-console-command-line-parser.html" target="_blank">CommandLineParser</a> </li> <li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application-powerargs.html">PowerArgs</a> </li> <li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application.html">LBi.Cli.Arguments</a> </li> <li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application-command.html">Command Line Utils</a></li> <li>CLAP (this)</li>
<li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application-wrapping.html">Wrap-up</a> </li>
</ol></cite>Daniel Gonzálezhttp://www.blogger.com/profile/13468563783321963413noreply@blogger.com0tag:blogger.com,1999:blog-2779955313707490982.post-66713135782035448002017-03-22T12:01:00.001+01:002017-03-30T11:20:21.430+02:00Not the last console application. Command Line Utils<p>With <a href="https://github.com/aspnet/Common/tree/dev/shared/Microsoft.Extensions.CommandLineUtils.Sources/CommandLine">Command Line Utils</a> we are up for an exception. <br />I am going to break my self-imposed <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIKA4YHMc7WFN7eWTWcW2NTBBjeV5f0Mq8In0EzAubl0ftfNJooJBNyqB5JNjS5tJ8CpAr8LTffzrtX03FUiljtZ21NL4OmxQwzbmsb2HkbnI-I3hK4F-ojT1wHw9RL9dXtELWNaAXV1mN/s1600-h/image7.png" target="_blank">rules</a> for code that I use. It passes the <a href="https://www.nuget.org/packages/Microsoft.Extensions.CommandLineUtils/" target="_blank">NuGet package</a> filter but there is no project page to get to from the package. That would be an automatic “thanks, but no, thanks”, but I was curious enough about what might come out of a Microsoft-owned repository that I bit the proverbial bullet. </p> <a name='more'></a> <h2>Attributes departure </h2> <p>This is a <em>Microsoft.*</em> extension, but there is no project page, no documentation, no intellisense comments and a single <a href="https://github.com/aspnet/Common/blob/dev/test/Microsoft.Extensions.CommandLineUtils.Tests/CommandLineApplicationTests.cs" target="_blank">test</a>. Am I totally out of my mind to even consider this as a contender? <br />I might well be, but when trying to find where it lived I bumped into <a href="https://msdn.microsoft.com/en-us/magazine/mt763239.aspx" target="_blank">this article</a> in MSDN magazine that caught my attention. </p> <p>As opposed to the majority of the frameworks seen so far, it does not use the concept of member annotations to generate the commands, options and arguments. It uses a “proper” object model (although there are hints of <em>DSL-ness</em> in the language used to define options and arguments) with classes (<code>CommandLineApplication</code>, <code>CommandArgument</code> or <code>CommandOption</code>) that can be instantiated and configured with good-old methods, or we can go the inheritance route to encapsulate command behavior as much as possible. <br />In any case, this different take, and the fact that it comes from Microsoft and seems to be used by themselves (at least an incarnation of it) in their <code>dotnet</code> <a href="https://github.com/dotnet/cli" target="_blank">cli</a>, is enough for me to look at it despite its raw state.</p> <h2>Implementing our example</h2> <p>To show both styles of use, we have encapsulated the <em>something</em> command in its own class.
Describe the command via properties and add options via the <code>.Option()</code> method (the help option is a special one used to display detailed help):</p> <script src="https://gist.github.com/dgg/d47be564e2ffecd93e57491dfd86e5b1.js?file=implementation.cs"></script> <p>Dispatching is simple: create an instance of <code>CommandLineApplication</code> (or a subclass), add commands to the <code>.Commands</code> collection (for the self-contained ones) or describe them using the <code>.Command()</code> method (as we have done for the <em>something-else</em> command) and dispatch the execution with the <code>.Execute()</code> method.</p> <script src="https://gist.github.com/dgg/d47be564e2ffecd93e57491dfd86e5b1.js?file=Main.cs"></script> <p>Not being attribute-driven, we can tweak the messages into whichever language needs to be used more easily than when attributes are used.</p> <p>One thing to note is that there are two concepts: arguments and options. Think of options as the information that comes after a named parameter and think of arguments as the information to be supplied to the command without the need of a named argument. We have used an argument for <em>something-else</em>’s <em>locations</em> argument, whereas we have used an option for <em>something</em>’s <em>location</em> argument. The difference is the need to specify <code>--location</code> (or the <code>-l</code> shorthand) when using options.</p> <h2>The Challenges</h2> <h3>Mandatory Arguments</h3> <p>There is no framework built-in concept of a mandatory argument as far as I can see, which means that the burden of checking for its existence is laid upon the framework user. I have sketched how one could do it in the <code>Something</code> class, but I think the framework should provide support for that scenario <img class="wlEmoticon wlEmoticon-sadsmile" alt="Sad smile" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgE2sT94grRbOGQpBH6YzpEvz382Ua3LyKNrVFzmzoz3GVN5lXaTPPIHgpV7OO6RgENOZIRmYvvb4pTvN8rEK4aVe8aVLR8ovBaaHyguY3QYDY6Z6XM6gYZ28dobZXDNK8CWThQd6St32cf/?imgmax=800" />.</p> <p>Unmapped arguments will make an error appear, but the behavior can be changed.</p> <h3>Non-Textual arguments</h3> <p>Another area where the framework falls very short is the conversion of textual argument values into typed values.
Again, I included a very pedestrian way of doing it in the <code>Something</code> class, but I am very disappointed that the user is forced to write that sort of plumbing code.</p> <p>Same story with default values.</p> <h3>Multi-arguments</h3> <p>Multi-valued arguments are supported by declaring the option as <code>CommandOptionType.MultipleValue</code> or the argument with the <code>.Multiple</code> property set to <code>true</code>.</p> <script src="https://gist.github.com/dgg/2f3fd37480a1cc8917f2ba6bdf6c723d.js?file=multi.sh"></script> <p>For more complete help (samples, remarks,…) the <code>.ExtendedHelpText</code> can be populated.</p> <h3>Showing Help</h3> <p>Running the program with the <code>--help</code> option (which we have defined with the <code>.HelpOption()</code> method) displays a list of supported commands:</p> <script src="https://gist.github.com/dgg/d47be564e2ffecd93e57491dfd86e5b1.js?file=help.sh"></script> <p>And using the <code>--help</code> option on a command drills down to the command-level documentation:</p> <script src="https://gist.github.com/dgg/d47be564e2ffecd93e57491dfd86e5b1.js?file=help-command.sh"></script> <h3>Command Dispatching</h3> <p>Command behavior can be defined using the <code>.OnExecute()</code> method, which takes the piece of code (synchronous or async) to be executed when the command is specified.</p> <p>There does not seem to be a hook to get into the command object creation, but dependencies could be “manually” injected when creating command instances.</p> <h2>Conclusion</h2> <p><em>Command Line Utils</em> offers a new object-oriented take on the panorama of cli frameworks. Despite coming straight from the horse’s mouth, it is extremely rough on the edges (and beyond them) but has the advantage of <em>netcore</em> portability. As a matter of fact, that is a criterion I have not evaluated so far, but I guess that if no nicer framework is portable, one might consider this one knowing that one is pretty much on his own (or can use and contribute to <a href="https://github.com/hravnx/CliHelpers" target="_blank">CliHelpers</a>) and sleeves are going to be rolled up.</p> <p>Let’s wait and see how it evolves, but I would not hold my breath with this one.</p> <cite> <h3>Last Console Series</h3> <ol> <li><a href="https://dgondotnet.blogspot.dk/2017/02/not-last-console-application-you-would.html">The beginning</a> </li> <li><a href="https://dgondotnet.blogspot.dk/2017/02/not-last-console-application-gocommando.html">GoCommando</a> </li> <li><a href="https://dgondotnet.blogspot.dk/2017/02/not-last-console-command-line-parser.html" target="_blank">CommandLineParser</a> </li> <li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application-powerargs.html">PowerArgs</a> </li> <li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application.html">LBi.Cli.Arguments</a> </li> <li>Command Line Utils (this) </li>
<li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application-clap.html">CLAP</a></li>
<li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application-wrapping.html">Wrap-up</a> </li>
</ol></cite>Daniel Gonzálezhttp://www.blogger.com/profile/13468563783321963413noreply@blogger.com0tag:blogger.com,1999:blog-2779955313707490982.post-72192405154685979892017-03-16T11:25:00.001+01:002017-03-30T11:20:33.108+02:00Not the last console application. LBi.Cli.Arguments<p>Today we will look at <a href="https://github.com/LBiNetherlands/LBi.Cli.Arguments">LBi.Cli.Arguments</a>.<br/> With its (still) beta <a href="https://www.nuget.org/packages/LBi.Cli.Arguments">NuGet package</a>.</p> <a name='more'></a> <h2>Work in progress?</h2> <p><em>LBi.Cli.Arguments</em> presents itself as something that mimics the syntax and capabilities of <em>PowerShell</em>. Those are big words, but not necessarily false. It tries, but it falls waaaay short.<br/> To begin with, its package is still pre-release despite the last commit being 3 years old. Then we have the pretty lacking (feeling magnanimous today) documentation.</p> <p>The framework relies on decorating classes with attributes, but it comes ready for multi-language support. I also like that the framework makes use of existing attributes from <code>System.ComponentModel.*</code> instead of creating new ones for the same purpose.<br/>And that is about the nicest thing I can say about the framework. That, and the fact that it exposes a parsing object model in case one feels like getting fancy.</p> <h2>Implementing our example</h2> <p>Implementing our example was not difficult despite the lack of documentation:</p> <script src="https://gist.github.com/dgg/1531d3d8fffedd0f2a197f311d85a87d.js?file=implementation.cs"></script> <p>Dispatching involves creating a base class or interface that our argument objects implement (in our case <code>IRunnable</code>) in order to get a hand from polymorphism when executing our code.</p> <script src="https://gist.github.com/dgg/1531d3d8fffedd0f2a197f311d85a87d.js?file=Main.cs"></script> <p>One can guess from my tone that I am not liking it already.
If a concept (a command-like interface) is basic for the sunny-day scenario of a framework to work, I would expect that concept to be baked into the framework.</p> <h2>The Challenges</h2> <h3>Mandatory Arguments</h3> <p>Mandatory arguments are not the default, but decorating a property with <code>[Required]</code> does the trick:</p> <script src="https://gist.github.com/dgg/1531d3d8fffedd0f2a197f311d85a87d.js?file=mandatory.sh"></script> <p>Extra, undefined arguments will make an error appear.</p> <h3>Non-Textual arguments</h3> <p>Declaring the right type for the property works most of the time.</p> <script src="https://gist.github.com/dgg/1531d3d8fffedd0f2a197f311d85a87d.js?file=non-textual.sh"></script> <p>Default values are provided as strings.</p> <p>Boolean properties are to be defined as <code>Switch</code> instead of <code>bool</code> for some reason that I fail to comprehend.</p> <h3>Multi-arguments</h3> <p>Collection arguments are supposed to work by using PowerShell's <a href="https://ss64.com/ps/syntax-arrays.html">array syntax</a> when passing the value, but I was not able to make it work and, honestly, I gave up trying pretty soon.</p> <h3>Showing Help</h3> <p>Running the program with the <em>-Help</em> argument provides a useful overview of the commands and simple examples.</p> <script src="https://gist.github.com/dgg/1531d3d8fffedd0f2a197f311d85a87d.js?file=help.sh"></script> <p>I could not find a way of focusing the help on a single command without getting the complete (very complete) help.<br/> It uses the "<em>PS way</em>" of getting help via <em>-Detailed</em>, <em>-Examples</em>, <em>-Full</em>, which leads me to think that we can create very complex and useful help, but that I did not try.</p> <h3>Command Dispatching</h3> <p>As mentioned previously, commands are executed (in its simplest form) by providing a base class/interface and calling a method of that interface.<br/> There is nothing in the anemic help that tells us how to tap into the process of creating the command so that dependencies can be injected via an IOC container.</p> <h2>Conclusion</h2> <p><em>LBi.Cli.Arguments</em> left me totally cold. From the pre-release package to its missing help, the best thing I can say is that it was not difficult to have my example working almost completely (save the multi-arguments).</p> <p>I am certainly not going to push this framework on anyone wanting to create console applications in .NET. And that should be totally OK.</p> <cite> <h3>Last Console Series</h3> <ol> <li><a href="https://dgondotnet.blogspot.dk/2017/02/not-last-console-application-you-would.html">The beginning</a> </li> <li><a href="https://dgondotnet.blogspot.dk/2017/02/not-last-console-application-gocommando.html">GoCommando</a> </li> <li><a href="https://dgondotnet.blogspot.dk/2017/02/not-last-console-command-line-parser.html">CommandLineParser</a></li> <li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application-powerargs.html">PowerArgs</a></li> <li>LBi.Cli.Arguments (this)</li>
<li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application-command.html">Command Line Utils</a></li>
<li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application-clap.html">CLAP</a> </li>
<li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application-wrapping.html">Wrap-up</a> </li>
</ol></cite>Daniel Gonzálezhttp://www.blogger.com/profile/13468563783321963413noreply@blogger.com0tag:blogger.com,1999:blog-2779955313707490982.post-85096156075632029762017-03-07T10:42:00.001+01:002017-03-30T11:20:44.683+02:00Not the last console application. PowerArgs<p>They keep coming... Let's have a look at <a href="https://github.com/adamabdelhamed/PowerArgs">PowerArgs</a>. <br />With its <a href="https://www.nuget.org/packages/PowerArgs">NuGet package</a> and sane, useful documentation both in the <a href="https://github.com/adamabdelhamed/PowerArgs/blob/master/readme.md"><em>README</em></a> and in the <a href="https://github.com/adamabdelhamed/PowerArgs/wiki"><em>WIKI</em></a>.</p> <a name='more'></a> <h2>Conventional Attribution</h2> <p><em>PowerArgs</em> relies heavily on attributes. Whereas some other frameworks use a single attribute with multiple properties to drive behavior, <em>PowerArgs</em> uses multiple attributes. It is true that some of those attributes have multiple properties themselves, so it is not just a waste of keystrokes. <br />I have to admit that this multiple-attribute path and the usage of <em>Arg-</em> prefixes is not much to my liking; but, in the end, it is just a matter of taste, as the framework works very well and offers a lot of power and unique features.</p> <p>The "conventional" part of the title comes from the way different commands are implemented: modelling them as methods that take an <em>Arg</em>-decorated object as argument.</p> <h2>Implementing our example</h2> <p>Implementing our example was pretty simple:</p> <script src="https://gist.github.com/dgg/f365528edb6afb7cf1a494f65b49c50f.js?file=implementation.cs"></script> <p>Some clarifications:</p> <ul> <li>decorate the argument object with <code>[ArgExceptionBehavior(ArgExceptionPolicy.StandardExceptionHandling)]</code> so that the framework handles parsing exceptions (such as missing mandatory arguments or type problems) and friendlier messages are printed instead of daunting exceptions. </li> <li>the <code>Action</code> property, mandatory and positioned as the first argument, is a requirement of the <a href="https://github.com/adamabdelhamed/PowerArgs/wiki/Action-Framework">ActionFramework</a> for multiple commands to be dispatched to the correct method. </li> <li>the <code>Help</code> property, decorated with <code>HelpHook</code>, allows getting global and command-specific help messages. </li> </ul> <p>Dispatching is also simple: translate one argument object into another (redundant if we embraced the framework), instantiate and call the method.</p> <script src="https://gist.github.com/dgg/f365528edb6afb7cf1a494f65b49c50f.js?file=Main.cs"></script> <p>Being attribute-driven, we would face the same challenges for localization of messages.</p> <h2>The Challenges</h2> <h3>Mandatory Arguments</h3> <p>Mandatory arguments are not the default, but decorating a property with <code>[ArgRequired]</code> enables easy reporting of missing mandatory arguments:</p> <script src="https://gist.github.com/dgg/f365528edb6afb7cf1a494f65b49c50f.js?file=mandatory.sh"></script> <p>Extra, unmapped arguments will make an error appear, but that behavior is customizable.</p> <h3>Non-Textual arguments</h3> <p>Declaring the right type for the property works fine, including flags (boolean properties).
<br />Failures to do so will be reported back to the user:</p> <script src="https://gist.github.com/dgg/f365528edb6afb7cf1a494f65b49c50f.js?file=non-textual.sh"></script> <p>Default values are provided as objects, instead of strings.</p> <p>Furthermore, there is support for custom argument types via <a href="https://github.com/adamabdelhamed/PowerArgs#custom-revivers">custom revivers</a> that turn string/s into objects. Very nice to have.</p> <h3>Multi-arguments</h3> <p>Collection arguments are supported by declaring the type as such, with no extra decoration required.</p> <script src="https://gist.github.com/dgg/f365528edb6afb7cf1a494f65b49c50f.js?file=multi.sh"></script> <p>Again, there is no help hint in regard to what that separator character might be when displaying help to the user (as it happens with default values). <br />We could use <code>[ArgExample]</code> on the top-level argument object to help the user out with collection semantics, but an out-of-the-box solution might be desirable.</p> <h3>Showing Help</h3> <p>Running the program with the <em>-help</em> argument provides a very nice and colorful auto-generated list of supported actions and arguments:</p> <p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhI8sOdN6Am9Y1aIqsBDcHcfI5k27n84S5mN5SHhemMaoArcOz1NJJdPLSTO3N8_kS_osYNE25GjQ1drio3fYI5FArJ1XLOn13S4tLu6hJylqpxXc2dAgp3hdPWRO7ci_U4tHY68rMjmCyp/s1600-h/powerargs_help%25255B5%25255D.png"><img title="powerargs_help" alt="powerargs_help" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDbmCPmdjrAu0XeDa0TwsgO-iRtL2mgEownoEe22dQ6HM6QKt3CR6g1shFSdXmQp3i9BxMthD0akdCy1ygFDMrDpcR_GoRE-hYQY4OMnlc5uT04k_ih7idndJwlFSwUILS2TFh9bfJJ-ZQ/?imgmax=597" /></a></p> <p>And using the <em>-help</em> argument with an action drills down to the action-level documentation, equally colorful:</p> <p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2-4xeCuKerNRK5yt6JLhbGkcqT1jv9JdKJt2HvcuHkPVO1t1nS-tgRT8ir9erC4YIR27y4STV9AUoEyqJ3JfDgMwtuMrYGPqtb27WvLtq6ybGqfoVrTp41GpT0hRVmGZ8AYAsinAkRFGS/s1600-h/powerargs_command_help%25255B5%25255D.png"><img title="powerargs_command_help" alt="powerargs_command_help" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsjGJ_8LT9mw8uocepd89hTZu2HUr3z2FaxaGJBVYBQUrsb2nDt7lFl_xdwZwz_-TB6VDm_nZCJC_rSSbqWUXoCtLcGszdTHCNg32GWpyh3KSr_Kp1L8cum84zkGJxd32TKJ7g2jk7SVyU/?imgmax=700" /></a></p> <p>Furthermore, there seems to be support for generating <a href="https://github.com/adamabdelhamed/PowerArgs#generate-usage-documentation-from-templates-built-in-or-custom">usage documentation</a>, not only for the command line user, but also in HTML. Nice, indeed, but not so well-documented... yet.</p> <h3>Command Dispatching</h3> <p>As mentioned previously, commands are executed (in its simplest form) by providing a method that takes the argument object and invoking <code>Args.InvokeAction<doer>(args)</code>. <br />We can further detach actions from the argument definition by using <code>[ArgActionType]</code>.</p> <p>There seems to be a way to tap into the action dispatching method, by using <code>[ArgActionResolver]</code>, but I am not sure how to use it and could not find much documentation to help me. This could be a place to use an IOC container if we had the use for it.</p> <h2>Conclusion</h2> <p><em>PowerArgs</em> surprised me. And it is a very positive surprise.
I was put off at first by the usage of multiple, prefixed attributes but, as I worked with it, I found out that there is a lot of power in this framework. It can do everything that needed to be done in the sample application and way more.</p> <p>One thing I am really thrilled about is the support for tab completion, interactive console applications and secure parameters. It goes way beyond my sample, but it is a feature that can save a lot of time if the requirement arises. Here is a simple image of how tab completion works with the simple addition of a <code>[TabCompletion]</code> attribute on our argument object.</p> <p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimStkXQRAFoX7cIoQMi8txpUVovYKFJ_FvAZnqf-M4FB8joaOv-zePjWZa8jd3dyEJxj5mQyhchyphenhyphent1q2P6HD6yfGxfOIpUwGT9UeoK2WtBPecw6WuphsYM5hf7VLF3KSIvHS_KxeL-u4Tb/s1600-h/powerargs_tab_completion%25255B4%25255D.gif"><img title="powerargs_tab_completion" alt="powerargs_tab_completion" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCagblT0kSza5YgqGWiTlEbokVH1Nc2XuPMdHNq-vS8W5cciyIdRnfjqR0LKDLss2V6j8hETlNWhxM6eZBVqeYulzz46D4tmtL9FLKJuF8WWXu3BCFJsucxT-VVFwHsK7imP4OYaNQ3ScS/?imgmax=782" /></a></p> <p>I will certainly keep an eye on this one in case I do not feel like going the PS way of things.</p>
<cite> <h3>Last Console Series</h3> <ol> <li><a href="https://dgondotnet.blogspot.dk/2017/02/not-last-console-application-you-would.html">The beginning</a> </li> <li><a href="https://dgondotnet.blogspot.dk/2017/02/not-last-console-application-gocommando.html">GoCommando</a> </li> <li><a href="https://dgondotnet.blogspot.dk/2017/02/not-last-console-command-line-parser.html">CommandLineParser</a></li> <li>PowerArgs (this)</li><li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application.html">LBi.Cli.Arguments</a></li>
<li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application-command.html">Command Line Utils</a></li>
<li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application-clap.html">CLAP</a> </li>
<li><a href="https://dgondotnet.blogspot.dk/2017/03/not-last-console-application-wrapping.html">Wrap-up</a> </li>
</ol></cite>Daniel Gonzálezhttp://www.blogger.com/profile/13468563783321963413noreply@blogger.com0