Automating PhoneGap Builds with Gulp. Part IV

on Friday, 26 June 2015

In Part III we got PhoneGap Build (PGB) “cooking” our artifacts and generating the binaries stew.

D-Ploy

That compilation takes some time, so we can:

  1. sit and wait
  2. be proactive and check the build status

We chose the first option, as deploying is not done every time the app is built (while developing, we install the application on the devices from PGB itself). But if we wanted to check whether a build is ready to be deployed, we could poll the PGB API and check the status for the platform we want to deploy.
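
If we went the proactive route, the polling could look like this minimal sketch (config.pgb.appId and config.pgb.token are names I am making up for values coming from our config module):

    var client = require('phonegap-build-api');

    // poll PGB until the platform build either completes or errors
    function waitFor(platform, done) {
        client.auth({ token: config.pgb.token }, function (err, api) {
            if (err) { return done(err); }
            (function poll() {
                api.get('/apps/' + config.pgb.appId, function (err, data) {
                    if (err) { return done(err); }
                    // status per platform: 'pending', 'complete' or 'error'
                    var status = data.status[platform];
                    if (status === 'pending') { return setTimeout(poll, 5000); }
                    done(status === 'error' ? new Error(platform + ' build failed') : null);
                });
            })();
        });
    }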

The idea is (after some sanity checks) to use the aforementioned phonegap-build-api package to download the binary for a given platform (using the right file extension) and then call the HockeyApp method that uploads the application version for the chosen platform and environment.
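
A sketch of that deploy step, with the same made-up config module and the request package pushing to HockeyApp's public upload endpoint:

    var client = require('phonegap-build-api');
    var request = require('request');
    var fs = require('fs');

    function deploy(platform, done) {
        var ext = { android: 'apk', ios: 'ipa' }[platform];
        var binary = 'tmp/app.' + ext;
        client.auth({ token: config.pgb.token }, function (err, api) {
            if (err) { return done(err); }
            // stream the compiled binary down from PGB...
            api.get('/apps/' + config.pgb.appId + '/' + platform)
                .pipe(fs.createWriteStream(binary))
                .on('finish', function () {
                    // ...and push it up to HockeyApp ('ipa' is the field name
                    // for both .ipa and .apk files)
                    var upload = request.post({
                        url: 'https://rink.hockeyapp.net/api/2/apps/upload',
                        headers: { 'X-HockeyAppToken': config.hockey.token }
                    }, done);
                    upload.form().append('ipa', fs.createReadStream(binary));
                });
        });
    }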

Wrapping it up

Packaging, queuing a build, waiting and pushing a version.

That is all it takes to have a new version of the application for a given platform ready for the testers.

Two commands. And the peace of mind that there is very little room to screw up. I’ll type them once again, just for the heck of it.

Automating PhoneGap Builds with Gulp. Part III

on Monday, 22 June 2015

In Part II we created the .zip file with the artifacts to be uploaded to PhoneGap Build (PGB).

Compile and forget.

The point of automating is not to create a .zip file and then point-and-click your way through PGB to get it compiled. That is why PGB offers both a “read” and a “write” API.

Being a PGB “private app”, we need to call the /apps/:id endpoint and attach the .zip containing the artifacts, which raises the question: is there an easier way than crafting the HTTP calls ourselves? There is: use the phonegap-build-api package, which handles communication, authentication (using the slightly less secure token authentication) and form manipulation, and learn from its documentation.

According to phonegap-build-api's documentation on the method, attaching the file is as simple as indicating the path to it; but since globs are used in Gulp's src, we need to know the path to the single file matched by the glob. For that introspection we use gulp-tap.
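
Put together, the build task could look roughly like this:

    var gulp = require('gulp');
    var tap = require('gulp-tap');
    var client = require('phonegap-build-api');

    gulp.task('build', ['unlock', 'compress'], function (done) {
        // the glob matches a single file: the .zip produced by compress
        gulp.src('dist/*.zip')
            .pipe(tap(function (file) {
                client.auth({ token: config.pgb.token }, function (err, api) {
                    if (err) { return done(err); }
                    var options = { form: { data: { debug: false }, file: file.path } };
                    api.put('/apps/' + config.pgb.appId, options, done);
                });
            }));
    });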
Of course, the build task depends on the compress task, but there is also this unlock task. What would that be?

Unlock it

Android apps may or may not be signed with a key, but iOS apps absolutely must be. For signing purposes there are manual steps to be carried out, the result of which is a set of signing keys uploaded to PGB. From then on, those keys can be used to sign the application while building. We chose to have a key per build environment, but you can do differently. In any case, those keys are unlocked (suitable for building) only for a period of time, after which they lock again, resulting in failed builds. To prevent those failed builds, keys need to be unlocked and, fortunately enough, the unlocking process can be automated via the API.
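
A sketch of that unlock task for iOS, assuming key ids and passwords live in our config module (the shape of unlockData is explained below):

    var gulp = require('gulp');
    var client = require('phonegap-build-api');

    gulp.task('unlock', function (done) {
        var unlockData = { form: { data: { password: config.ios.keyPassword } } };
        client.auth({ token: config.pgb.token }, function (err, api) {
            if (err) { return done(err); }
            api.put('/keys/ios/' + config.ios.keyId, unlockData, done);
        });
    });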

Watch it! There is no such thing as updating an existing signing key. If you refer to keys by id (like we do), changing the iOS provisioning profile requires creating a new key (with a new id), which forces us to change the key ids in our config.js. We chose to live with it (changing config.js is part of the manual process of adding test devices to the provisioning profile and uploading it again), but one can overcome this small annoyance by getting the keys by name via the API and extracting the id before unlocking or building.
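
That lookup could go along these lines, assuming (do double-check against PGB's read API) that the /keys/ios response lists every key with its id and title:

    // resolve an iOS signing key id from its title instead of hard-coding it
    function findIosKeyId(api, title, done) {
        api.get('/keys/ios', function (err, data) {
            if (err) { return done(err); }
            var match = data.keys.filter(function (key) {
                return key.title === title;
            })[0];
            if (!match) { return done(new Error('no key named ' + title)); }
            done(null, match.id);
        });
    }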

Watch it! I found little documentation about the format of the data to be sent when unlocking a key (the unlockData object from our example). If you do not follow this structure: { form: { data: { password: 'your_password' } } } (or { form: { data: { key_pw: 'kP4zzw0rd', keystore_pw: 'p4zzw0rd' } } } for Android), the call to unlock will say yay! while the build will say nay!, and your frustration levels will soar with every try.

Automating PhoneGap Builds with Gulp. Part II

on Tuesday, 16 June 2015

So far, in Part I we read about the motivations and the targets we want to hit with our automation.

Packing up the goods

Being a PhoneGap Build (PGB) private application, we need to upload a .zip file that contains everything needed to build the app for a given environment.
So, creating a .zip file with the app and its assets it is.

We follow the default ionic source tree structure, which means having the config.xml outside the /www folder (as PGB expects). Besides, both the config.xml and the Angular services have to be “configured” for the environment they are to be deployed to, so we took the approach of copying everything we need to a temporary location (tmp/), cleaning it beforehand, and creating the .zip file from that location.

First challenge: harvesting command-line arguments. Gulp does not have a built-in, standard way to do it, so we chose the yargs package, which exposes them through its argv variable.
config is a module we built that exposes the variables and utility functions that deal with the configuration of the process.
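
A minimal sketch (the function-style shape of config is made up for illustration):

    // gulpfile.js (excerpt)
    var argv = require('yargs').argv;

    // e.g. `gulp package --env staging`; default to development otherwise
    var environment = argv.env || 'development';
    var config = require('./config')(environment);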
Second challenge: Gulp's parallelize-everything mindset, which offers no guarantee that a dependent task has finished before the depending task begins. In our case we definitely wanted to finish “mirroring” the artifacts into tmp/ before performing file transformations or starting to pack. To achieve such a simple requirement we used gulp-sync, which offers the gulpsync.sync(['t1', 't2', …]) syntax.
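
For instance (the package task name is illustrative):

    var gulp = require('gulp');
    var gulpsync = require('gulp-sync')(gulp);

    // clean, mirror and compress run strictly in sequence; the two
    // transformation tasks may run in parallel once mirroring is done
    gulp.task('package', gulpsync.sync(['clean', 'mirror', ['config', 'services'], 'compress']));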
We use gulp-zip to compress the files in tmp/, but the task's dependencies are interesting in themselves.
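
The compress task itself is tiny (archive and folder names are illustrative):

    var zip = require('gulp-zip');

    gulp.task('compress', function () {
        return gulp.src('tmp/**/*')
            .pipe(zip('app.zip'))
            .pipe(gulp.dest('dist'));
    });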

mirror copies just the files we need to build (minified versions) and makes heavy use of glob inclusions and exclusions. Word of advice: copy just what is needed and optimize your images. We went from a 60MB app down to a still-heavy 10MB one (thank you, high-res displays) and I cannot tell you how much faster that is.
Another interesting twist: since we are using TFS, we have those dreaded read-only flags on files, which would prevent manipulating their content, so gulp-chmod allows us to remove the flag while copying.
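
A sketch of mirror (the globs are illustrative, not our real list):

    var gulp = require('gulp');
    var chmod = require('gulp-chmod');

    gulp.task('mirror', function () {
        // include only what PGB needs to build; exclude what it does not
        return gulp.src(['config.xml', 'www/**/*', '!www/**/*.scss'], { base: '.' })
            .pipe(chmod(0o644)) // drop TFS' read-only flag on the copies
            .pipe(gulp.dest('tmp'));
    });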

config transforms the config.xml file with the variables known at build time, such as the version and the application namespace. To “poke” the content of the files we use gulp-replace. services does the same for our services.js file, replacing tokens such as API endpoints right in the source code.
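
For example (the token names are made up):

    var replace = require('gulp-replace');

    gulp.task('config', function () {
        return gulp.src('tmp/config.xml')
            .pipe(replace('@@version', config.version))
            .pipe(replace('@@namespace', config.namespace))
            .pipe(gulp.dest('tmp'));
    });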

The end result is a .zip file that can be uploaded to PGB and compiled.

Automating PhoneGap Builds with Gulp. Part I

on Monday, 8 June 2015

Lately, my team has been doing some hybrid mobile apps development with Cordova and Ionic and one of the multiple challenges faced during such development is:

how (and where) do I build the resulting app?

Local vs. Semi-local vs. Remote

Of course, we can follow Cordova’s guides and build the application locally. That is probably the fastest:

  • set-up time, as it is well documented
  • performance-wise, since developers tend to use powerful machines themselves
  • time-to-device: connect your phone and push the resulting app through the cable

It has a pretty serious drawback: forget about building iOS apps unless you have a Mac and… most of us do not have one, some of us do not want one as a work machine, and few of us fancy the idea of switching machines/OSs back and forth and using that crazy keyboard layout.

Semi-local refers to the new old idea of having a build machine (a Mac in our case) that can build the apps for all the platforms. It is something that other teams developing hybrid apps in our company have done, and it works for them. Unfortunately, after spending pretty much 5 days fighting tools we are not familiar with (Grunt, Gulp, Jenkins, the Android and iOS SDKs) we were still nowhere near the automation Nirvana.

There is Remote left. And by remote I refer to someone else building your app off-premises. A 15-minute walk-through and we had our app built for all supported platforms. That is all it took for PhoneGap Build (PGB) to be chosen.

Money talks

Their free tier allows a single private app (one whose GitHub repository cannot be accessed publicly) and restricts the app size and the ability to upload plugins.

And none of that affects our app, so free it is.

The Objective

What we wanted to achieve with our build and deployment scripts is simple: from the command line (the closest to one-button I want to get), generate the artifacts needed to compile the app for a given environment (development, staging or production), queue a build that compiles the application for all supported platforms and automate the deployment to our distribution platform.

The foundation

Before deciding how to automate the build, we were already using Gulp to “compile” the front-end (Sass to CSS, bundling and serving the static website), so we saw no reason not to use it for all our automation.

NMoneys Moves Forward…

on Tuesday, 21 April 2015

…by looking into the past.

Because that is what it should be considered when a project stops supporting a version that is 7 years, 5 months and 2 days old in favor of one that is 5 years and 9 days old.

.NET 4.0

Those are the ages of the previously supported .NET 3.5 and the currently supported .NET 4.0, according to Wikipedia. So I guess that very few people will complain about the minimum requirements imposed by this new version of NMoneys.

Why?

Well, someone complained about a concurrency problem and even though I have never, ever experienced such a race condition, fixing an issue always feels right.

I could have done things differently. I could have back-ported just ConcurrentDictionary, or used someone else's back-port; but, while at it, I suffered from the “falling into the rabbit-hole” syndrome and foolishly holding onto the ancient version felt like the wrong-thing-to-doTM.

And so I decided that 3.6.0.0 will be the last net35 compatible release.

Home, Sweet (and New) Home

As the reader might have noticed, Google Code is shutting down. That means that the project needs to get a new home and GitHub seems a sensible place to host an Open Source project.

Binary Releases

NuGet packages will remain the preferred and main distribution method, but binaries will continue to be offered.

Since the de-facto closure of Google Code's download service, binaries had been provided through Bintray. Well, it is not that they do not offer a good service, but given the whopping grand total of 4 binary downloads, I think I can go with the simpler, more integrated and less-featured GitHub Releases.
As a result, binaries will be available in the Releases area of the project home from now on.

Continuous Integration

Moving away from a dying code repository opens up a world of possibilities, amongst which is Continuous Integration using AppVeyor.

Right now it is an out-of-the-box build (compile + run tests) but I will be investigating further automation in the near future.

Oh, and the badge is nice :)

And a Small Change

A colleague of mine pointed out that the good people of Sweden have changed their preferred way of writing big numeric quantities, as documented in this Wiki page (in Swedish). And so, I obliged.

From now on, the default representation of a big monetary quantity in SEK does not use . as a group separator. But, of course, that default behavior can be overridden:

1s and 0s
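
Something along these lines: a minimal sketch assuming Money accepts a custom IFormatProvider through the standard formatting overloads (amounts are illustrative):

    using System;
    using System.Globalization;
    using NMoneys;

    var amount = new Money(12345678.9m, CurrencyIsoCode.SEK);

    // the new default: no '.' as group separator
    Console.WriteLine(amount.ToString());

    // overriding the default with a provider that groups with '.' again
    var custom = (NumberFormatInfo)NumberFormatInfo.InvariantInfo.Clone();
    custom.CurrencyGroupSeparator = ".";
    Console.WriteLine(amount.ToString(custom));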


Handling Multiple RavenDB Databases

on Monday, 23 March 2015

I confess: it was a bit backwards to write about handling indexes in development involving multiple databases without first writing about handling multiple databases. It's never too late.

Why?

In RavenDB, databases are a way to isolate not only data, but also configuration and the location of the “physical” data. Oh, and they are fully supported in server mode.
For the amounts of data my applications handle, we are good to go with a single database, but sometimes I have found the need for separate databases. Hint: it has to do with it being faster to delete a complete database and start from scratch than to delete lots of documents inside an existing one.

How?

Once you have made the choice of having multiple databases, you face the challenge of consuming those databases in code.

How I use RavenDB abstractions

The two main entry points for RavenDB are the IDocumentStore and the IDocumentSession abstractions (I have not had the chance to use the asynchronous session yet), which are injected by the IOC container of choice. IDocumentStore is expensive to create and I usually register it as a singleton. As for IDocumentSession, I usually have one instance per request, disposed when the request ends.
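
Stripped of any container, the lifetimes look like this; the registrations just mimic this pattern:

    using Raven.Client;
    using Raven.Client.Document;

    // created once per application and registered as a singleton
    IDocumentStore store = new DocumentStore { Url = "http://localhost:8080" }.Initialize();

    // opened once per request and disposed when the request ends
    using (IDocumentSession session = store.OpenSession())
    {
        // load, query, store... and flush the changes at the end
        session.SaveChanges();
    }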

The session is heavily tied to a database, so how does one handle having multiple instances around?

The database-specific session

It is easy enough to create a marker interface that extends IDocumentSession and represents an open session to a given database, and to use that dependency whenever a connection to that database is needed.
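
Something like this (interface and database names are illustrative):

    using Raven.Client;

    // a session opened against the "Orders" database
    public interface IOrdersSession : IDocumentSession { }

    // consumers state exactly which database they talk to
    public class OrderService
    {
        private readonly IOrdersSession _session;

        public OrderService(IOrdersSession session)
        {
            _session = session;
        }
    }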

Question is, how is that abstraction implemented?

Some IOCs have the ability to create types on the fly, thus obviating the need for a concrete class that implements the specific session, but I have found those too tightly coupled to the IOC and pretty unfriendly to code navigation.

It would be silly, though, to create a type that implements the interface from scratch for each database: after all, one would be implementing IDocumentSession again and again and again. Instead, a decorator of the IDocumentSession interface can do the trick very well. The only thing one has to keep in mind when instantiating the decorator is passing in a session to the correct database.
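
A sketch of such a decorator (only a few of the delegated members shown):

    using Raven.Client;

    public class OrdersSession : IOrdersSession
    {
        private readonly IDocumentSession _inner;

        public OrdersSession(IDocumentStore store)
        {
            // the decorator is tied to a single database
            _inner = store.OpenSession("Orders");
        }

        // every member simply delegates to the inner session
        public T Load<T>(string id) { return _inner.Load<T>(id); }
        public void Store(object entity) { _inner.Store(entity); }
        public void SaveChanges() { _inner.SaveChanges(); }
        public void Dispose() { _inner.Dispose(); }
        // ...and so on for the remaining members of IDocumentSession
    }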

Small help offered

rvn-izr can help with this approach by providing the “implementation” for that decorator/adapter: the DbSessionAdapter, that is.

The adapter implements IDocumentSession but, in my experience, this interface is kind of volatile (it has changed between versions). That kind of instability made me shy away from binary dependencies for this part of rvn-izr and go the source-code route instead. With this approach, the adapters are integrated into the source code of the importer and use whichever version of RavenDB the importer is using.

Does that mean that the developer has to manually implement IDocumentSession? I am afraid so. Those twenty-something methods need to be implemented. Delegated, more properly speaking.
And you know what? It is not a big deal. With the right tools one can achieve it in around 3 seconds, even being a clumsy typist and using the mouse. Here is how I do it with ReSharper:

[Animated capture: generating the delegating members of DbSessionAdapter with ReSharper]

Closing the circle

And that is all, at least for sessions. For other database-specific artifacts, head over to rvn-izr's documentation and find out.

RavenDB Indexes While Developing

on Friday, 27 February 2015

At work, my team and I have been working with RavenDB for quite some time.

Despite some people's objections, the team usually shares a single development database. There is still the occasional “I made this breaking change… Guys! Get latest!!”, but it happens quite rarely.

Besides documents, there is another data artifact that is very necessary for RavenDB to work: indexes (and transformers). Favoring explicit indexes (instead of automatically created ones) means we end up with a bunch of indexes defined in code.

From Code to Database

RavenDB offers the IndexCreation high-level API to easily create indexes. Question is: when should the code that creates database indexes from their code definitions be executed?

When the application starts

A strategy we have followed in the past was placing the index creation code right after the DocumentStore initialization code. Since that initialization is recommended to happen once per application, we ended up re-creating the indexes every time the app started.
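
In code, the approach looked something like this (Orders_ByCustomer stands in for any index definition):

    using Raven.Client.Document;
    using Raven.Client.Indexes;

    // once per application lifetime...
    var store = new DocumentStore { Url = "http://localhost:8080" }.Initialize();

    // ...and, with this strategy, every single start of the application
    // pushes the index definitions to the server right here
    IndexCreation.CreateIndexes(typeof(Orders_ByCustomer).Assembly, store);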

An app in development restarts many times, and the problem I pointed out in the opening paragraphs (developers not running the latest version of the code) worsens seriously: an application with an outdated index definition will override the just-updated index.

So, all in all, that approach was not such a good idea at all.

On demand

Currently we use a small console program (or LINQPad script) that updates the indexes when needed.
Such an approach can even be used to automatically deploy index definitions from environment to environment (given we have access to that environment's database, of course).
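
The whole program is tiny; a sketch (the URL points at whichever environment we are deploying to):

    using System;
    using Raven.Client.Document;
    using Raven.Client.Indexes;

    class Program
    {
        static void Main(string[] args)
        {
            // e.g. UpdateIndexes.exe http://dev-server:8080
            using (var store = new DocumentStore { Url = args[0] }.Initialize())
            {
                IndexCreation.CreateIndexes(typeof(Orders_ByCustomer).Assembly, store);
                Console.WriteLine("Index definitions are up to date.");
            }
        }
    }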

The multiple database challenge

What happens when multiple databases are accessed by the application? Out of the box, the simplest-to-use IndexCreation.CreateIndexes() overload takes an assembly, which means that if all indexes for all databases live in the same assembly (and we tend to favor single –or few– assembly solutions), all the databases will end up containing all the definitions in that assembly.

Not all databases contain the same collections, so some of those indexes will stay empty (consuming little to no resources), and the transformers will not be used in queries against that database. But having artifacts that should not be there is still a small annoyance.

OSS to the Rescue

To solve this, and some other problems, rvn-izr has been created.


Installing the NuGet package in the project that contains the indexes gives access to the CreateInAttribute, which can be used to decorate indexes and transformers with the name of the database they correspond to.
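
For example (index, entity and database names are illustrative, and I am assuming the attribute lives in the Rvn.Izr namespace):

    using System.Linq;
    using Raven.Client.Indexes;
    using Rvn.Izr;

    [CreateIn("Orders")]
    public class Orders_ByCustomer : AbstractIndexCreationTask<Order>
    {
        public Orders_ByCustomer()
        {
            Map = orders => from order in orders
                            select new { order.CustomerId };
        }
    }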

Filter by database name

Using the Indexes fluent interface, we can get an ExportProvider instance that contains only the artifacts corresponding to the specified database and invoke the corresponding IndexCreation.CreateIndexes() overload, so artifacts stop being created where they do not belong.
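
Roughly like this (the fluent method names are my approximation, check rvn-izr's documentation; the ExportProvider-taking overload is RavenDB's own):

    using System.ComponentModel.Composition.Hosting;
    using Raven.Client.Indexes;
    using Rvn.Izr;

    // gather only the artifacts decorated with [CreateIn("Orders")]...
    ExportProvider ordersArtifacts = Indexes
        .FromAssembly(typeof(Orders_ByCustomer).Assembly)
        .CreatedIn("Orders");

    // ...and let RavenDB's MEF-aware overload create just those
    IndexCreation.CreateIndexes(ordersArtifacts, store);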

Safety first

Another interesting feature Indexes offers is the ability to return the artifacts not decorated with the attribute. That opens up the chance to create an automated test that verifies, for instance, that we are not forgetting to decorate a given artifact and, thus, silently not creating it.
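
Such a test could look like this (NUnit shown; again, the exact rvn-izr method name is my guess):

    using NUnit.Framework;
    using Rvn.Izr;

    [TestFixture]
    public class IndexDecorationTests
    {
        [Test]
        public void Every_artifact_is_assigned_to_a_database()
        {
            var undecorated = Indexes
                .FromAssembly(typeof(Orders_ByCustomer).Assembly)
                .NotDecorated();

            Assert.That(undecorated, Is.Empty);
        }
    }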

Useful?

Play with it, use and abuse it and, if you fancy more features, PR your way through.