The dreaded moment


And no, I am not talking about meeting your in-laws, or about that moment of hesitation before writing the first test for a class. I am talking about deployment to a whole new environment. I am talking about the pressure from the client to show everyone and their cat why they spent so much money replacing an existing, working system. I am talking about deployment to the live environment.

Why does it have to be dreaded in the first place? Well, it shouldn't. After all, we are all running iterative projects with a live demo after each and every SCRUM sprint, aren't we? Weeeell... maybe not. I agree it shouldn't be dreaded, but not everyone has the luxury of running one of those projects (and running it the proper way).

My team is very lucky in many respects: we use SCRUM, we have our code covered by a good bunch of unit tests, we produce deployment-ready code after each sprint, the client is pretty involved in the process (and happy to be so), etc. But not everything is champagne and roses. It never is. That sounds dark and pessimistic, but reality strikes back hard.

We have committed several sins that I'd like to share with my loyal readers (if any) so that they can benefit from our experience.

First of all, we were cheating. Well, sort of. Every demo we did ran from a chosen developer machine, so very little (if any) "real" deployment was done before each sprint demo. Was it the right thing to do? Well, at first we had no other choice, as we had no other environment. But inertia is a powerful and evil force. We were caught up in delivering features like madmen, and we were not disciplined enough to draw a line after which all demos would be shown from a different environment. The closer that environment is to the live one, the better. Of course, it would have been more painful than doing nothing, but I have the feeling that the pain would have lessened with every sprint.


The second "sin" (and an almost unavoidable one) has to do with the products involved. I deeply envy those who only have to deal with their own code and a handful of databases they control. That is almost a sweet spot: there are plenty of tools and techniques to deploy such an environment automatically and seamlessly. Even if they have to deal with a legacy database, the chances of existing environments already being in place (their health is another matter) make it less problematic. But if you are, like my team, stuck with "enterprisey products"... boy, are you in trouble.

The very nature of these enterprise systems makes their community of users a small niche within the wider software development community, which mostly deals with XCopy-deployable binaries and databases. That makes the variety of techniques to speed up deployment shrink as quickly as a doughnut makes its way into Homer Simpson's digestive system. If the offering is slim and the system was not designed with painless deployments in mind, you have a cocktail for stressful manual deployments. One could argue that, 90% of the time, there is an automated option. I have to somewhat agree, but if automating takes a lot (and I mean a lot) of pain and time, and you have found a way to cope with iterative deployment (by not doing it), automation loses its sex appeal and humans revert to good old point-and-click.
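
To make the contrast concrete, here is a rough sketch of what that "sweet spot" deployment can look like when all you ship is XCopy-deployable binaries and a handful of database scripts. It is written in Python, and every name in it (the build output folder, the demo file share, the "demo-sql" server and the "MyAppDb" database) is a made-up placeholder rather than anything from our actual project:

    import shutil
    import subprocess
    from pathlib import Path

    # Hypothetical locations: point these at your own build output and demo share.
    BUILD_OUTPUT = Path(r"C:\build\MyApp\Release")
    TARGET_SHARE = Path(r"\\demo-server\apps\MyApp")
    DB_SCRIPTS = Path(r"C:\build\MyApp\db")

    def deploy_binaries():
        """XCopy-style deployment: wipe the target folder and copy the build output."""
        if TARGET_SHARE.exists():
            shutil.rmtree(TARGET_SHARE)
        shutil.copytree(BUILD_OUTPUT, TARGET_SHARE)

    def run_db_scripts():
        """Apply the database change scripts in order, using SQL Server's sqlcmd tool."""
        for script in sorted(DB_SCRIPTS.glob("*.sql")):
            subprocess.run(
                ["sqlcmd", "-S", "demo-sql", "-d", "MyAppDb", "-i", str(script)],
                check=True,
            )

    if __name__ == "__main__":
        deploy_binaries()
        run_db_scripts()
        print("Demo environment updated.")

Nothing glamorous, but it is repeatable and runs in minutes. When the products involved cannot be scripted that easily, even this modest level of automation becomes hard to justify against good old point-and-click.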


And the third sin is related to the second, in the sense that some of these "enterprisey products" (and worse, the combination of them) require you to be a freaking genius. No one says this is easy, but it is one thing to know your stuff as a programmer, developer, architect, software engineer, or whatever-it-says-on-your-business-card, and a very different thing to change hats and become a highly proficient systems engineer for the sake of building a solution on top of 2+ server products. In our case, the skill set needed to deploy successfully includes (but is in no way limited to):

  • Advanced knowledge of Active Directory and its evil pet, Kerberos,
  • pretty good knowledge of web application security and how it relates to those "enterprisey products",
  • knowledge of "ancient pearls" like COM, MSDTC, ...
  • pretty good database knowledge in terms of backup/restore and security configuration,
  • all of it spiced with intimate knowledge of the administrative tasks of four server products,
  • and mixed with the fact that you have to be good at your "real" job: programming.

Jeeez, it hurts just typing it. Imagine being proficient at it.

Our sin lies in the fact that we were not able to assemble the right set of skills for such a task. It is hard to say and admit, but none of us had the whole skill set, and as a team we lacked expertise in some of these areas. Harder to swallow is the fact that we were not quick enough to seek that knowledge outside our boundaries, including from the hosting company.


You might think it was Hell on Earth, but it wasn't as bad as it has been elsewhere. Still, the key takeaways we extracted from the experience are:

  • Deliver to different environments incrementally, as soon as you can
  • Evaluate your third-party products not just by their features at runtime, but also by their "pain before runtime"
  • Be prompt and honest when you have to seek knowledge elsewhere

I hope we all learn something from it.