Automating the build and deployment of legacy applications


  • CWC owns and operates full-service telecoms businesses around the world. We provide fixed line, mobile, broadband and pay TV services in the Caribbean, Panama, Seychelles and Monaco. My team and I are working on a largish project to improve our build and deploy processes, and in my session today I’m going to share our experiences and what we have learnt.
  • So first a little background and some context
  • The main application we are focussing on is Liberate. It is a web application with a legacy backend written in Cobol. The team that develops, deploys and supports it is about 50 people strong, including a deployment team of 5. It comprises several tiers: a Java web application running within Tomcat under Linux; a Java web services application providing SOAP APIs, also running within Tomcat but under Windows; and the legacy Cobol backend, which runs under VMS. For those not familiar with VMS, it is a mainframe-type OS (developed & supported by HP) that has been around for many years but has almost reached end of life, at least as far as HP is concerned. The Liberate application is installed in most of the Business Units I mentioned earlier; in fact it’s installed on about 20 production environments.
  • Challenges with our software delivery process for Liberate:
1) CMS is a VMS-specific VCS. As version control systems go, it is old-fashioned: it doesn’t really do branching (it has something sort of equivalent), plus it works on a locking model where devs reserve and check out the code they want to work on, making that code unavailable for anyone else to work on.
2) The Java tiers are not such an issue, as we use Jenkins (previously Hudson) + Ant for building the binaries and deployables. The Cobol/VMS application is built and deployed using a large number of home-grown DCL scripts + a Cobol program (DCL is the VMS equivalent of Bash or PowerShell). It is fair to say that it requires a lot of babysitting, and there are only 2 people in the team who have the necessary knowledge and skills to do this.
3) Although the 3 tiers are separate applications, there are dependencies between them; much of the time deploying a new build of one tier means deploying a new build of all tiers. Thus a lot of co-ordination is necessary.
4) Due to the manual steps, the co-ordination and the VMS scripts, we are dependent on a small number of knowledgeable people.
5) So, the whole process is very stressful.
6) Although some people from the Liberate team are relocating, most are not. So, experience will be even more in short supply.
  • Late last year my boss set me the following objectives with respect to the challenges just mentioned. There are 5 people in the Deployment team who spend most of their time either doing deployments or preparing for one; it is not a speedy process and, thus, there is a backlog. As you can imagine, the process is complex and laborious, so error-prone and difficult to manage and control. As I’ve already mentioned, VMS skills are hard to find. Depending on multiple version control systems increases complexity. During the summer, most of us not relocating to Miami will be leaving CWC (including most of my team).
  • Continuous Delivery is a best-practice book which covers all aspects, including continuous integration, testing strategy, config management etc., whereas the Jenkins book is much more of a recipe-style book which focuses on how to get the best out of Jenkins. Both are definitely worth reading, though I am going through the 5 stages of grief: denial, anger, bargaining, depression and acceptance. The Liberate development and deployment teams are very much what you might call ‘traditional’: thus modern practices like automated testing, automated configuration management and continuous integration are not terms familiar to them, let alone practiced.
  • These are the tasks we are planning to achieve in order to meet the objectives.
  • Before we started, it helped me to break down our build/deploy process into its high-level stages; not just for my benefit but when explaining to others.
Commit stage: Totally up to the Dev team how they produce their code, test it and so on. Only once it hits the VCS is it of interest to the deployment team. We are fortunate that, for many years, it has been the practice that the Dev team do not deploy to any test or prod environment; that responsibility lies only with the Deploy team. This practice makes it much easier for us to change/update the build/deploy processes, as there are many fewer people we need to involve.
Generate artifacts: We use our Build Server to execute the tasks in this stage, and I’ll talk a little more about it later on. Although many of the tasks in this stage are Java-only, the main task of generating binaries is just as applicable to the Cobol backend application.
Push binaries: We don’t use our Build Server for actual deployments; we have a separate application for that. So, this stage simply pushes the appropriate binaries from the build server to the deployment server.
Binary packaging: This stage takes the selected binaries and combines them with the config for a specific environment, e.g. a BU test environment, and generates the actual deployment files. In the case of our Java apps that will be a war file, and for the backend app, zip files containing executables.
Deployment: The actual deployment stage. One of the things we looked for was a solution that didn’t require any special agent s/w to be installed on the hosts we deploy to.
For the rest of this session I’ll be focussing on 3 areas: the VCS that we use, some gotchas we’ve come across and the branching strategy we have adopted; the build server we are using and some of my favourite plugins; and the deployment server we are using and how we are configuring it.
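The five stages above can be sketched as a single driver script. This is a minimal dry-run sketch, not our actual tooling: the host names, repository URL and the package.sh/deploy.sh helpers are all invented for illustration, and each stage just echoes the command it would run.

```shell
#!/bin/sh
# Dry-run sketch of the five pipeline stages. All host names, paths and
# helper scripts (package.sh, deploy.sh) are illustrative assumptions.
REVISION="${1:-1234}"
ENVIRONMENT="${2:-BU-test}"

run() { echo "WOULD RUN: $*"; }   # swap the echo for "$@" to execute for real

# 1. Commit stage: the Dev team's code lands in the VCS; we start from there.
run svn export "svn://vcs.example.com/liberate/trunk@${REVISION}" build/src

# 2. Generate artifacts: compile, run automated tests, static analysis, Javadocs.
run ant -f build/src/build.xml clean dist test javadoc

# 3. Push binaries from the build server to the deployment server.
run scp build/src/dist/liberate.war deploy@deploy.example.com:/incoming/

# 4. Binary packaging: combine binaries with environment-specific config.
run ssh deploy@deploy.example.com "package.sh --env ${ENVIRONMENT} --rev ${REVISION}"

# 5. Deployment: hand over to the deployment tool (no agent on the target host).
run ssh deploy@deploy.example.com "deploy.sh --env ${ENVIRONMENT} --rev ${REVISION}"
```

The `run` wrapper keeps the sketch safe to execute anywhere; in a real pipeline each stage would be a separate job rather than one script.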
  • Reliable – we’ve been using it for 7+ years with no problems.
Straight-forward – popular, and the concepts are well thought through (in my opinion).
Tools – TortoiseSVN and the Eclipse plugin are my favoured ones. Happily, the Java client runs under VMS without any problems.
Locking – we’ve never really had problems with developers over-writing others’ code. Will talk about code merging later on, though.
Cobol source migration – approx 10,000 objects for each release. Will be an arduous process.
VMS dev env – use DCL scripts rather than direct CMS commands; this abstraction means that we can effectively write new versions of these scripts, so devs will need less re-training as there is no need to learn SVN commands.
  • As well as choosing the appropriate VCS that fits in with your culture and environment, having a branching strategy which is clearly articulated and understood is also important. This is the strategy we are aiming for; it is not very much different from what we do now. All I have really done is formalise an existing, de facto, strategy. Over the last few years there seems to be much more debate about branching: with the rise in popularity of Git and GitHub, branching and forking are encouraged, versus people like Martin Fowler and Kent Beck, who are encouraging continual deployment from the trunk and, thus, little or no branching. However, we follow what I think is a fairly standard branching strategy, namely:
Dev release branch – in SVN lingo, the trunk.
A branch for each release – created when system testing of that release is commencing.
A branch for each prod release build – our Liberate application is used by many business units, and it is up to them when they want to upgrade to a newer major release or even a newer build of the same release. Once a BU is settled on a build, they become very reluctant to take newer builds unless there is something in it for them. This makes it difficult to deploy bug-fixes, as the release branch will often contain many weeks of other fixes and enhancements (that they are not interested in). So, our solution is to create these specific release build branches to which we can apply specific bug-fixes only. We try not to become too branch-tastic, though, as this becomes difficult to manage and control.
Different SVN repos – we have learnt to our cost that having one repo for everything is not necessarily a good thing when there is a lot of active development happening.
Currently the 3 Liberate tiers use slightly different name formats for the same release, so it seems sensible to move to a common naming standard.
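To give a concrete flavour of a common naming standard for the branches above, here is a small shell sketch. The exact format (app, release, optional build + BU) is a hypothetical convention, not our actual one:

```shell
#!/bin/sh
# Hypothetical branch-naming helper for the strategy above. The
# "release-N[-build-M-BU]" format is an illustrative convention only.

# branch_path APP [RELEASE [BUILD BU]]
#   trunk       -> latest development release
#   branches/.. -> release branches and BU-specific release-build branches
branch_path() {
    app="$1"; release="$2"; build="$3"; bu="$4"
    if [ -z "$release" ]; then
        echo "${app}/trunk"                          # dev release branch
    elif [ -z "$build" ]; then
        echo "${app}/branches/release-${release}"    # major release branch
    else
        echo "${app}/branches/release-${release}-build-${build}-${bu}"
    fi
}

branch_path liberate-web            # liberate-web/trunk
branch_path liberate-web 5          # liberate-web/branches/release-5
branch_path liberate-web 5 12 PAN   # liberate-web/branches/release-5-build-12-PAN
```

In practice such a helper would wrap `svn copy` to create the branch; keeping the format in one script is what makes the convention enforceable across the three tiers.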
  • Perhaps these are obvious, but I wish I’d known them a few years ago.
Merging – SVN has a merging tool, which we used several years ago. But our offshore developers hated merging, so only did it maybe once per week, by which time there were often 10s of commits involving 100s of code changes. Also, I don’t think the testing & checking of merged files was particularly thorough. As a result, this merging activity was usually a disaster. After about 6 months the head of Dev decided it should be abandoned, and now code changes are manually made to the relevant branches. Moral of this story: merge often and have high code coverage.
Commit messages – I wasn’t sure whether to include this one... it’s common sense, right? Well, in my experience perhaps not. The reason I’m a little fixated is that at least once a week I use SVN history to figure out why something was changed and when, and having decent commit messages is an invaluable aid.
Binaries – I’m talking about 3rd-party binaries (jar files etc.). It’s so easy and simple: all those jars stored in a central place, next to your source code. Makes things nice and easy for the developers and the automated build jobs. The downside is the amount of space it takes up in the repo; the diff algorithm SVN uses is pretty sophisticated but not so great with binary files. So, if you are often committing new versions of binary files, expect your repo to grow rather large rather quickly. An obvious alternative is to use Maven for managing and building your application, in which case you can take advantage of its dependency management features. If, like us, you prefer not to use Maven, an alternative is another Apache piece of s/w called Ivy. Ivy is purely a dependency manager and integrates nicely with Ant.
Repos – we have one repo which holds all our projects. As well as many directories and sub-directories, it is very large (thanks to the binaries issue already mentioned). It is relatively slow to navigate and access, and its size is causing the people responsible for backups some concern. So, we are migrating the still-used projects to a number of new repos. This is a very slow process; we are talking months here.
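On the commit-messages point: one lightweight way to make "decent messages" stick is a check that an SVN pre-commit hook could apply (the message would come from `svnlook log` there). This is a generic sketch, not a hook we actually run; the minimum-length rule and the LIB-123 issue-ID pattern are assumptions:

```shell
#!/bin/sh
# Sketch of a commit-message check, as might run in an SVN pre-commit hook.
# The rules are illustrative assumptions: at least 10 characters, plus a
# reference to a (hypothetical) issue ID of the form LIB-123.
check_log_message() {
    msg="$1"
    if [ "${#msg}" -lt 10 ]; then
        echo "REJECT: message too short"; return 1
    fi
    case "$msg" in
        *LIB-[0-9]*) echo "OK"; return 0 ;;
        *)           echo "REJECT: no issue ID (e.g. LIB-123)"; return 1 ;;
    esac
}

check_log_message "fix"                                    # REJECT: message too short
check_log_message "LIB-482: correct DB host in BU config"  # OK
```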
  • We’ve been using Jenkins (and its previous incarnation, Hudson) for around 6 years, so we’re very comfortable with it.
The technology it uses (Java and Tomcat) is very familiar to us.
SVN and Ant – 2 technologies we use a lot.
Plugins – there are many, many plugins available, and I’ll talk about a few of my favourites shortly.
Configuration – we’ve found the learning curve is not particularly steep.
It has a master/slave model allowing the distribution of build jobs; this feature is important for us as we have certain jobs that need to be executed within a Windows environment. It also allows for horizontal scaling: many of our jobs will be overnight builds, so being able to spread the load across several servers is essential to ensure builds complete before the next day.
The Cobol app has to be built on 3 separate VMS environments. Because we want to use our Build Server to also manage the building of the backend Cobol application, it is essential that it can execute scripts on remote hosts. Jenkins does this via SSH and, happily for us, works very well with VMS in this respect.
  • The following plugins are my favourites, i.e. the ones that have really helped us.
Build Pipeline – in the past our Jenkins jobs consisted of many, many steps; however, having more jobs with smaller numbers of steps allows greater parallelism in build pipelines. Without this plugin, though, it would be difficult to understand the flow. <Click to show screenshot>
Copy Artifact – works very well with the Pipeline plugin. Allows artifacts generated by one job to be used by other jobs further along the pipeline.
Text Finder – we discovered that Jenkins can happily execute scripts on remote VMS hosts via SSH, but for some reason the return code was not getting picked up, so Jenkins wouldn’t know that the script had failed, for example. Any console output is picked up, though, so we can use this plugin to check for error output.
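Since the exit status from the remote scripts wasn't reaching Jenkins, the Text Finder plugin can instead watch the console for a sentinel string. Here is a minimal sketch of the idea; the sentinel text and the wrapper function are invented for illustration (in our case the equivalent would live in the DCL script on the VMS side):

```shell
#!/bin/sh
# Wrapper sketch: run a command and, on failure, print a sentinel line that
# the Jenkins Text Finder plugin is configured to treat as a build failure.
# The sentinel text "REMOTE-BUILD-FAILED" is an invented convention.
run_and_flag() {
    "$@"
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "REMOTE-BUILD-FAILED status=${status}"
    fi
    return "$status"
}

run_and_flag true            # succeeds: prints nothing
run_and_flag false || :      # fails: prints REMOTE-BUILD-FAILED status=1
```

Text Finder would then be set to search the console output for `REMOTE-BUILD-FAILED` and mark the build as failed when it appears.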
  • We used to use an old, no-longer-developed product called ControlTier for doing deployments. It was very clunky and fragile. So about 18 months ago I started looking for an alternative that could meet these requirements:
Fairly complex deployments, in that our application is used by around 20 BUs, each with their own production and UAT environments, and all on either different releases or different versions of the same release. So, it is also very important we can easily keep track of which release version is on which environment. Currently done via email and spreadsheets.
Application configuration is manual at the moment, and I reckon config mistakes account for around 20% of our deployment cock-ups. A couple of weeks ago we had to change the IP address of a database server in several places; we forgot about one of the places and as a result all of our Caribbean users were locked out of the app for about 30 mins. Not a good day.
2-phase deployment: we need to be able to prepare the deployables in some sort of BU staging area in advance, so that when the actual deployment time comes there is no need to wait for the deployable to be remotely copied. This is especially important for the BUs that have limited WAN bandwidth, e.g. those in the Indian Ocean.
Different parts of the whole app have to run under different OSes.
  • Basically, it meets our requirements.
Instead of storing the BU-specific config files in SVN, we store them in DeployIt.
It can deploy to the different operating systems we use.
The built-in Release Dashboards help us keep track of which version & release is installed on which environment.
The 3 tiers that comprise the Liberate application have separate deployment mechanisms, but versions do need to be kept in sync. DeployIt has the concept of a composite package which allows us to do that.
With some excellent help from Xebia we built a custom plugin which allows us to operate a staging server and manage 2-phase deployments.
Another feature we like is that DeployIt only transfers the packages that have changed between versions.
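The 2-phase idea above boils down to a stage step (the slow WAN copy, done ahead of time) and an activate step (fast and local, done in the maintenance window). This dry-run sketch is our own illustration of the concept, not DeployIt's actual plugin API; the hosts, paths and staging layout are invented:

```shell
#!/bin/sh
# Dry-run sketch of 2-phase deployment. Host names, paths and the staging
# layout are illustrative; the real mechanism is a custom DeployIt plugin.
STAGING_DIR="/staging"   # per-BU staging area on the target side

# Phase 1: copy the package over the (possibly slow) WAN, days in advance.
stage() {
    pkg="$1"; host="$2"
    echo "STAGE: scp ${pkg} deploy@${host}:${STAGING_DIR}/"
}

# Phase 2: at deployment time, install from the local staging area.
# No WAN transfer is needed, so the window stays short.
activate() {
    pkg="$1"; host="$2"
    echo "ACTIVATE: ssh deploy@${host} install ${STAGING_DIR}/${pkg}"
}

stage    liberate-5.12.zip app01.sey.example.com   # ahead of the window
activate liberate-5.12.zip app01.sey.example.com   # during the window
```

Splitting the two phases is what makes deployments workable for the BUs with limited WAN bandwidth: the only time-critical step is a local one.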
  • This brings me to the end of what I wanted to talk about; I’ve covered our experiences with Subversion, how we use Jenkins and what we want to do with DeployIt. We are using the build/deploy pipeline in anger: not yet for Liberate (that is a few months away), but for a Java ESB application, and we are very happy with how things are progressing. There is a lot to Continuous Delivery, especially for organisations like CWC, and we are only at the start of what we could achieve. For those of you in a similar position, I do recommend you read the Continuous Delivery book I mentioned earlier and talk to the vendors here today. Thanks for listening, and if anyone has any questions I’m happy to take them.

    1. Automating the build and deployment of legacy applications — The CWC experience
    2. Steve Judd, Systems Architect at CWC • Cable & Wireless Communications • Full-service Telco (fixed, mobile, broadband, TV) • Main businesses in Caribbean & Panama
    3. Background
    4. The Application.... • N-tier application: ‘Liberate’ • Installed in multiple Business Units, on multiple environments • Several releases are actively supported
    5. Challenges... • 2 Version Control Systems: – Subversion for Java tiers – CMS for the Cobol code • Each tier has its own build/deploy tools • Many manual steps • Much co-ordination required • Dependency on highly knowledgeable people • Equals risky, stressful, labour-intensive releases • And finally… CWC HQ is moving to Miami this year
    6. Objectives... • Significantly reduce manual effort • Simplify & streamline build & deploy process • Remove dependency on VMS skills & highly knowledgeable people • Migrate to one VCS • All to be completed by June!
    7. Approach
    8. Some theory... • Useful books: – “Continuous Delivery” by Jez Humble & David Farley – “Jenkins: The Definitive Guide” by John Ferguson Smart • Both contain useful ideas and approaches. However… • Daunting to apply these to a traditional development shop... • … with a large, complex legacy application
    9. What to do, what to do... • Migration to a single VCS + formally define the branching strategy • Selection & adoption of a single tool to manage generation of application binaries • Selection & adoption of a single tool to manage deployment
    10. Our build/deploy pipeline... • Responsibility of Dev teams: unit testing, functional testing, commit to VCS • Compilation & generation of binaries; run any automated tests; static code analysis; generate Javadocs • Push binaries to Deployment server • Package binaries into environment-specific deployables • Actual deployment mechanism
    11. Version Control
    12. Subversion... • Our VCS of choice • Why? – Reliable – Straight-forward to understand & use – Good set of client tools – ...can even use under VMS! – Optimistic locking model • Does mean we’ll be migrating our Cobol source though • Also need to change VMS development environment
    13. Branching strategy • Branch for latest development release (aka trunk) • Branch for each major release (3, 4, 5 etc.) • Branch for each release build in a BU production environment • 3 Subversion repositories: one for each application • Standardise the branch naming convention
    14. Subversion: lessons learnt • Merging – do it often & check every merged file • Meaningful commit messages • Using a VCS to store binary libraries can be a bad thing • More small repos better than fewer large
    15. Building binaries
    16. Binary generation • Jenkins is our Build Server of choice • Why we like it: – Familiar technology – Integrates well with Subversion & Ant – Extensive library of plugins – Straight-forward to use/configure – Master/slave model – Ability to invoke scripts on a remote host
    17. Plugins we use • Build Pipeline – provides a view of upstream & downstream jobs • Copy Artifact – enables artifacts to be copied from another job; we use it a lot with Build Pipeline • Text Finder – searches for specified text in files & console output; handy when scripts’ return code not sent back to Jenkins
    18. Deployment
    19. Our requirements • Complex deployment needs: – Multiple Business Units – Each has a production environment + 1 or more UAT environments – Business Units often on different releases – Many system test environments • Thus keeping track of what release is where is important • Management of Application Configuration • 2-phase deployments • Able to deploy to Linux, Windows and VMS
    20. Why we chose DeployIt • It meets our requirements • Dictionaries for the configuration values • Handles Linux, Windows and VMS deployments • Release Dashboards • Composite packages for multi-app deployments • Custom plugin for 2-phase deployments • Only transfers packages that have changed