Agile in the Cloud
 

Thoughtworks and AWS co-presentation on Agile Development in the Cloud
  • WELCOME
    THANKS
    AGILE + CLOUD -> NEBULOUS & ILL-DEFINED

    ---

    Welcome everyone. Thank you for your attendance today as Joe and I get to talk about Agile in the Cloud. And Joe, I think we should congratulate ourselves on the fact that we’ve managed to use two of the most nebulous and ill-defined terms in computing together in the title. I think we could talk about pretty much anything to do with IT and be able to defend it as being somehow related to Agile and/or the Cloud... it’s a shame we didn’t get a chance to work Big Data into the title as well :-)
  • But before we progress, let’s quickly introduce ourselves.

    (Andy) I am a Technical Consultant in the Singapore office of ThoughtWorks. ThoughtWorks is a global software delivery consultancy that has worked solely with Agile methods for the last 12 years. We established our Singapore office in late 2012.

    (Joe) I...
  • TITLE - RISK OF NOT DEFINING TERMS
    WORKING DEFINITIONS
    REALLY AWS AND XP

    ---

    Now back to our title, and specifically back to the risk we run by not clearly defining what we mean by Agile and Cloud within the context of this presentation.

    Here are the working definitions we are using for both Agile and Cloud going forward. They are by no means complete, but they capture the various aspects of each term that we want to draw upon for the remainder of this presentation.

    [These are really definitions of AWS and XP, respectively]
  • AGENDA -> NATURAL SEQUENCE

    ---

    So with these working definitions in mind, let’s identify the specific aspects of Agile development that we’ll be focussing on for the remainder of the evening.

    If you’re familiar with these practices, you’ll be aware that this is a natural sequence in which to present these ideas as well. Each one builds upon the previous.
  • SET EXPECTATIONS
    LIMITED TIME -> SCRATCH SURFACE
    FURTHER READING

    ---

    But before we launch into the guts of these topics, we’d like to set expectations appropriately by being very clear and upfront about what we will and will not be covering this morning. We have limited time, and each of these topics requires at least a week of discussion to fully appreciate its depth. So we will really only scratch the surface of each topic, but hopefully provide enough breadth that you’ll be able to see where to continue looking if you want to know more about any area.

    And for each of the topics, we’ll identify further reading material if you’re interested.
  • SIMPLEST
    MOST WIDELY ACCEPTED

    ---

    So let’s start with automated testing, the simplest and most widely accepted of all the practices we’ll be discussing.
  • CORNERSTONE PRACTICE
    SAFETY NET

    FAST FEEDBACK & CONFIDENCE

    MATURE TOOLS - NO EXCUSE

    COMPUTERS BETTER AT MOST TESTING
    FREE UP TESTERS

    ---

    Automated testing is one of the cornerstone practices of all Agile engineering teams. Many of the more advanced practices become difficult, if not impossible, to perform adequately without the safety net of a fast-running automated test suite.

    Agile teams crave fast feedback on the work they are doing, and want confidence that they can improve existing code without fear of introducing regressions into the codebase: the appropriate use of automated testing addresses all of these issues.

    Modern development languages and platforms have mature automated testing tools and frameworks available, so there is almost no technical barrier to investing in test automation these days.

    However, many teams still take a manual approach to the bulk of their testing, even though computers are far better at this repetitive, repeatable, detail-oriented type of work. There will always be a place for manual testing for most pieces of software, but it should be concentrated on those aspects of applications which are truly difficult to test automatically. Invest in automated testing to free up your precious testing people to focus on just those crucially difficult-to-automate areas.
  • MANY TYPES OF TESTING - NOT ALL AUTOMATABLE

    DIAGRAM FROM BRIAN MARICK
    STATES WHAT SHOULD BE AUTOMATED

    ---

    Testing is a broad church, and there are many different types of testing which can be freely automated. In fact, some types of testing can only be done in an automated fashion.

    This diagram is from noted Agile tester Brian Marick, and it breaks the common types of testing that Agile teams do into four quadrants... I like this diagram because it does a good job of identifying lots of different types of testing that teams do, while also giving an opinion on which types of tests are best suited to automation versus manual testing.
  • SUBSET OF AUTOMATABLE TESTS FOR AWS
    ALL CAN - NOT ALL SHOULD
    UNIT TESTS - NO: DEV LOCAL, TEST LOCAL
    INTEGRATION TESTS - PERHAPS
    FUNCTIONAL TESTS - YES, ESP. FOR PARALLELISATION
    PERFORMANCE TESTS - DEFINITELY

    PERF TESTING NOT 24/7 - SCALE DOWN AGAIN AS WELL

    ---

    But only a subset of those types of automated test is well suited to cloud platforms like AWS. Obviously anything that can be automated can run happily on EC2 instances inside AWS, but if we consider the elastic, pay-as-you-go model that AWS supports, a couple of specific types of automated testing are really in the sweet spot.

    Unit tests are not a great choice to run solely within the cloud. Their principal consumer is the developer who writes them, who wants instant feedback that the changes they are working on are correct and haven’t caused any regressions. Assuming the developer is coding locally, the unit tests should run locally as well, not in the cloud.

    Integration tests can benefit from running in a cloud environment, especially if that gives you a better chance of replicating the production environment than running locally.

    Functional tests are even better suited to AWS because the elastic nature of the platform provides the capacity to scale up your testing infrastructure and execute large numbers of tests in parallel. Functional testing for web applications tends to be slow, so many teams turn to parallelisation to ensure their test suite continues to run quickly. This can be especially useful if you are testing across multiple browser types.

    The broad category of performance tests is also an ideal candidate for cloud-based testing, again because of the ability to cheaply spin up large numbers of test clients to simulate traffic against your application.

    And because performance and functional testing are not 24/7 activities, AWS provides the ability to quickly scale the testing fleet back down when the test cycles are over, removing the need to keep capacity available to cover the maximum load, as you would if you were doing this testing out of your own data centre.
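    The parallelisation described above can be sketched in miniature with Python’s standard library. This is only an illustration: in a real setup each worker would be an EC2 instance or a browser node rather than a local thread, and the test functions here are purely hypothetical stand-ins.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    # Illustrative stand-ins for slow, independent functional tests; in
    # practice each would drive a browser session against the application.
    def check_login():    return True
    def check_search():   return True
    def check_checkout(): return True

    def run_suite_in_parallel(tests, workers=4):
        """Fan independent tests out across workers, collect (name, passed)."""
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(lambda t: (t.__name__, t()), tests))

    results = run_suite_in_parallel([check_login, check_search, check_checkout])
    suite_passed = all(passed for _, passed in results)
    ```

    Scaling the suite then becomes a matter of adding workers (instances) rather than rewriting tests, which is exactly the property the elastic platform rewards.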
  • Over to you Joe to show us one specific framework that has been written to support this type of testing within AWS...

    [Introduce Pypoll app]
    [Clone repo?]
    [Run tests from command line]
    [Explain BWMG]
    [Fire up BWMG]
  • HERE’S WHAT HAPPENED
  • [JOE’S SLIDE]

    There’s another type of testing which is also ideally suited to the AWS platform - hypothesis testing. Agile teams often use the term “spikes” for technical proof-of-concept investigations designed to mitigate risks around feasibility or effort. Using AWS greatly expands the scope of the forms those tests can take. For example:

    Question: what is the best instance size for me? We can run the same code on a variety of instance size configurations and identify the sweet spot between number of servers and server size.

    Question: is multi-threading necessary?

    Question: what is faster/cheaper: memory or CPU? Amongst its set of EC2 instance families, AWS provides both CPU-optimised and RAM-optimised instances. Depending on which of these machine types you use, you would use different coding styles to make the most of them...
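  • One way to run such a spike is to execute the same measurement script on each candidate configuration and compare the numbers. A minimal, self-contained harness might look like the following; the two workloads are placeholders for your real code path, not anything from the demo.

    ```python
    import timeit

    # Two candidate implementations of the same (placeholder) workload; on
    # AWS you would instead run one script per candidate instance type and
    # weigh the timings against each type's hourly price.
    def workload_list():
        return sum([i * i for i in range(10_000)])

    def workload_generator():
        return sum(i * i for i in range(10_000))

    def measure(fn, runs=5):
        """Best-of-N wall-clock time in seconds, to damp scheduling noise."""
        return min(timeit.repeat(fn, number=20, repeat=runs))

    t_list = measure(workload_list)
    t_gen = measure(workload_generator)
    winner = "list" if t_list < t_gen else "generator"
    ```

    Because instances are billed by the hour, the whole experiment can be torn down as soon as the numbers are in.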
  • CI -> ROBOT TESTER
    NOT COVERED BY NAME...
    INTEGRATION = BIG RISK IN S/W DEV
    USUALLY LATE, LONG, HARD TO ESTIMATE, MANY DEFECTS

    AGILE -> “HURTS -> MORE OFTEN”
    CULTURAL CONVENTION + SOFTWARE
    MANY TIMES/DAY
    INC. FREQUENCY -> DEC. BATCH SIZE -> DEC. RISK

    ---

    At its simplest, you can think of Continuous Integration (or CI, as it is popularly abbreviated) as a robot tester whose sole job is to monitor changes to the codebase and run quality checks each time changes are detected.

    This definition isn’t really conveyed by the name “CI”, though. The “integration” aspect comes from the fact that one of the biggest risks of software development at scale arises when streams of code developed in isolation are integrated. Traditionally, this integration happened late in the piece, especially if different teams were tasked with building different parts of a larger system. These delayed integration activities were usually lengthy, difficult to estimate, and often produced defects fundamental to the design of the software, requiring expensive and time-consuming fixes.

    The Agile perspective on these sorts of things is “if it hurts, do it more often”, hence the notion of Continuous Integration. By combining cultural conventions within the development team with software support, even large teams can integrate disparate code streams multiple times a day. By integrating daily, hourly, or even more frequently, each change to be integrated is much smaller than when you integrate weeks or months of work at a time. Smaller change sets mean quicker and simpler integration, and faster feedback should any problems occur.
  • QUOTE FROM BOOK
    EXTREME? NOT WITH PAST EXPERIENCE WITH INT.

    ---

    This is a quote from a book we’ll be referring to a little later on. It might sound like an extreme attitude to take, but if you consider how troublesome integration often becomes in the software development industry, it makes sense to take this risk-averse stance.
  • QUALITY: RUNS, TESTS, INTERNAL QUALITY
    NEED BINARY PASS/FAIL STATUS

    ---

    So the basic elements of a Continuous Integration environment are quite simple. You need:

    - A version control system, which is the source of truth for all changes to your codebase.

    - Some form of continuous integration software. There are lots of options here: some free, some commercial, some suited to small-scale CI, some targeting large-scale environments. The one we’ll see later in the demo section is Jenkins, a well-known, mature, free CI server with an active community supporting it.

    - Some notion of what “quality” means with respect to the codebase being integrated. At its most basic level, you can consider an application to have passed CI if the application runs at all. Most teams, though, will at least have a suite of automated tests to run against their application, usually providing a combination of white-box, grey-box and black-box testing. Beyond that, many teams also attempt to make statements about the internal quality of their code by using quality metrics tools to keep the amount of complexity and duplication within the codebase to a minimum. All of these tests for quality can be invoked within a CI environment and combined to provide a binary “pass” or “fail” report for each build of the software.
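  • That binary pass/fail status reduces to a conjunction of quality gates. A toy sketch, where the check functions are hypothetical stand-ins for “does it run”, “do the tests pass”, and a metrics threshold:

    ```python
    # Each gate answers yes/no; the build passes only if every gate passes.
    def application_runs():   return True
    def tests_pass():         return True
    def metrics_acceptable(complexity=7, threshold=10):
        return complexity <= threshold

    def build_status(checks):
        """Run every quality gate and reduce the results to PASS/FAIL."""
        results = {c.__name__: c() for c in checks}
        return ("PASS" if all(results.values()) else "FAIL"), results

    status, details = build_status(
        [application_runs, tests_pass, metrics_acceptable])
    ```

    Keeping the final status strictly binary is deliberate: a build that is “mostly green” still blocks integration, which is what makes the signal trustworthy.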
  • And this is how those components of a CI system integrate together...

    It all begins with a developer who has finished a piece of work and wants to integrate with the rest of the people working on the same codebase...

  • The developer checks in their changes to the source control system...

  • The same system which is being monitored by the CI server...

  • The CI server will notice when the code has changed...

  • At which point, it will run a number of specified tasks to assert the quality of the new version of the codebase. As mentioned before, some forms of automated testing and code analysis are typically involved at this point. All of these activities are usually referred to as “the build”.

    When these tasks have completed, a report of the build will be produced.

  • Which in this case is being emailed out to people in the team who are interested in the results of the build. Email is a common channel for this communication, but there are many other forms as well, depending on the resources available and how distributed the team is.
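  • The whole cycle above can be modelled as a small polling loop. This is a toy sketch only: `FakeRepo` and `run_build` are invented stand-ins for a real VCS and a real build server like Jenkins.

    ```python
    import hashlib

    class FakeRepo:
        """Stand-in for a version control system with a current revision."""
        def __init__(self): self.code = "v1"
        def head(self): return hashlib.sha1(self.code.encode()).hexdigest()

    def run_build(code):
        # Stand-in for "the build": tests, analysis, etc.
        return "PASS" if "bug" not in code else "FAIL"

    def poll_once(repo, last_seen, reports):
        """One polling cycle: build and report only if the revision changed."""
        rev = repo.head()
        if rev != last_seen:
            reports.append((rev[:7], run_build(repo.code)))  # "email" report
        return rev

    reports = []
    repo = FakeRepo()
    last = poll_once(repo, None, reports)   # first sighting triggers a build
    last = poll_once(repo, last, reports)   # no change, no build
    repo.code = "v2 with bug"
    last = poll_once(repo, last, reports)   # change detected, build fails
    ```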
  • CI + CLOUD?
    OFTEN CO-LO WITH DEV TEAM
    NEED FAST I/O PERFORMANCE -> LIMITS INSTANCE TYPES

    ---

    So what role does the cloud have with respect to CI? Well, for most teams the CI server itself is physically located within their team area. Typically you want your CI server to provide fairly fast I/O performance, so this limits the types of EC2 instances you might want to use if you were thinking about hosting CI on AWS.

    But, just like with automated testing, the application of CI can benefit greatly from the elastic nature of the AWS platform... let’s have a look at how this might work in practice now...
  • HERE’S WHAT HAPPENED
  • TEST + CI -> MATURE FOR AGILE
    SOLVED PROBLEMS
    RUTHLESS AUTOMATION
    ENVIRONMENT SETUP
    “PROVISIONING”

    ---

    Now, in the Agile spirit of “ruthless automation”, we have looked at automating testing and automating the integration of code. Both of these activities are quite mature in the Agile space and are basically solved problems. However, there are many more places where the automation flashlight can be shone to further reduce risk around our development and deployment activities. One of the more recent trends has been automating as much of the deployment activity as possible, including the setup of the environments that are needed - a process commonly referred to as “provisioning”.
  • BEST DEFINITION...
    SAME TOOLS + PRACTICES -> INFRA AS BUSINESS PROBLEM

    MISLEADING TITLE -> INFRA NOT CODED, SPECIFICATION IS

    ---

    And with that thought in mind, the best definition of Infrastructure as Code is “programmatic provisioning by API”. Practically speaking, this means using the same mechanisms, tools and practices to specify what the infrastructure should look like and how it should behave as we have been using to specify the solution to the business problem (i.e., the application).

    Technically the title is misleading, as the infrastructure itself is not codified; rather, the mechanism to specify and provision that infrastructure is.
  • DIFFERENT WAYS PROVISIONING HAS HAPPENED
    MANUAL - LOTS OF HUMAN DECISIONS
    SCRIPTED - PIECEMEAL, NOT SHARED
    PROGRAMMATIC - S/W ENG DISCIPLINE: VERSION CONTROL, TESTED

    ---

    Because environments and services have always needed to be provisioned, there have historically been a number of ways this has happened. At the most basic level there is manual provisioning, which still uses computers but also involves a large amount of human decision-making and input, even if there are written instructions to follow.

    In all but the most basic environments, some form of scripting is applied to remove some of the human-error risk from the deployment process. Typically, these scripts are patched together using a variety of languages and approaches, and are often kept safe and sound by the person who wrote them.

    Full Infrastructure-as-Code programmatic provisioning takes the discipline Agile engineers apply to their source code and transfers it to the code used to specify infrastructure. The languages used for this are generally customised specifically for infrastructure. The scripts built with these languages are maintained in version control, and many of them can be the subject of automated testing, just like application code.

    And as you move further along this path of maturity, the speed of your provisioning increases, and the repeatability and reliability of the process increase with it.
  • And when we look at the lower-level activities that are typically part of the provisioning process, we can start to see some of the benefits of running on a platform like AWS in terms of the amount of provisioning work that is required, irrespective of whether you do it in a manual or programmatic fashion.

    Typical bootstrapping activities that occur as part of provisioning an environment include work around the hardware components (racking and stacking servers), configuration of the network elements, storage and compute components, and the base operating system installation and configuration for each of the servers.

    Running on AWS removes the need to do most of that work, certainly at a detailed level. Leveraging the existing compute, storage and network capabilities of AWS means your provisioning activities tend to start higher up the pyramid.
  • LOTS OF OPTIONS FOR PROG PROV.
    AWS -> CLOUD FORMATION
    CUSTOM LANGUAGES; CHEF, CFENGINE, PUPPET
    FIGHTS CONFIGURATION DRIFT

    ---

    There are a variety of ways to do programmatic provisioning. AWS has its own macro-provisioning service called CloudFormation, which uses JSON templates to declare how a group of AWS components should be created and configured. At a lower level, there are a number of popular languages that specialise in environment specification. One of these is Puppet, which we’ll look at in detail shortly. For now, let’s just concentrate on the high-level operation of Puppet.

    Here’s the basic workflow of a Puppet-managed environment:

    - Each machine managed by Puppet has its configuration codified using Puppet’s declarative, Ruby-based language. Typically there will be groups of machines serving the same role (e.g., web server, application server, etc.).

    - When each of these machines connects to Puppet, the Puppet master determines whether the machine’s current configuration is correct according to the declaration. If the configuration is lacking in any way, Puppet adjusts it accordingly, abstracting the details of exactly “how” the configuration is done.

    - Over time, there is the risk that the configuration will drift from the specification, usually because people have been manually updating the server. Puppet continues to monitor the configuration of each of its machines, and as soon as it notices an inconsistency, it automatically re-applies the specified configuration.

    So let’s have a look at how that happens in practice.
  • LOTS OF OPTIONS FOR PROG PROV.\nAWS -> CLOUD FORMATION\nCUSTOM LANGUAGES; CHEF, CFENGINE, PUPPET\nFIGHTS CONFIGURATION DRIFT\n\n---\n\nThere are a variety of different ways to do programmatic provisioning. AWS has it’s own macro provisioning service called CloudFormation, which uses XML to declare how a group of AWS components should be created and configured. At a lower level, there are a number of popular languages which specialise in environment specification. One of these is Puppet which is one we’ll look at in detail fairly shortly. At this point, let’s just concentrate on some of the high operations of puppet:\n\nHere’s the basic workflow of a Puppet managed environment:\n\n- Each machine managed by Puppet will have it’s configuration codified using the declarative ruby-based language. Typically, there will be groups of machines serving the same role (e.g., web server, application server, etc).\n\n- When each of these machines connects to Puppet, the Puppet instance will determine whether the machine’s current configuration is correct according to the declaration. If the configuration is lacking in any form, Puppet will adjust the configuration accordingly, abstracting the details of exactly “how” the configuration is done.\n\n- Over time there is the risk that the configuration will drift against the specification, usually because people have been manually updating the server. Puppet will continue to monitor the configuration of each of it’s machines and as soon as it notices that the configuration is inconsistent, it will automatically re-apply the specified configuration.\n\nSo let’s have a look at how that will happen in practice.\n
  • LOTS OF OPTIONS FOR PROG PROV.\nAWS -> CLOUD FORMATION\nCUSTOM LANGUAGES; CHEF, CFENGINE, PUPPET\nFIGHTS CONFIGURATION DRIFT\n\n---\n\nThere are a variety of different ways to do programmatic provisioning. AWS has it’s own macro provisioning service called CloudFormation, which uses XML to declare how a group of AWS components should be created and configured. At a lower level, there are a number of popular languages which specialise in environment specification. One of these is Puppet which is one we’ll look at in detail fairly shortly. At this point, let’s just concentrate on some of the high operations of puppet:\n\nHere’s the basic workflow of a Puppet managed environment:\n\n- Each machine managed by Puppet will have it’s configuration codified using the declarative ruby-based language. Typically, there will be groups of machines serving the same role (e.g., web server, application server, etc).\n\n- When each of these machines connects to Puppet, the Puppet instance will determine whether the machine’s current configuration is correct according to the declaration. If the configuration is lacking in any form, Puppet will adjust the configuration accordingly, abstracting the details of exactly “how” the configuration is done.\n\n- Over time there is the risk that the configuration will drift against the specification, usually because people have been manually updating the server. Puppet will continue to monitor the configuration of each of it’s machines and as soon as it notices that the configuration is inconsistent, it will automatically re-apply the specified configuration.\n\nSo let’s have a look at how that will happen in practice.\n
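The declare/apply/verify cycle described in these notes can be sketched with a small manifest. The following is a hypothetical example for a Pypoll web-server role; the module name, file paths and choice of nginx are assumptions for illustration, not taken from the actual demo:

```puppet
# Hypothetical manifest for a "web server" role in the Pypoll demo.
# Puppet is declarative: we state the desired end state and the agent
# converges the machine towards it, re-applying the configuration if
# it later drifts (e.g. after a manual change on the box).

package { 'nginx':
  ensure => installed,
}

service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],   # install the package before starting it
}

file { '/etc/nginx/nginx.conf':
  ensure => file,
  source => 'puppet:///modules/pypoll/nginx.conf',
  notify => Service['nginx'],    # restart nginx whenever the config changes
}
```

Note there is no "install nginx" step as such: the manifest only states that nginx should be installed, running, and configured from the module's file, and Puppet works out what (if anything) needs doing on each run.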
  • TEST + CI + IAC -> BENEFIT -> CD
TRADITIONALLY, DELIVERY -> HIGH RISK
MAKES PEOPLE SQUIRM
MANY ORGS -> OPTIMISE BIGGER, LESS FREQUENT RELEASES
JUST INCREASES PAIN

AGILE “HURTS -> MORE OFTEN”
HIGH FREQUENCY + LOW RISK
BUSINESS DECISION, NOT TECHNICAL

Now that we've got the topics of automated testing, continuous integration and infrastructure as code out of the way, we can tie them all together and show the major benefit of investing so much time and effort in this ruthless automation: the notion of Continuous Delivery.

Traditionally, delivery of software into a production environment is one of the highest-risk activities an IT organisation performs. Like the integration of multiple streams of independent development, integrating an entire application into a production environment and architecture usually makes project managers, release managers, operations staff and occasionally even developers squirm with unease. Not surprisingly, many organisations optimise their processes to make releases as infrequent as possible, delaying (and ironically vastly increasing) the pain they expect to suffer.

Continuous Delivery applies the Agile axiom of “if it hurts, do it more often” to delivery and optimises the process so that a team continually produces software that is production-ready. For teams that master this, releasing to production becomes a highly frequent and extremely low-risk activity: a decision the business can make with little concern for the technical side.
  • CODEBASE CONFIDENCE -> AUTOMATED TESTING
ENVIRONMENT CONFIDENCE -> INFRA AS CODE
RUTHLESS AUTOMATION -> CI
SMALL CHANGES -> CARDS

---

So what does it take for a development team to produce code that is permanently in a state of production readiness? These things:

- Firstly, a high level of confidence in the functionality provided by the codebase. Lots of elements contribute to this confidence, but the one we've focussed on today is automated testing, which gives you rapid feedback on whether the internal and external quality of the code is adequate.

- Second, you need an equal level of confidence in your environment: in its ability to house your application in a suitable fashion. By using techniques such as Infrastructure as Code in association with Continuous Integration, you can ensure your environments are known and proven.

- Thirdly, the changes made to either the infrastructure or the application should be small. Agile teams use physically constrained documents like index cards to keep requirements small, typically implementable in a couple of days. Small changes place the lowest overhead on a team trying to keep a codebase production-ready.

- Finally, automation-wherever-possible in the development, provisioning, testing and deployment of the application. Remove every non-crucial manual activity to provide the lowest-risk path to production.
  • CI -> #1 TOOL FOR CD
CI NOT JUST FOR QUICK FEEDBACK

MORE ADVANCED WORKFLOWS NEEDED
“PIPELINES”

DIAGRAM -> LEFT
DIAGRAM -> MIDDLE + RIGHT

---

And the primary tool that drives much of the workload in a Continuous Delivery environment is the CI server. When we looked at CI previously, the focus was on quick feedback on the quality of a change to our codebase. That remains a vital activity, but CI is also the obvious place to start orchestrating deployment into various environments, including production.

For this extended use of CI, more advanced workflows are needed. These workflows are usually called “pipelines” in the CI world.

This diagram is from a book I'll refer to later; it's a sample of a more complex build pipeline that includes deployment to multiple downstream environments. When we spoke about CI previously, we really only covered the left-hand side of this diagram, not the middle part and certainly not the UAT, capacity and production environments to the right.
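As a loose sketch of what a pipeline does (this is illustrative Python, not Jenkins' actual configuration or API; the stage names and fail-fast behaviour are assumptions for the sketch):

```python
# A build pipeline as an ordered list of stages: each stage must pass
# before the next one runs, and the first failure stops the pipeline.

def commit_stage():
    """Compile and run fast unit tests (stubbed to pass here)."""
    return True

def acceptance_stage():
    """Deploy to a test environment and run functional tests (stubbed)."""
    return True

def run_pipeline(stages):
    """Run stages in order; stop and report at the first failure."""
    for stage in stages:
        if not stage():
            return f"failed at {stage.__name__}"
    return "ready for production"

print(run_pipeline([commit_stage, acceptance_stage]))  # → ready for production
```

The key property is that a change only reaches the right-hand side of the diagram by surviving every earlier stage, which is what makes the final deployment step low-risk.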
  • Soon we'll look at how our demonstration pipeline in Jenkins is configured and how we can promote a change in the codebase all the way through into production with minimal manual intervention.
  • TOY APP -> TRIVIAL
MANY ORGS -> WEB SCALE -> CD
E.G. FLICKR

Obviously, we have a toy application here and the change is equally trivial, but there are plenty of examples of organisations that have invested very heavily in this form of agility and are rightly, and publicly, proud of their ability to deploy to production at will.
  • So we've reached the end of our journey through the intersecting sets that are Agile development practices and Cloud-based hosting platforms. Given that, it makes sense to quickly recap the areas we looked at and see again how the AWS platform supports each of them.
  • Next stop was Continuous Integration. CI is closely associated with automated testing and therefore shares many of its benefits, but it also lets a team use EC2 as their CI server instead of having to procure and manage that hardware themselves, scaling up and down to match the natural ebbs and flows of the compute power CI needs.
  • And if you're using AWS or an equivalent platform, the amount of investment needed in the Infrastructure as Code space is also diminished.
  • And finally, the goal of Continuous Delivery, which requires all the previous items as well as a comprehensive build pipeline within the Continuous Integration environment, to allow automation of the end-to-end feature-to-production process.

Agile in the Cloud: Presentation Transcript

  • Agile in the cloud Think BIG. Start small. Scale Fast!
  • introductions: Andy Marks, Technical Principal (@andee_marks); Joe Ziegler, Evangelist (@jiyosub)
  • definitions: Cloud (Managed web services; Elastic computing and storage; Pay-as-you-go pricing model) vs Agile (Optimising for rapid response to change; High technical discipline; Ruthless automation)
  • agenda: Automated testing; Continuous Integration; Infrastructure as code; Continuous Delivery
  • scope: breadth vs depth
  • scope: breadth (Automated testing; Continuous Integration; Infrastructure as code; Continuous Delivery) vs depth
  • our stack
  • our app
  • Automated testing; Continuous Integration; Infrastructure as code; Continuous Delivery
  • purpose
  • types (Source: http://www.agiletester.ca/downloads/Chapter__9x_Quadrant_Summary_v3.pdf)
  • testing... in the cloud: unit, integration, functional, performance
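At the unit level, a test might look like the following. Note that the `tally` function and its behaviour are hypothetical stand-ins for the Pypoll app's real code, which isn't reproduced here:

```python
# Unit test sketch: fast, no network, no AWS involved. A hypothetical
# vote-tallying function and two tests exercising it.

def tally(votes):
    """Count the votes cast for each option."""
    counts = {}
    for option in votes:
        counts[option] = counts.get(option, 0) + 1
    return counts

def test_tally_counts_each_option():
    assert tally(["red", "blue", "red"]) == {"red": 2, "blue": 1}

def test_tally_handles_an_empty_poll():
    assert tally([]) == {}
```

Tests like these form the fast inner loop; the integration, functional and performance layers above them are where the cloud (spare EC2 capacity, bees with machine guns, CloudWatch) starts paying off.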
  • demo
  • [diagram: load-testing demo in the AWS Cloud. The Pypoll demo app runs on an EC2 instance; #1 clone the app from its repo on the Internet; #2 spawn attack instances; #3 the instances attack the app; #4 monitor with Amazon CloudWatch]
  • hypothesis testing
  • further reading: github.com/newsapps/beeswithmachineguns; github.com/andeemarks/pypoll
  • Automated testing; Continuous Integration; Infrastructure as code; Continuous Delivery
  • introduction
  • consider this: “Without continuous integration, your software is broken until somebody proves it works” - “Continuous Delivery”
  • prerequisites: 1. Source control; 2. CI server; 3. Automated evaluation of “quality”
  • basic workflow (Source: http://www.falafel.com/testcomplete/continuous_integration.aspx)
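The polling loop in the workflow diagram reduces to a single decision per cycle, sketched here in Python (the revision identifiers and the `build` callback are stand-ins, not a real VCS or Jenkins API):

```python
# One cycle of the basic CI workflow: compare the latest revision in
# source control against the last revision built; if it has changed,
# build it and record whether the build succeeded or broke.

def ci_cycle(latest_revision, last_built, build):
    """Return (revision, status) for a new build, or None if idle."""
    if latest_revision == last_built:
        return None  # nothing new checked in; stay idle
    status = "success" if build(latest_revision) else "broken"
    return (latest_revision, status)
```

Everything else a CI server adds (notifications, build history, slave management) hangs off this check-then-evaluate loop.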
  • demo
  • [diagram: CI demo in the AWS Cloud. A Jenkins master on EC2 manages a slave instance (#1); the slave polls (#2) the Pypoll demo app repo on the Internet, installs from it (#3) and runs the tests (#4)]
  • further reading: http://jenkins-ci.org/; http://martinfowler.com/articles/continuousIntegration.html
  • Automated testing; Continuous Integration; Infrastructure as code; Continuous Delivery
  • definition: “Programmatic provisioning by API”
  • levels of maturity: Manual -> Scripted -> Programmatic (☝ Speed, ☝ Repeatability, ☝ Reliability, ☟ Risk)
  • activities: 1. Bootstrapping (Base OS, Compute, Storage, Network, Hardware); 2. Configuration (Behaviour, State); 3. Launch
  • puppet: 1. declare configuration; 2. apply configuration; 3. (time passes)...; 4. verify configuration; 5. re-apply configuration (if needed)
  • demo
  • [diagram: provisioning demo. A Puppet-managed EC2 instance polls for the Pypoll infra configuration; #1 clone the Pypoll infra and Pypoll demo repos from the Internet; #2 apply the configuration; #3 configure the instance; #4 install the Pypoll demo]
  • further reading: http://puppetlabs.com
  • Automated testing; Continuous Integration; Infrastructure as code; Continuous Delivery
  • pre-requisites: confidence ➡ codebase; confidence ➡ environment; small batch sizes; ruthless automation
  • advanced CI pipelines Source: Continuous Delivery
  • demo
  • [diagram: Continuous Delivery demo in the AWS Cloud. #1 CI publishes the build artifact to an S3 bucket; #2 promote to the staging instance; #3 install the artifact; #4 publish the tested artifact to a second bucket; #5 promote to the production instance; #6 install the artifact]
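The promote steps in the demo copy an already-built, already-tested artifact between environments rather than rebuilding it. A rough sketch of that idea follows; the key naming and the in-memory `store` are hypothetical stand-ins for the demo's actual S3 buckets:

```python
# Promotion: the binary published by CI (#1) is the exact binary that
# reaches staging (#2/#3) and, once verified, production (#5/#6).

def artifact_key(app, version):
    """Key under which a build artifact is stored, e.g. in a bucket."""
    return f"{app}/{app}-{version}.tar.gz"

def promote(store, key, src="staging", dest="production"):
    """Copy an artifact between environments without rebuilding it."""
    store.setdefault(dest, {})[key] = store[src][key]
    return store

key = artifact_key("pypoll", "1.2")
store = {"staging": {key: b"built-once"}}
promote(store, key)
```

Building once and promoting the same bytes is what makes the earlier pipeline stages meaningful: what was tested is exactly what ships.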
  • @ scale (Source: http://code.flickr.com/)
  • further reading
  • conclusion: Automated testing; Continuous Integration; Infrastructure as code; Continuous Delivery
  • conclusion: the Agile practices (Automated testing, Continuous Integration, Infrastructure as code, Continuous Delivery) mapped against the Cloud characteristics (Managed web services; Elastic computing and storage; Pay-as-you-go pricing model)
  • Questions?