Development and Test on AWS
A session from the AWS Lunch and Learn Series on the topic of Development and Test on AWS.

Slide notes
  • S3cmd backup, Jungle Disk
  • CloudFormer template creation tool
  • MANY TYPES OF TESTING - NOT ALL AUTOMATABLE. DIAGRAM FROM BRIAN MARICK SHOWS WHAT SHOULD BE AUTOMATED.
    ---
    Testing is a broad church, and many different types of testing can be automated; in fact, some types of testing can only be done in an automated fashion. This diagram is from the noted Agile tester Brian Marick, and it breaks the common types of testing that Agile teams do into four quadrants. I like this diagram because it does a good job of identifying the many different types of testing that teams do, while also giving an opinion on which types of tests are best suited to automation and which to manual testing.
  • SUBSET OF AUTOMATABLE TESTS FOR AWS. ALL CAN - NOT ALL SHOULD. UNIT TESTS - NO: DEV LOCAL, TEST LOCAL. INTEGRATION TESTS - PERHAPS. FUNCTIONAL TESTS - YES, ESP. FOR PARALLELISATION. PERFORMANCE TESTS - DEFINITELY. PERF TESTING NOT 24/7 - SCALE DOWN AGAIN AS WELL.
    ---
    Only a subset of those types of automated test is well suited to cloud platforms like AWS. Anything that can be automated can run happily on EC2 instances, but if we consider the elastic, pay-as-you-go model that AWS supports, a few specific types of automated testing sit in the sweet spot.
    Unit tests are not a great choice to run solely in the cloud. Their principal consumer is the developer who writes them, who wants instant feedback that the changes they are working on are correct and haven't caused any regressions. If the developer is coding locally, the unit tests should run locally as well.
    Integration tests can benefit from running in a cloud environment, especially if it gives you a better chance of replicating the production environment than running locally.
    Functional tests are even better suited to AWS, because the elastic nature of the platform provides the capacity to scale up your testing infrastructure and execute large numbers of tests in parallel. Functional tests for web applications tend to be slow, so many teams turn to parallelisation to keep their test suite running quickly. This is especially useful when testing across multiple browser types.
    The broad category of performance tests is also an ideal candidate for cloud-based testing, again because of the ability to cheaply spin up large numbers of test clients to simulate traffic against your application. And because performance and functional testing is not a 24/7 activity, AWS lets you quickly scale the testing fleet back down when the test cycles are over, removing the need to keep capacity on hand to cover the maximum load, which is what you would need if you were doing this testing out of your own data centre.
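The parallelisation argument above can be sketched with a local stand-in: a pool of workers running independent functional tests concurrently, where the worker count stands in for a fleet of EC2 test instances. The test cases and `run_functional_test` are hypothetical placeholders, not part of any real suite.

```python
from concurrent.futures import ThreadPoolExecutor

def run_functional_test(case):
    # Placeholder for a real browser-driven functional test; in a cloud
    # setup each worker would drive its own EC2-hosted test environment.
    return (case, "pass")

# Hypothetical slow functional test cases, named for illustration only.
test_cases = [f"checkout_flow_{i}" for i in range(8)]

# Fan the tests out across workers, the way a cloud test fleet fans them
# out across instances, then gather the per-case results.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_functional_test, test_cases))

print(all(status == "pass" for status in results.values()))
```

Doubling the worker count roughly halves the wall-clock time of a suite of equally slow tests, which is exactly the lever an elastic platform gives you.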
  • CI -> ROBOT TESTER. NOT CONVEYED BY NAME... INTEGRATION = BIG RISK IN S/W DEV. USUALLY LATE, LONG, HARD TO ESTIMATE, MANY DEFECTS. AGILE -> "HURTS -> MORE OFTEN". CULTURAL CONVENTION + SOFTWARE. MANY TIMES/DAY. INC. FREQUENCY -> DEC. BATCH SIZE -> DEC. RISK.
    ---
    At its simplest, you can think of Continuous Integration (or CI, as it is popularly abbreviated) as a robot tester whose sole job is to monitor changes to the codebase and run quality checks each time a change is detected.
    That definition isn't really conveyed by the name "CI". The "integration" aspect comes from the fact that one of the biggest risks of software development at scale arises when streams of code developed in isolation are integrated. Traditionally, this integration happened late, especially when different teams were tasked with building different parts of a larger system. These delayed integration activities were usually lengthy, difficult to estimate, and often surfaced defects fundamental to the design of the software, requiring expensive and time-consuming fixes.
    The Agile perspective on such things is "if it hurts, do it more often", hence the notion of Continuous Integration. With a combination of cultural convention within the development team and software support, even large teams can integrate disparate code streams multiple times a day. By integrating daily, hourly, or even more frequently, each change set to be integrated is much smaller than when you integrate weeks or months of work at a time. Smaller change sets mean quicker, simpler integration and faster feedback when problems occur.
  • QUALITY: RUNS, TESTS, INTERNAL QUALITY. NEED BINARY PASS/FAIL STATUS.
    ---
    The basic elements of a Continuous Integration environment are quite simple. You need:
    - A version control system, which is the source of truth for all changes to your codebase.
    - Some form of continuous integration software. There are lots of options here: some free, some commercial, some suited to small-scale CI, some targeting large environments. The one we'll see later in the demo section is Jenkins, a well-known, mature, free CI server with an active community behind it.
    - Some notion of what "quality" means for the codebase being integrated. At the most basic level, you can consider an application to have passed CI if it runs at all. Most teams, though, will at least have a suite of automated tests to run against their application, usually providing a combination of white-box, grey-box and black-box testing. Beyond that, many teams also make statements about the internal quality of their code, using quality-metrics tools to keep complexity and duplication within the codebase to a minimum. All of these checks can be invoked within a CI environment and combined into a binary "pass" or "fail" report for each build of the software.
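The binary pass/fail idea above can be sketched as a small quality gate: run every check, and the build passes only if all of them succeed. The check commands here are hypothetical stand-ins, not any real project's configuration.

```python
import subprocess
import sys

# Hypothetical quality checks a CI server might run on each change.
# The commands are placeholders: one stands in for a test suite,
# the other for a quality-metrics tool.
CHECKS = [
    [sys.executable, "-c", "print('unit tests ok')"],
    [sys.executable, "-c", "print('quality metrics ok')"],
]

def build_status(checks):
    """Run every check; the build passes only if all exit with status 0."""
    return all(subprocess.run(cmd).returncode == 0 for cmd in checks)

# Collapse all the individual results into one binary build status.
status = "pass" if build_status(CHECKS) else "fail"
print(status)
```

A single failing check (non-zero exit code) flips the whole build to "fail", which is what makes the status useful as an unambiguous signal.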
  • LOTS OF OPTIONS FOR PROGRAMMATIC PROVISIONING. AWS -> CLOUDFORMATION. CUSTOM LANGUAGES: CHEF, CFENGINE, PUPPET. FIGHTS CONFIGURATION DRIFT.
    ---
    There are a variety of ways to do programmatic provisioning. AWS has its own macro provisioning service, CloudFormation, which uses JSON templates to declare how a group of AWS components should be created and configured. At a lower level, there are a number of popular languages that specialise in environment specification. One of these is Puppet, which we'll look at in detail shortly. For now, let's concentrate on the high-level operation of Puppet.
    Here's the basic workflow of a Puppet-managed environment:
    - Each machine managed by Puppet has its configuration codified in Puppet's declarative Ruby-based language. Typically there will be groups of machines serving the same role (e.g., web server, application server, etc.).
    - When each of these machines connects to Puppet, the Puppet instance determines whether the machine's current configuration matches the declaration. If the configuration is lacking in any way, Puppet adjusts it accordingly, abstracting the details of exactly "how" the configuration is done.
    - Over time there is a risk that the configuration will drift from the specification, usually because people have been manually updating the server. Puppet continues to monitor the configuration of each of its machines, and as soon as it notices an inconsistency, it automatically re-applies the specified configuration.
    So let's have a look at how that happens in practice.
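The declare / verify / re-apply loop described above can be modelled in miniature: desired state as data, and a converge step that detects and rewrites only the keys that have drifted. The resource names are illustrative, not Puppet syntax.

```python
# Desired state, analogous to a manifest for a "web server" role.
# Keys and values are illustrative, not real Puppet resources.
DESIRED = {"nginx": "installed", "ntp": "running", "motd": "managed"}

def converge(actual, desired):
    """Return the drifted keys and bring `actual` back in line with `desired`."""
    drifted = [key for key in desired if actual.get(key) != desired[key]]
    for key in drifted:
        actual[key] = desired[key]  # "re-apply the specified configuration"
    return drifted

server = dict(DESIRED)
server["ntp"] = "stopped"          # a manual change causes drift
print(converge(server, DESIRED))   # detects and repairs just the drift
print(server == DESIRED)
```

The point of the model is that the specification is the only input: however the machine drifts, re-running the same converge step restores it, which is what makes the process repeatable.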
  • DIFFERENT WAYS PROVISIONING HAS HAPPENED. MANUAL - LOTS OF HUMAN DECISIONS. SCRIPTED - PIECEMEAL, NOT SHARED. PROGRAMMATIC - S/W ENG DISCIPLINE: VERSION CONTROL, TESTED.
    ---
    Because environments and services have always needed provisioning, historically this has happened in a number of ways. At the most basic level there is manual provisioning, which still uses computers but also involves a large amount of human decision-making and input, even when there are written instructions to follow.
    In all but the most basic environments, some form of scripting is applied to remove some of the human-error risk from the deployment process. Typically these scripts are patched together from a variety of languages and approaches, and often kept safe and sound by the person who wrote them.
    Full infrastructure-as-code programmatic provisioning takes the discipline Agile engineers apply to their source code and transfers it to the code used to specify infrastructure. The languages used for this are generally customised specifically for infrastructure; the scripts built with them are maintained in version control, and many can be covered by automated testing, just like application code.
    As you move further along this path of maturity, the speed of your provisioning increases, and the repeatability and reliability of the process increase with it.
  • When we look at the lower-level activities that are typically part of the provisioning process, we can start to see where some of the benefits of running on a platform like AWS come from, in terms of the amount of provisioning work required, irrespective of whether you work manually or programmatically.
    Typical bootstrapping activities when provisioning an environment include work on the hardware (racking and stacking servers), configuration of the network elements, storage and compute components, and the base operating system installation and configuration for each server.
    Running on AWS removes the need to do most of that work, certainly at a detailed level. Leveraging the existing compute, storage and network capabilities of AWS means your provisioning activities tend to start higher up the pyramid.
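"Starting higher up the pyramid" means declaring resources rather than assembling hardware. As a sketch, the snippet below builds a minimal CloudFormation-style template as plain data and renders it as JSON; the AMI ID is a placeholder, and a real template would be handed to the CloudFormation API or console rather than just printed.

```python
import json

# A minimal CloudFormation-style template built as plain data.
# "ami-12345678" is a placeholder, not a real image ID.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Single web server for a development environment",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",
                "InstanceType": "t1.micro",
            },
        }
    },
}

# Render the declaration; provisioning becomes a document, not a runbook.
document = json.dumps(template, indent=2)
print(document)
```

Because the environment is now a document, it can live in version control and be reviewed and re-created on demand, which is the infrastructure-as-code discipline the notes describe.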
  • BRIAN MARICK

Transcript

  • 1. amazon web services Lunch and Learn Series: Development and Test On AWS
  • 2. [ Housekeeping ] Please silence your phones. Your presenter: Joe Ziegler, Technical Evangelist, zieglerj@amazon.com, @jiyosub
  • 3. [ Our plan for today ] Relevant AWS Services • Source Control • Development Environments • Test • Agile Theory: Continuous Development, Integration & Deployment
  • 4. Relevant AWS Services • Source Control • Development Environments • Test • Agile Theory: Continuous Development, Integration & Deployment
  • 5. Virtual Private Cloud
  • 6. CloudFormation: Create application stack from a reusable template • Declarative specification • Creates resources in dependency-driven order • Complete console support • Predefined templates
  • 7. Amazon API & SDKs
  • 8. Relevant AWS Services • Source Control • Development Environments • Test • Agile Theory: Continuous Development, Integration & Deployment
  • 9. Running Source in AWS: Secure • Accessible • Scale Vertically • Durable • Reusable
  • 10. Self Managed Source Control: Self Installed EC2 Instance • Use Community AMIs • AWS Marketplace
  • 11. Relevant AWS Services • Source Control • Development Environments • Test • Agile Theory: Continuous Development, Integration & Deployment
  • 12. Development Environment via CloudFormation: Virtual Private Cloud (VPC) • Template Related Resources • Integrate with Configuration Management Tools (Puppet & Chef) • Provide CloudFormation Templates Internally to Developers • RDS Example • VPC Example
  • 13. Replicating Production Environments in Development.
    Why: Accurate Performance Testing • Empower Developers to Experiment • Production Debugging • Improved Code Quality.
    How: Adopt Infrastructure as Code Strategy • Leverage AWS APIs • Utilise Amazon Relational Database Service (RDS) and Point in Time Snapshots.
  • 14. Relevant AWS Services • Source Control • Development Environments • Test • Agile Theory: Continuous Development, Integration & Deployment
  • 15. Test Scenarios: Unit Tests • Smoke Test • User Acceptance Testing (UAT) • Integration Test • Load & Performance Test • Blue / Green Test (A/B)
  • 16. Automated Testing
  • 17. Testing in the Cloud: Unit • Integration • Functional • Performance
  • 18. Testing Approach: Use either an AMI or CloudFormation Template matching Production • Leverage Continuous Integration Server Pipeline (see next section) • Automate and repeat process using the AWS APIs
  • 19. Load & Performance Test
  • 20. Bees with Machine Guns (diagram: a fleet of spawned EC2 instances fires requests at My App, monitored via Amazon CloudWatch). github.com/newsapps/beeswithmachineguns
  • 21. Blue / Green Testing (diagram: an Elastic Load Balancer routes My App traffic to Blue and Green Auto Scaling groups of instances, monitored via Amazon CloudWatch)
  • 22. User Acceptance Testing: Quick, Fast Deployments • Secure Isolated Environment • Utilise AWS Elastic Beanstalk • Benefit from Elasticity
  • 23. Relevant AWS Services • Source Control • Development Environments • Test • Agile Theory: Continuous Development, Integration & Deployment
  • 24. What is Agile? Optimising for rapid response to change • High technical discipline • Ruthless automation
  • 25. Agile Concepts: Continuous Integration • Infrastructure as code • Continuous Delivery
  • 26. Agile Concepts: Continuous Integration • Infrastructure as code • Continuous Delivery
  • 27. Introduction
  • 28. Prerequisites: Source control • CI server • Automated evaluation of "quality"
  • 29. Jenkins Continuous Integration Server
  • 30. Workflow
  • 31. Agile Concepts: Continuous Integration • Infrastructure as code • Continuous Delivery
  • 32. Infrastructure as Code: "Programmatic provisioning by API" • Everything in AWS is an API
  • 33. Tool Box: AMIs • Libraries and SDKs • CloudFormation
  • 34. Puppet: 1. declare configuration 2. apply configuration 3. (time passes)... 4. verify configuration 5. re-apply configuration if needed
  • 35. Infrastructure as Code: ☝ Speed • ☝ Repeatability • ☝ Reliability • ☟ Risk
  • 36. Activities
  • 37. Agile Concepts: Continuous Integration • Infrastructure as code • Continuous Delivery
  • 38. Prerequisites: confidence ➡ codebase • confidence ➡ environment • small batch sizes • ruthless automation
  • 39. Advanced CI Pipeline
  • 40. Demo
  • 41. Successful Implementation
  • 42. Next Steps: Talk to your Account Manager • Access our Solution Architects • Check out our Webinars and Podcasts • Reference our SlideShare Presentations
  • 43. Further Reading
  • 44. Further Reading
  • 45. Further Reading: http://puppetlabs.com • http://www.opscode.com/products/ • http://github.com/newsapps/beeswithmachineguns • http://jenkins-ci.org/ • http://www.exampler.com/
  • 46. Shameless Plug
  • 47. amazon web services http://aws.amazon.com • Joe Ziegler, Technical Evangelist • zieglerj@amazon.com • @jiyosub • Please fill out the feedback form