
Uses, considerations, and recommendations for AWS

From an information session on Amazon Web Services (AWS), looking at uses, considerations, and recommendations for leveraging AWS in your organization.
Topics covered:
- AWS Services Overview
- Some ideal use cases: Disaster Recovery, Backup and Archive, Test/Dev
- Data residency and security considerations


Uses, considerations, and recommendations for AWS

  1. 1. Uses, considerations and recommendations for AWS. © 2014 Scalar Decisions Inc. Not for distribution outside of intended audience.
  2. 2. This is intended to be an information session and any information presented here should not be substituted for or interpreted as legal advice.
  3. 3. Our Agenda Today: •  AWS Services •  Sample Use Cases •  Examining data sovereignty & trans-border data flows. © 2015 Scalar Decisions Inc. Not for distribution outside of intended audience.
  4. 4. AWS Services
  5. 5. What is Cloud Computing with Amazon Web Services? AWS provides a complete set of computing, storage and database services accessed via the internet to help you build and run applications. These services are available to you on demand and you pay for only the services that you use.
  6. 6. Gartner Magic Quadrant for Cloud Infrastructure as a Service
  7. 7. Gartner Magic Quadrant for Cloud Infrastructure as a Service
  8. 8. Amazon 2003-2013: In 2003, Amazon was a $5.2B retail business with 7,800 employees and a whole lot of servers. By 2013, AWS was adding enough server capacity every day to power that entire $5B enterprise.
  9. 9. Why Do Enterprises Choose AWS?
  10. 10. 1. Pay for Infrastructure as You Need It, Not Up Front. Unlike on-premises infrastructure, with AWS there is $0 to get started and you pay as you go.
  11. 11. 2. Lower Total Cost of IT. Scale allows AWS to constantly reduce their costs. AWS is comfortable running a high-volume, low-margin business and passes the savings along to customers in the form of low prices.
  12. 12. 3. You Don’t Need to Guess Capacity. Self-hosting is rigid: predicted demand rarely matches actual demand, producing either waste or customer dissatisfaction. AWS is elastic, so capacity follows actual demand.
  13. 13. 4. Increase Innovation: Experiment Fast with Low Cost and Low Risk. On-premises ($ millions): experiment infrequently, failure is expensive, less innovation. AWS (nearly $0): experiment often, fail quickly at a low cost, more innovation.
  14. 14. 5. Get Rid of Undifferentiated Heavy Lifting. AWS takes care of data centres, power, cooling, cabling, networking, racks, servers, storage, labour, buying and installing new hardware, setting up and configuring new software, and building or upgrading data centres, so customers don’t have to.
  15. 15. 6. Go Global in Minutes
  16. 16. What are AWS’ Products and How Do You Use Them To Run Workloads?
  17. 17. AWS Services: AWS Global Infrastructure; Compute; Storage; Database; Networking; Application Services; Deployment & Administration.
  18. 18. AWS Global Infrastructure: 9 regions, 40+ AWS edge locations, continuous expansion.
  19. 19. Architected for Enterprise Security Requirements “The Amazon Virtual Private Cloud [Amazon VPC] was a unique option that offered an additional level of security and an ability to integrate with other aspects of our infrastructure.” Dr. Michael Miller, Head of HPC for R&D http://aws.amazon.com/security/
  20. 20. Shared Responsibility for Security & Compliance: AWS manages the facilities, physical security, compute infrastructure, storage infrastructure, network infrastructure, and virtualization layer; the customer manages the operating system, applications, security groups, firewalls, network configuration, and account management.
  21. 21. Many purchase models to support different needs:
     •  On-Demand – Pay for compute capacity by the hour with no long-term commitments. For spiky workloads, or to define needs.
     •  Reserved – Make a low, one-time payment and receive a significant discount on the hourly charge. For committed utilization.
     •  Spot – Bid for unused capacity, charged at a Spot Price which fluctuates based on supply and demand. For time-insensitive or transient workloads.
     •  Dedicated – Launch instances within Amazon VPC that run on hardware dedicated to a single customer. For highly sensitive or compliance-related workloads.
     •  Free Tier – Get started on AWS with free usage and no commitment. For POCs and getting started.
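Not part of the original deck, but as a concrete illustration of the Spot model above, the minimal sketch below requests a one-time Spot instance with the boto3 SDK. The region, bid price, AMI ID, and instance type are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Bid for unused capacity: a one-time Spot request at a placeholder maximum price.
response = ec2.request_spot_instances(
    SpotPrice="0.10",                      # maximum hourly bid in USD (placeholder)
    InstanceCount=1,
    Type="one-time",                       # transient workload; no persistence needed
    LaunchSpecification={
        "ImageId": "ami-12345678",         # placeholder AMI ID
        "InstanceType": "m3.medium",
    },
)
print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])
```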
  22. 22. Compute Services: Amazon Elastic Compute Cloud (EC2) – elastic virtual servers in the cloud; Auto Scaling – automated scaling of EC2 capacity; Elastic Load Balancing – dynamic traffic distribution.
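As an illustrative sketch (not from the original slides) of the compute services just listed, the code below launches an on-demand EC2 instance and registers it with a classic Elastic Load Balancing load balancer via boto3. The AMI ID and load balancer name are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
elb = boto3.client("elb", region_name="us-east-1")   # classic Elastic Load Balancing

# Launch one virtual server in the cloud from a placeholder AMI.
reservation = ec2.run_instances(
    ImageId="ami-12345678",        # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = reservation["Instances"][0]["InstanceId"]

# Register the new instance with an existing (hypothetical) load balancer
# so traffic is distributed to it.
elb.register_instances_with_load_balancer(
    LoadBalancerName="my-web-elb",                    # placeholder name
    Instances=[{"InstanceId": instance_id}],
)
```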
  23. 23. Networking Services: Amazon Virtual Private Cloud (VPC) – a private, isolated section of the AWS Cloud spanning Availability Zones; AWS Direct Connect – private connectivity between AWS and your data centre; Amazon Route 53 – Domain Name System (DNS) web service.
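A minimal sketch, not in the original deck, of creating the VPC layout described above: one VPC with a subnet in each of two Availability Zones. The CIDR blocks and zone names are assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a private, isolated section of the AWS Cloud.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")          # placeholder address range
vpc_id = vpc["Vpc"]["VpcId"]

# One subnet per Availability Zone gives the two-zone layout shown on the slide.
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a")
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="us-east-1b")
```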
  24. 24. Storage Services: Amazon Elastic Block Store (EBS) – block storage for use with Amazon EC2, volumes from 1 GB to 1 TB with provisioned IOPS; Amazon Simple Storage Service (S3) – internet-scale storage via API for images, videos, files, binaries, and snapshots, objects up to 5 TB, 11 x 9’s of durability; AWS Storage Gateway – integrates on-premises IT with S3 and Glacier; Amazon Glacier – storage for archiving and backup.
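To make the S3 and Glacier pairing concrete, here is a small boto3 sketch (not from the original deck) that uploads a backup object to S3 and adds a lifecycle rule transitioning older archive data to Glacier. The bucket name, file name, and prefix are placeholders.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Internet-scale storage via API: upload a local file to a placeholder bucket.
s3.upload_file(
    "backup-2014-01-31.tar.gz",                 # local file (placeholder)
    "example-backup-bucket",                    # placeholder bucket name
    "backups/backup-2014-01-31.tar.gz",
)

# Lifecycle rule: move objects under the archive/ prefix to Glacier after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-to-glacier",
            "Filter": {"Prefix": "archive/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)
```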
  25. 25. Application Services: Amazon CloudFront – distribute content globally; Amazon CloudSearch – managed search service; Amazon Elastic Transcoder – video transcoding in the cloud.
  26. 26. Database Services: Amazon RDS – managed relational database service; Amazon DynamoDB – managed NoSQL database service; Amazon ElastiCache – in-memory caching service.
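As a quick illustration (not in the original slides) of what "managed NoSQL" means in practice, the sketch below writes and reads one item in a hypothetical DynamoDB table; the table and attribute names are assumptions.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Write and read one item in a hypothetical "Sessions" table; the managed
# service handles provisioning, replication, and scaling behind the API.
dynamodb.put_item(
    TableName="Sessions",                          # placeholder table name
    Item={"SessionId": {"S": "abc-123"}, "User": {"S": "jsmith"}},
)
item = dynamodb.get_item(TableName="Sessions", Key={"SessionId": {"S": "abc-123"}})
print(item.get("Item"))
```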
  27. 27. Big Data Services: Amazon EMR (Elastic MapReduce) – hosted Hadoop framework; AWS Data Pipeline – move data among AWS services and on-premises data sources; Amazon Redshift – petabyte-scale data warehouse service.
  28. 28. Deployment & Administration: Amazon CloudWatch – monitor resources; AWS IAM (Identity & Access Management) – manage users, groups & permissions; AWS OpsWorks – DevOps framework for application lifecycle management; AWS CloudFormation – templates to deploy & manage resources; AWS Elastic Beanstalk – automated resource management for web applications, enterprise applications, and databases.
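To show the monitoring piece concretely, here is a small boto3 sketch (an addition, not from the deck) that creates a CloudWatch alarm on a placeholder EC2 instance's CPU utilization.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when average CPU on a (placeholder) instance stays above 80% for 10 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-server",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,                   # 5-minute periods
    EvaluationPeriods=2,          # two consecutive periods over threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
)
```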
  29. 29. AWS supports a wide range of technologies
  30. 30. The AWS ecosystem allows you to use your existing management tools: management tool partners provide single-pane-of-glass management.
  31. 31. Pace of innovation: the number of released features per year grew from 9 in 2007 to 24 in 2008, 48 in 2009, 61 in 2010, 82 in 2011, and 150+ in 2012. Sample services introduced over that period include Amazon FPS, Red Hat on EC2, SimpleDB, CloudFront, EBS, Availability Zones, Elastic IPs, Relational Database Service, Virtual Private Cloud, Elastic Map Reduce, Auto Scaling, Reserved Instances, Elastic Load Balancer, Simple Notification Service, Route 53, RDS Multi-AZ, the Singapore Region, Identity Access Management, Cluster Instances, Elastic Beanstalk, Simple Email Service, CloudFormation, RDS for Oracle, ElastiCache, DynamoDB, Simple Workflow, CloudSearch, Storage Gateway, Route 53 Latency Based Routing, and Redshift.
  32. 32. The good news is that cloud isn’t an ‘all or nothing’ choice: corporate data centres and on-premises resources integrate with cloud resources.
  33. 33. AWS Use Cases
  34. 34. AWS Use Cases •  Disaster Recovery •  Archive & Backup •  Development & Test
  35. 35. Disaster Recovery (Traditional) The traditional method of architecting and designing a properly functioning disaster recovery environment has many moving parts, is complex and generally takes a long time to deploy. Typical items that need to be in place to support a traditional disaster recovery environment include: •  Facilities to house the infrastructure including power and cooling. •  Security to ensure the physical protection of assets. •  Suitable capacity to scale the environment. •  Support for repairing, replacing, and refreshing the infrastructure. •  Contractual agreements with an Internet Service Provider (ISP) to provide Internet connectivity that can sustain bandwidth utilization for the environment under a full load. •  Network infrastructure such as firewalls, routers, switches, and load balancers. •  Enough server capacity to run all mission-critical services including storage appliances for the supporting data and servers to run applications and backend services such as user authentication, Domain Name System (DNS), Dynamic Host Configuration Protocol (DHCP), monitoring, and alerting.
  36. 36. Disaster Recovery (AWS) •  Businesses of all sizes are using cloud computing to enable faster disaster recovery of their critical IT systems, without incurring the expenses required to purchase and maintain a second physical datacenter. AWS provides a set of services that enable rapid recovery of your IT infrastructure and data, any time and from anywhere. •  Using a combination of the AWS services that Matt described earlier, an organization has many different options for using AWS as their DR environment, including: •  Pilot Light for Simple Recovery into AWS •  Warm Standby Solution •  Multi-site Solution
  37. 37. Pilot Light •  Infrastructure elements for the pilot light itself typically include your database servers, which would be replicating data to Amazon EC2. Depending on the system, there may be other critical data outside of the database that needs to be replicated to AWS. This is the critical core of the system (the pilot light) around which all other infrastructure pieces in AWS can quickly be provisioned (the rest of the furnace) to restore the complete system •  To provision the remainder of the infrastructure to restore business critical services, you would typically have some pre-configured servers bundled as Amazon Machine Images (AMIs), which are ready to be started up at a moment’s notice. When starting recovery, instances from these AMIs come up quickly and find their role within the deployment around the pilot light. From a networking point of view, you can either use Elastic IP Addresses (which can be pre-allocated in the preparation phase for DR) and associate them with your instances, or use Elastic Load Balancing to distribute traffic to multiple instances. You would then update your DNS records to point at your Amazon EC2 instance or point to your Elastic Load Balancing using a CNAME.
  38. 38. Pilot Light Preparation Key points for preparation: •  Set up EC2 instances to replicate or mirror data. •  Ensure that you have all supporting custom software packages available in AWS. •  Create and Maintain Amazon Machine Images (AMI) of key servers where fast recovery is required. •  Regularly run these servers, test them, and apply any software updates and configuration changes. •  Consider automating the provisioning of AWS resources.
  39. 39. Pilot Light Recovery Key points for recovery: •  Start your application EC2 instances from your custom AMIs. •  Resize and/or scale any database / data store instances, where necessary. •  Change DNS to point at the EC2 servers. •  Install and configure any non-AMI based systems, ideally in an automated fashion.
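As an illustration of the recovery steps above (not part of the original deck), the sketch below starts application instances from a pre-built custom AMI and repoints DNS via Route 53. The AMI ID, hosted zone ID, record name, and load balancer endpoint are all placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
route53 = boto3.client("route53")

# 1. Start application servers from the custom AMI maintained during preparation.
ec2.run_instances(
    ImageId="ami-0abc12345",      # placeholder AMI ID
    InstanceType="m3.large",
    MinCount=2,
    MaxCount=2,
)

# 2. Change DNS to point at the recovered environment, here via a CNAME to a
#    hypothetical Elastic Load Balancing endpoint.
route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",   # placeholder hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "CNAME",
                "TTL": 60,
                "ResourceRecords": [{"Value": "dr-elb-123456.us-east-1.elb.amazonaws.com"}],
            },
        }]
    },
)
```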
  40. 40. Pilot Light Overview Before After
  41. 41. Warm Standby •  A warm standby solution extends the pilot light elements and preparation. It further decreases the recovery time because in this case, some services are always running. By identifying your business-critical systems, you would fully duplicate these systems on AWS and have them always on. •  These servers can be running on a minimum sized fleet of EC2 instances on the smallest sizes possible. This solution is not scaled to take a full-production load, but it is fully functional. It may be used for non-production work, such as testing, quality assurance, and internal use, etc. •  In a disaster, the system is scaled up quickly to handle the production load. In AWS, this can be done by adding more instances to the load balancer and by resizing the small capacity servers to run on larger EC2 instance types. Horizontal scaling, if possible, is often preferred over vertical scaling.
  42. 42. Warm Standby Preparation •  Key points for preparation: •  Set up EC2 instances to replicate or mirror data. •  Create and maintain Amazon Machine Images (AMIs). •  Run your application using a minimal footprint of EC2 instances or AWS infrastructure. •  Patch and update software and configuration files in line with your live environment.
  43. 43. Warm Standby Recovery Key points for recovery: •  Start applications on larger EC2 Instance types as needed (vertical scaling). •  Increase the size of the EC2 fleets in service with the Load Balancer (horizontal scaling). •  Change the DNS records so that all traffic is routed to the AWS environment. •  Consider using Auto scaling to right-size the fleet or accommodate the increased load.
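A minimal sketch of the horizontal-scaling step above (an addition, not from the slides): growing a standby Auto Scaling group to production size during failover. The group name and sizes are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Horizontal scaling: raise the limits and desired size of the standby fleet.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="dr-web-asg",   # placeholder group name
    MinSize=4,
    MaxSize=20,
)
autoscaling.set_desired_capacity(
    AutoScalingGroupName="dr-web-asg",
    DesiredCapacity=10,                  # placeholder production-load fleet size
    HonorCooldown=False,                 # scale immediately during a failover
)
```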
  44. 44. Warm Standby Overview Before After
  45. 45. Multi-site •  A multi-site solution runs in AWS as well as on your existing on-site infrastructure in an active-active configuration. The data replication method that you employ will be determined by the recovery point you choose. Various replication methods exist. •  A weighted DNS service, such as Amazon Route 53, is used to route production traffic to the different sites. A proportion of traffic will go to your infrastructure in AWS, and the remainder will go to your on-site infrastructure. •  In an on-site disaster situation, you can adjust the DNS weighting and send all traffic to the AWS servers. The capacity of the AWS service can be rapidly increased to handle the full production load. EC2 Auto Scaling can be used to automate this process. You may need some application logic to detect the failure of the primary database services and cut over to the parallel database services running in AWS.
  46. 46. Multi-site Preparation Key points for preparation: •  Set up your AWS environment to duplicate your production environment. •  Set up DNS weighting or similar technology to distribute incoming requests to both sites.
  47. 47. Multi-site Recovery Key points for recovery: •  Change the DNS weighting, so that all requests are sent to the AWS site. •  Have application logic for failover to use the local AWS database servers. •  Consider using Auto scaling to automatically right-size the AWS fleet.
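To illustrate the DNS weighting change described above (not part of the original deck), the sketch below shifts Route 53 weighted records so all traffic goes to the AWS site. The hosted zone ID, record name, and endpoints are hypothetical.

```python
import boto3

route53 = boto3.client("route53")

def set_weight(identifier, target, weight):
    """UPSERT one weighted CNAME record for app.example.com (placeholder names)."""
    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",          # placeholder hosted zone
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com.",
                    "Type": "CNAME",
                    "TTL": 60,
                    "SetIdentifier": identifier,
                    "Weight": weight,
                    "ResourceRecords": [{"Value": target}],
                },
            }]
        },
    )

# Failover: send all traffic to AWS, none to the on-site infrastructure.
set_weight("aws-site", "prod-elb-123456.us-east-1.elb.amazonaws.com", 100)
set_weight("onprem-site", "app.datacentre.example.com", 0)
```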
  48. 48. Multi-site Overview Before After
  49. 49. Archive & Backup (Traditional) •  The traditional method of architecting and designing a fully functioning archive & backup environment is typically painful and requires constant care and feeding to ensure the environment runs optimally and has the resources it requires. Typical items that need to be in place to support a traditional backup & archive environment include: •  An off-site location to store either tapes or a fully functioning disaster recovery environment for backup or archive data. •  A storage environment to store the archived & backup data (SAN, VTL, tape library, etc.). •  Software to ensure that scheduled jobs, backup catalogs, and metadata are stored in a central repository. •  Suitable capacity to scale the environment. •  Support for repairing, replacing, and refreshing the infrastructure. •  Storage infrastructure such as SAN, NAS, FC switching, and network switching.
  50. 50. Archive & Backup (AWS) AWS has many platforms for storing your mission-critical data. With AWS, you pay as you go and you can scale up and down as required. With your data stored in the AWS cloud, it’s easy to use other Amazon Web Services to take advantage of additional cost savings and benefits. Amazon storage services remove the need for complex and time-consuming capacity planning, ongoing negotiations with multiple hardware and software vendors, specialized training, and maintenance of offsite facilities or transportation of storage media to third-party offsite locations. Using a combination of the AWS services that Matt described earlier, an organization has many different options for using AWS for archive & backup, including: •  Amazon Glacier •  Amazon S3 •  AWS Storage Gateway
  51. 51. AWS Storage Gateway •  The AWS Storage Gateway’s software appliance is available for download as a virtual machine (VM) image that you install on a host in your datacenter. Once you’ve installed your gateway and associated it with your AWS account through the activation process, you can use the AWS Management Console to create either Gateway-Cached or Gateway-Stored storage volumes that can be mounted as iSCSI devices by your on-premises applications. •  Three main modes of operation: •  Gateway-Cached Volumes •  Gateway-Stored Volumes •  Gateway-VTL
  52. 52. Gateway-Cached Volumes •  Gateway-Cached volumes allow you to use Amazon S3 for your primary data, while retaining a portion of it locally in a cache for frequently accessed data. •  As your applications write data to and read data from a Gateway-Cached volume, this data is initially stored on-premises on Direct Attached Storage (DAS), Network Attached Storage (NAS), or Storage Area Network (SAN) storage. •  This local storage is used to prepare and buffer data for upload to your storage volume in Amazon S3, as well as to cache your application’s recently written and recently read data on-premises for low-latency access. •  When your application reads data from your Gateway-Cached volume, your on-premises gateway first checks its local cache for this data before checking Amazon S3.
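Although volumes are normally created from the AWS Management Console as described above, the same operation is exposed through the API. The sketch below (an addition, not from the deck) creates a Gateway-Cached volume with boto3; the gateway ARN, target name, and network interface are placeholder values you would obtain after activating your gateway.

```python
import boto3
import uuid

storagegateway = boto3.client("storagegateway", region_name="us-east-1")

# Create a Gateway-Cached iSCSI volume backed by Amazon S3 (all identifiers are placeholders).
volume = storagegateway.create_cached_iscsi_volume(
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678",
    VolumeSizeInBytes=100 * 1024 ** 3,          # 100 GiB volume (placeholder size)
    TargetName="backup-volume-1",               # becomes part of the iSCSI target name
    NetworkInterfaceId="10.0.1.25",             # gateway VM's local IP address (placeholder)
    ClientToken=str(uuid.uuid4()),              # idempotency token
)
print(volume["VolumeARN"], volume["TargetARN"])
```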
  53. 53. Gateway-Stored Volumes •  Gateway-Stored volumes store your primary data locally, while asynchronously backing up that data to AWS •  Your Gateway-Stored volumes are mapped to on-premises DAS, NAS, or SAN storage. You can start with either new storage or storage already holding data •  As your on-premises applications write data to and read data from your storage volume, this data is retrieved locally from or stored locally on the on-premises DAS, NAS, or SAN storage you mapped to your storage volume •  Your on-premises gateway also temporarily stores this data on local DAS, NAS, or SAN storage to prepare and buffer it for upload to Amazon S3, where it is stored in the form of Amazon EBS snapshots
  54. 54. Gateway-Cache/Stored Overview
  55. 55. Gateway-VTL •  Presents your existing backup application with an industry-standard iSCSI-based Virtual Tape Library (VTL) consisting of a virtual media changer and virtual tape drives •  Each Virtual Tape Library can hold up to 1,500 virtual tapes with a maximum aggregate capacity of 150 TB •  Once created, virtual tapes are discovered by your backup application using its standard media inventory procedure, are available for immediate access and are backed by Amazon S3 •  When you no longer require immediate or frequent access to data contained on a virtual tape, you can use your backup application to move it from its Virtual Tape Library to your Virtual Tape Shelf (VTS) that is backed by Amazon Glacier, further reducing your storage costs
  56. 56. Gateway-VTL Overview
  57. 57. AWS Storage Gateway Overview •  Recommended only for archive & backup purposes •  Give proper thought and care to your outbound network connection to AWS when architecting your solutions •  All network communication between the AWS Storage Gateway appliance and AWS is encrypted end-to-end, and data is encrypted at rest using 256-bit AES encryption •  Snapshots are available for both Gateway-Cached & Gateway-Stored volumes •  For more information, talk to your local Scalar SE or go to http://aws.amazon.com/storagegateway/
  58. 58. Test & Development (Traditional) Traditionally, most companies approach test & development environments in one of two ways: the environment is either lumped in with production infrastructure (sharing network, storage, compute, cooling, etc.) or built as a separate environment that requires its own network, compute, storage, power, cooling, etc. Neither approach is ideal, and neither allows IT departments to move at the pace required to compete in the increasingly short dev/test/release cycles that many organizations are adopting. Pitfalls of both traditional approaches include: •  Facilities to house the infrastructure, including power and cooling. •  The possibility of test/dev environments impacting production. •  Rigid environments with long configuration timelines to set up new development and test environments. •  Support for repairing, replacing, and refreshing the infrastructure. •  Network infrastructure such as firewalls, routers, switches, and load balancers.
  59. 59. Test & Development (AWS) By running your organization’s test & development environments in AWS, you gain the ability to fail often and fail fast, as well as less rigidity overall in the build/test/fix cycle. The power is in the hands of your developers, and IT typically does not need to be involved except for the initial architecture and configuration required to connect your developers’ environments to AWS. Some services that are typically in scope are: •  Virtual Private Cloud •  CloudFormation •  Amazon APIs & SDKs
  60. 60. Virtual Private Cloud By leveraging VPC you can make AWS look like an extension of your network and push development & test completely to AWS, freeing up local on-premises resources for production and giving your developers a fully extensible, self-service option.
  61. 61. CloudFormation •  CloudFormation makes it easy to organize and deploy a collection of AWS resources, and lets you describe any dependencies or special parameters to pass in at runtime. This suits the dev/test use case well, because packaging your entire application as a human-readable manifest and deploying it consistently: •  Eliminates configuration drift •  Automates the entire infrastructure •  Can be stored along with the application source code in your source repository of choice (“Infrastructure-as-code”) •  Is great for quick smoke tests (deploy, test, tear down) •  Easily integrates with other configuration management tools (Puppet, Chef, etc.)
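A minimal sketch, not from the original deck, of the "deploy, test, tear down" smoke-test cycle mentioned above, driven through boto3. The stack name, template file, and parameter are hypothetical and assume a template that declares an Environment parameter.

```python
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

# Deploy a dev/test stack from a template kept in source control,
# wait for it to finish, run tests, then tear it down.
with open("app-stack.yaml") as f:          # hypothetical template file
    template_body = f.read()

cloudformation.create_stack(
    StackName="dev-smoke-test",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "test"}],  # placeholder
)
cloudformation.get_waiter("stack_create_complete").wait(StackName="dev-smoke-test")

# ... run smoke tests against the stack's outputs here ...

cloudformation.delete_stack(StackName="dev-smoke-test")
```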
  62. 62. Testing Once your developers are working in AWS and leveraging configuration and automation platforms (CloudFormation, Puppet, Chef, etc.), creating test environments for all the different scenarios takes minutes rather than days, especially if you follow the “Infrastructure-as-code” strategy. Some common test scenarios are: •  Unit tests •  Smoke tests •  User Acceptance Testing (UAT) •  Integration testing •  Load & performance testing •  Blue/green testing
  63. 63. Data Sovereignty
  64. 64. Rapid Expansion & Growth: chart of the number of objects stored in Amazon S3 (billions), Q4 2006 through Q2 2013, growing from effectively zero toward 2,000 billion. *Note: S3 is AWS’ storage product and is used as a proxy for AWS scale/growth. Source: Company data; KPCB, May 24, 2014.
  65. 65. What underpins AWS’ success? Technical features & value: •  Pay for what you use •  Programmatic scalability •  (The appearance of) unlimited capacity •  Deep library of web tools, with more coming all the time. Business benefits: •  Scale like never before •  Do things you could never do before •  Dramatic reduction in financial risk •  Focus on what you need to do
  66. 66. Common Impediments to Adoption •  Many workloads aren’t cloud ready •  Savings are not guaranteed and difficult to forecast •  Legal & regulatory issues abound – but which ones?
  67. 67. Applicable Laws & Regulations
  68. 68. Applicable Laws & Regulations
     •  PIPEDA (law) – Governing body: Office of the Privacy Commissioner of Canada. Jurisdiction: Canada. Applicability: protection of personal information. Applies to: almost all organizations that conduct commercial activities within Canada. Cloud services allowed? Yes. Conditions: organizations are responsible for ensuring cloud service providers can provide security and privacy controls that meet PIPEDA requirements.
     •  OSFI Guideline B-10 (industry guideline) – Governing body: Office of the Superintendent of Financial Institutions (OSFI). Jurisdiction: Canada. Applicability: outsourcing agreements. Applies to: outsourcing agreements for all Canadian federally regulated entities (FREs), such as banks and insurance companies. Cloud services allowed? Yes. Conditions: organizations are responsible for ensuring cloud service providers can provide security and privacy controls that meet B-10 requirements.
  69. 69. Applicable Laws & Regulations
     •  Rules Notice 14-0012 for Outsourcing Arrangements (industry guidelines) – Governing body: Investment Industry Regulatory Organization of Canada. Jurisdiction: Canada. Applicability: outsourcing agreements. Applies to: financial institutions involved in debt markets, equity markets, and investments, and to investment brokers, dealers, and providers. Cloud services allowed? Yes. Conditions: organizations are responsible for ensuring cloud service providers can provide security and privacy controls that meet 14-0012 requirements. Organizations are not allowed to outsource business functions/roles that must be performed by approved persons, which means that most client-facing activities cannot be outsourced.
     •  SOX (law) – Governing body: Securities and Exchange Commission (SEC). Jurisdiction: U.S. and some Canadian companies. Applicability: internal control & reporting requirements. Applies to: all listed companies in the U.S., and all international companies registered with the U.S. stock exchange. Cloud services allowed? Yes. Conditions: organizations are responsible for ensuring cloud service providers can provide security controls that meet SOX requirements. Cloud services should have an SSAE 16 audit report (formerly called SAS 70), as these audits are the primary method for evaluating a third party’s compliance with SOX.
     •  IT Handbook (industry guidelines) – Governing body: FFIEC members. Jurisdiction: U.S. Applicability: outsourcing arrangements, security controls, and privacy controls. Applies to: financial institutions such as banks, insurance companies, and credit unions. Cloud services allowed? Yes. Conditions: organizations are responsible for ensuring cloud service providers can provide security controls that meet IT Handbook guidelines. Cloud service providers should have an SSAE 16/SAS 70 audit report, as these audits can be used for evaluating a third party’s compliance with the IT Handbook.
  70. 70. US Patriot Act •  The law allows US law enforcement to inspect data without informing the affected party (and in some cases with limited judicial oversight) •  Canadian organizations are responsible for data “throughout its lifecycle”, including transfers across borders •  In most cases, organizations are not prohibited from using US-based cloud services – those organizations should seek meaningful contractual commitments regarding procedural, technical & physical security protections •  A Privacy Commissioner study in 2009 of surveillance laws in Canada, the US, France & the UK concluded that Canadians are at risk of personal information being seized by Canadian authorities, and that there’s a risk this information is already being shared with US authorities
  71. 71. Key Conclusions •  Most laws & regulations do not prevent using cloud services – they outline controls & standards, much like any outsourced or managed service – you remain accountable for its security & safety •  Some laws require disclosure be made with respect to personal information leaving the province or country •  As with any audit, the key factors to demonstrate compliance are: •  Clear controls •  Audit rights to inspect & enforce those controls •  Independent reports to inspect compliance •  Legal concerns about data privacy can persist – but technology & procedural controls & audits can mitigate that risk
  72. 72. AWS Compliance Standards
  73. 73. Why Scalar?
  74. 74. How Scalar can help. Why Scalar? •  Independence •  Technical skills & experience •  Commitment to AWS & cloud. Where we can help: •  POCs & test environments •  Architecture & design •  Build & configuration •  Ongoing management & support •  Escalated support & AWS relationship
  75. 75. Interested in Learning More? Visit our blog on Cloud Practice: scalar.ca/en/category/practice/cloud
  76. 76. Connect with us! facebook.com/scalardecisions @scalardecisions linkedin.com/company/scalar-decisions slideshare.net/scalardecisions
