Your Future with Cloud Computing - Dr. Werner Vogels - AWS Summit 2012 Australia
Keynote presentation from the Sydney AWS Summit 2012 event.

Upload Details

Uploaded as Apple Keynote

Usage Rights

© All Rights Reserved

  • South America, Sao Paulo region – Dec 2011
  • Small sliver of the enterprises running on us
  • Many organizations first choose the AWS cloud for financial reasons, then realize the agility they gain.
  • Amazon Web Services provides highly scalable computing infrastructure that enables organizations around the world to requisition compute power, storage, and other on-demand services in the cloud. These services are available on demand, so a customer doesn't need to think about controlling them, maintaining them, or even where they are located. Let's take a look at the services that we provide.
  • One of the reasons we believe companies are adopting these services so quickly is our rapid innovation based on customer feedback. In the past four years we've delivered over 200 new technology releases.
  • How many people work on Fatwire on a daily basis?
  • 1/3 of all people on the internet use AWS daily – WIRED
  • …Treat failure as the common case instead of the exception. But it was extremely hard to implement; you had to do a lot of hard work to make that a reality, and many software systems have been built to try to make this easier.
  • A service that randomly kills EC2 instances in Netflix's production environment. Forces engineers to build services that automatically recover without any manual intervention. Plan for failure as a religion. Constantly tests Netflix's ability to succeed despite failure, so they are prepared when unexpected events happen.
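The failure-injection idea in the note above (Netflix's instance-killing service, known as Chaos Monkey) can be sketched as a toy simulation. This is pure Python with no AWS calls; the instance IDs and the `terminate` callback are hypothetical stand-ins for a real EC2 termination request:

```python
import random

def chaos_monkey(instances, terminate, rng=random):
    """Randomly pick one running instance and kill it.

    instances: list of instance IDs (hypothetical names here).
    terminate: callback standing in for a real EC2 termination call.
    """
    if not instances:
        return None
    victim = rng.choice(instances)
    terminate(victim)
    return victim

# A toy fleet: a service built for failure must survive losing any one member.
fleet = ["i-web-1", "i-web-2", "i-web-3"]
killed = []
victim = chaos_monkey(fleet, killed.append)
print(victim, killed)
```

Running something like this on a schedule is what forces recovery paths to be exercised continuously, rather than only during real outages.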
  • Now we're going to show a video introducing DynamoDB.
  • First let me tell you a bit about Cycle. If you had told me seven years ago, when I started bootstrapping Cycle, that today 2 of the 3 largest banks, 3 of the 5 largest insurers, and 4 of the 5 largest pharma companies would use Cycle's software to manage supercomputing-class computations, I'd have said you were crazy. The AWS Cloud helps companies do amazing things.
  • Today - markets, brands, financials, growth profile

    History:
    startup, listing > bankrupt
    early growth > leadership
    internationalisation > defocused/stagnation
    2008: leadership change
    2009: rebuilding a healthy core. Key: TW (Agile, XD), LM (platform), HW (reliable ops); core group of key staff (~25), lots of sweat and commitment from all staff, lots of contractors.
    Mid 2010: people (Delivery).

    Current focus:
    broadening the value proposition > market maker, not just market participant.
    optimising operational performance > global operating model.

    Financial performance
  • Continuous delivery
  • Register. Opportunity to guide customer-focused thinking, without telling. What unmet customer need are we solving?
    Hacking. Get your product, business, or design personnel to participate in teams.
    Showcase and vote. Watch your team start to vote up the hack entries that are most likely to have the biggest customer impacts, rather than just the coolest tech stuff.
  • But if you can't tighten the loop between coding and deploying – reducing the time between having an idea and testing it in the wild – it becomes a tough effort to change the business mindset from planning perfection to planning experiments.
  • As you might guess, we run these big data jobs in the Cloud with Amazon Web Services. We load web site log file data into Amazon S3, use Amazon Elastic MapReduce to spin up large clusters of virtual servers to process the data, and then use the results to update our product catalog.
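The pipeline described (raw logs in, aggregated catalog updates out) is the classic MapReduce pattern that Amazon Elastic MapReduce runs at cluster scale. A minimal local sketch of the map and reduce steps, with made-up log lines standing in for the S3 data (the `VIEW`/`sku-*` log format is invented for illustration):

```python
from collections import Counter
from itertools import chain

def mapper(line):
    # Emit (product_id, 1) for each product view found in a log line.
    parts = line.split()
    if len(parts) >= 2 and parts[0] == "VIEW":
        yield (parts[1], 1)

def reducer(pairs):
    # Sum the counts per key -- the aggregation EMR distributes across nodes.
    totals = Counter()
    for key, count in pairs:
        totals[key] += count
    return dict(totals)

logs = ["VIEW sku-1", "VIEW sku-2", "VIEW sku-1", "CLICK sku-3"]
counts = reducer(chain.from_iterable(mapper(line) for line in logs))
print(counts)  # {'sku-1': 2, 'sku-2': 1}
```

The same two functions, handed to a Hadoop streaming job, would process terabytes instead of four strings; the logic does not change, only the substrate.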
  • 1st... the way online advertising is bought and sold is fundamentally broken. The typical process is: a media buyer builds a media plan using ratings data from companies like Nielsen or Comscore. They then send request-for-proposal documents to publishers, who then prepare proposal documents. Negotiation ensues and, at the end, a contract is signed. Once the media contract begins, it's difficult to change if you're not meeting your goals. So the process is very inefficient in the preparation and execution of the advertising campaign.

    Now, a lot of people also had this insight, and there were many products trying to automate the media buying process. But at their core, they were automating a fundamentally broken process.
  • 2nd... if you abstract the media buying system, it is a one-sided market. In fact, structurally, it is a commodity market. So the insight here is that the solution is to trade media not using the old system, which was basically "forward contracts" with little flexibility, but rather to execute the trades in real time as a "spot market".
  • And to execute these trades programmatically, leveraging powerful machine learning algorithms. In this sort of system, we watch every ad impression available and make a buying decision instantaneously: whether to bid for the impression, how much to bid, and which ad to show. If a strategy isn't working, you can pause it within minutes. To start a new campaign takes only a few minutes.

    Only a few companies had this insight, and we were fortunate to be in the leading group.

    OK - so those two insights were the hard bit. The easy bit was implementing that system... no, wait, other way around. Actually, it turns out that the implementation is very challenging. Because we're watching every ad impression in the market, and making decisions in real time, we have three very hard constraints:

    1st... Very low latency: we have to make a high quality decision on which ad to show and how much to pay in milliseconds.
    2nd... Very high throughput: we have to make these very fast decisions over 7 million times every minute.
    3rd... Very high volume: we see billions of ad impressions every single day. And we have to report, analyse and learn from all this data.
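The throughput figure above implies a hard per-decision budget; a quick back-of-the-envelope check of what "7 million times every minute" means for a single serial pipeline:

```python
decisions_per_minute = 7_000_000
decisions_per_second = decisions_per_minute / 60   # ~116,667 per second
budget_us = 1_000_000 / decisions_per_second       # microseconds per decision, if serial
print(f"{decisions_per_second:,.0f}/s -> {budget_us:.2f} us per decision (serial)")
```

About 8.6 microseconds per decision if handled serially, which is why such systems shard the impression stream across many machines while still keeping each bid path to single-digit milliseconds end to end.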
  • Hence the "Big Data" challenge:

    In raw terms, we have over a petabyte of raw log data stored on Amazon Simple Storage Service (S3), and that is growing at 4 terabytes per day, or 130 terabytes per month. When this is compressed down and actually stored, it compresses to around 100 TB.

    When you're seeing billions of new events every day and processing terabytes per day, traditional database systems just don't cope. So, to help us with this volume, we use Hadoop MapReduce jobs. This is all powered by Amazon Elastic MapReduce. At any given time, we might have 30-40 Hadoop nodes running various processing jobs, from report aggregations to machine learning algorithms.
  • At the time when we started using Amazon Elastic MapReduce, we didn't have the CAPEX, time, or in-house skills to set up and maintain the 30-40 node Hadoop cluster required to run these sorts of processing jobs. So Amazon Elastic MapReduce really enabled us to quickly build the Big Data capability we required without a big up-front investment that would easily have cost us several months and a couple hundred thousand dollars. This accelerated our product time-to-market by months.
  • Another requirement is to do machine learning "at scale". Sometimes we want to test a new algorithm. With Amazon Elastic MapReduce, we can run a once-off job on months of data (literally hundreds of terabytes) and test the new algorithm in a couple of hours. If we were using a non-cloud Hadoop cluster, this sort of agile analytics would be cost prohibitive and time consuming. We can do this sort of analysis in hours instead of weeks. With Amazon Elastic MapReduce, we can innovate quickly and continuously enhance our customer offerings.
  • Finally, some of the key learnings from our adoption of Amazon Web Services:
    1) Experiment: It is fast and cheap to experiment, so just get started and iterate. When the experiment is over, just turn off the services.
    2) Learn: Spend some time on the forums and reading the documentation to pick up tips and pointers to optimise.
    3) Plan: Just because it's "in the cloud" doesn't excuse you from having to architect a fault tolerant solution and think about redundancy and single points of failure. Amazon just makes it easier to execute fault tolerant solutions - you still have to do the thinking and planning. In any reasonably large, complicated distributed system, things are bound to go wrong: network connections time out, jobs fail to start, and machines occasionally die. Build things expecting failure and put in place the necessary mechanisms to gracefully deal with these minor failures.

    Thank you for your time today and the opportunity to share a bit about Brandscreen... our challenges with Big Data... and how we're solving those challenges with Amazon Web Services.
  • Highly competitive, but requires rich applications
  • The new cost of doing business. This is what new application builders need to do just to enter the market. Heroku doesn't give you this, nor does AWS.
  • Also not shown here is our iPhone app, which launched in January of 2011. We are currently developing a number of new mobile products which will target other mobile platforms as well as reach alternative platforms such as over-the-top devices.
  • PBS is #1 among major networks for unique visitors. Nine months ago we were at 15%, which we considered to be very good.
  • Amazon Web Services (AWS) delivers a scalable cloud computing platform with high availability and dependability, offering the flexibility to enable customers to build a wide range of applications. Helping to protect the confidentiality, integrity, and availability of our customers' systems and data is of the utmost importance to AWS, as is maintaining customer trust and confidence. This document is intended to answer questions such as, "How does AWS help me protect my data?" Specifically, AWS physical and operational security processes are described for network and server infrastructure under AWS' management, as well as service-specific security implementations. This document provides an overview of security as it pertains to the following areas relevant to AWS:

    Shared Responsibility Environment
    Control Environment Summary
    Secure Design Principles
    Backup
    Monitoring
    Information and Communication
    Employee Lifecycle
    Physical Security
    Environmental Safeguards
    Configuration Management
    Business Continuity Management
    Backups
    Fault Separation
    Amazon Account Security Features
    Network Security
    AWS Service Specific Security
    Amazon Elastic Compute Cloud (Amazon EC2) Security
    Amazon Virtual Private Cloud (Amazon VPC)
    Amazon Simple Storage Service (Amazon S3) Security
    Amazon SimpleDB Security
    Amazon Relational Database Service (Amazon RDS) Security
    Amazon Simple Queue Service (Amazon SQS) Security
    Amazon Simple Notification Service (SNS) Security
    Amazon CloudWatch Security
    Auto Scaling Security
    Amazon CloudFront Security
    Amazon Elastic MapReduce Security
  • Risk and Compliance Overview
    Since AWS and its customers share control over the IT environment, both parties have responsibility for managing the IT environment. AWS' part in this shared responsibility includes providing its services on a highly secure and controlled platform and providing a wide array of security features customers can use. The customers' responsibility includes configuring their IT environments in a secure and controlled manner for their purposes. While customers don't communicate their use and configurations to AWS, AWS does communicate its security and control environment relevant to customers. AWS does this by:

    Obtaining industry certifications and independent third party attestations described in this document
    Publishing information about AWS security and control practices in whitepapers and web site content

    Please see the AWS Security Whitepaper, located at www.aws.amazon.com/security, for a more detailed description of AWS security. The AWS Security Whitepaper covers AWS's general security controls and service-specific security.

    Shared Responsibility Environment
    Moving IT infrastructure to AWS services creates a model of shared responsibility between the customer and AWS. This shared model can help relieve customers' operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. The customer assumes responsibility for and management of, but not limited to, the guest operating system (including updates and security patches), other associated application software, and the configuration of the AWS provided security group firewall. Customers should carefully consider the services they choose, as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations.

    It is possible for customers to enhance security and/or meet more stringent compliance requirements by leveraging technology such as host based firewalls, host based intrusion detection/prevention, encryption, and key management. The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment of solutions that meet industry-specific certification requirements.

    This customer/AWS shared responsibility model also extends to IT controls. Just as the responsibility to operate the IT environment is shared between AWS and its customers, so is the management, operation, and verification of IT controls. AWS can help relieve the customer burden of operating controls by managing those controls associated with the physical infrastructure deployed in the AWS environment that may previously have been managed by the customer. As every customer is deployed differently in AWS, customers can take advantage of shifting management of certain IT controls to AWS, which results in a (new) distributed control environment. Customers can then use the AWS control and compliance documentation available to them (described in the "AWS Certifications and Third-party Attestations" section of this document) to perform their control evaluation and verification procedures as required.

    The next section provides an approach for how AWS customers can evaluate and validate their distributed control environment effectively.

    Strong Compliance Governance
    As always, AWS customers are required to continue to maintain adequate governance over the entire IT control environment, regardless of how IT is deployed. Leading practices include an understanding of required compliance objectives and requirements (from relevant sources), establishment of a control environment that meets those objectives and requirements, an understanding of the validation required based on the organization's risk tolerance, and verification of the operating effectiveness of the control environment. Deployment in the AWS cloud gives enterprises different options to apply various types of controls and various verification methods.

    Strong customer compliance and governance might include the following basic approach:

    Review information available from AWS together with other information to understand as much of the entire IT environment as possible, and then document all compliance requirements.
    Design and implement control objectives to meet the enterprise compliance requirements.
    Identify and document controls owned by outside parties.
    Verify that all control objectives are met and all key controls are designed and operating effectively.

    Approaching compliance governance in this manner will help companies gain a better understanding of their control environment and will help clearly delineate the verification activities to be performed.

    FISMA
    AWS enables U.S. government agency customers to achieve and sustain compliance with the Federal Information Security Management Act (FISMA). AWS has been certified and accredited to operate at the FISMA-Low level. AWS has also completed the control implementation and successfully passed the independent security testing and evaluation required to operate at the FISMA-Moderate level. AWS is currently pursuing certification and accreditation from government agencies to operate at the FISMA-Moderate level.
  • SAS 70 Type II
    Amazon Web Services publishes a Statement on Auditing Standards No. 70 (SAS 70) Type II Audit report every six months and maintains a favorable opinion from its independent auditors. AWS identifies those controls relating to the operational performance and security of its services. Through the SAS 70 Type II report, an auditor evaluates the design of the stated control objectives and control activities and attests to the effectiveness of their design. The auditors also verify the operation of those controls, attesting that the controls are operating as designed. Provided a customer has signed a non-disclosure agreement with AWS, this report is available to customers who require a SAS 70 to meet their own audit and compliance needs.

    The AWS SAS 70 control objectives are provided here. The report itself identifies the control activities that support each of these objectives.

    Security Organization: Controls provide reasonable assurance that information security policies have been implemented and communicated throughout the organization.
    Amazon User Access: Controls provide reasonable assurance that procedures have been established so that Amazon user accounts are added, modified, and deleted in a timely manner and are reviewed on a periodic basis.
    Logical Security: Controls provide reasonable assurance that unauthorized internal and external access to data is appropriately restricted and access to customer data is appropriately segregated from other customers.
    Secure Data Handling: Controls provide reasonable assurance that data handling between the customer's point of initiation and an AWS storage location is secured and mapped accurately.
    Physical Security: Controls provide reasonable assurance that physical access to Amazon's operations building and the data centers is restricted to authorized personnel.
    Environmental Safeguards: Controls provide reasonable assurance that procedures exist to minimize the effect of a malfunction or physical disaster on the computer and data center facilities.
    Change Management: Controls provide reasonable assurance that changes (including emergency/non-routine and configuration) to existing IT resources are logged, authorized, tested, approved, and documented.
    Data Integrity, Availability and Redundancy: Controls provide reasonable assurance that data integrity is maintained through all phases, including transmission, storage, and processing.
    Incident Handling: Controls provide reasonable assurance that system incidents are recorded, analyzed, and resolved.

    AWS' commitment to SAS 70 is ongoing, and AWS will continue the process of periodic audits. In addition, in 2011 AWS plans to convert the SAS 70 to the new Statement on Standards for Attestation Engagements (SSAE) 16 format (equivalent to the International Standard on Assurance Engagements [ISAE] 3402). The SSAE 16 standard replaces the existing SAS 70 standard, and implementation is currently expected to be required of all SAS 70 publishers in 2011. This new report will be similar to the SAS 70 Type II report, but with additional required disclosures and a modified format.
  • Control Objective 1: Security Organization: Controls provide reasonable assurance that information security policies have been implemented and communicated throughout the organization.
    Control Objective 2: Amazon Employee Lifecycle: Controls provide reasonable assurance that procedures have been established so that Amazon employee user accounts are added, modified, and deleted in a timely manner and reviewed on a periodic basis.
    Control Objective 3: Logical Security: Controls provide reasonable assurance that unauthorized internal and external access to data is appropriately restricted and access to customer data is appropriately segregated from other customers.
    Control Objective 4: Secure Data Handling: Controls provide reasonable assurance that data handling between the customer's point of initiation and an AWS storage location is secured and mapped accurately.
    Control Objective 5: Physical Security: Controls provide reasonable assurance that physical access to Amazon's operations building and the data centers is restricted to authorized personnel.
    Control Objective 6: Environmental Safeguards: Controls provide reasonable assurance that procedures exist to minimize the effect of a malfunction or physical disaster on the computer and data center facilities.
    Control Objective 7: Change Management: Controls provide reasonable assurance that changes (including emergency/non-routine and configuration) to existing IT resources are logged, authorized, tested, approved, and documented.
    Control Objective 8: Data Integrity, Availability and Redundancy: Controls provide reasonable assurance that data integrity is maintained through all phases, including transmission, storage, and processing.
    Control Objective 9: Incident Handling: Controls provide reasonable assurance that system incidents are recorded, analyzed, and resolved.
  • ISO 27001
    AWS has achieved ISO 27001 certification of our Information Security Management System (ISMS) covering AWS infrastructure, data centers, and services including Amazon EC2, Amazon S3, and Amazon VPC. ISO 27001/27002 is a widely adopted global security standard that sets out requirements and best practices for a systematic approach to managing company and customer information, based on periodic risk assessments appropriate to ever-changing threat scenarios. In order to achieve the certification, a company must show it has a systematic and ongoing approach to managing information security risks that affect the confidentiality, integrity, and availability of company and customer information. This certification reinforces Amazon's commitment to providing significant information regarding our security controls and practices. AWS's ISO 27001 certification includes all AWS data centers in all regions worldwide, and AWS has established a formal program to maintain the certification. AWS provides additional information and frequently asked questions about its ISO 27001 certification on its web site.
  • Physical Security
    Amazon has many years of experience in designing, constructing, and operating large-scale datacenters. This experience has been applied to the AWS platform and infrastructure. AWS datacenters are housed in nondescript facilities. Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access datacenter floors. All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff.

    AWS only provides datacenter access and information to employees and contractors who have a legitimate business need for such privileges. When an employee no longer has a business need for these privileges, his or her access is immediately revoked, even if they continue to be an employee of Amazon or Amazon Web Services. All physical access to datacenters by AWS employees is logged and audited routinely.
  • Amazon Web Services is steadily expanding its global infrastructure to help customers achieve lower latency and higher throughput. As our customers grow their businesses, AWS will continue to provide infrastructure that meets their global requirements.
  • You can choose to deploy and run your applications in multiple physical locations within the AWS cloud. Amazon Web Services are available in geographic Regions. When you use AWS, you can specify the Region in which your data will be stored, instances run, queues started, and databases instantiated. For most AWS infrastructure services, including Amazon EC2, there are eight Regions: US East (Northern Virginia), US West (Northern California), US West (Oregon), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo), AWS GovCloud (US), and South America (Sao Paulo).

    Within each Region are Availability Zones (AZs). Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and to provide inexpensive, low latency network connectivity to other Availability Zones in the same Region. By launching instances in separate Availability Zones, you can protect your applications from a failure (unlikely as it might be) that affects an entire zone. Regions consist of one or more Availability Zones, are geographically dispersed, and are in separate geographic areas or countries. The Amazon EC2 service level agreement commitment is 99.95% availability for each Amazon EC2 Region.
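Spreading instances across Availability Zones, as described above, amounts to round-robin placement so that no single zone holds the whole fleet. A toy sketch, with illustrative AZ names and instance IDs:

```python
from itertools import cycle

def spread(instance_ids, zones):
    """Assign instances to Availability Zones round-robin,
    so losing any one zone leaves part of the fleet running."""
    return {inst: zone for inst, zone in zip(instance_ids, cycle(zones))}

zones = ["us-east-1a", "us-east-1b"]  # illustrative AZ names
placement = spread([f"i-{n}" for n in range(4)], zones)
print(placement)
```

With four instances over two zones, each zone ends up with two, so a full-zone failure still leaves half the capacity serving traffic.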
  • AWS Identity and Access Management (AWS IAM)
    AWS Identity and Access Management (AWS IAM) enables a customer to create multiple users and manage the permissions for each of these users within their AWS Account. A user is an identity (within a customer AWS Account) with unique security credentials that can be used to access AWS Services. AWS IAM eliminates the need to share passwords or access keys, and makes it easy to enable or disable a user's access as appropriate.

    AWS IAM enables customers to implement security best practices, such as least privilege, by granting unique credentials to every user within their AWS Account and only granting permission to access the AWS Services and resources required for the users to perform their job. AWS IAM is secure by default; new users have no access to AWS until permissions are explicitly granted.

    AWS IAM enables customers to minimize the use of their AWS Account credentials. Instead, all interactions with AWS Services and resources should be with AWS IAM user security credentials. More information about AWS Identity and Access Management (AWS IAM) is available on the AWS website: http://aws.amazon.com/iam/
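The "secure by default" behavior described here (new users have no access until permissions are explicitly granted) is a default-deny lookup. A toy model of that rule only, not the real IAM policy engine; the user names, actions, and bucket are invented:

```python
def is_allowed(grants, user, action, resource):
    """Default deny: a request passes only if it was explicitly granted."""
    return (user, action, resource) in grants

# Explicit grants only; anything absent from the set is denied.
grants = {("alice", "s3:GetObject", "reports-bucket")}
print(is_allowed(grants, "alice", "s3:GetObject", "reports-bucket"))     # True
print(is_allowed(grants, "alice", "s3:DeleteObject", "reports-bucket"))  # False
print(is_allowed(grants, "bob", "s3:GetObject", "reports-bucket"))       # False: new user, no grants yet
```

Least privilege falls out of the same shape: each user's grant set contains only what their job requires, and everything else stays denied.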
  • Amazon Elastic Compute Cloud (Amazon EC2) Security
Security within Amazon EC2 is provided on multiple levels: the operating system (OS) of the host system, the virtual instance operating system or guest OS, a firewall, and signed API calls. Each of these items builds on the capabilities of the others. The goal is to prevent data contained within Amazon EC2 from being intercepted by unauthorized systems or users and to make Amazon EC2 instances themselves as secure as possible without sacrificing the flexibility in configuration that customers demand.

Multiple Levels of Security
Host Operating System: Administrators with a business need to access the management plane are required to use multi-factor authentication to gain access to purpose-built administration hosts. These administrative hosts are systems that are specifically designed, built, configured, and hardened to protect the management plane of the cloud. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems are revoked.

Guest Operating System: Virtual instances are completely controlled by the customer. Customers have full root access or administrative control over accounts, services, and applications. AWS does not have any access rights to customer instances and cannot log into the guest OS. AWS recommends a base set of security best practices, including disabling password-only access to hosts and using some form of multi-factor authentication to gain access to instances (or at a minimum certificate-based SSH Version 2 access). Additionally, customers should employ a privilege escalation mechanism with logging on a per-user basis. For example, if the guest OS is Linux, after hardening their instance, customers should use certificate-based SSHv2 to access the virtual instance, disable remote root login, use command-line logging, and use 'sudo' for privilege escalation. Customers should generate their own key pairs in order to guarantee that they are unique, and not shared with other customers or with AWS.

Firewall: Amazon EC2 provides a complete firewall solution; this mandatory inbound firewall is configured in a default deny-all mode, and Amazon EC2 customers must explicitly open the ports needed to allow inbound traffic. The traffic may be restricted by protocol, by service port, and by source IP address (individual IP or Classless Inter-Domain Routing (CIDR) block).
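The default deny-all firewall model above can be sketched in a few lines. The class and method names below are illustrative, not the AWS API; the point is simply that inbound traffic is dropped unless it matches an explicitly opened protocol/port/CIDR rule.

```python
import ipaddress

# Each inbound rule opens one protocol/port to one CIDR block.
# With no rules at all, everything is denied -- the EC2 default.
class SecurityGroup:
    def __init__(self):
        self.inbound_rules = []

    def authorize_ingress(self, protocol, port, cidr):
        self.inbound_rules.append((protocol, port, ipaddress.ip_network(cidr)))

    def allows(self, protocol, port, source_ip):
        ip = ipaddress.ip_address(source_ip)
        return any(
            protocol == r_proto and port == r_port and ip in r_net
            for r_proto, r_port, r_net in self.inbound_rules
        )

sg = SecurityGroup()
print(sg.allows("tcp", 22, "203.0.113.5"))   # False: default deny-all
sg.authorize_ingress("tcp", 22, "203.0.113.0/24")
print(sg.allows("tcp", 22, "203.0.113.5"))   # True: explicitly opened
print(sg.allows("tcp", 22, "198.51.100.9"))  # False: source outside the CIDR
```

Restricting the SSH rule to a known CIDR block, as here, rather than 0.0.0.0/0 is the kind of narrowing the text recommends.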
  • The Hypervisor
Amazon EC2 currently utilizes a highly customized version of the Xen hypervisor, taking advantage of paravirtualization (in the case of Linux guests). Because paravirtualized guests rely on the hypervisor to provide support for operations that normally require privileged access, the guest OS has no elevated access to the CPU. The CPU provides four separate privilege modes, 0-3, called rings; Ring 0 is the most privileged and Ring 3 the least. The host OS executes in Ring 0. However, rather than executing in Ring 0 as most operating systems do, the guest OS runs in the lesser-privileged Ring 1 and applications in the least-privileged Ring 3. This explicit virtualization of the physical resources leads to a clear separation between guest and hypervisor, resulting in additional security separation between the two.

Instance Isolation
Different instances running on the same physical machine are isolated from each other via the Xen hypervisor. Amazon is active in the Xen community, which provides awareness of the latest developments. In addition, the AWS firewall resides within the hypervisor layer, between the physical network interface and the instance's virtual interface. All packets must pass through this layer; thus an instance's neighbors have no more access to that instance than any other host on the Internet and can be treated as if they are on separate physical hosts. The physical RAM is separated using similar mechanisms.

  • Customer instances have no access to raw disk devices, but instead are presented with virtualized disks. The AWS proprietary disk virtualization layer automatically resets every block of storage used by the customer, so that one customer's data are never unintentionally exposed to another. AWS recommends customers further protect their data using appropriate means. One common solution is to run an encrypted file system on top of the virtualized disk device.
  • Network Security
The AWS network provides significant protection against traditional network security issues, and the customer can implement further protection. The following are a few examples:

Distributed Denial of Service (DDoS) Attacks
AWS Application Programming Interface (API) endpoints are hosted on large, Internet-scale, world-class infrastructure that benefits from the same engineering expertise that has built Amazon into the world's largest online retailer. Proprietary DDoS mitigation techniques are used. Additionally, AWS's networks are multi-homed across a number of providers to achieve Internet access diversity.

Man in the Middle (MITM) Attacks
All of the AWS APIs are available via SSL-protected endpoints which provide server authentication. Amazon EC2 AMIs automatically generate new SSH host certificates on first boot and log them to the instance's console. Customers can then use the secure APIs to call the console and access the host certificates before logging into the instance for the first time. Customers are encouraged to use SSL for all of their interactions with AWS.

IP Spoofing
Amazon EC2 instances cannot send spoofed network traffic. The AWS-controlled, host-based firewall infrastructure will not permit an instance to send traffic with a source IP or MAC address other than its own.

Port Scanning
Unauthorized port scans by Amazon EC2 customers are a violation of the AWS Acceptable Use Policy. Violations of the AWS Acceptable Use Policy are taken seriously, and every reported violation is investigated. Customers can report suspected abuse via the contacts available on our website at: http://aws.amazon.com/contact-us/report-abuse/. When unauthorized port scanning is detected, it is stopped and blocked. Port scans of Amazon EC2 instances are generally ineffective because, by default, all inbound ports on Amazon EC2 instances are closed and are only opened by the customer. The customer's strict management of security groups can further mitigate the threat of port scans. If the customer configures the security group to allow traffic from any source to a specific port, then that specific port will be vulnerable to a port scan. In these cases, the customer must use appropriate security measures to protect listening services that may be essential to their application from being discovered by an unauthorized port scan. For example, a web server must clearly have port 80 (HTTP) open to the world, and the administrator of this server is responsible for the security of the HTTP server software, such as Apache. Customers may request permission to conduct vulnerability scans as required to meet their specific compliance requirements. These scans must be limited to the customer's own instances and must not violate the AWS Acceptable Use Policy. Advance approval for these types of scans can be initiated by submitting a request via the website at: https://aws-portal.amazon.com/gp/aws/html-forms-controller/contactus/AWSSecurityPenTestRequest

Packet Sniffing by Other Tenants
It is not possible for a virtual instance running in promiscuous mode to receive or "sniff" traffic that is intended for a different virtual instance. While customers can place their interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. Even two virtual instances that are owned by the same customer and located on the same physical host cannot listen to each other's traffic. Attacks such as ARP cache poisoning do not work within Amazon EC2 and Amazon VPC. While Amazon EC2 does provide ample protection against one customer inadvertently or maliciously attempting to view another's data, as a standard practice customers should encrypt sensitive traffic.

Configuration Management
Emergency, non-routine, and other configuration changes to existing AWS infrastructure are authorized, logged, tested, approved, and documented in accordance with industry norms for similar systems. Updates to AWS's infrastructure are done to minimize any impact on the customer and their use of the services. AWS will communicate with customers, either via email or through the AWS Service Health Dashboard (http://status.aws.amazon.com/), when service use is likely to be adversely affected.

Software
AWS applies a systematic approach to managing change so that changes to customer-impacting services are thoroughly reviewed, tested, approved, and well communicated.

AWS's change management process is designed to avoid unintended service disruptions and to maintain the integrity of service to the customer. Changes deployed into production environments are:
Reviewed: peer reviews of the technical aspects of a change
Tested: to confirm the change, when applied, will behave as expected and not adversely impact performance
Approved: to provide appropriate oversight and understanding of business impact

Changes are typically pushed into production in a phased deployment starting with the lowest-impact areas. Deployments are tested on a single system and closely monitored so impact can be evaluated. Service owners have a number of configurable metrics that measure the health of the service's upstream dependencies. These metrics are closely monitored, with thresholds and alarming in place. Rollback procedures are documented in the Change Management (CM) ticket.

When possible, changes are scheduled during regular change windows. Emergency changes to production systems that require deviations from standard change management procedures are associated with an incident and are logged and approved as appropriate.

Periodically, AWS performs self-audits of changes to key services to monitor quality, maintain high standards, and facilitate continuous improvement of the change management process. Any exceptions are analyzed to determine the root cause, and appropriate actions are taken to bring the change into compliance or roll back the change if necessary. Actions are then taken to address and remediate the process or people issue.

Infrastructure
Amazon's Corporate Applications team develops and manages software to automate IT processes for UNIX/Linux hosts in the areas of third-party software delivery, internally developed software, and configuration management. The Infrastructure team maintains and operates a UNIX/Linux configuration management framework to address hardware scalability, availability, auditing, and security management. By centrally managing hosts through the use of automated processes that manage change, the company is able to achieve its goals of high availability, repeatability, scalability, robust security, and disaster recovery. Systems and Network Engineers monitor the status of these automated tools on a daily basis, reviewing reports to respond to hosts that fail to obtain or update their configuration and software.

Internally developed configuration management software is installed when new hardware is provisioned. These tools are run on all UNIX hosts to validate that they are configured, and that software is installed, in compliance with standards determined by the role assigned to the host. This configuration management software also helps to regularly update packages that are already installed on the host. Only approved personnel enabled through the permissions service may log in to the central configuration management servers.
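The role-driven validation just described can be sketched as a simple drift check: each role defines a required configuration, and hosts that deviate are reported for remediation. The role definitions, package names, and host inventory below are invented for illustration; real tooling would query the package manager and the permissions service rather than an in-memory dictionary.

```python
# Hypothetical role standards: the packages a host in each role must have.
ROLE_STANDARDS = {
    "web": {"httpd", "monitoring-agent"},
    "db": {"mysql-server", "monitoring-agent", "backup-agent"},
}

def find_drift(hosts):
    """Return {hostname: missing_packages} for hosts out of compliance
    with the standard for their assigned role."""
    drift = {}
    for name, info in hosts.items():
        missing = ROLE_STANDARDS[info["role"]] - info["installed"]
        if missing:
            drift[name] = missing
    return drift

hosts = {
    "web-01": {"role": "web", "installed": {"httpd", "monitoring-agent"}},
    "db-01": {"role": "db", "installed": {"mysql-server"}},
}
print(find_drift(hosts))  # only db-01 is reported, with its missing packages
```

Run daily, a report like this is what lets engineers "respond to hosts that fail to obtain or update their configuration and software."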
  • Point of Slide: to explain VPC's high-level architecture, walking through the discrete elements of a VPC, and a specific data flow to exemplify 1) data-in-transit security and 2) continued AAA control by the enterprise.

AWS ("orange cloud"): What everybody knows of AWS today.

Customer's Network ("blue square"): The customer's internal IT infrastructure.

VPC ("blue square on top of orange cloud"): Secure container for other object types; includes Border Router for external connectivity. The isolated resources that customers have in the AWS cloud.

Cloud Router ("orange router surrounded by clouds"): Lives within a VPC; anchors an AZ; presents stateful filtering.

Cloud Subnet ("blue squares" inside VPC): Connects instances to a Cloud Router.

VPN Connection: The Customer Gateway and VPN Gateway anchor the two sides of the VPN Connection and enable secure connectivity, implemented using industry-standard mechanisms. Please note that we currently require that whatever customer gateway device is used supports BGP. We actually terminate two (2) tunnels - one tunnel per VPN Gateway - on our side. Besides providing high availability, this lets us service one device while maintaining service. We connect to one of the customer's BGP-supporting devices (preferably running JunOS or IOS).
  • Multiple Levels of Security
Virtual Private Cloud: Each VPC is a distinct, isolated network within the cloud. At creation time, an IP address range for each VPC is selected by the customer. Network traffic within each VPC is isolated from all other VPCs; therefore, multiple VPCs may use overlapping (even identical) IP address ranges without loss of this isolation. By default, VPCs have no external connectivity. Customers may create and attach an Internet Gateway, VPN Gateway, or both to establish external connectivity, subject to the controls below.

API: Calls to create and delete VPCs, change routing, security group, and network ACL parameters, and perform other functions are all signed by the customer's Amazon Secret Access Key, which could be either the AWS Account's Secret Access Key or the Secret Access Key of a user created with AWS IAM. Without access to the customer's Secret Access Key, Amazon VPC API calls cannot be made on the customer's behalf. In addition, API calls can be encrypted with SSL to maintain confidentiality. Amazon recommends always using SSL-protected API endpoints. AWS IAM also enables a customer to further control what APIs a newly created user has permission to call.

Subnets: Customers create one or more subnets within each VPC; each instance launched in the VPC is connected to one subnet. Traditional Layer 2 security attacks, including MAC spoofing and ARP spoofing, are blocked.

Route Tables and Routes: Each subnet in a VPC is associated with a routing table, and all network traffic leaving a subnet is processed by the routing table to determine the destination.

VPN Gateway: A VPN Gateway enables private connectivity between the VPC and another network. Network traffic within each VPN Gateway is isolated from network traffic within all other VPN Gateways. Customers may establish VPN Connections to the VPN Gateway from gateway devices at the customer premises. Each connection is secured by a pre-shared key in conjunction with the IP address of the customer gateway device.

Internet Gateway: An Internet Gateway may be attached to a VPC to enable direct connectivity to Amazon S3, other AWS services, and the Internet. Each instance desiring this access must either have an Elastic IP associated with it or route traffic through a NAT instance. Additionally, network routes are configured (see above) to direct traffic to the Internet Gateway. AWS provides reference NAT AMIs that can be extended by customers to perform network logging, deep packet inspection, application-layer filtering, or other security controls.

This access can only be modified through the invocation of Amazon VPC APIs. AWS supports the ability to grant granular access to different administrative functions on the instances and the Internet Gateway, therefore enabling the customer to implement additional security through separation of duties.

Amazon EC2 Instances: Amazon EC2 instances running within an Amazon VPC retain all of the benefits described above related to the Host Operating System, Guest Operating System, Hypervisor, Instance Isolation, and protection against packet sniffing.

Tenancy: VPC allows customers to launch Amazon EC2 instances that are physically isolated at the host hardware level; they will run on single-tenant hardware. A VPC can be created with 'dedicated' tenancy, in which case all instances launched into the VPC will utilize this feature. Alternatively, a VPC may be created with 'default' tenancy, but customers may specify 'dedicated' tenancy for particular instances launched into the VPC.

Firewall (Security Groups): Like Amazon EC2, Amazon VPC supports a complete firewall solution enabling filtering on both ingress and egress traffic from an instance. The default group enables inbound communication from other members of the same group and outbound communication to any destination. Traffic can be restricted by any IP protocol, by service port, and by source/destination IP address (individual IP or Classless Inter-Domain Routing (CIDR) block).

The firewall isn't controlled through the guest OS; rather, it can be modified only through the invocation of Amazon VPC APIs. AWS supports the ability to grant granular access to different administrative functions on the instances and the firewall, therefore enabling the customer to implement additional security through separation of duties. The level of security afforded by the firewall is a function of which ports are opened by the customer, and for what duration and purpose. Well-informed traffic management and security design are still required on a per-instance basis. AWS further encourages customers to apply additional per-instance filters with host-based firewalls such as iptables or the Windows Firewall.

Network Access Control Lists: To add a further layer of security within Amazon VPC, customers can configure Network ACLs. These are stateless traffic filters that apply to all traffic inbound to or outbound from a subnet within a VPC. These ACLs can contain ordered rules to allow or deny traffic based upon IP protocol, service port, and source/destination IP address.

Like security groups, network ACLs are managed through Amazon VPC APIs, adding an additional layer of protection and enabling additional security through separation of duties.
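The ordered, stateless evaluation of network ACLs can be sketched as follows: rules are consulted in number order, the first match decides, and traffic matching no rule is implicitly denied. The rule format here is illustrative, not the Amazon VPC API.

```python
import ipaddress

# A network ACL is an ordered list of (rule_number, action, protocol,
# port, cidr) entries. The lowest-numbered matching rule decides.
def nacl_decision(rules, protocol, port, source_ip):
    ip = ipaddress.ip_address(source_ip)
    for _, action, r_proto, r_port, r_cidr in sorted(rules):
        if r_proto == protocol and r_port == port and ip in ipaddress.ip_network(r_cidr):
            return action
    return "deny"  # implicit default: unmatched traffic is dropped

rules = [
    (200, "allow", "tcp", 80, "0.0.0.0/0"),        # web open to the world...
    (100, "deny",  "tcp", 80, "198.51.100.0/24"),  # ...except one blocked range
]
print(nacl_decision(rules, "tcp", 80, "198.51.100.7"))  # deny (rule 100 wins)
print(nacl_decision(rules, "tcp", 80, "203.0.113.9"))   # allow (rule 200)
print(nacl_decision(rules, "tcp", 22, "203.0.113.9"))   # deny (no matching rule)
```

Note the contrast with security groups: because ACLs are stateless, a real deployment must also permit the corresponding outbound response traffic, whereas a security group tracks connections and allows replies automatically.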
  • Amazon SimpleDB Security
Amazon SimpleDB APIs provide domain-level controls that only permit authenticated access by the domain creator; therefore, the customer maintains full control over who has access to their data.

Amazon SimpleDB access can be granted based on an AWS Account ID. Once authenticated, an AWS Account has full access to all operations. Access to each individual domain is controlled by an independent Access Control List that maps authenticated users to the domains they own. A user created with AWS IAM only has access to the operations and domains for which they have been granted permission via policy.

Amazon SimpleDB is accessible via SSL-encrypted endpoints. The encrypted endpoints are accessible from both the Internet and from within Amazon EC2. Data stored within Amazon SimpleDB is not encrypted by AWS; however, the customer can encrypt data before it is uploaded to Amazon SimpleDB. These encrypted attributes would be retrievable only as part of a Get operation; they could not be used as part of a query filtering condition. Encrypting before sending data to Amazon SimpleDB helps protect against access to sensitive customer data by anyone, including AWS.

Amazon SimpleDB Data Management
When a domain is deleted from Amazon SimpleDB, removal of the domain mapping starts immediately and is generally processed across the distributed system within seconds. Once the mapping is removed, there is no remote access to the deleted domain.

When item and attribute data are deleted within a domain, removal of the mapping within the domain starts immediately, and is also generally complete within seconds. Once the mapping is removed, there is no remote access to the deleted data. That storage area is then made available only for write operations, and the data are overwritten by newly stored data.
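The trade-off described above, where client-side-encrypted attributes can be retrieved with a Get but cannot be used in a query filter, can be sketched as below. The "cipher" here is a toy keystream construction kept dependency-free purely for illustration; in practice you would encrypt with AES via a vetted cryptography library before upload.

```python
import base64
import hashlib
import hmac

# Illustration only: a toy keystream cipher stands in for a real one.
# Do NOT use this construction for actual data protection.
def toy_encrypt(key, plaintext):
    stream = hmac.new(key, b"keystream", hashlib.sha256).digest()
    ct = bytes(b ^ stream[i % len(stream)] for i, b in enumerate(plaintext.encode()))
    return base64.b64encode(ct).decode()

def toy_decrypt(key, ciphertext_b64):
    ct = base64.b64decode(ciphertext_b64)
    stream = hmac.new(key, b"keystream", hashlib.sha256).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(ct)).decode()

key = b"customer-held-key"  # stays with the customer, never sent to the service
item = {"name": "alice", "ssn": toy_encrypt(key, "123-45-6789")}  # stored attributes

# A query filter compares against the stored (encrypted) value, so it
# cannot match the plaintext -- encrypted attributes are Get-only:
print(item["ssn"] == "123-45-6789")   # False: a filter on the plaintext finds nothing
print(toy_decrypt(key, item["ssn"]))  # the Get path still recovers the value
```

Because neither AWS nor any other party holds the key, the stored attribute is opaque to everyone but the customer, which is exactly the protection the text describes.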

Your Future with Cloud Computing - Dr. Werner Vogels - AWS Summit 2012 Australia Presentation Transcript

  • 1. Your Future with Cloud Computing Dr. Werner Vogels CTO, Amazon.com
  • 2. AWS Global Infrastructure: GovCloud (US ITAR Region), US West (Northern California), US West (Oregon), US East (Northern Virginia), South America (Sao Paulo), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo) - AWS Regions and AWS Edge Locations
  • 3. Powering the Most Popular Internet Businesses
  • 4. Trusted by Enterprises
  • 5. And Government Agencies
  • 6. Partner Ecosystem: System Integrators, Independent Software Vendors
  • 7. What Enterprises are Running on AWS Business Applications Web Applications Big Data & High Performance Computing Disaster Recovery & Archive
  • 8. What Analysts are Saying about AWS: Infrastructure-as-a-Service Leader in 2011 (Gartner Magic Quadrant), IaaS Leader in 2011 (Forrester Wave), Hadoop Market Share Leader
  • 9. The Scale of AWS: Amazon S3 Growth - Peak Requests: 650,000+ per second; Total Number of Objects Stored in Amazon S3
  • 10. The Scale of AWS: Amazon S3 Growth - Peak Requests: 650,000+ per second; Total Number of Objects Stored in Amazon S3: 2.9 Billion (Q4 2006), 14 Billion (Q4 2007), 40 Billion (Q4 2008), 102 Billion (Q4 2009), 262 Billion (Q4 2010), 762 Billion (Q4 2011)
  • 11. The Scale of AWS: Amazon S3 Growth - Peak Requests: 650,000+ per second; Total Number of Objects Stored in Amazon S3: 2.9 Billion (Q4 2006), 14 Billion (Q4 2007), 40 Billion (Q4 2008), 102 Billion (Q4 2009), 262 Billion (Q4 2010), 762 Billion (Q4 2011), 905 Billion (Q1 2012)
  • 12. Our Price Reduction Philosophy: Scale & Innovation Drive Costs Down - Invest in Capital and Technology, Improve Efficiency, Reduce Prices, Attract More Customers; 19 Price Reductions
  • 13. AWS Platform Overview Deployment & Administration App ServicesCompute Storage Database Networking AWS Global Infrastructure
  • 14. AWS Global InfrastructureSecure, redundant Cloud infrastructurefor global companies and global apps Regions Deployment & Administration Availability Zones App Services Compute Storage Database Networking Edge Locations AWS Global Infrastructure
  • 15. AWS Networking ServicesExtend your enterprise infrastructure tothe AWS Cloud Amazon Virtual Private Cloud VPN to Extend Your Network Topology to AWS Deployment & Administration AWS Direct Connect Private, Dedicated Connection to AWS App Services Compute Storage Database Amazon Route 53 Networking Scalable Domain Name Service AWS Global Infrastructure
  • 16. Compute ServicesScalable Linux and Windowscompute services Amazon EC2 Virtual Servers in the AWS Cloud Deployment & Administration Auto Scaling App Services Rule-driven scaling service for EC2 Compute Storage Database Amazon Elastic Load Balancing Networking Virtual load balancers for EC2 AWS Global Infrastructure
  • 17. Storage Services: Scalable and durable high-performance cloud storage. Amazon S3 (redundant, high-scale object store), Amazon Elastic Block Store (persistent block storage for EC2), AWS Storage Gateway (seamless backup of enterprise data to S3).
  • 18. Database Services: Scalable, managed database services. Amazon DynamoDB (high-performance NoSQL database service), Amazon RDS (managed Oracle and MySQL database service), Amazon ElastiCache (managed Memcached service).
  • 19. AWS App Services: Highly abstracted services that replace software for commonly needed application functionality. Amazon CloudFront (global content delivery service), Amazon CloudSearch (managed search service that automatically scales), Amazon SWF (Simple Workflow Service), Amazon SNS (Simple Notification Service), Amazon SQS (Simple Queue Service), Amazon SES (simple transactional email service).
  • 20. Ecosystem App Services: 3rd-party highly abstracted services that replace software for commonly needed application functionality, and already run on AWS. Security services, log analysis services, developer services, BI services, test services.
  • 21. Deployment & Administration: AWS Management Console (web-based management interface), Amazon Elastic MapReduce (big data analytics service), AWS IAM (identity and access management), Amazon CloudWatch (automated monitoring and alerts), AWS CloudFormation (automated AWS resource provisioning), AWS Elastic Beanstalk (Java and PHP app deployment and management).
  • 22. AWS Pace of Innovation: new technology releases per year grew from 9 in 2007 to 24 (2008), 48 (2009), 61 (2010), and 82 (2011). Highlights include: Amazon SimpleDB, Amazon CloudFront, Amazon EBS, Amazon FPS, EC2 Availability Zones, and EC2 Elastic IP addresses; Amazon RDS, Amazon VPC, Amazon EMR, EC2 Auto Scaling, EC2 Reserved Instances, Elastic Load Balancing, and AWS Import/Export; the AWS Singapore Region, S3 bucket policies, RDS Multi-AZ support, cluster and micro instances for EC2, the Amazon Linux AMI, the AWS IAM beta, SUSE Linux and Red Hat Enterprise on EC2, and Windows Server 2008 on EC2; the AWS Oregon and Tokyo Regions, Elastic Beanstalk (beta), Amazon SES (beta), AWS CloudFormation, Amazon SNS, Amazon RDS for Oracle, AWS Direct Connect, Amazon Route 53, AWS GovCloud (US), Amazon ElastiCache, RDS Reserved Databases, VPC dedicated instances, CloudFront live streaming, Amazon S3 server-side encryption, VM Import for EC2, SAP and Oracle and IBM apps on EC2, and Windows Server 2008 R2 on EC2.
  • 23. …Continuing in 2012: 6 releases in January, 7 in February, 9 in March, and 15 in April. Highlights include: AWS CloudSearch, AWS Marketplace, AWS Storage Gateway (and its launch in South America), Amazon Simple Workflow Service, Amazon DynamoDB (with launches in Japan, Europe, and three more regions, plus the BatchWriteItem feature), Amazon EC2 medium instances, 64-bit AMIs on small and medium instances, Route 53 latency-based routing, PHP and Git support for Elastic Beanstalk (and Elastic Beanstalk in Japan), live smooth streaming and lower content expiration for Amazon CloudFront, Amazon RDS on Amazon VPC, an Amazon RDS free trial, increased RDS backup retention, IAM identity federation, password management, and user access to account billing, reserved cache nodes for Amazon ElastiCache (and ElastiCache in Oregon and Sao Paulo), a Windows free usage tier, EC2 Linux login from the console, lower prices for Amazon S3, EC2, RDS, and ElastiCache, AWS CloudFormation for VPC, EC2 CC2 instances in Amazon VPC, new AWS Direct Connect locations, new premium support features, and new Osaka and Milan edge locations.
  • 24. AWS Direct Connect: Private, secure connection from your corporate data center to the AWS Cloud. Bypass the public Internet; high bandwidth and predictable latency.
  • 25. AWS Storage Gateway: Easily back up on-premises data to AWS. Store snapshots in Amazon S3 for backup and disaster recovery. Simple software appliance; no changes required to your on-premises architecture.
  • 26. Amazon Simple Workflow Service: Run application workflows and business processes on AWS. Manage processes across cloud, mobile, and on-premises environments. Use any programming language for workflow logic.
  • 27. Amazon DynamoDB: Non-relational (NoSQL) database. Fast and predictable performance, seamless scalability, zero administration.
  • 28. Oracle Multi-AZ: Replicates database updates across two Availability Zones. Automatic failover to the standby for planned maintenance and unplanned disruptions. Increased durability and availability.
  • 29. PHP & Git Deployment for AWS Elastic Beanstalk: git push to Elastic Beanstalk. Run and manage existing PHP applications with no changes to application code (your app runs on Apache HTTP Server on Amazon Linux, behind an Elastic Load Balancer at yourApp.elasticbeanstalk.com). Provides full control over the infrastructure and the software.
  • 30. SQL Server & .NET for Elastic Beanstalk: Fully managed Express, Web, Standard, and Enterprise editions of SQL Server 2008 R2. SQL Server Express Edition is covered under the free usage tier for a full year. Elastic Beanstalk leverages the Windows Server 2008 R2 AMI and IIS 7.5. Deploy using the AWS Toolkit for Visual Studio.
  • 31. Amazon CloudSearch: Fully managed search service. Up and running in less than an hour; automatically scales for data and traffic; starting at less than $100/month.
  • 32. AWS Marketplace: Find, buy, and run software on AWS. More than 250 listings at launch. Sell your software or SaaS app to our hundreds of thousands of customers. aws.amazon.com/marketplace
  • 33. VPC 2
  • 34. News Limited: Craige Pendleton-Browne, Chief Technology Officer
  • 35. Context: News Ltd runs a single enterprise CMS platform supporting 8 major web sites and 12 different critical systems, with over 600m page impressions per month and approximately 2,400 new assets created daily.
  • 36. The Challenge: Complex technology stack (development = 46 servers); all configuration and deployment manual; 56 days and 6 teams to build a new environment. Impact: slow project start-up, only one major project at a time, lack of innovation. The challenge: go from 56 days to 1 day in the cloud.
  • 37. Current Status: Virtual Private Cloud configured and working; configuration separated out and all systems packaged; semi-automated build process implemented in EC2; 2 project environments up and running in EC2; from 56 days to 3 days, semi-automated.
  • 38. Current Status: Developers can spin up or tear down environments; two new projects starting this month with proofs of concept in the cloud; ability to stand up 8 distinct environments quickly; by the end of the month, reduce time to 6 hours.
  • 39. Where to next: An agreed corporate cloud governance model; seamlessly integrate cloud and physical environments; automated procedures for managing costs; move towards a DevOps model; move production to the cloud.
  • 40. The Seven Transformations of Cloud Computing
  • 41. A common misconception: cloud computing is only about… saving money, or doing things faster.
  • 42. Cloud Transforms what’s possible
  • 43. Transformation 1: Distributed Architectures Made Easy (High Availability)
  • 44. Building Distributed Architectures
  • 45. Cloud Computing Makes This Easier. Distributed infrastructure: AWS Regions, Availability Zones. Multi-AZ building blocks: S3, EC2 instances, DynamoDB, RDS, Elastic Load Balancer. Loosely coupled process coordination services: SWF, SNS, SQS.
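The "loosely coupled" pattern above means producers and consumers share only a queue, so each side can fail, scale, or restart independently. A self-contained sketch: on AWS, SQS plays the queue's role; here the stdlib `queue.Queue` stands in so the example runs locally, and the job names are made up for illustration:

```python
# Producers and consumers communicate only through a shared queue;
# neither side knows or calls the other directly.
import queue

work_queue = queue.Queue()

def producer(jobs):
    for job in jobs:
        work_queue.put(job)          # fire and forget

def consumer():
    results = []
    while not work_queue.empty():
        results.append(work_queue.get().upper())  # process independently
    return results

producer(["resize-image", "send-email"])
print(consumer())  # ['RESIZE-IMAGE', 'SEND-EMAIL']
```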
  • 46. Architecture templates for common patterns (e.g., Microsoft SharePoint): aws.amazon.com/architecture
  • 47. … open-source Simian Army coming soon
  • 48. Vodafone Hutchison Australia Easwaren Siva General Manager Technology Strategy & Product
  • 49. Vodafone Cricket Live Australia: Behind the Scenes. Vodafone Hutchison Australia
  • 50. Vodafone Australia: Operated by Vodafone Hutchison Australia (VHA), a 2009 merger of Vodafone Australia and Hutchison 3G Australia. Operates the Vodafone, 3 Mobile, and Crazy John's brands. VHA provides mobile services to over 7.0 million customers. Shareholders operate mobile networks across the globe.
  • 51. Big Brother ('05): Key Learning. No smartphones, no apps. Early days of 3G; 3 Mobile was 100% 3G and pioneered 'Live' Mobile TV. 'Live' interactive TV can drive immense traffic towards your portals and content. [Chart: Total Concurrent Connections, Sun 26th June, 16:45 to 23:00]
  • 52. 2011/12 Vodafone Cricket Live Australia: iPhone and iPad app; Android phone and tablet app; scores and highlights; 'Live' cricket TV streaming; Vodafone Viewers' Verdict.
  • 53. 2011/12 Vodafone Cricket Live Australia, some stats: Over 700K app downloads; approximately 4 million visits; over 500K streams; 24.7TB of iPhone streaming data for December; peak of 10K simultaneous streams; live scores peaked at 1,000 requests per second (Jan).
  • 54. 2011/12 Vodafone Cricket Live Australia, some stats. [Charts: scores data requests; iPhone streaming traffic]
  • 55. Cricket App, Vodafone Viewers' Verdict. Challenge: managing 'peak' load cost-effectively.
  • 56. Vodafone Cricket Live Australia: Architecture [diagrams]
  • 58. Vodafone Cricket Live Australia, Amazon components: 2 Elastic Load Balancers (ELB). 3 EC2 instances in idle configuration (2 large, 1 small), auto-expandable up to 9 (8 large, 1 small) under load. All EC2 instances are bootstrapped to load the application after instantiation. 1 S3 bucket to store the application itself. 2 Auto Scaling groups to protect from hardware failure and give expandability; any failed server is automatically replaced. A MySQL Relational Database Service (RDS) instance to hold all data. CloudWatch CPU-usage alarms linked to the Auto Scaling groups for automatic expansion and shrinkage. Contracted ProQuest to build and optimise our AWS instances/environment.
  • 59. Key Learnings and Next Steps. Key learnings: public cloud infrastructure is the best-cost option for low-frequency but high-demand services; Content Delivery Networks (CDN) plus cloud computing provide an optimal solution. Next steps in progress: a unified content management system on Amazon to manage 'peak demands' when new devices are released; Oracle WebCenter Sites / FatWire 7.6 content management system online in production.
  • 60. Transformation 2: Embracing the security advantages of shared systems
  • 61. Flexibility to choose the right security model for each application. Applications: you. Infrastructure: AWS. The AWS security infrastructure (SOC 1/SSAE 16/ISAE 3402, ISO 27001, PCI DSS, HIPAA, ITAR, FISMA Moderate, FIPS 140-2) means every customer gets the highest level of security.
  • 62. Transformation 3: From Scaling by Architecture… to Scaling by Command. ('Kit, go faster.' 'Yes, Michael.')
  • 63. Scaling by Architecture: NoSQL database cluster. Set up more servers; config & tune; shard & repartition; rinse & repeat.
  • 64. Scaling by Command with Amazon DynamoDB: Data is automatically spread across enough hardware to deliver single-digit-millisecond latency.
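The reason this spreading can be automatic is that a key-value store can deterministically route each item by hashing its primary key. A minimal sketch of that idea; the partition count, the MD5 hash, and the sample keys are illustrative assumptions, not DynamoDB's actual internals:

```python
# Hash-based partitioning: each primary key deterministically maps to
# exactly one partition, so the store can spread items across hardware
# without any coordination from the client.
import hashlib

NUM_PARTITIONS = 4  # illustrative; real systems size this dynamically

def partition_for(key: str) -> int:
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

table = {p: {} for p in range(NUM_PARTITIONS)}
for user_id in ("alice", "bob", "carol", "dave"):
    table[partition_for(user_id)][user_id] = {"user_id": user_id}

# Every key lands on one deterministic partition.
print({p: sorted(items) for p, items in table.items()})
```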
  • 65. Transformation 4: A Supercomputer in the Hands of Every Developer
  • 66. Supercomputers used to be privileges of the elite: expensive, rationed time, only for the "highest value" jobs.
  • 67. Supercomputers by the Hour… for Everyone. AWS built the 42nd-fastest supercomputer in the world: 1,064 Amazon EC2 CC2 instances with 17,024 cores, a 240-teraflop cluster (240 trillion calculations per second), for less than $1,000 per hour.
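The slide's numbers are internally consistent; checking them as arithmetic:

```python
# 1,064 EC2 CC2 instances with 17,024 cores works out to 16 cores per
# instance, and 240 teraflops across the cluster is roughly 226 gigaflops
# per instance.
instances = 1064
cores = 17024
teraflops = 240

cores_per_instance = cores // instances
gigaflops_per_instance = teraflops * 1000 / instances
print(cores_per_instance)              # 16
print(round(gigaflops_per_instance))   # 226
```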
  • 68. Develops leading computational
  • 69. Instead of $20M in data-center spend… 51,132 cores… 3 hours… $4,828/hour…
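The comparison above, written out as arithmetic: three hours at the quoted hourly rate is a few thousand dollars of compute against roughly $20M of up-front data-center spend for the same run:

```python
# Cost of the quoted 3-hour, 51,132-core run at $4,828/hour, compared
# with the slide's $20M data-center figure.
hours = 3
rate = 4828
total = hours * rate
print(total)                # 14484 dollars for the whole run
print(20_000_000 // total)  # on the order of 1,380x less for this one run
```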
  • 70. Transformation 5:Experiment Often & Fail Quickly
  • 71. Traditional infrastructure drives up the cost of failure, and innovation suffers. How many big-ticket technology ideas ($12M, $7M, $9M) can your budget tolerate?
  • 72. Experiment Often & Fail Quickly with AWS: The cost of failure falls dramatically ($100, $2K, $500, $75, $33, $3K, $234, $500, $692, $1K, $96, $12). People are free to try out new ideas; more risk taking, more innovation.
  • 73. REA Group Richard Durnall Head of Delivery
  • 75. A bit about us. [Picture of the view from my desk]
  • 77. Distributed Agile helped by
  • 79. Continuous Delivery helped by
  • 81. Hack Days helped by
  • 83. Home Ideas helped by
  • 84. Transformation 6: Big Data without Big Servers
  • 85. Attacking big data problems shouldn't be this complicated: storing massive data volumes in a huge data warehouse, and investing in expensive server clusters to process the data.
  • 86. The Cloud Makes This a Lot Simpler: 1. Load data in the cloud (Amazon S3, Amazon DynamoDB). 2. Organize & analyze data (Amazon EMR Hadoop clusters). 3. Visualize results.
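The "organize & analyze" step is classic MapReduce territory; word counting is the canonical pattern EMR's Hadoop clusters run at scale. A self-contained, in-process sketch of that pattern (the log lines are made up for illustration):

```python
# MapReduce word count in miniature: the map phase emits (word, 1) pairs,
# the reduce phase sums counts per word. Hadoop distributes exactly this
# shape of computation across a cluster.
from collections import Counter

def map_phase(lines):
    for line in lines:
        for word in line.lower().split():
            yield (word, 1)

def reduce_phase(pairs):
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return counts

logs = ["error timeout", "error disk full", "timeout"]
print(reduce_phase(map_phase(logs)).most_common(2))
```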
  • 87. Brandscreen: Seth Yates, Founder & CTO
  • 88. It’s broken
  • 89. Structurally, a commodity market
  • 90. Low latency. High throughput. Huge volume.
  • 91. 1 petabyte; 10% per month
  • 92. 1. Experiment 2. Learn 3. Plan (All images sourced from iStockPhoto.com)
  • 93. Transformation 7: Mobile Ecosystem for a Mobile-First World
  • 94. Building Mobile
  • 95. Rich media experience; location/context aware; real-time presence driven; social-graph based; user-generated content; recommendations; integration with social networks; virtual goods economy; advertisement/premium support; multi-device access.
  • 96. Cloud Mobile Ecosystem
  • 97. PBS Video for iPad (launched Nov '10); PBS Kids Video for iPad (launched April '11)
  • 98. Fun With Numbers, February 2012. Total video: unique visitors 30M/mo; visits 57M/mo; page views 367M/mo; video streams 145M/mo; hours watched 2.3M/mo. Mobile video: 115k unique visitors per day; 310k daily app opens; 27% of hours watched and 40% of streams.
  • 99. The AWS Mission: Enable businesses and developers to use web services to build scalable, sophisticated applications.
  • 100. Security and Privacy in the Cloud. Stephen Schmidt, Vice President & Chief Information Security Officer
  • 101. AWS Security Model Overview.
Certifications & Accreditations: Sarbanes-Oxley (SOX) compliance; ISO 27001 certification; PCI DSS Level I certification; HIPAA-compliant architecture; SAS 70 (SOC 1) Type II audit; FISMA Low & Moderate ATOs; DIACAP MAC III-Sensitive; pursuing DIACAP MAC II-Sensitive.
Shared Responsibility Model: the customer/SI partner/ISV controls guest OS-level security (including patching and maintenance), application-level security (including passwords and role-based access), host-based firewalls (including intrusion detection/prevention systems), and separation of access.
Physical Security: multi-level, multi-factor controlled access environment; controlled, need-based access for AWS employees (least privilege).
VM Security: multi-factor access to the Amazon account; instance isolation via a customer-controlled firewall at the hypervisor level; neighboring instances prevented access; a virtualized disk management layer ensures only account owners can access storage disks (EBS).
Network Security: instance firewalls can be configured in security groups; traffic may be restricted by protocol, by service port, and by source IP address (individual IP or Classless Inter-Domain Routing (CIDR) block); Virtual Private Cloud (VPC) provides IPsec VPN access from an existing enterprise data center to a set of logically isolated AWS resources; support for SSL endpoint encryption for API calls.
Management Plane Administrative Access: multi-factor, controlled, need-based access to administrative hosts; all access logged, monitored, and reviewed; AWS administrators DO NOT have logical access inside a customer's VMs, including applications and data.
  • 102. Shared Responsibility Model. AWS: facilities; physical security; physical infrastructure; network infrastructure. Customer: operating system; application; security groups; network ACLs; network configuration; account management.
  • 103. AWS Security Resources: http://aws.amazon.com/security/. Security Whitepaper and Risk and Compliance Whitepaper (latest versions May 2011 and January 2012, respectively). Regularly updated; feedback is welcome.
  • 104. AWS Certifications: Sarbanes-Oxley (SOX); ISO 27001 certification; Payment Card Industry Data Security Standard (PCI DSS) Level 1 compliant; SAS 70 (SOC 1) Type II audit; FISMA A&As (multiple NIST Low Approvals to Operate (ATO); NIST Moderate, GSA-issued ATO; FedRAMP); DIACAP MAC III Sensitive ATO. Customers have deployed various compliant applications, such as HIPAA (healthcare).
  • 105. SOC 1 Type II. Amazon Web Services now publishes a Service Organization Controls 1 (SOC 1), Type 2 report every six months and maintains a favorable, unbiased, and unqualified opinion from its independent auditors. AWS identifies those controls relating to operational performance and security to safeguard customer data. The SOC 1 report audit attests that AWS' control objectives are appropriately designed and that the individual controls defined to safeguard customer data are operating effectively. Our commitment to the SOC 1 report is ongoing, and we plan to continue our process of periodic audits. The audit for this report is conducted in accordance with the Statement on Standards for Attestation Engagements No. 16 (SSAE 16) and the International Standards for Assurance Engagements No. 3402 (ISAE 3402) professional standards. This dual-standard report can meet a broad range of auditing requirements for U.S. and international auditing bodies. This audit replaces the Statement on Auditing Standards No. 70 (SAS 70) Type II report.
  • 106. SOC 1Control Objective 1: Security OrganizationControl Objective 2: Amazon Employee LifecycleControl Objective 3: Logical SecurityControl Objective 4: Secure Data HandlingControl Objective 5: Physical SecurityControl Objective 6: Environmental SafeguardsControl Objective 7: Change ManagementControl Objective 8: Data Integrity, Availability and RedundancyControl Objective 9: Incident Handling
  • 107. ISO 27001: AWS has achieved ISO 27001 certification of our Information Security Management System (ISMS) covering AWS infrastructure, data centers in all regions worldwide, and services including Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), and Amazon Virtual Private Cloud (Amazon VPC). We have established a formal program to maintain the certification.
  • 108. Physical Security: Amazon has been building large-scale data centers for many years. Important attributes: non-descript facilities; robust perimeter controls; strictly controlled physical access; 2 or more levels of two-factor authentication. Controlled, need-based access for AWS employees (least privilege). All access is logged and reviewed.
  • 109. AWS Regions: GovCloud (US ITAR Region), US West (Northern California), US West (Oregon), US East (Northern Virginia), South America (Sao Paulo), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo). Plus AWS edge locations.
  • 110. AWS Regions and Availability Zones Customer Decides Where Applications and Data Reside
  • 111. AWS Identity and Access Management: Enables a customer to create multiple Users and manage the permissions for each of these Users. Secure by default; new Users have no access to AWS until permissions are explicitly granted. AWS IAM enables customers to minimize the use of their AWS account credentials; instead, all interactions with AWS services and resources should use AWS IAM User security credentials. Customers can enable MFA devices for their AWS account, as well as for the Users they have created under their AWS account with AWS IAM.
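The "secure by default" behaviour described above, where a new User has no permissions until a policy explicitly grants them, can be sketched as deny-by-default policy evaluation. The policy shape below loosely mirrors an IAM policy statement, but the users, actions, and evaluation logic are simplified illustrations, not the IAM engine:

```python
# Deny-by-default permission check: access is granted only when an
# explicit Allow statement matches; users with no policies get nothing.
policies = {
    "alice": [{"Effect": "Allow", "Action": "s3:GetObject"}],
    "bob": [],  # newly created user: no policies attached yet
}

def is_allowed(user, action):
    for stmt in policies.get(user, []):
        if stmt["Effect"] == "Allow" and stmt["Action"] == action:
            return True
    return False  # no matching statement: implicit deny

print(is_allowed("alice", "s3:GetObject"))  # True
print(is_allowed("bob", "s3:GetObject"))    # False
```

Real IAM evaluation also supports explicit Deny statements (which override any Allow), wildcards, and resource scoping.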
  • 112. AWS MFA Benefits: Helps prevent anyone with unauthorized knowledge of your e-mail address and password from impersonating you. Requires a device in your physical possession to gain access to secure pages on the AWS Portal or to the AWS Management Console. Adds an extra layer of protection to sensitive information, such as your AWS access identifiers. Extends protection to your AWS resources, such as Amazon EC2 instances and Amazon S3 data.
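Virtual MFA devices of the kind described above generate time-based one-time passwords per RFC 6238 (TOTP): a shared secret plus the current 30-second time window yields a short code that only the device holder can produce. A minimal sketch using the RFC's published SHA-1 test secret; real MFA devices and AWS's verification add provisioning, clock-drift windows, and rate limiting:

```python
# Minimal RFC 6238 TOTP: HMAC the time-step counter with the shared
# secret, then dynamically truncate the MAC to a short decimal code.
import hashlib, hmac, struct

def totp(secret: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
    counter = struct.pack(">Q", unix_time // step)       # 8-byte big-endian
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector (SHA-1 secret, T=59s) yields 287082 at 6 digits.
print(totp(b"12345678901234567890", 59))  # 287082
```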
  • 113. Amazon EC2 Security. Host operating system: individual SSH-keyed logins via bastion host for AWS admins; all accesses logged and audited. Guest operating system: customer-controlled at root level; AWS admins cannot log in; customer-generated keypairs. Firewall: mandatory inbound instance firewall, default deny mode; outbound instance firewall available in VPC; VPC subnet ACLs. Signed API calls: require an X.509 certificate or the customer's secret AWS key.
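The "default deny" inbound firewall described above means traffic reaches an instance only if some security-group rule matches its protocol, port, and source network. A self-contained sketch of that evaluation model; the rules, ports, and addresses are illustrative, and real security groups are stateful (return traffic for allowed connections is permitted automatically):

```python
# Default-deny rule evaluation: allow inbound traffic only when an
# explicit rule matches protocol, port, and source CIDR.
import ipaddress

rules = [
    {"proto": "tcp", "port": 443, "source": "0.0.0.0/0"},   # HTTPS from anywhere
    {"proto": "tcp", "port": 22,  "source": "10.0.0.0/8"},  # SSH from internal net only
]

def inbound_allowed(proto, port, src_ip):
    src = ipaddress.ip_address(src_ip)
    return any(
        r["proto"] == proto and r["port"] == port
        and src in ipaddress.ip_network(r["source"])
        for r in rules
    )  # no matching rule: implicitly denied

print(inbound_allowed("tcp", 443, "203.0.113.9"))  # True
print(inbound_allowed("tcp", 22, "203.0.113.9"))   # False (not in 10/8)
```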
  • 114. Amazon EC2 Instance Isolation. [Diagram: customer instances 1…n run above the hypervisor; their virtual interfaces pass through per-customer security groups and the firewall before reaching the physical interfaces.]
  • 115. Virtual Memory & Local Disk: Amazon EC2 instances support an encrypted file system and an encrypted swap file. Proprietary Amazon disk management prevents one instance from reading the disk contents of another. Local disk storage can also be encrypted by the customer for an added layer of security.
  • 116. Network Security ConsiderationsDDoS (Distributed Denial of Service):• Standard mitigation techniques in effectMITM (Man in the Middle):• All endpoints protected by SSL• Fresh EC2 host keys generated at bootIP Spoofing:• Prohibited at host OS levelUnauthorized Port Scanning:• Violation of AWS TOS• Detected, stopped, and blocked• Ineffective anyway since inbound ports blocked by defaultPacket Sniffing:• Promiscuous mode is ineffective
  • 117. Amazon Virtual Private Cloud (VPC): Create a logically isolated environment in Amazon's highly scalable infrastructure. Specify your private IP address range and divide it into one or more public or private subnets. Control inbound and outbound access to and from individual subnets using stateless Network Access Control Lists. Protect your instances with stateful filters for inbound and outbound traffic using security groups. Attach an Elastic IP address to any instance in your VPC so it can be reached directly from the Internet. Bridge your VPC and your onsite IT infrastructure with an industry-standard encrypted VPN connection and/or AWS Direct Connect. Use a wizard to easily create your VPC in 4 different topologies.
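The first step above, carving a private address range into subnets, is plain CIDR arithmetic, which the Python stdlib `ipaddress` module can demonstrate. The 10.0.0.0/16 VPC range and /24 subnet size here are illustrative choices, not VPC requirements:

```python
# Splitting a private VPC range into /24 subnets and checking which
# subnet an address belongs to.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))      # a /16 yields 256 possible /24 subnets
print(str(subnets[0]))   # 10.0.0.0/24
print(ipaddress.ip_address("10.0.0.5") in subnets[0])  # True
```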
  • 118. Amazon VPC Architecture. [Diagram, built up over several slides: the customer's isolated AWS resources sit in subnets behind a router and VPN gateway, with an Internet gateway and NAT added for public subnets; the customer's network connects to the Amazon Web Services cloud via a secure VPN connection over the Internet and/or AWS Direct Connect (dedicated path/bandwidth).]
  • 124. Amazon VPC Network Security Controls
  • 125. Amazon VPC Dedicated Instances: New option to ensure physical hosts are not shared with other customers. $10/hr flat fee per Region, plus a small hourly charge. Can identify specific instances as dedicated; optionally configure an entire VPC as dedicated.
  • 126. AWS Deployment Models. Isolation dimensions: logical server and application isolation; granular information access policy; logical network isolation; physical server isolation; physical network and facility isolation; government-only access (US persons only); ITAR compliant. Commercial Cloud offers logical server/application isolation and granular access policy (sample workloads: public-facing apps, web sites, dev/test). Virtual Private Cloud (VPC) adds logical network isolation and physical server isolation (sample workloads: data center extension, TIC environment, email, FISMA Low and Moderate). AWS GovCloud (US) offers all of the above (sample workloads: US-persons-compliant and government-specific apps).
  • 127. Thanks! Remember to visit https://aws.amazon.com/security