Vodafone Hutchison Australia operates mobile networks serving over 7 million customers across various brands, and in the early 2000s pioneered live mobile TV streaming of shows like Big Brother, which drove immense traffic to their content portals. This early adoption of new technologies demonstrated how interactive live content on mobile networks can engage users and boost traffic to online services.
2011 State of the Cloud: A Year's Worth of Innovation in 30 Minutes - Jinesh Varia, Amazon Web Services
In this keynote talk, Jinesh Varia covers all the new features and services that AWS released in 2011 and discusses AWS growth and innovation along with customers and partners.
The speaker notes contain the links to the blog posts of announcements.
Developing applications on Amazon Web Services (AWS) or moving your business into the cloud is more straightforward than you think.
This introductory session covers some of the most popular Amazon Web Services: Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), Amazon CloudFront, Amazon Elastic Block Store (EBS) and Amazon Relational Database Service (RDS).
AWS Core services:
* The AWS web console: the entry point for configuring your infrastructure in the AWS cloud
* The Free Tier and how to set up billing alerts
* Elastic Compute Cloud (EC2) instances, and the ease with which you can pick a particular Amazon Machine Image (AMI) for your workload, and spin it up as an instance right away
* How to create and deploy a high-availability web application in AWS, with an Elastic Load Balancer (ELB) and a multi-Availability-Zone Amazon Relational Database Service (RDS) instance
* How CloudFormation can automate all of the above.
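To make that last point concrete, here is a minimal sketch of a CloudFormation template that launches a single EC2 instance. The AMI ID is a placeholder, not a value from this session:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch - one EC2 instance (illustrative only)
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder AMI ID for your region
      InstanceType: t2.micro           # Free Tier eligible instance size
Outputs:
  InstanceId:
    Value: !Ref WebServer              # the launched instance's ID
```

A full high-availability stack would add the ELB and multi-AZ RDS resources in the same declarative style; CloudFormation then creates, updates, and deletes them as one unit.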
Serverless Functions:
Serverless architecture allows developers to focus on code and their business problem rather than spending time looking after backend infrastructure. Serverless architecture can help developers build scalable, high-performing, and cost-effective applications quickly.
We will talk about how serverless architecture and AWS Lambda can make things easier, cheaper, and help to accelerate development of projects.
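To give a feel for the Lambda programming model, here is a minimal sketch of a Python handler. The event shape is an assumption for illustration; in AWS, the Lambda runtime supplies the real event and context:

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda-style handler: no servers to manage, just this function."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Invoked locally for illustration; in AWS, the Lambda runtime calls the handler.
response = lambda_handler({"name": "serverless"}, None)
print(response["body"])
```

You pay only while the function runs, which is where the cost savings mentioned above come from.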
AWS Evangelist, Ryan Shuttleworth, explores the extended features of AWS S3 in this Masterclass webinar.
AWS S3 hosts over 1.3 trillion objects and is used for storing a wide range of data, from system backups and website assets to digital media. In this webinar we explain the features of S3, from static website hosting through server-side encryption to Glacier integration. We'll dive deep into S3's feature set to give a rounded overview of its capabilities, looking at common use cases, APIs and best practices.
To see the recording and demonstration for this webinar on YouTube, please use the following links:
Masterclass Webinar: Amazon S3 Recording - http://www.youtube.com/watch?v=HHuRJZChCYQ
Masterclass Webinar: Amazon S3 Demonstration - http://www.youtube.com/watch?v=JuffWMBeJkw
Cloud Computing and Eclipse technology - how does it fit together? - Markus Knauer
Today, many companies, such as Amazon, Google, Microsoft, and others claim to provide the one and only cloud solution, but their offerings are different, aren’t they? Or do they have more in common than we think? Our talk starts with an introduction to cloud technology as it exists today by comparing the different products from the cloud providers. Next we will outline how technology from the Eclipse Runtime projects can contribute to a combined ’Cloud Stack’ and discuss currently available and possible future scenarios.
In this presentation, the Eclipse plugins from Amazon (announced at EclipseCon 2009) will be compared with the tooling for Microsoft Azure (announced at Eclipse Summit Europe 2009). Additionally, the features of the g-Eclipse project will be presented. g-Eclipse 1.0 was released in December 2009 as an Eclipse project for Grid and Cloud computing within the Eclipse community. g-Eclipse is a framework that allows users and developers to access Computing Grids and Cloud Computing resources in a unified way.
IT infrastructure planning for Thanksgiving and the holiday season is a real challenge for e-commerce companies. A typical e-commerce site sees a 4x to 6x spike in user visits during Thanksgiving (Black Friday and Cyber Monday). You either under-provision and risk losing potential sales because your site goes down, or over-provision and risk having too much spare capacity later.
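To put rough numbers on that trade-off, a back-of-the-envelope sizing calculation might look like the sketch below; the baseline figures are invented for illustration:

```python
import math

def peak_instances(baseline_instances: int, spike_factor: float) -> int:
    """Instances needed to absorb a seasonal traffic spike, rounded up."""
    return math.ceil(baseline_instances * spike_factor)

# A site running 10 instances day-to-day, facing a 4x-6x Thanksgiving spike:
low, high = peak_instances(10, 4.0), peak_instances(10, 6.0)
print(f"Peak capacity needed: {low}-{high} instances")
# Owning 60 instances year-round leaves 50 idle outside the holidays;
# elastic provisioning rents the extra 30-50 only for the spike.
```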
Architectures for open and scalable clouds - Randy Bias
My presentation for 2012's Cloud Connect that goes over architectural and design patterns for open and scalable clouds. Technical deck targeted at business audiences with a technical bent.
The fourth in our series of webinars, 'Journey Through the AWS Cloud'. This complimentary presentation discusses the use of services offered by AWS that alleviate the need for you to install and manage software on EC2 instances. We introduce the key services customers employ to keep them focused on developing their applications, whilst AWS takes care of running the scalable and reliable building blocks upon which they are built.
AWS Canberra WWPS Summit 2013 - Disaster Recovery with the AWS Cloud - Amazon Web Services
Disaster recovery is about preparing for and recovering from any event that has a negative impact on your IT systems. A typical approach involves duplicating infrastructure to ensure the availability of spare capacity in the event of a disaster. Learn how Amazon Web Services allows you to scale up your infrastructure on an as-needed basis. For a disaster recovery solution, this results in significant cost savings.
Cloud computing in Australia - Separating hype from reality - Russell Kennedy
The growth of cloud computing in Australia has been exponential and analysts forecast that cloud computing will dominate the Australian IT landscape within the next decade.
It has a reputation for delivering economies of scale, reducing overheads and driving increased efficiencies within organisations. However, the reality is that, like any IT procurement, implementing a cloud computing solution for your business still requires careful planning, effective project management, robust contracts and sound oversight.
Russell Kennedy Lawyers delve into the risks and rewards of adopting Cloud Computing in Australia.
Recent presentation to Infosys on HP's cloud capabilities, opportunities to partner, case studies and what HP is doing in Private Cloud to enable partner and business success in the ANZ market.
Users and mobility are driving change and disruption in the traditional IT environment.
Customers and users expect faster response times and availability of systems from anywhere, at any time.
A Next Generation Data Centre to support these changes, delivering the platform you need to right-source IT services and applications, is a fundamental requirement.
Dimension Data provides insight into how we do this from a User Perspective, through to defining a Mobility strategy and architecting an IT landscape to support this.
This session will cover practical strategies for breaking down barriers to delivering content, accessing information and overcoming economics to meet student needs where they are.
Speaker: Rob Carr, Solutions Architect, Amazon Web Services
What if everything you knew about change was wrong? - Oscar Trimboli
Navigating the myths of change and the importance of listening beyond what you hear, exploring the difference between a fixed and growth learning mindset
AWS Public Sector Symposium 2014 Canberra | Big Data in the Cloud: Accelerati... - Amazon Web Services
The cloud not only helps organizations do things better, cheaper, and faster; it also drives breakthroughs that transform mission delivery. This session will feature a panel of international government and university leaders who are using the cloud to take on big data challenges, and innovating in the “white space” between data silos to deliver impact.
This is the presentation I gave for the ePortfolio Australia conference about how I use my blog, wiki etc as an ePortfolio in my role of a midwife: http://www.flexiblelearning.net.au/files/EAC_Programme_website_v11_FINAL.pdf
With an increasing number of applications being deployed in the cloud, this trend will soon touch performance testers within every organisation. This presentation will dispel the hype, tell you what you need to know to embrace this opportunity, and answer the following questions:
* What are the challenges specifically related to performance testing cloud-based applications?
* What are some common performance problems seen in cloud-based applications, and how can you test for them?
* How will cloud-based load generators help your performance testing?
Don't get left behind! A solid understanding of cloud concepts will be invaluable to your testing career.
This presentation was originally given at Iqnite Australia (Melbourne) on October 16th, 2014.
Presentation created for Red Hat's Technical Event Series (Journey to the Cloud) in Australia & New Zealand, May 2015. Credit to Trevor Quinn for the Cloud Enabled Application techniques.
Running Microsoft SharePoint on AWS - Smartronix and AWS Webinar - Amazon Web Services
Miles Ward, Solution Architect, AWS
Robert Groat, Chief Technology Officer, Smartronix
discuss how you can run Microsoft enterprise applications like SharePoint on the AWS Cloud, covering architecture and the Recovery.gov deployment.
From the AWS Briefing that took place at Croke Park, Dublin on 20 March 2014. Includes the following:
What is Cloud Computing and what are its benefits?
Who is using AWS and what are they using it for?
What are AWS’s products & how do I use them to run my workloads?
Day 1 - Introduction to Cloud Computing with Amazon Web Services - Amazon Web Services
Whether you are running applications that share photos or support critical operations of your business, you need rapid access to flexible and low cost IT resources. The term "cloud computing" refers to the on-demand delivery of IT resources via the Internet with pay-as-you-go pricing. Whether you are a startup who wants to accelerate growth without a big upfront investment in cash or time for technology or an Enterprise looking for IT innovation, agility and resiliency while reducing costs, the AWS Cloud provides a complete set of infrastructure services at zero upfront costs which are available with a few clicks and within minutes. Join this webinar to learn more about the benefits of Cloud Computing.
Reasons to attend:
- Learn the concepts of utility computing and elasticity and why these are important to a cost-effective, scalable and reliable IT architecture.
- Hear about the AWS service portfolio and the global footprint on which it is delivered and the value proposition of the AWS Cloud.
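A hedged arithmetic sketch of why elasticity and pay-as-you-go pricing matter for cost; all prices here are made-up placeholders, not AWS rates:

```python
HOURS_PER_MONTH = 730  # average hours in a month

def fixed_cost(peak_servers: int, price_per_hour: float) -> float:
    """Provisioning for peak 24/7: you pay for peak capacity every hour."""
    return peak_servers * price_per_hour * HOURS_PER_MONTH

def elastic_cost(avg_servers: float, price_per_hour: float) -> float:
    """Pay-as-you-go: you pay only for the capacity you actually run."""
    return avg_servers * price_per_hour * HOURS_PER_MONTH

# A workload peaking at 20 servers but averaging 6, at a placeholder $0.10/hour:
print(f"fixed:   ${fixed_cost(20, 0.10):,.2f}/month")
print(f"elastic: ${elastic_cost(6, 0.10):,.2f}/month")
```

The gap between the two figures is the idle capacity a fixed data centre must carry; utility computing shifts that cost away from the customer.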
Journey Through the AWS Cloud; Building Powerful Web Applications - Amazon Web Services
The penultimate in our series of webinars, 'Journey Through the AWS Cloud'. This complimentary presentation discusses how to build powerful web applications in the AWS Cloud. Bringing together many concepts from previous webinars in the series, we summarise a rule book to give you a reference point for architecting with AWS.
Listen to the recording of this webinar: http://www.youtube.com/watch?v=IHRlQPpgbEs
Join this foundational session to understand the core concepts of “Cloud Computing” and different attributes such as reliability, fault tolerance, elasticity, scalability and pay-as-you-go pricing. Whether you are a startup who wants to accelerate growth without a big upfront investment in cash or time for technology or an Enterprise looking for IT innovation, agility and resiliency while reducing costs, the AWS Cloud provides a complete set of infrastructure services at zero upfront costs which are available with a few clicks and within minutes. Join this webinar to learn more about the benefits of Cloud Computing.
This session will highlight the breadth and depth of services that make up the AWS platform. Participants will learn about the AWS Global Infrastructure, Networking, Compute, Storage, Database, Application Services, and Deployment & Administration. This session is designed for technical decision-makers to come away with a top-level understanding of AWS building block cloud services.
In this presentation from the AWS User Group UK meetup in November 2014 I recap the new AWS services that were launched and announced at AWS re:Invent 2014.
How to build forecasting services using ML and deep learning algorithms - Amazon Web Services
Forecasting is an important process for many companies and is used in a variety of contexts to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a temporal component and then use an algorithm that, based on the type of data analysed, produces an accurate forecast.
Big Data for Startups: how to create Big Data applications in Serverless mode - Amazon Web Services
The variety and quantity of data created every day keeps accelerating and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment only established companies can afford. But the elasticity of the cloud, and Serverless services in particular, let us break through these limits.
We will see how to develop Big Data applications quickly, without worrying about infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and how to deploy your application in a few steps.
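As a minimal illustration of the kind of workload this enables, here is a sketch of a Kubernetes Deployment manifest that could be scheduled onto Fargate; the name, image, and replica count are illustrative placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app            # placeholder application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: web
          image: nginx:1.25   # placeholder container image
          ports:
            - containerPort: 80
```

On EKS, pods only land on Fargate when they match a Fargate profile (cluster and namespace selection); setting up that profile is outside this sketch.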
Twenty years ago, Amazon went through a radical transformation aimed at increasing the pace of innovation. In that period we learned how changing our approach to application development allowed us to greatly increase agility and release velocity and, ultimately, to build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only application architecture, but also organisational structure, development release pipelines, and even the operating model. We will also describe common approaches to modernisation, including the one used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances - Amazon Web Services
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand Instances. In this session we will look at the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run different kinds of applications, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. AWS and FinConecta would therefore like to invite you to an Open Finance marketplace presentation on October 20th.
Event Agenda :
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's market offering unique with Machine Learning services - Amazon Web Services
To create value and build a differentiated, recognisable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customise and build the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployment of... - Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques was difficult for many years: they often involved manual activities, occasionally causing application downtime and interrupting user operations. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, guaranteeing greater system reliability and delivering significant improvements to business continuity.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances through Chef and Puppet workloads.
Find out how to use AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows Workloads - Amazon Web Services
Do you want to know the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorisation. In this session, we will discuss the options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and running Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis powered by artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore what AWS services make possible when applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event on Wednesday 14 October from 12:00 to 13:00 dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in VMware vSphere®-based cloud environments and access a broad range of AWS services, taking full advantage of the AWS cloud while protecting your existing VMware investments.
Many organisations reap the benefits of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernisation and refactoring, on top of which performance risks can be introduced when moving applications out of on-premises data centres.
Build your first serverless ledger-based app with QLDB and NodeJS - Amazon Web Services
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions, or to track the flow of their products through the supply chain.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed serverless ledger database.
In this session we will find out how to build a complete serverless application that uses QLDB's capabilities.
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for delivering a great experience to end users. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dig into several scenarios, understanding how AppSync can help address these use cases by building modern APIs with real-time and offline data-update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
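As a hedged sketch of the kind of schema such a real-time API might use, here is a hypothetical AppSync-style GraphQL schema; the types are illustrative, not Sky Italia's actual schema, though `@aws_subscribe` is the standard AppSync directive for wiring subscriptions to mutations:

```graphql
type ScoreUpdate {
  matchId: ID!
  homeScore: Int!
  awayScore: Int!
  minute: Int
}

type Query {
  score(matchId: ID!): ScoreUpdate
}

type Mutation {
  publishScore(matchId: ID!, homeScore: Int!, awayScore: Int!): ScoreUpdate
}

type Subscription {
  # Subscribed clients receive every ScoreUpdate the mutation publishes.
  onScoreUpdate(matchId: ID!): ScoreUpdate
    @aws_subscribe(mutations: ["publishScore"])
}
```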
Oracle databases and VMware Cloud™ on AWS: myths to debunk - Amazon Web Services
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads while accelerating cloud transformation; they dig into the architecture and show how to take full advantage of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies managing Docker containers through an orchestration layer controlling deployment and container lifecycle. In this session we will present the service's main features, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply doing machine learning on just any symbolic structure is not enough to really harvest the gains of NeSy; those gains only materialise when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
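As a toy illustration of “semantics as predictable inference” (my own minimal sketch, not the speaker's code): if a knowledge graph's `subClassOf` edges carry their usual transitive semantics, new links follow predictably by closure rather than by learned guesswork:

```python
def transitive_closure(edges: set) -> set:
    """Predictable inference: derive all (a, c) links entailed by transitivity."""
    closed = set(edges)
    changed = True
    while changed:
        changed = False
        for a, b in list(closed):
            for b2, c in list(closed):
                if b == b2 and (a, c) not in closed:
                    closed.add((a, c))
                    changed = True
    return closed

# subClassOf edges of a tiny knowledge graph
kg = {("Cat", "Mammal"), ("Mammal", "Animal")}
print(transitive_closure(kg) - kg)  # the inferred link: {('Cat', 'Animal')}
```

A link predictor aware of this semantics can be checked against such entailments, which is exactly the predictability the talk argues plain symbolic structure lacks.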
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I asked myself, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our beloved cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and lead you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premises strategy we may need to make it work on our own infrastructure from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into which approaches I have already gotten working for real.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
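The JMeter-to-InfluxDB integration described above ultimately writes samples in InfluxDB’s line protocol (`measurement,tag=… field=… timestamp`), which Grafana then queries. A minimal formatter sketch (illustrative, not the webinar’s code; the measurement name and field names are assumptions):

```python
# Sketch of InfluxDB line-protocol formatting for a JMeter-style sample.
# Tag and field names here ("label", "responseTime", ...) are examples.

def to_line_protocol(measurement, tags, fields, ts_ns):
    """Build one InfluxDB line-protocol record (timestamp in nanoseconds)."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_line_protocol(
    "jmeter", {"label": "login", "status": "ok"},
    {"responseTime": 142, "count": 1}, 1650000000000000000)
print(line)
# jmeter,label=login,status=ok count=1,responseTime=142 1650000000000000000
```

In practice JMeter’s Backend Listener handles this for you; the sketch only shows the wire format Grafana dashboards are built on.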
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
Your Future with Cloud Computing - Dr. Werner Vogels - AWS Summit 2012 Australia
1. Your Future with Cloud Computing
Dr. Werner Vogels
CTO, Amazon.com
2. AWS Global Infrastructure
AWS Regions: GovCloud (US ITAR Region), US West (Northern California), US West (Oregon), US East (Northern Virginia), South America (Sao Paulo), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo)
AWS Edge Locations
7. What Enterprises are Running on AWS
Business
Applications
Web
Applications
Big Data & High
Performance Computing
Disaster Recovery
& Archive
8. What Analysts are Saying about AWS
Gartner: Magic Quadrant Infrastructure-as-a-Service Leader in 2011
Forrester: Hadoop Wave IaaS Leader in 2011
IaaS Market Share Leader
9–11. The Scale of AWS: Amazon S3 Growth
Peak requests: 650,000+ per second
Total number of objects stored in Amazon S3:
Q4 2006: 2.9 billion
Q4 2007: 14 billion
Q4 2008: 40 billion
Q4 2009: 102 billion
Q4 2010: 262 billion
Q4 2011: 762 billion
Q1 2012: 905 billion
12. Our Price Reduction Philosophy
Scale & innovation drive costs down, in a flywheel: invest in capital and technology → improve efficiency → reduce prices → attract more customers → reinvest
19 price reductions so far
14. AWS Global Infrastructure
Secure, redundant cloud infrastructure for global companies and global apps: Regions, Availability Zones, and Edge Locations
(Platform stack: AWS Global Infrastructure → Networking → Compute / Storage / Database → App Services → Deployment & Administration)
15. AWS Networking Services
Extend your enterprise infrastructure to the AWS Cloud:
Amazon Virtual Private Cloud – VPN to extend your network topology to AWS
AWS Direct Connect – private, dedicated connection to AWS
Amazon Route 53 – scalable Domain Name Service
16. Compute Services
Scalable Linux and Windows compute services:
Amazon EC2 – virtual servers in the AWS Cloud
Auto Scaling – rule-driven scaling service for EC2
Amazon Elastic Load Balancing – virtual load balancers for EC2
17. Storage Services
Scalable and durable high-performance cloud storage:
Amazon S3 – redundant, high-scale object store
Amazon Elastic Block Store – persistent block storage for EC2
AWS Storage Gateway – seamless backup of enterprise data to S3
18. Database Services
Scalable and durable high-performance database services:
Amazon DynamoDB – high-performance NoSQL database service
Amazon RDS – managed Oracle & MySQL database service
Amazon ElastiCache – managed Memcached service
19. AWS App Services
Highly abstracted services that replace software for commonly needed application functionality:
Amazon CloudFront – global content delivery service
Amazon CloudSearch – managed search service that automatically scales
Amazon SWF – Simple Workflow Service
Amazon SNS – Simple Notification Service
Amazon SQS – Simple Queuing Service
Amazon SES – simple transactional email service
20. Ecosystem App Services
3rd-party highly abstracted services that replace software for commonly needed application functionality … and already run on AWS:
Security services, log analysis services, developer services, BI services, test services
21. Deployment & Administration
Services (alongside the AWS Ecosystem) for deploying and administering your applications:
AWS Management Console – web-based management interface
Amazon Elastic MapReduce – big data analytics service
AWS IAM – identity & access management
Amazon CloudWatch – automated monitoring & alerts
AWS CloudFormation – automated AWS resource provisioning
AWS Elastic Beanstalk – Java & PHP app deployment & management
22. AWS Pace of Innovation
New services and features released per year: 2007: 9, 2008: 24, 2009: 48, 2010: 61, 2011: 82.
Including (2007–2011): Amazon SimpleDB, Amazon FPS, AWS Management Console, Amazon EBS, Windows Server 2003/2008/2008 R2 on EC2, EC2 Availability Zones, EC2 Elastic IP Addresses, Amazon CloudFront, AWS Import/Export, Amazon EMR, Amazon VPC, Amazon RDS, EC2 Auto Scaling, EC2 Elastic Load Balancing, EC2 Reserved Instances, Cluster and Micro Instances for EC2, AWS Singapore, Tokyo, and Oregon Regions, AWS GovCloud (US), Amazon Linux AMI, SUSE Linux and Red Hat Enterprise Linux on EC2, SAP, IBM, and Oracle apps on EC2, VM Import, Amazon S3 SSE, S3 Bucket Policies, Amazon Route 53, Amazon SNS, SMS text notification, AWS IAM, RDS Multi-AZ support, RDS Reserved Databases, Amazon RDS for Oracle, AWS Elastic Beanstalk (Beta), Amazon SES (Beta), AWS CloudFormation, AWS Direct Connect, Amazon ElastiCache, CloudFront Live Streaming, VPC virtual networking, VPC Dedicated Instances.
23. …Continuing in 2012
Releases per month: January: 6, February: 7, March: 9, April: 15.
Including: Amazon DynamoDB (plus launches in Europe, Japan, and three regions, and BatchWriteItem), AWS Storage Gateway (plus South America), AWS CloudSearch, AWS Marketplace, Amazon Simple Workflow Service, Route 53 Latency Based Routing, PHP and Git for Elastic Beanstalk, Live Smooth Streaming for Amazon CloudFront, CloudFront Live Streaming, CloudFront lower content expiration, RDS increased backup retention, Reserved Cache Nodes for Amazon ElastiCache, ElastiCache in Oregon and Sao Paulo, IAM password management, IAM user access to account billing, AWS IAM identity federation, Amazon RDS on Amazon VPC, Amazon RDS Free Trial program, Amazon EC2 Medium Instances, 64-bit AMIs on Small & Medium, AWS Elastic Beanstalk in Japan, Windows free usage tier, Amazon S3 lower prices, EC2 Linux login from console, AWS CloudFormation for VPC, Beanstalk resource permissions, EC2, RDS, and ElastiCache lower prices, EC2 CC2 instances in Amazon VPC, new AWS Direct Connect locations, new Osaka and Milan edge locations, new Premium Support features.
26. AWS Direct Connect
Private, secure connection to AWS
Bypass the public Internet
High bandwidth and predictable latency
(Diagram: corporate data center ↔ AWS Direct Connect ↔ AWS Cloud)
27. AWS Storage Gateway
Easily back up on-premises data to AWS
Store snapshots in Amazon S3 for backup and disaster recovery
Simple software appliance – no changes required to your on-premises architecture
(Diagram: your data center → AWS Storage Gateway → snapshots in Amazon S3)
28. Amazon Simple Workflow Service
Run application workflows and business processes on AWS
Manage processes across cloud, mobile, and on-premises environments
Use any programming language for workflow logic
29. Amazon DynamoDB
Non Relational (NoSQL) Database
Fast & predictable performance
Seamless Scalability
Zero administration
30. Oracle Multi-AZ
Replicates database updates across two Availability Zones
Automatic failover to the standby for planned maintenance and unplanned disruptions
Increased durability and availability
31. PHP & Git Deployment for AWS Elastic Beanstalk
Deploy with `git push` to Elastic Beanstalk
Run and manage existing PHP applications with no changes to application code
Provides full control over the infrastructure and the software
(Diagram: your app on Apache HTTP Server on Amazon Linux, behind an Elastic Load Balancer, served at yourApp.elasticbeanstalk.com)
32. SQL Server & .NET on Elastic Beanstalk
Fully managed Express, Web, Standard, and Enterprise Editions of SQL Server 2008 R2
SQL Server (Express Edition) covered under the free usage tier for a full year
Elastic Beanstalk leverages the Windows Server 2008 R2 AMI and IIS 7.5
Deploy using the AWS Toolkit for Visual Studio
33. Amazon CloudSearch
Fully managed search service
Up and running in less than an hour
Automatically scales for data and traffic
Starting at less than $100 / month
34. AWS Marketplace
Find, buy, and run software that runs on AWS
More than 250 listings at launch
Sell your software or SaaS app to our hundreds of thousands of customers
aws.amazon.com/marketplace
37. Context
•News Ltd runs a single enterprise CMS platform
•Supporting 8 major web sites
•12 different critical systems
•Over 600m page impressions per month
•Approximately 2400 new assets created daily
38. The Challenge
•Complex technology stack – development = 46 servers
•All configuration and deployment manual
•56 days and 6 teams to build a new environment
•Impact
– slow project start up
– Only run one major project at a time
– Lack of innovation
The Challenge: go from 56 days to 1 day in the cloud
39. Current Status
•Virtual Private Cloud configured and working
•Configuration separated out and all systems packaged
•Semi automated build process implemented in EC2
•2 project environments up and running in EC2
•From 56 days to 3 days semi automated
40. Current Status
•Developers can run up or tear down environments
•Two new projects starting this month with proofs of concept in the cloud
•Ability to stand up 8 distinct environments quickly
•By the end of the month reduce time to 6 hours
41. Where to next
•An agreed corporate cloud governance model
•Seamlessly integrate cloud and physical environments
•Automated procedures for managing costs
•Move towards a devops model
•Move production to the cloud
53. Vodafone Australia
Vodafone Australia operated by Vodafone Hutchison Australia (VHA)
2009 merger, Vodafone Australia and Hutchison 3G Australia
Operates Vodafone, 3 Mobile and Crazy John’s brands
VHA provides mobile services to over 7.0 million customers
Shareholders operate Mobile Networks across the globe
55. 2011/12 Vodafone Cricket Live Australia
iPhone and iPad App
Android and Tablet App
Scores and Highlights
‘Live’ Cricket TV Streaming
Vodafone Viewers verdict
56. 2011/12 Vodafone Cricket Live Australia – Some Stats
Over 700K Apps downloaded
Approximately 4 Million visits
Over 500K streams
24.7TB iPhone streaming data for December
Peak 10K Simultaneous Streams
Live scores peaked at 1000 rps (Jan)
57. 2011/12 Vodafone Cricket Live Australia – Some Stats
Scores Data Requests
iPhone Streaming Traffic
61. Vodafone Cricket Live Australia – Amazon Components
2 Elastic Load Balancers (ELB)
3 EC2 instances in idle configuration (2 large, 1 small), auto-expandable up to 9 (8 large, 1 small) under load
All EC2 instances are bootstrapped to load the application after instantiation
1 S3 bucket to store the application itself
2 auto-scaling groups to protect from hardware failure and give expandability; any failed server will be automatically replaced
MySQL Relational Database Service (RDS) instance to hold all data
CloudWatch CPU usage alarms linked to the auto-scaling groups for auto-expand and auto-shrink
Contracted ProQuest to build and optimise our AWS instances/environment
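The CloudWatch-driven expand/shrink behaviour above boils down to a simple threshold decision. A hedged sketch of that logic (not Vodafone’s actual configuration; the CPU thresholds and the one-instance step are assumed values):

```python
# Illustrative auto-scaling decision: CloudWatch-style CPU alarms drive
# scale-out above a high threshold and scale-in below a low one,
# clamped to the group's min/max sizes. Thresholds are assumptions.

def desired_capacity(current, avg_cpu, min_size=3, max_size=9,
                     high=70.0, low=25.0):
    """Return the new instance count given average CPU across the group."""
    if avg_cpu > high and current < max_size:
        return current + 1          # high-CPU alarm: scale out
    if avg_cpu < low and current > min_size:
        return current - 1          # low-CPU alarm: scale in
    return current                  # within band: no change

print(desired_capacity(3, 85.0))  # 4
print(desired_capacity(9, 95.0))  # 9 (capped at max_size)
print(desired_capacity(4, 10.0))  # 3
```

In AWS itself this pairing is expressed as a CloudWatch alarm plus an Auto Scaling policy; the sketch only shows the decision they encode.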
62. Key Learnings and Next Steps
Key Learnings
Public cloud infrastructure is the best-cost option for low-frequency but high-demand services
Content Delivery Networks (CDN) and cloud computing together provide an optimal solution
Next Steps in Progress
Unified Content Management System on Amazon to manage ‘peak demands’ when new devices are released online
Oracle WebCenter Sites / FatWire 7.6 Content Management System in production
65. Flexibility to Choose the Right Security Model for Each Application
You control your applications; AWS provides the security infrastructure
Every customer gets the highest level of security: SOC 1/SSAE 16/ISAE 3402, ISO 27001, PCI DSS, HIPAA, ITAR, FISMA Moderate, FIPS 140-2
66. Transformation 3: From Scaling by Architecture … to Scaling by Command
(“Kit, go faster.” “Yes, Michael.”)
67. Scaling by Architecture: NoSQL Database Cluster
Set up more servers → Config & Tune → Shard & Repartition → Rinse & Repeat
68. Scaling by Command with Amazon DynamoDB
Data is automatically spread across enough hardware to deliver single-digit millisecond latency.
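The idea of “spreading data across enough hardware” can be sketched with key hashing: each item key is hashed to a partition, so adding partitions spreads load without the application choosing servers. This is a conceptual illustration only, not DynamoDB’s real partitioning scheme:

```python
# Conceptual sketch of hash-based partitioning: a stable hash of the
# item key picks one of n partitions. Not DynamoDB's actual algorithm.

import hashlib

def partition_for(key, n_partitions):
    """Map an item key to one of n partitions via a stable hash."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % n_partitions

keys = ["user#1", "user#2", "user#3", "user#4"]
placement = {k: partition_for(k, 4) for k in keys}
print(placement)  # each key lands deterministically on one of 4 partitions
```

The operational point of the slide is that this placement is the service’s job, not the customer’s: scaling happens “by command” (request more throughput) rather than by re-architecting.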
71. Supercomputers Used to be a Privilege of the Elite
Expensive
Rationed time
Only for the “highest value” jobs
72. Supercomputers by the Hour… for Everyone.
AWS built the 42nd-fastest supercomputer in the world
1,064 Amazon EC2 CC2 instances with 17,024 cores
240-teraflops cluster (240 trillion calculations per second)
Less than $1,000 per hour
77. Traditional Infrastructure Drives up the Cost of Failure … Innovation Suffers
How many big-ticket technology ideas can your budget tolerate, when each costs millions of dollars (e.g. $7M, $9M, $12M)?
78. Experiment Often & Fail Quickly with AWS
Experiments cost tens to a few thousand dollars instead (e.g. $12, $75, $96, $100, $234, $500, $692, $1K, $2K, $3K)
Cost of failure falls dramatically
People are free to try out new ideas
More risk taking, more innovation
92. Attacking Big Data Problems Shouldn’t Be This Complicated
Storing massive data volumes in a huge data warehouse; investing in expensive server clusters to process the data
93. The Cloud Makes This a Lot Simpler
1. Load data in the cloud (Amazon S3, Amazon DynamoDB)
2. Organize & analyze the data (Hadoop clusters on Amazon EMR)
3. Visualize results
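Step 2 of the pipeline above is what Amazon EMR’s Hadoop clusters do at scale. A toy version of the classic Hadoop word-count job, expressed as map and reduce phases in plain Python (an illustration of the programming model, not EMR code):

```python
# Toy map/reduce word count: the map phase emits (word, 1) pairs,
# the reduce phase sums counts per word - the same shape a Hadoop
# job on EMR would have, minus the distribution.

from collections import defaultdict

def map_phase(lines):
    for line in lines:
        for word in line.lower().split():
            yield (word, 1)

def reduce_phase(pairs):
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

logs = ["big data big clusters", "data in the cloud"]
print(reduce_phase(map_phase(logs)))
# {'big': 2, 'data': 2, 'clusters': 1, 'in': 1, 'the': 1, 'cloud': 1}
```

The cloud’s contribution is operational: the same two functions run unchanged whether the input is two strings or terabytes in S3, because EMR provisions and tears down the cluster on demand.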
105. Rich media experience
Location context aware
Real-time presence driven
Social graph based
User generated content
Recommendations
Integration w/ social networks
Virtual goods economy
Advertisement / premium support
Multi-device access
108. PBS Video for iPad (launched Nov ’10) and PBSKids Video for iPad (launched April ’11)
109. Fun With Numbers – February 2012
Total video: 30M unique visitors/mo; 57M visits/mo; 367M page views/mo; 145M video streams/mo; 2.3M hours watched/mo
Mobile video: 115k unique visitors per day; 310k daily app opens; 27% of hours watched; 40% of streams
110. The AWS Mission
Enable businesses and developers to use web services
to build scalable, sophisticated applications.
111. Security and Privacy
in the Cloud
Stephen Schmidt
Vice President &
Chief Information Security Officer
112. AWS Security Model Overview
Certifications & Accreditations: Sarbanes-Oxley (SOX) compliance; ISO 27001 certification; PCI DSS Level I certification; HIPAA-compliant architecture; SAS 70 (SOC 1) Type II audit; FISMA Low & Moderate ATOs; DIACAP MAC III-Sensitive; pursuing DIACAP MAC II-Sensitive
Shared Responsibility Model: customer/SI partner/ISV controls guest OS-level security, including patching and maintenance; application-level security, including password and role-based access; host-based firewalls, including intrusion detection/prevention systems; separation of access
Physical Security: multi-level, multi-factor controlled access environment; controlled, need-based access for AWS employees (least privilege); management-plane administrative access is multi-factor, controlled, need-based access to administrative hosts; all access logged, monitored, and reviewed; AWS administrators DO NOT have logical access inside a customer’s VMs, including applications and data
VM Security: multi-factor access to the Amazon account; instance isolation via a customer-controlled firewall at the hypervisor level; neighboring instances prevented access; a virtualized disk management layer ensures only account owners can access storage disks (EBS); support for SSL endpoint encryption for API calls
Network Security: instance firewalls can be configured in security groups; traffic may be restricted by protocol, by service port, and by source IP address (individual IP or Classless Inter-Domain Routing (CIDR) block); Virtual Private Cloud (VPC) provides IPsec VPN access from an existing enterprise data center to a set of logically isolated AWS resources
113. Shared Responsibility Model
AWS: facilities; physical security; physical infrastructure; network infrastructure
Customer: operating system; application; security groups; network ACLs; network configuration; account management
114. AWS Security Resources
http://aws.amazon.com/security/
Security Whitepaper and Risk and Compliance Whitepaper (latest versions May 2011 and January 2012, respectively)
Regularly updated; feedback is welcome
115. AWS Certifications
Sarbanes-Oxley (SOX)
ISO 27001 certification
Payment Card Industry Data Security Standard (PCI DSS) Level 1 compliant
SAS 70 (SOC 1) Type II audit
FISMA A&As: multiple NIST Low Approvals to Operate (ATO); NIST Moderate, GSA-issued ATO; FedRAMP
DIACAP MAC III Sensitive ATO
Customers have deployed various compliant applications, such as HIPAA (healthcare)
116. SOC 1 Type II
Amazon Web Services now publishes a Service Organization Controls 1 (SOC 1), Type 2 report every six months and maintains a favorable, unbiased, and unqualified opinion from its independent auditors. AWS identifies those controls relating to operational performance and security that safeguard customer data. The SOC 1 audit attests that AWS’ control objectives are appropriately designed and that the individual controls defined to safeguard customer data are operating effectively. Our commitment to the SOC 1 report is ongoing, and we plan to continue our process of periodic audits.
The audit for this report is conducted in accordance with the Statement on Standards for Attestation Engagements No. 16 (SSAE 16) and the International Standards for Assurance Engagements No. 3402 (ISAE 3402) professional standards. This dual-standard report can meet a broad range of auditing requirements for U.S. and international auditing bodies. This audit replaces the Statement on Auditing Standards No. 70 (SAS 70) Type II report.
117. SOC 1
Control Objective 1: Security Organization
Control Objective 2: Amazon Employee Lifecycle
Control Objective 3: Logical Security
Control Objective 4: Secure Data Handling
Control Objective 5: Physical Security
Control Objective 6: Environmental Safeguards
Control Objective 7: Change Management
Control Objective 8: Data Integrity, Availability and Redundancy
Control Objective 9: Incident Handling
118. ISO 27001
AWS has achieved ISO 27001 certification of our Information Security Management System (ISMS) covering AWS infrastructure, data centers in all regions worldwide, and services including Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), and Amazon Virtual Private Cloud (Amazon VPC). We have established a formal program to maintain the certification.
119. Physical Security
Amazon has been building large-scale data centers for
many years
Important attributes:
•Non-descript facilities
•Robust perimeter controls
•Strictly controlled physical access
•2 or more levels of two-factor auth
Controlled, need-based access for
AWS employees (least privilege)
All access is logged and reviewed
120. AWS Regions: GovCloud (US ITAR Region), US West (Northern California), US West (Oregon), US East (Northern Virginia), South America (Sao Paulo), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo)
AWS Edge Locations
121. AWS Regions and Availability Zones
Customer Decides Where Applications and Data Reside
122. AWS Identity and Access Management
Enables a customer to create multiple Users and manage the permissions for each of these Users
Secure by default; new Users have no access to AWS until permissions are explicitly granted
AWS IAM enables customers to minimize the use of their AWS Account credentials; instead, all interactions with AWS services and resources should use AWS IAM User security credentials
Customers can enable MFA devices for their AWS Account as well as for the Users they have created under their AWS Account with AWS IAM
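The “secure by default” model above — a new User can do nothing until a permission is explicitly granted — can be captured in a few lines. This is a minimal sketch of the default-deny idea, not the real IAM policy engine (the action names are example strings):

```python
# Minimal default-deny permission model in the spirit of IAM:
# a new user starts with an empty grant set, and every check
# falls through to "deny" unless an explicit grant exists.

class User:
    def __init__(self, name):
        self.name = name
        self.granted = set()        # no access until explicitly granted

    def grant(self, action):
        self.granted.add(action)

    def is_allowed(self, action):
        return action in self.granted   # default deny

alice = User("alice")
print(alice.is_allowed("s3:GetObject"))      # False: default deny
alice.grant("s3:GetObject")
print(alice.is_allowed("s3:GetObject"))      # True after explicit grant
print(alice.is_allowed("ec2:RunInstances"))  # still False
```

Real IAM adds policy documents, conditions, and explicit denies on top, but the starting posture is the same: nothing is allowed until granted.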
124. AWS MFA Benefits
Helps prevent anyone with unauthorized knowledge of your e-mail address and password from impersonating you
Requires a device in your physical possession to gain access to secure pages on the AWS Portal or to the AWS Management Console
Adds an extra layer of protection to sensitive information, such as your AWS access identifiers
Extends protection to your AWS resources, such as Amazon EC2 instances and Amazon S3 data
125. Amazon EC2 Security
Host operating system
• Individual SSH keyed logins via bastion host for AWS admins
• All accesses logged and audited
Guest operating system
• Customer controlled at root level
• AWS admins cannot log in
• Customer-generated keypairs
Firewall
• Mandatory inbound instance firewall, default deny mode
• Outbound instance firewall available in VPC
• VPC subnet ACLs
Signed API calls
• Require X.509 certificate or customer’s secret AWS key
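Signing API calls with the customer’s secret key means computing an HMAC over a canonical description of the request, which AWS verifies with its copy of the key. A simplified sketch in the spirit of AWS Signature Version 2 (the canonicalization here is an assumption, not the exact AWS string-to-sign rules):

```python
# Hedged sketch of HMAC request signing: the secret key never travels
# with the request; only the HMAC-SHA256 signature does. The
# string-to-sign format below is simplified, not AWS's exact spec.

import hashlib, hmac

def sign_request(secret_key, string_to_sign):
    """Return a hex HMAC-SHA256 signature over the request description."""
    return hmac.new(secret_key.encode(), string_to_sign.encode(),
                    hashlib.sha256).hexdigest()

sig = sign_request("my-secret",
                   "GET\nec2.amazonaws.com\n/\nAction=DescribeInstances")
print(len(sig))  # 64 hex characters
```

Because the signature covers the request contents, a tampered request no longer verifies — which is why signed calls and SSL endpoints appear together in the EC2 security list above.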
126. Amazon EC2 Instance Isolation
(Diagram: customers 1 … n each attach to virtual interfaces above the hypervisor; per-customer security groups and a firewall sit between the hypervisor and the physical interfaces)
127. Virtual Memory & Local Disk
Proprietary Amazon disk management prevents one instance from reading the disk contents of another
Local disk storage can also be encrypted by the customer for an added layer of security (e.g. an encrypted file system and an encrypted swap file on the EC2 instance)
128. Network Security Considerations
DDoS (Distributed Denial of Service):
• Standard mitigation techniques in effect
MITM (Man in the Middle):
• All endpoints protected by SSL
• Fresh EC2 host keys generated at boot
IP Spoofing:
• Prohibited at host OS level
Unauthorized Port Scanning:
• Violation of AWS TOS
• Detected, stopped, and blocked
• Ineffective anyway, since inbound ports are blocked by default
Packet Sniffing:
• Promiscuous mode is ineffective
129. Amazon Virtual Private Cloud (VPC)
Create a logically isolated environment in Amazon’s highly scalable infrastructure
Specify your private IP address range into one or more public or private subnets
Control inbound and outbound access to and from individual subnets using stateless Network Access Control Lists
Protect your instances with stateful filters for inbound and outbound traffic using Security Groups
Attach an Elastic IP address to any instance in your VPC so it can be reached directly from the Internet
Bridge your VPC and your onsite IT infrastructure with an industry-standard encrypted VPN connection and/or AWS Direct Connect
Use a wizard to easily create your VPC in 4 different topologies
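A security-group style rule check — restrict traffic by protocol, port, and source CIDR, with everything else denied — can be sketched with the standard `ipaddress` module. The rules below are example values, not a recommended configuration:

```python
# Illustrative default-deny rule check in the style of a security group:
# a packet is allowed only if some rule matches its protocol, port,
# and source CIDR. The rule set here is an example, not a recommendation.

import ipaddress

RULES = [  # (protocol, port, allowed source CIDR)
    ("tcp", 443, "0.0.0.0/0"),     # HTTPS from anywhere
    ("tcp", 22, "10.0.0.0/16"),    # SSH only from inside the VPC range
]

def is_allowed(protocol, port, source_ip):
    src = ipaddress.ip_address(source_ip)
    return any(protocol == p and port == prt and
               src in ipaddress.ip_network(cidr)
               for p, prt, cidr in RULES)

print(is_allowed("tcp", 443, "203.0.113.7"))  # True
print(is_allowed("tcp", 22, "203.0.113.7"))   # False: SSH not public
print(is_allowed("tcp", 22, "10.0.4.9"))      # True: inside 10.0.0.0/16
```

Security groups apply this statefully per instance; the stateless network ACLs mentioned above evaluate similar rules per subnet, in both directions.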
130–135. Amazon VPC Architecture (diagram series)
The customer’s isolated AWS resources sit in subnets behind a router and VPN gateway inside the AWS cloud. The customer’s network connects via a secure VPN connection over the Internet and/or AWS Direct Connect (dedicated path/bandwidth). Successive builds of the diagram add an Internet gateway and a NAT instance to the subnets.
137. Amazon VPC - Dedicated Instances
New option to ensure physical hosts are not shared with other customers
$10/hr flat fee per Region + small hourly charge
Can identify specific instances as dedicated
Optionally configure an entire VPC as dedicated
138. AWS Deployment Models
Commercial Cloud: logical server and application isolation; sample workloads: public-facing apps, web sites, dev/test, etc.
Virtual Private Cloud (VPC): adds granular information access policy and logical network isolation; sample workloads: data center extension, TIC environment, email, FISMA Low and Moderate
AWS GovCloud (US): adds physical server isolation, government-only physical network and facility isolation, and ITAR compliance (US Persons only); sample workloads: US Persons-compliant and government-specific apps
139. Thanks!
Remember to visit
https://aws.amazon.com/security
Editor's Notes
South America, Sao Paulo region – Dec 2011
Small sliver of the enterprises running on us
Many organizations first choose the AWS cloud for financial reasons, then realize the agility they gain.
Amazon Web Services provides highly scalable computing infrastructure that enables organizations around the world to requisition compute power, storage, and other on-demand services in the cloud. These services are available on demand, so a customer doesn’t need to think about controlling them, maintaining them, or even where they are located. Let’s take a look at the services that we provide.
One of the reasons we believe companies are adopting these services so quickly is our rapid innovation based on customer feedback. In the past four years we’ve delivered over 200 new technology releases.
How many people work on FatWire on a daily basis
1/3 of all people on the internet daily use AWS – WIRED
…Treat failure as the common case instead of the exception. But it was extremely hard to implement; you had to do a lot of hard work to make that a reality, and many software systems have been built to try to make this easier.
A service that randomly kills EC2 instances in the Netflix production environment. Forces engineers to build services that automatically recover without any manual intervention. Plan for failure as a religion. Constantly tests Netflix’s ability to succeed despite failure, so they are prepared when unexpected events happen.
- Now we’re going to show a video introducing DynamoDB\n
\n
\n
First let me tell you a bit about Cycle. If you had told me 7 years ago, when I started bootstrapping Cycle, that today 2 of the 3 largest banks, 3 of the 5 largest insurers, and 4 of the 5 largest pharma companies would use Cycle's software to manage supercomputing-class computations, I'd have said you were crazy. The AWS Cloud helps companies do amazing things.
\n
\n
\n
\n
\n
Today: markets, brands, financials, growth profile

History
startup, listing > bankrupt
early growth > leadership
internationalisation > defocused/stagnation
2008: leadership change
2009: rebuilding a healthy core. Key: TW (Agile, XD), LM (platform), HW (reliable ops); core group of key staff (~25), lots of sweat and commitment from all staff, lots of contractors.
Mid 2010: people (Delivery).

Current focus:
broadening the value proposition > market maker, not just market participant.
optimising operational performance > global operating model.

Financial performance
\n
Continuous delivery
Register. Opportunity to guide customer-focused thinking, without telling. What unmet customer need are we solving?
Hacking. Get your product, business, or design personnel to participate in teams.
Showcase and vote. Watch your team start to vote up hack entries that are most likely to have the biggest customer impact, rather than just the coolest tech stuff.
But if you can't tighten the loop between coding and deploying (reducing the time between having an idea and testing it in the wild), it becomes a tough effort to change the business mindset from planning perfection to planning experiments.
Continuous delivery
\n
\n
As you might guess, we run these big data jobs in the cloud with Amazon Web Services. We load web site log file data into Amazon S3, use Amazon Elastic MapReduce to spin up large clusters of virtual servers to process the data, and then use the results to update our product catalog.
\n
1st... the way online advertising is bought and sold is fundamentally broken. The typical process is that a media buyer builds a media plan using ratings data from companies like Nielsen or Comscore. They then send request-for-proposal documents to publishers, who prepare proposal documents. Negotiation then ensues, and at the end a contract is signed. Once the media contract begins, it's difficult to change if you're not meeting your goals. So the process is very inefficient in the preparation and execution of the advertising campaign.

Now, a lot of people also had this insight, and there were many products trying to automate the media buying process. But at their core, they were automating a fundamentally broken process.
2nd... if you abstract the media buying system, it is a one-sided market. In fact, structurally, it is a commodity market. So the insight here is that the solution is to trade media not using the old system, which was basically "forward contracts" with little flexibility, but rather to execute the trades in real time as a "spot market".
And to execute these trades programmatically, leveraging powerful machine learning algorithms. In this sort of system, we watch every ad impression available and make a buying decision instantaneously: whether to bid for the impression, how much to bid, and which ad to show. If a strategy isn't working, you can pause it within minutes. Starting a new campaign takes only a few minutes.

Only a few companies had this insight, and we were fortunate to be in the leading group.

OK - so those two insights were the hard bit. The easy bit was implementing that system... no, wait, other way around. Actually, it turns out that the implementation is very challenging. Because we're watching every ad impression in the market and making decisions in real time, we have three very hard constraints:

1st... Very low latency: we have to make a high-quality decision on which ad to show and how much to pay in milliseconds.
2nd... Very high throughput: we have to make these very fast decisions over 7 million times every minute.
3rd... Very high volume: we see billions of ad impressions every single day. And we have to report, analyse and learn from all this data.
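The shape of a single bid decision can be sketched as follows. The function names, the value model, and the 20% margin are invented for illustration; this is not Brandscreen's actual logic:

```python
def decide_bid(predicted_value, floor_price, margin=0.2):
    """Decide whether to bid on one ad impression, and how much.

    predicted_value: expected revenue from showing our ad (in a real
    system this comes from a machine-learned model); floor_price: the
    publisher's minimum. We bid the value less a profit margin, and
    pass when that doesn't clear the floor. All names and the margin
    are illustrative, not Brandscreen's actual algorithm.
    """
    bid = predicted_value * (1 - margin)
    return bid if bid >= floor_price else None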
Hence the "Big Data" challenge:

In raw terms, we have over a petabyte of raw log data stored on Amazon Simple Storage Service (S3), and that is growing at 4 terabytes per day, or 130 terabytes per month. When this is compressed down and actually stored, it compresses to around 100 TB.

When you're seeing billions of new events every day and processing terabytes per day, traditional database systems just don't cope. So, to help us with this volume, we use Hadoop MapReduce jobs. This is all powered by Amazon Elastic MapReduce. At any given time, we might have 30-40 Hadoop nodes running various processing jobs, from report aggregations to machine learning algorithms.
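The shape of such a report-aggregation job can be sketched as a mapper and reducer run locally over a few sample lines. The log format, field layout, and advertiser IDs are invented for illustration; a real EMR job would run this logic as Hadoop tasks over S3 data:

```python
from collections import Counter

def map_impression(log_line):
    """Mapper: emit (advertiser_id, 1) per impression log line.
    Assumed tab-separated format: timestamp, advertiser_id, price
    (the format and IDs are invented for illustration)."""
    fields = log_line.rstrip("\n").split("\t")
    yield fields[1], 1

def reduce_counts(pairs):
    """Reducer: sum the emitted counts for each advertiser."""
    totals = Counter()
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

logs = [
    "2012-05-01T00:00:00\tadv-1\t0.42",
    "2012-05-01T00:00:01\tadv-2\t0.10",
    "2012-05-01T00:00:02\tadv-1\t0.37",
]
counts = reduce_counts(p for line in logs for p in map_impression(line))
```

Because mappers see independent lines and reducers see independent keys, the same two functions scale from three lines in a list to terabytes spread over dozens of nodes.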
At the time when we started using Amazon Elastic MapReduce, we didn't have the CAPEX, time, or in-house skills to set up and maintain the 30-40 node Hadoop cluster required to run these sorts of processing jobs. So Amazon Elastic MapReduce really enabled us to quickly build the Big Data capability we required without any big up-front investment, which would have easily cost us several months and a couple of hundred thousand dollars. This accelerated our product time-to-market by months.
Another requirement is to do machine learning "at scale". Sometimes we want to test a new algorithm. With Amazon Elastic MapReduce, we can run a one-off job on months of data (literally hundreds of terabytes) and test the new algorithm in a couple of hours. If we were using a non-cloud Hadoop cluster, this sort of agile analytics would be cost-prohibitive and time-consuming. We can do this sort of analysis in hours instead of weeks. With Amazon Elastic MapReduce, we can innovate quickly and continuously enhance our customer offerings.
Finally, some of the key learnings from our adoption of Amazon Web Services:
1) Experiment: It is fast and cheap to experiment, so just get started and iterate. When the experiment is over, just turn off the services.
2) Learn: Spend some time on the forums and reading the documentation to pick up tips and pointers for optimising.
3) Plan: Just because it's "in the cloud" doesn't excuse you from having to architect a fault-tolerant solution and think about redundancy and single points of failure. Amazon just makes it easier to execute fault-tolerant solutions - you still have to do the thinking and planning. In any reasonably large, complicated distributed system, things are bound to go wrong: network connections time out, jobs fail to start, and machines occasionally die. Build things expecting failure and put in place the necessary mechanisms to gracefully deal with these minor failures.

Thank you for your time today and the opportunity to share a bit about Brandscreen... our challenges with Big Data... and how we're solving those challenges with Amazon Web Services.
Highly competitive, but requires rich applications
\n
The new cost of doing business
This is what new application builders need to do just to enter the market
Heroku doesn't give you this, nor does AWS
\n
\n
Also not shown here is our iPhone app, which launched in January 2011.
We are currently developing a number of new mobile products which will target other mobile platforms, as well as reach alternative platforms such as over-the-top devices.
PBS is #1 among major networks for unique visitors.
Nine months ago we were at 15%, which we considered to be very good.
\n
\n
\n
\n
Amazon Web Services (AWS) delivers a scalable cloud computing platform with high availability and dependability, offering the flexibility to enable customers to build a wide range of applications. Helping to protect the confidentiality, integrity, and availability of our customers' systems and data is of the utmost importance to AWS, as is maintaining customer trust and confidence. This document is intended to answer questions such as, "How does AWS help me protect my data?" Specifically, AWS physical and operational security processes are described for network and server infrastructure under AWS' management, as well as service-specific security implementations. This document provides an overview of security as it pertains to the following areas relevant to AWS:

Shared Responsibility Environment
Control Environment Summary
Secure Design Principles
Backup
Monitoring
Information and Communication
Employee Lifecycle
Physical Security
Environmental Safeguards
Configuration Management
Business Continuity Management
Backups
Fault Separation
Amazon Account Security Features
Network Security
AWS Service Specific Security
Amazon Elastic Compute Cloud (Amazon EC2) Security
Amazon Virtual Private Cloud (Amazon VPC)
Amazon Simple Storage Service (Amazon S3) Security
Amazon SimpleDB Security
Amazon Relational Database Service (Amazon RDS) Security
Amazon Simple Queue Service (Amazon SQS) Security
Amazon Simple Notification Service (SNS) Security
Amazon CloudWatch Security
Auto Scaling Security
Amazon CloudFront Security
Amazon Elastic MapReduce Security
Risk and Compliance Overview
Since AWS and its customers share control over the IT environment, both parties have responsibility for managing it. AWS' part in this shared responsibility includes providing its services on a highly secure and controlled platform and providing a wide array of security features customers can use. The customers' responsibility includes configuring their IT environments in a secure and controlled manner for their purposes. While customers don't communicate their use and configurations to AWS, AWS does communicate its security and control environment relevant to customers. AWS does this by:

Obtaining industry certifications and independent third-party attestations described in this document
Publishing information about the AWS security and control practices in whitepapers and web site content

Please see the AWS Security Whitepaper, located at www.aws.amazon.com/security, for a more detailed description of AWS security. The AWS Security Whitepaper covers AWS' general security controls and service-specific security.

Shared Responsibility Environment
Moving IT infrastructure to AWS services creates a model of shared responsibility between the customer and AWS. This shared model can help relieve the customer's operational burden, as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. The customer assumes responsibility for, and management of, the guest operating system (including updates and security patches), other associated application software, and the configuration of the AWS-provided security group firewall, among other things. Customers should carefully consider the services they choose, as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations.
It is possible for customers to enhance security and/or meet their more stringent compliance requirements by leveraging technology such as host-based firewalls, host-based intrusion detection/prevention, and encryption and key management. The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment of solutions that meet industry-specific certification requirements.

This customer/AWS shared responsibility model also extends to IT controls. Just as the responsibility to operate the IT environment is shared between AWS and its customers, so is the management, operation and verification of IT controls. AWS can help relieve the customer burden of operating controls by managing those controls associated with the physical infrastructure deployed in the AWS environment that may previously have been managed by the customer. As every customer is deployed differently in AWS, customers can take advantage of shifting management of certain IT controls to AWS, which results in a (new) distributed control environment. Customers can then use the AWS control and compliance documentation available to them (described in the "AWS Certifications and Third-party Attestations" section of this document) to perform their control evaluation and verification procedures as required.

The next section provides an approach to how AWS customers can evaluate and validate their distributed control environment effectively.

Strong Compliance Governance
As always, AWS customers are required to continue to maintain adequate governance over the entire IT control environment, regardless of how IT is deployed.
Leading practices include an understanding of required compliance objectives and requirements (from relevant sources), establishment of a control environment that meets those objectives and requirements, an understanding of the validation required based on the organization's risk tolerance, and verification of the operating effectiveness of the control environment. Deployment in the AWS cloud gives enterprises different options to apply various types of controls and various verification methods.

Strong customer compliance and governance might include the following basic approach:

Review information available from AWS together with other information to understand as much of the entire IT environment as possible, and then document all compliance requirements.
Design and implement control objectives to meet the enterprise compliance requirements.
Identify and document controls owned by outside parties.
Verify that all control objectives are met and all key controls are designed and operating effectively.

Approaching compliance governance in this manner will help companies gain a better understanding of their control environment and will help clearly delineate the verification activities to be performed.

FISMA
AWS enables U.S. government agency customers to achieve and sustain compliance with the Federal Information Security Management Act (FISMA). AWS has been certified and accredited to operate at the FISMA-Low level. AWS has also completed the control implementation and successfully passed the independent security testing and evaluation required to operate at the FISMA-Moderate level. AWS is currently pursuing certification and accreditation from government agencies to operate at the FISMA-Moderate level.
SAS 70 Type II
Amazon Web Services publishes a Statement on Auditing Standards No. 70 (SAS 70) Type II audit report every six months and maintains a favorable opinion from its independent auditors. AWS identifies those controls relating to the operational performance and security of its services. Through the SAS 70 Type II report, an auditor evaluates the design of the stated control objectives and control activities and attests to the effectiveness of their design. The auditors also verify the operation of those controls, attesting that the controls are operating as designed. Provided a customer has signed a non-disclosure agreement with AWS, this report is available to customers who require a SAS 70 to meet their own audit and compliance needs.

The AWS SAS 70 control objectives are provided here. The report itself identifies the control activities that support each of these objectives.

Security Organization: Controls provide reasonable assurance that information security policies have been implemented and communicated throughout the organization.
Amazon User Access: Controls provide reasonable assurance that procedures have been established so that Amazon user accounts are added, modified and deleted in a timely manner and are reviewed on a periodic basis.
Logical Security: Controls provide reasonable assurance that unauthorized internal and external access to data is appropriately restricted and access to customer data is appropriately segregated from other customers.
Secure Data Handling: Controls provide reasonable assurance that data handling between the customer's point of initiation to an AWS storage location is secured and mapped accurately.
Physical Security: Controls provide reasonable assurance that physical access to Amazon's operations building and the data centers is restricted to authorized personnel.
Environmental Safeguards: Controls provide reasonable assurance that procedures exist to minimize the effect of a malfunction or physical disaster to the computer and data center facilities.
Change Management: Controls provide reasonable assurance that changes (including emergency/non-routine and configuration changes) to existing IT resources are logged, authorized, tested, approved and documented.
Data Integrity, Availability and Redundancy: Controls provide reasonable assurance that data integrity is maintained through all phases, including transmission, storage and processing.
Incident Handling: Controls provide reasonable assurance that system incidents are recorded, analyzed, and resolved.

AWS' commitment to SAS 70 is ongoing, and AWS will continue the process of periodic audits. In addition, in 2011 AWS plans to convert the SAS 70 to the new Statement on Standards for Attestation Engagements (SSAE) 16 format (equivalent to the International Standard on Assurance Engagements [ISAE] 3402). The SSAE 16 standard replaces the existing SAS 70 standard, and implementation is currently expected to be required of all SAS 70 publishers in 2011. This new report will be similar to the SAS 70 Type II report, but with additional required disclosures and a modified format.
Control Objective 1: Security Organization: Controls provide reasonable assurance that information security policies have been implemented and communicated throughout the organization.
Control Objective 2: Amazon Employee Lifecycle: Controls provide reasonable assurance that procedures have been established so that Amazon employee user accounts are added, modified and deleted in a timely manner and reviewed on a periodic basis.
Control Objective 3: Logical Security: Controls provide reasonable assurance that unauthorized internal and external access to data is appropriately restricted and access to customer data is appropriately segregated from other customers.
Control Objective 4: Secure Data Handling: Controls provide reasonable assurance that data handling between the customer's point of initiation to an AWS storage location is secured and mapped accurately.
Control Objective 5: Physical Security: Controls provide reasonable assurance that physical access to Amazon's operations building and the data centers is restricted to authorized personnel.
Control Objective 6: Environmental Safeguards: Controls provide reasonable assurance that procedures exist to minimize the effect of a malfunction or physical disaster to the computer and data center facilities.
Control Objective 7: Change Management: Controls provide reasonable assurance that changes (including emergency/non-routine and configuration changes) to existing IT resources are logged, authorized, tested, approved and documented.
Control Objective 8: Data Integrity, Availability and Redundancy: Controls provide reasonable assurance that data integrity is maintained through all phases, including transmission, storage and processing.
Control Objective 9: Incident Handling: Controls provide reasonable assurance that system incidents are recorded, analyzed, and resolved.
ISO 27001
AWS has achieved ISO 27001 certification of our Information Security Management System (ISMS) covering AWS infrastructure, data centers, and services including Amazon EC2, Amazon S3 and Amazon VPC. ISO 27001/27002 is a widely adopted global security standard that sets out requirements and best practices for a systematic approach to managing company and customer information, based on periodic risk assessments appropriate to ever-changing threat scenarios. In order to achieve the certification, a company must show it has a systematic and ongoing approach to managing information security risks that affect the confidentiality, integrity, and availability of company and customer information. This certification reinforces Amazon's commitment to providing significant information regarding our security controls and practices. AWS' ISO 27001 certification includes all AWS data centers in all regions worldwide, and AWS has established a formal program to maintain the certification. AWS provides additional information and frequently asked questions about its ISO 27001 certification on its web site.
Physical Security
Amazon has many years of experience in designing, constructing, and operating large-scale datacenters. This experience has been applied to the AWS platform and infrastructure. AWS datacenters are housed in nondescript facilities. Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access datacenter floors. All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff.

AWS only provides datacenter access and information to employees and contractors who have a legitimate business need for such privileges. When an employee no longer has a business need for these privileges, his or her access is immediately revoked, even if they continue to be an employee of Amazon or Amazon Web Services. All physical access to datacenters by AWS employees is logged and audited routinely.
Amazon Web Services is steadily expanding its global infrastructure to help customers achieve lower latency and higher throughput. As our customers grow their businesses, AWS will continue to provide infrastructure that meets their global requirements.
You can choose to deploy and run your applications in multiple physical locations within the AWS cloud. Amazon Web Services are available in geographic Regions. When you use AWS, you can specify the Region in which your data will be stored, instances run, queues started, and databases instantiated. For most AWS infrastructure services, including Amazon EC2, there are eight Regions: US East (Northern Virginia), US West (Northern California), US West (Oregon), AWS GovCloud (US), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo), and South America (Sao Paulo).

Within each Region are Availability Zones (AZs). Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and to provide inexpensive, low-latency network connectivity to other Availability Zones in the same Region. By launching instances in separate Availability Zones, you can protect your applications from a failure (unlikely as it might be) that affects an entire zone. Regions consist of one or more Availability Zones and are geographically dispersed across separate geographic areas or countries. The Amazon EC2 service level agreement commits to 99.95% availability for each Amazon EC2 Region.
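The benefit of spreading instances over Availability Zones can be sketched with a simple round-robin placement helper. This is an illustrative sketch, not an AWS API, and the zone names in the usage are examples:

```python
def spread_across_zones(count, zones):
    """Place `count` instances round-robin across Availability Zones,
    so that losing any single zone takes out at most
    ceil(count / len(zones)) instances. Illustrative sketch only;
    zone names are examples, and this is not an AWS API."""
    placement = {zone: 0 for zone in zones}
    for i in range(count):
        placement[zones[i % len(zones)]] += 1
    return placement

# Five instances over two zones: a zone failure loses at most three.
layout = spread_across_zones(5, ["us-east-1a", "us-east-1b"])
```

Combined with an Elastic Load Balancer in front, a layout like this is what lets an application keep serving through the loss of an entire zone.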
AWS Identity and Access Management (AWS IAM)
AWS Identity and Access Management (AWS IAM) enables a customer to create multiple users and manage the permissions for each of these users within their AWS Account. A user is an identity (within a customer AWS Account) with unique security credentials that can be used to access AWS services. AWS IAM eliminates the need to share passwords or access keys, and makes it easy to enable or disable a user's access as appropriate.

AWS IAM enables customers to implement security best practices, such as least privilege, by granting unique credentials to every user within their AWS Account and granting permission to access only the AWS services and resources required for the users to perform their jobs. AWS IAM is secure by default; new users have no access to AWS until permissions are explicitly granted.

AWS IAM enables customers to minimize the use of their AWS Account credentials. Instead, all interactions with AWS services and resources should be with AWS IAM user security credentials. More information about AWS Identity and Access Management (AWS IAM) is available on the AWS website: http://aws.amazon.com/iam/
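A least-privilege grant of the kind described above is expressed as a policy document. The sketch below builds one granting read-only access to a single S3 bucket; the helper function and bucket name are illustrative, not part of the IAM API (the JSON structure follows the IAM policy format):

```python
import json

def read_only_bucket_policy(bucket):
    """Build an IAM policy document granting read-only access to one
    S3 bucket and nothing else; in IAM, anything not explicitly
    allowed is denied. The helper and bucket name are illustrative."""
    return {
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::" + bucket,
                "arn:aws:s3:::" + bucket + "/*",
            ],
        }]
    }

# Serialize for attachment to an IAM user or group.
policy_json = json.dumps(read_only_bucket_policy("example-logs"), indent=2)
```

Attaching a policy like this to a user means their credentials can read log objects but cannot touch EC2, billing, or any other bucket.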
\n
\n
Amazon Elastic Compute Cloud (Amazon EC2) Security
Security within Amazon EC2 is provided on multiple levels: the operating system (OS) of the host system, the virtual instance operating system or guest OS, a firewall, and signed API calls. Each of these items builds on the capabilities of the others. The goal is to prevent data contained within Amazon EC2 from being intercepted by unauthorized systems or users, and to make Amazon EC2 instances themselves as secure as possible without sacrificing the flexibility in configuration that customers demand.

Multiple Levels of Security
Host Operating System: Administrators with a business need to access the management plane are required to use multi-factor authentication to gain access to purpose-built administration hosts. These administrative hosts are systems that are specifically designed, built, configured, and hardened to protect the management plane of the cloud. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems are revoked.

Guest Operating System: Virtual instances are completely controlled by the customer. Customers have full root access or administrative control over accounts, services, and applications. AWS does not have any access rights to customer instances and cannot log into the guest OS. AWS recommends a base set of security best practices, including disabling password-only access to hosts and utilizing some form of multi-factor authentication to gain access to instances (or at a minimum certificate-based SSH Version 2 access). Additionally, customers should employ a privilege escalation mechanism with logging on a per-user basis.
For example, if the guest OS is Linux, after hardening their instance, customers should use certificate-based SSHv2 to access the virtual instance, disable remote root login, use command-line logging, and use 'sudo' for privilege escalation. Customers should generate their own key pairs in order to guarantee that they are unique and not shared with other customers or with AWS.

Firewall: Amazon EC2 provides a complete firewall solution; this mandatory inbound firewall is configured in a default deny-all mode, and Amazon EC2 customers must explicitly open the ports needed to allow inbound traffic. The traffic may be restricted by protocol, by service port, and by source IP address (individual IP or Classless Inter-Domain Routing (CIDR) block).
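The default-deny evaluation described above can be sketched as follows. This is an illustrative model of security-group-style matching, not AWS code; the sample rules are invented:

```python
from ipaddress import ip_address, ip_network

def inbound_allowed(port, source_ip, rules):
    """Default-deny inbound check in the spirit of an EC2 security
    group: traffic passes only if some rule explicitly opens its port
    to its source CIDR block. `rules` is a list of (port, cidr) pairs.
    Illustrative model only, not AWS code."""
    return any(
        port == rule_port and ip_address(source_ip) in ip_network(cidr)
        for rule_port, cidr in rules
    )

# Web open to the world, SSH restricted to one example network block.
rules = [(80, "0.0.0.0/0"), (22, "203.0.113.0/24")]
```

Note the asymmetry: there is no "deny" rule type in this model; anything not matched by an allow rule is simply dropped.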
The Hypervisor
Amazon EC2 currently utilizes a highly customized version of the Xen hypervisor, taking advantage of paravirtualization (in the case of Linux guests). Because paravirtualized guests rely on the hypervisor to provide support for operations that normally require privileged access, the guest OS has no elevated access to the CPU. The CPU provides four separate privilege modes, 0-3, called rings. Ring 0 is the most privileged and 3 the least. The host OS executes in Ring 0. However, rather than executing in Ring 0 as most operating systems do, the guest OS runs in the lesser-privileged Ring 1 and applications in the least-privileged Ring 3. This explicit virtualization of the physical resources leads to a clear separation between guest and hypervisor, resulting in additional security separation between the two.

Instance Isolation
Different instances running on the same physical machine are isolated from each other via the Xen hypervisor. Amazon is active in the Xen community, which provides awareness of the latest developments. In addition, the AWS firewall resides within the hypervisor layer, between the physical network interface and the instance's virtual interface. All packets must pass through this layer; thus an instance's neighbors have no more access to that instance than any other host on the Internet, and can be treated as if they are on separate physical hosts. The physical RAM is separated using similar mechanisms.
Customer instances have no access to raw disk devices, but instead are presented with virtualized disks. The AWS proprietary disk virtualization layer automatically resets every block of storage used by the customer, so that one customer's data are never unintentionally exposed to another. AWS recommends customers further protect their data using appropriate means. One common solution is to run an encrypted file system on top of the virtualized disk device.
Network Security
The AWS network provides significant protection against traditional network security issues, and the customer can implement further protection. The following are a few examples:

Distributed Denial of Service (DDoS) Attacks
AWS Application Programming Interface (API) endpoints are hosted on large, Internet-scale, world-class infrastructure that benefits from the same engineering expertise that has built Amazon into the world's largest online retailer. Proprietary DDoS mitigation techniques are used. Additionally, AWS' networks are multi-homed across a number of providers to achieve Internet access diversity.

Man in the Middle (MITM) Attacks
All of the AWS APIs are available via SSL-protected endpoints which provide server authentication. Amazon EC2 AMIs automatically generate new SSH host certificates on first boot and log them to the instance's console. Customers can then use the secure APIs to call the console and access the host certificates before logging into the instance for the first time. Customers are encouraged to use SSL for all of their interactions with AWS.

IP Spoofing
Amazon EC2 instances cannot send spoofed network traffic. The AWS-controlled, host-based firewall infrastructure will not permit an instance to send traffic with a source IP or MAC address other than its own.

Port Scanning
Unauthorized port scans by Amazon EC2 customers are a violation of the AWS Acceptable Use Policy. Violations of the AWS Acceptable Use Policy are taken seriously, and every reported violation is investigated. Customers can report suspected abuse via the contacts available on our website at http://aws.amazon.com/contact-us/report-abuse/. When unauthorized port scanning is detected, it is stopped and blocked. Port scans of Amazon EC2 instances are generally ineffective because, by default, all inbound ports on Amazon EC2 instances are closed and are only opened by the customer.
The customer's strict management of security groups can further mitigate the threat of port scans. If the customer configures a security group to allow traffic from any source to a specific port, then that port will be vulnerable to a port scan. In these cases, the customer must use appropriate security measures to protect listening services that may be essential to their application from being discovered by an unauthorized port scan. For example, a web server must clearly have port 80 (HTTP) open to the world, and the administrator of this server is responsible for the security of the HTTP server software, such as Apache. Customers may request permission to conduct vulnerability scans as required to meet their specific compliance requirements. These scans must be limited to the customer's own instances and must not violate the AWS Acceptable Use Policy. Advance approval for these types of scans can be initiated by submitting a request via the website at: https://aws-portal.amazon.com/gp/aws/html-forms-controller/contactus/AWSSecurityPenTestRequest

Packet Sniffing by Other Tenants
It is not possible for a virtual instance running in promiscuous mode to receive or "sniff" traffic that is intended for a different virtual instance. While customers can place their interfaces into promiscuous mode, the hypervisor will not deliver any traffic that is not addressed to them. Even two virtual instances owned by the same customer and located on the same physical host cannot listen to each other's traffic. Attacks such as ARP cache poisoning do not work within Amazon EC2 and Amazon VPC.
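The "closed by default, opened only by the customer" model can be reasoned about as a simple allow-list evaluation: a packet is admitted only if some ingress rule matches it. A minimal sketch, using a hypothetical rule layout (protocol, port range, source CIDR) that simplifies real security-group rules:

```python
import ipaddress

def port_open(rules, src_ip, port, protocol="tcp"):
    """Return True if any ingress rule admits traffic from src_ip to port.

    Each rule is (protocol, from_port, to_port, cidr). Security groups
    are allow-only: if no rule matches, the packet is dropped.
    """
    src = ipaddress.ip_address(src_ip)
    for proto, lo, hi, cidr in rules:
        if proto == protocol and lo <= port <= hi \
                and src in ipaddress.ip_network(cidr):
            return True
    return False

rules = [
    ("tcp", 80, 80, "0.0.0.0/0"),        # web server open to the world
    ("tcp", 22, 22, "203.0.113.0/24"),   # SSH only from a corporate range
]
print(port_open(rules, "198.51.100.7", 80))   # True  -- scannable
print(port_open(rules, "198.51.100.7", 22))   # False -- invisible to scans
```

This mirrors the text's point: only the ports the customer deliberately opens (here, 80 to the world) are discoverable by a scan from an arbitrary source.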
While Amazon EC2 does provide ample protection against one customer inadvertently or maliciously attempting to view another's data, as a standard practice customers should encrypt sensitive traffic.

Configuration Management
Emergency, non-routine, and other configuration changes to existing AWS infrastructure are authorized, logged, tested, approved, and documented in accordance with industry norms for similar systems. Updates to AWS's infrastructure are done to minimize any impact on the customer and their use of the services. AWS will communicate with customers, either via email or through the AWS Service Health Dashboard (http://status.aws.amazon.com/), when service use is likely to be adversely affected.

Software
AWS applies a systematic approach to managing change so that changes to customer-impacting services are thoroughly reviewed, tested, approved, and well communicated.

AWS's change management process is designed to avoid unintended service disruptions and to maintain the integrity of service to the customer. Changes deployed into production environments are:
Reviewed: peer reviews of the technical aspects of a change
Tested: to confirm the change being applied will behave as expected and not adversely impact performance
Approved: to provide appropriate oversight and understanding of business impact

Changes are typically pushed into production in a phased deployment starting with the lowest-impact areas. Deployments are tested on a single system and closely monitored so impact can be evaluated. Service owners have a number of configurable metrics that measure the health of the service's upstream dependencies. These metrics are closely monitored, with thresholds and alarming in place. Rollback procedures are documented in the Change Management (CM) ticket.

When possible, changes are scheduled during regular change windows.
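The threshold-and-alarm monitoring described above amounts to comparing post-deployment metrics against configured limits at each rollout stage. A minimal sketch; the metric names and thresholds here are illustrative, not actual AWS values:

```python
def rollout_healthy(metrics, thresholds):
    """Compare post-deployment metrics against configured alarm thresholds.

    Returns (ok, breaches). A phased deployment would halt and invoke the
    documented rollback procedure when any threshold is breached on the
    first (lowest-impact) stage, before the change reaches wider fleets.
    """
    breaches = {name: value
                for name, value in metrics.items()
                if value > thresholds.get(name, float("inf"))}
    return (not breaches), breaches

thresholds = {"error_rate": 0.01, "p99_latency_ms": 500}

# Healthy canary: proceed to the next deployment stage.
ok, _ = rollout_healthy({"error_rate": 0.002, "p99_latency_ms": 310}, thresholds)
print(ok)  # True

# Breached threshold: stop the rollout and evaluate a rollback.
ok, breaches = rollout_healthy({"error_rate": 0.05, "p99_latency_ms": 310}, thresholds)
print(ok, breaches)  # False {'error_rate': 0.05}
```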
Emergency changes to production systems that require deviations from standard change management procedures are associated with an incident and are logged and approved as appropriate.

Periodically, AWS performs self-audits of changes to key services to monitor quality, maintain high standards, and facilitate continuous improvement of the change management process. Any exceptions are analyzed to determine the root cause, and appropriate actions are taken to bring the change into compliance or roll it back if necessary. Actions are then taken to address and remediate the process or people issue.

Infrastructure
Amazon's Corporate Applications team develops and manages software to automate IT processes for UNIX/Linux hosts in the areas of third-party software delivery, internally developed software, and configuration management. The Infrastructure team maintains and operates a UNIX/Linux configuration management framework to address hardware scalability, availability, auditing, and security management. By centrally managing hosts through automated processes that manage change, the company is able to achieve its goals of high availability, repeatability, scalability, robust security, and disaster recovery. Systems and network engineers monitor the status of these automated tools daily, reviewing reports to respond to hosts that fail to obtain or update their configuration and software.

Internally developed configuration management software is installed when new hardware is provisioned. These tools are run on all UNIX hosts to validate that they are configured, and that software is installed, in compliance with standards determined by the role assigned to the host. This configuration management software also helps to regularly update packages that are already installed on the host. Only approved personnel enabled through the permissions service may log in to the central configuration management servers.
Point of slide: to explain VPC's high-level architecture, walking through the discrete elements of a VPC and a specific data flow that exemplifies 1) data-in-transit security and 2) continued AAA control by the enterprise.

AWS ("orange cloud"): What everybody knows of AWS today.

Customer's Network ("blue square"): The customer's internal IT infrastructure.

VPC ("blue square on top of orange cloud"): Secure container for other object types; includes a Border Router for external connectivity. The isolated resources that customers have in the AWS cloud.

Cloud Router ("orange router surrounded by clouds"): Lives within a VPC; anchors an AZ; presents stateful filtering.

Cloud Subnet ("blue squares" inside VPC): Connects instances to a Cloud Router.

VPN Connection: The Customer Gateway and VPN Gateway anchor the two sides of the VPN Connection and enable secure connectivity, implemented using industry-standard mechanisms. Please note that we currently require that whatever customer gateway device is used supports BGP. We actually terminate two (2) tunnels, one per VPN Gateway, on our side. Besides providing high availability, this lets us service one device while maintaining service. We connect to one of the customer's BGP-supporting devices (preferably running JunOS or IOS).
Multiple Levels of Security
Virtual Private Cloud: Each VPC is a distinct, isolated network within the cloud. At creation time, an IP address range for each VPC is selected by the customer. Network traffic within each VPC is isolated from all other VPCs; therefore, multiple VPCs may use overlapping (even identical) IP address ranges without loss of this isolation. By default, VPCs have no external connectivity. Customers may create and attach an Internet Gateway, VPN Gateway, or both to establish external connectivity, subject to the controls below.

API: Calls to create and delete VPCs; change routing, security group, and network ACL parameters; and perform other functions are all signed by the customer's Amazon Secret Access Key, which could be either the AWS Account's Secret Access Key or the Secret Access Key of a user created with AWS IAM. Without access to the customer's Secret Access Key, Amazon VPC API calls cannot be made on the customer's behalf. In addition, API calls can be encrypted with SSL to maintain confidentiality. Amazon recommends always using SSL-protected API endpoints. AWS IAM also enables a customer to further control what APIs a newly created user has permission to call.

Subnets: Customers create one or more subnets within each VPC; each instance launched in the VPC is connected to one subnet. Traditional Layer 2 security attacks, including MAC spoofing and ARP spoofing, are blocked.

Route Tables and Routes: Each subnet in a VPC is associated with a routing table, and all network traffic leaving a subnet is processed by the routing table to determine its destination.

VPN Gateway: A VPN Gateway enables private connectivity between the VPC and another network. Network traffic within each VPN Gateway is isolated from network traffic within all other VPN Gateways. Customers may establish VPN Connections to the VPN Gateway from gateway devices at the customer premises.
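The request-signing step described under API can be illustrated with Signature Version 2, the query-API signing scheme of this era. The sketch below uses only the standard library; a real request would also carry AWSAccessKeyId and Timestamp parameters, which are omitted here for brevity:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_query_request(secret_key, method, host, path, params):
    """Compute an AWS Signature Version 2 for a query-style API call.

    SigV2 signs the canonical string
    'METHOD\\nhost\\npath\\nsorted-query-string' with HMAC-SHA256 keyed
    by the Secret Access Key; without that key, no valid call can be made
    on the customer's behalf.
    """
    params = dict(params,
                  SignatureMethod="HmacSHA256", SignatureVersion="2")
    # Parameters sorted by name, values RFC 3986 percent-encoded.
    query = "&".join(
        "%s=%s" % (quote(k, safe="-_.~"), quote(str(v), safe="-_.~"))
        for k, v in sorted(params.items()))
    to_sign = "\n".join([method, host.lower(), path, query])
    digest = hmac.new(secret_key.encode(), to_sign.encode(),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

sig = sign_query_request("EXAMPLE-SECRET-KEY", "GET",
                         "ec2.amazonaws.com", "/",
                         {"Action": "CreateVpc",
                          "CidrBlock": "10.0.0.0/16"})
print(sig)  # a base64-encoded 32-byte HMAC
```

The signature is appended to the request as a `Signature` parameter; the service recomputes it from the same canonical string and rejects any mismatch.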
Each connection is secured by a pre-shared key in conjunction with the IP address of the customer gateway device.

Internet Gateway: An Internet Gateway may be attached to a VPC to enable direct connectivity to Amazon S3, other AWS services, and the Internet. Each instance desiring this access must either have an Elastic IP associated with it or route traffic through a NAT instance. Additionally, network routes are configured (see above) to direct traffic to the Internet Gateway. AWS provides reference NAT AMIs that can be extended by customers to perform network logging, deep packet inspection, application-layer filtering, or other security controls.

This access can only be modified through the invocation of Amazon VPC APIs. AWS supports the ability to grant granular access to different administrative functions on the instances and the Internet Gateway, thereby enabling the customer to implement additional security through separation of duties.

Amazon EC2 Instances: Amazon EC2 instances running within an Amazon VPC retain all of the benefits described above related to the Host Operating System, Guest Operating System, Hypervisor, Instance Isolation, and protection against packet sniffing.

Tenancy: VPC allows customers to launch Amazon EC2 instances that are physically isolated at the host hardware level; they will run on single-tenant hardware. A VPC can be created with 'dedicated' tenancy, in which case all instances launched into the VPC will utilize this feature. Alternatively, a VPC may be created with 'default' tenancy, but customers may specify 'dedicated' tenancy for particular instances launched into it.

Firewall (Security Groups): Like Amazon EC2, Amazon VPC supports a complete firewall solution enabling filtering on both ingress and egress traffic from an instance. The default group enables inbound communication from other members of the same group and outbound communication to any destination.
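The route configuration mentioned above ("network routes are configured to direct traffic to the Internet Gateway") amounts to a longest-prefix match over the subnet's route table. A minimal sketch with an illustrative two-route table (a local route for the VPC's own range, and a default route to a hypothetical gateway target):

```python
import ipaddress

def next_hop(route_table, dest_ip):
    """Pick the target for dest_ip via longest-prefix match, the way a
    VPC route table is consulted when traffic leaves a subnet."""
    dest = ipaddress.ip_address(dest_ip)
    best = None
    for cidr, target in route_table:
        net = ipaddress.ip_network(cidr)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    return best[1] if best else None

routes = [
    ("10.0.0.0/16", "local"),      # traffic staying within the VPC
    ("0.0.0.0/0", "igw-example"),  # everything else via the Internet Gateway
]
print(next_hop(routes, "10.0.1.5"))      # 'local'
print(next_hop(routes, "198.51.100.7"))  # 'igw-example'
```

Because the VPC's own /16 is more specific than the 0.0.0.0/0 default, internal traffic never leaves the VPC even though a default route to the Internet Gateway exists.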
Traffic can be restricted by any IP protocol, by service port, as well as by source/destination IP address (an individual IP or a Classless Inter-Domain Routing (CIDR) block).

The firewall isn't controlled through the Guest OS; rather, it can be modified only through the invocation of Amazon VPC APIs. AWS supports the ability to grant granular access to different administrative functions on the instances and the firewall, thereby enabling the customer to implement additional security through separation of duties. The level of security afforded by the firewall is a function of which ports are opened by the customer, and for what duration and purpose. Well-informed traffic management and security design are still required on a per-instance basis. AWS further encourages customers to apply additional per-instance filters with host-based firewalls such as iptables or the Windows Firewall.

Network Access Control Lists: To add a further layer of security within Amazon VPC, customers can configure network ACLs. These are stateless traffic filters that apply to all traffic inbound to or outbound from a subnet within a VPC. These ACLs can contain ordered rules to allow or deny traffic based upon IP protocol, service port, and source/destination IP address.

Like security groups, network ACLs are managed through Amazon VPC APIs, adding an additional layer of protection and enabling additional security through separation of duties.
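The ordered, stateless evaluation of network ACL rules can be sketched as first-match-wins with an implicit final deny. The rule layout below mirrors, but simplifies, real VPC network ACL entries; being stateless, return traffic would need its own rules, unlike with security groups:

```python
import ipaddress

def acl_decision(acl, protocol, src_ip, port):
    """Evaluate an ordered, stateless network ACL.

    Rules are (rule_number, protocol, cidr, from_port, to_port, action),
    checked in rule-number order; the first matching rule decides, and
    an implicit deny applies if nothing matches.
    """
    src = ipaddress.ip_address(src_ip)
    for num, proto, cidr, lo, hi, action in sorted(acl):
        if proto in (protocol, "all") and lo <= port <= hi \
                and src in ipaddress.ip_network(cidr):
            return action
    return "deny"

acl = [
    (100, "tcp", "0.0.0.0/0", 80, 80, "allow"),
    (200, "tcp", "198.51.100.0/24", 0, 65535, "deny"),
]
print(acl_decision(acl, "tcp", "198.51.100.7", 80))  # 'allow' (rule 100 wins)
print(acl_decision(acl, "tcp", "198.51.100.7", 22))  # 'deny'  (rule 200)
print(acl_decision(acl, "udp", "203.0.113.9", 53))   # 'deny'  (implicit)
```

Note how rule ordering matters: the port-80 allow at rule 100 takes precedence over the broader deny at rule 200 for the same source range.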
Amazon SimpleDB Security
Amazon SimpleDB APIs provide domain-level controls that only permit authenticated access by the domain creator; therefore, the customer maintains full control over who has access to their data.

Amazon SimpleDB access can be granted based on an AWS Account ID. Once authenticated, an AWS Account has full access to all operations. Access to each individual domain is controlled by an independent Access Control List that maps authenticated users to the domains they own. A user created with AWS IAM only has access to the operations and domains for which they have been granted permission via policy.

Amazon SimpleDB is accessible via SSL-encrypted endpoints. The encrypted endpoints are accessible from both the Internet and from within Amazon EC2. Data stored within Amazon SimpleDB is not encrypted by AWS; however, the customer can encrypt data before it is uploaded to Amazon SimpleDB. Such encrypted attributes would be retrievable only as part of a Get operation; they could not be used as part of a query filtering condition. Encrypting before sending data to Amazon SimpleDB helps protect against access to sensitive customer data by anyone, including AWS.

Amazon SimpleDB Data Management
When a domain is deleted from Amazon SimpleDB, removal of the domain mapping starts immediately and is generally processed across the distributed system within seconds. Once the mapping is removed, there is no remote access to the deleted domain.

When item and attribute data are deleted within a domain, removal of the mapping within the domain starts immediately and is also generally complete within seconds. Once the mapping is removed, there is no remote access to the deleted data. That storage area is then made available only for write operations, and the data are overwritten by newly stored data.
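The Get-versus-query distinction for client-side-encrypted attributes can be made concrete with a toy in-memory domain. Note that the opaque() helper below is a base64 stand-in for real client-side encryption (e.g. AES via a vetted library); base64 alone is not encryption, and it is used here only to show that SimpleDB stores and returns the attribute as an opaque blob:

```python
import base64

def opaque(plaintext):
    # Stand-in for real client-side encryption; the point is only that
    # the service sees ciphertext, never the plaintext.
    return base64.b64encode(plaintext.encode()).decode()

# A toy in-memory "domain": item name -> {attribute: value}.
domain = {"item1": {"public_tag": "invoice",
                    "ssn": opaque("123-45-6789")}}

def get_attributes(item):
    """Like GetAttributes: returns stored values as-is."""
    return domain[item]

def select(attr, value):
    """Like a Select filter: compares against *stored* values only."""
    return [name for name, attrs in domain.items()
            if attrs.get(attr) == value]

print(get_attributes("item1")["ssn"])  # ciphertext comes back intact
print(select("ssn", "123-45-6789"))    # [] -- plaintext never matches
print(select("public_tag", "invoice")) # ['item1'] -- unencrypted filters work
```

Because the service only ever compares stored (encrypted) values, a query filter on the plaintext can never match, which is exactly the trade-off the text describes: confidentiality from everyone, including AWS, at the cost of server-side filtering on that attribute.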