This document provides an overview of why enterprises choose AWS and best practices for migrating applications to AWS. It discusses AWS design principles like designing for failure and implementing elasticity. It also covers topics like calculating total cost of ownership, customer migration lessons learned, and next steps to optimize applications in AWS.
Enterprise DevOps is different from DevOps in startups and smaller companies. This session explains how AWS and CSC address this: how AWS IaaS-level automation via CloudFormation, UserData, the Console, and APIs, together with PaaS offerings such as OpsWorks and Elastic Beanstalk, is complemented by the CSC Agility Platform. CSC Agility adds application-level compliance and security on top of AWS infrastructure compliance and security, and allows the creation of architecture blueprints for predefined application offerings.
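As a minimal illustration of the IaaS-level automation mentioned above, the sketch below builds a CloudFormation template in Python that boots a single EC2 instance bootstrapped via UserData. The resource name, AMI ID, and bootstrap script are placeholders, not a recommended configuration:

```python
import json

def make_web_server_template(ami_id: str, instance_type: str = "t3.micro") -> str:
    """Return a minimal CloudFormation template (JSON) for one EC2
    instance bootstrapped via UserData. AMI ID and script are placeholders."""
    user_data = "#!/bin/bash\nyum install -y httpd\nsystemctl start httpd\n"
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "WebServer": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "ImageId": ami_id,
                    "InstanceType": instance_type,
                    # CloudFormation's Fn::Base64 intrinsic encodes the
                    # UserData script at deploy time.
                    "UserData": {"Fn::Base64": user_data},
                },
            }
        },
    }
    return json.dumps(template, indent=2)
```

The resulting string could then be passed as `TemplateBody` to boto3's `cloudformation.create_stack(...)`.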
This presentation from the AWS Lab at Cloud Expo Europe 2014 explores the solutions, support options, and software licensing approaches that you can use if you choose to run your enterprise workloads on Amazon Web Services.
Enterprises, mid-market companies, and SMBs all have one thing in common: their business applications are critical. Companies of all sizes are running SAP, Oracle, Exchange, and many other business applications in the cloud to simplify infrastructure management, deploy more quickly, and lower costs. However, migrating your business applications from your on-site or co-located datacenters to the AWS Cloud takes planning and a phased approach.
This webinar looks at migration frameworks and patterns from an architectural perspective, and at the tools and techniques available to you to run any type of business application, from small departmental solutions to mission-critical applications, in a secure and robust environment.
Reasons to attend:
Learn about planning your cloud migration strategy.
Select the workloads that can most easily be moved to the cloud.
Evaluate the conditions and metrics required for a successful and cost-effective migration.
Simplify Your Database Migration to AWS | AWS Public Sector Summit 2016 – Amazon Web Services
Migrating a database from one platform to another has long been a pain point for many organizations. Oftentimes, it involves weeks of careful planning and a migration strategy to minimize impact to the business. Many organizations are locked into a database platform even when better options are available, because they don't want to take up the migration challenge. AWS Database Migration Service helps with live migration of databases across homogeneous or heterogeneous database platforms. The service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or Microsoft SQL Server to MySQL. The AWS Schema Conversion Tool is a desktop application that makes heterogeneous database migrations easy by automatically converting the source database schema to a format compatible with the target database. The tool helps with conversion of a database schema from an Oracle or Microsoft SQL Server database to an Amazon RDS MySQL DB instance or an Amazon Aurora DB cluster. Join us in this session to explore how these capabilities can simplify your database migration challenge.
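A DMS replication task is driven by a table-mappings JSON document that selects which schemas and tables to migrate. As a small sketch (schema and table names here are illustrative), the helper below builds that document in Python:

```python
import json

def table_mappings(schema: str, tables=("%",)) -> str:
    """Build the table-mappings JSON an AWS DMS replication task expects.
    Selects the given tables in `schema`; '%' is the DMS wildcard."""
    rules = []
    for i, table in enumerate(tables, start=1):
        rules.append({
            "rule-type": "selection",
            "rule-id": str(i),
            "rule-name": str(i),
            "object-locator": {"schema-name": schema, "table-name": table},
            "rule-action": "include",
        })
    return json.dumps({"rules": rules})
```

The resulting string is what you would pass as `TableMappings` to boto3's `dms.create_replication_task(...)`, alongside the source and target endpoint ARNs and a `MigrationType` such as `"full-load-and-cdc"` for live migration.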
Cloud Migration, Application Modernization, and Security – Tom Laszewski
As AWS continues to expand, enterprise customers are looking to our partner ecosystem to assist in migrating their workloads to the cloud. This session describes the challenges, lessons learned, and best practices for large-scale application migrations. We will use real examples from our consulting partners and AWS Professional Services to illustrate how to move workloads to the cloud while modernizing the associated applications to take advantage of AWS's unique benefits. We will also dive into how to use an array of AWS services and features to improve a customer's security posture as they are migrating and once they are up and running in the cloud.
This session gives an insider's view of some of the innovations that help make the AWS Cloud unique. It shows examples of AWS networking innovations, from the inter-regional network backbone, through custom routers and the networking protocol stack, all the way down to individual servers. It also shows examples from AWS server hardware, storage, and power distribution, and then, up the stack, high-scale streaming data processing.
This session provides an overview of how organizations can migrate workloads to the AWS Cloud at scale. We will walk through available migration frameworks and best practices, with common use-case examples. After migrating your initial workloads, learn how to migrate at scale, hear about real-life experiences from the AWS Professional Services team, understand what to avoid when migrating applications at scale, and learn about the tools and partner services that can assist you when migrating applications to AWS.
(BIZ305) Case Study: Migrating Oracle E-Business Suite to AWS | AWS re:Invent... – Amazon Web Services
With the maturity and breadth of cloud solutions, more enterprises are moving mission-critical workloads to the cloud. American Commercial Lines (ACL) recently migrated their Oracle ERP to AWS. ERP solutions such as Oracle E-Business Suite require specific knowledge in mapping AWS infrastructure to the specific configurations and needs of running these workloads. In this session, Apps Associates and ACL walk through the considerations for running Oracle E-Business Suite on AWS, including deployment architectures, concurrent processing, load balanced forms and web services, varying database transactional workloads, and performance requirements, as well as security and monitoring aspects. ACL shares their experiences and business drivers in making this transition to AWS.
Pragmatic Approach to Workload Migrations – London Summit Enterprise Track Replay – Amazon Web Services
Migrating a portfolio of legacy applications to AWS cloud infrastructure requires careful planning, as each phase needs to balance risk tolerance against the speed of migration. This session presents a set of proven best practices, tools, and techniques that increase migration delivery speed and success rates. We will also cover the complete lifecycle of an application portfolio migration, with a special focus on how to organise and conduct the assessment and identify elements that can benefit from cloud architecture.
Join AWS and BlueMetal, a technology architecture firm and a member of the Amazon Partner Network, for this live webinar where we will discuss modernizing your applications when moving your data center to the AWS Cloud. Microsoft has announced that July 30, 2015, is the end of support for Windows Server 2003. This will affect customers since there will be no patches or security updates, putting applications and business at risk. Attend this webinar to learn about considerations and best practices for creating a composed solution when moving off of Windows Server 2003 and migrating your data center and applications to the cloud.
Using Amazon RDS to Power Enterprise Applications (DAT202) | AWS re:Invent 2013 – Amazon Web Services
Amazon RDS makes it cheap and easy to deploy, manage, and scale relational databases using a familiar MySQL, Oracle, or Microsoft SQL Server database engine. Amazon RDS can be an excellent choice for running many large, off-the-shelf enterprise applications from companies like JD Edwards, Oracle, PeopleSoft, and Siebel. In this session, you learn how to best leverage Amazon RDS for use with enterprise applications and learn about best practices and data migration strategies.
AWS re:Invent 2016: High Performance Computing on AWS (CMP207) – Amazon Web Services
High performance computing in the cloud is enabling high-scale compute- and graphics-intensive workloads across industries, ranging from aerospace, automotive, and manufacturing to life sciences, financial services, and energy. AWS provides application developers and end users with unprecedented computational power for massively parallel applications, in areas such as large-scale fluid and materials simulations, 3D content rendering, financial computing, and deep learning. This session provides an overview of HPC capabilities on AWS, describes the newest generations of accelerated computing instances (including P2), and highlights customer and partner use cases across industries.
Attendees learn about best practices for running HPC workflows in the cloud, including graphical pre- and post-processing, workflow automation, and optimization. Attendees also learn about new and emerging HPC use cases: in particular, deep learning training and inference, large-scale simulations, and high performance data analytics.
AWS re:Invent 2016: From Dial-Up to DevOps – AOL's Migration to the Cloud (DE... – Amazon Web Services
AOL originally provided dial-up service to millions of people. Today, AOL powers advertising and media experiences for the web’s top destinations. How do you maintain observability and reliability to both business and technical teams for high-traffic services in a dynamic infrastructure? Join us as we discuss AOL’s DevOps journey. We will dive into its engineering culture, automation, and monitoring best practices that have allowed AOL to successfully reinvent their infrastructure, as they moved from globally distributed data centers to the AWS Cloud. Session sponsored by Datadog.
AWS Competency Partner
Did you know that 52% of today's organizations are planning to leverage a hybrid-cloud approach? With eight years' experience running Windows workloads in the cloud, AWS provides the perfect platform to modernize your Microsoft applications.
This webinar will demonstrate how AWS delivers customization, high availability, and scalability for most of your Microsoft applications in a hybrid-cloud model, and how to reduce costs. We will also explain how these workloads are licensed and monitored, and share best-practice reference architectures.
Key Outcomes:
• How to get the most out of your Microsoft applications
• How to start migrating applications to AWS
• Hybrid cloud deployments using AWS
• Licensing considerations
This session is suitable for:
• Technical Decision Makers
• Senior IT Managers and Specialists
• DBAs
• Solution Architects and Engineers
Accenture Oracle on AWS Jumpstart Program – Tom Laszewski
The Oracle Technical Jumpstart program is a development environment and support team “in a box.” This solution allows project teams to remove infrastructure from the critical path, enabling the team to begin conference room pilot and baseline configuration activities.
Deploy, scale, and manage your Microsoft workloads on AWS. We start our session by discussing why customers want to deploy Microsoft Windows applications on AWS as a cloud platform. We talk about reference architectures and best practices for implementing Microsoft products and technologies including Active Directory, Remote Desktop Gateway, Exchange, SharePoint, and Lync in the AWS cloud. We conclude with best practices for managing and monitoring Microsoft technologies in the AWS cloud.
Speaker: Andy Reay, Solutions Architect, Amazon Web Services
Ask the Architect: RightScale & AWS Dive Deep into Hybrid IT – RightScale
With the increased use of cloud services, organizations are faced with finding the most efficient way to use existing IT infrastructure alongside cloud-based compute, storage and networking resources. This has resulted in the rise of hybrid IT whereby companies leverage both on-premises and cloud resources to drive increased agility, stability and accessibility.
Enterprise Cloud Architecture Best Practices – David Veksler
Introduction to cloud service models: IaaS, PaaS, and SaaS.
Best practices for enterprise cloud service architecture, with a focus on Western companies operating in the China market.
Comparison of Azure and AWS from a cost and feature perspective.
In this full-day workshop, you will learn strategies for planning and migrating existing workloads to the AWS Cloud, including migration planning fundamentals, AWS Application Discovery Service, AWS Migration Hub, migration tools such as CloudEndure, data transfer, and, last but not least, AWS Database Migration Service. There are five modules in all, each a deep dive on one of these topics. The first half provides an overview of migration planning principles and best practices; the second half focuses on migration design, tools, and implementation, with hands-on labs to reinforce the concepts.
Design, Deploy, and Optimize SQL Server on AWS – June 2017 AWS Online Tech Talks – Amazon Web Services
Learning Objectives:
- Learn how to build applications on AWS from a strong foundation on SQL Server
- Learn when to deploy SQL Server on Amazon EC2 versus Amazon RDS
- Learn how to take advantage of the latest features in SQL Server 2016 when running on AWS
Enterprises are quickly moving database workloads like SQL Server to the cloud, but with so many options, the best approach isn't always obvious. You can exercise full control of your SQL Server workloads by running them on Amazon EC2 instances, or leverage Amazon RDS for a fully managed database experience. This session goes deep on best practices and considerations for running SQL Server on AWS. We cover best practices for deploying SQL Server, how to choose between Amazon EC2 and Amazon RDS, and ways to optimize the performance of your SQL Server deployment for different application types. We also review in detail how to provision and monitor your SQL Server databases, and how to manage scalability, performance, availability, security, and backup and recovery, in both Amazon RDS and Amazon EC2.
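For the managed-RDS path, the sketch below composes a parameter dict of the kind you could pass to boto3's `rds.create_db_instance(...)` for SQL Server. The instance class, storage size, and credentials are illustrative placeholders, not a sizing or security recommendation:

```python
def rds_sqlserver_params(identifier: str, multi_az: bool = True) -> dict:
    """Compose illustrative kwargs for boto3's rds.create_db_instance
    for a SQL Server Standard Edition instance. All values are placeholders."""
    return {
        "DBInstanceIdentifier": identifier,
        "Engine": "sqlserver-se",           # Standard Edition engine
        "LicenseModel": "license-included", # AWS-provided SQL Server license
        "DBInstanceClass": "db.m5.xlarge",  # placeholder size
        "AllocatedStorage": 200,            # GiB, placeholder
        "MasterUsername": "admin",
        "MasterUserPassword": "CHANGE_ME",  # fetch from a secret store in practice
        "MultiAZ": multi_az,                # synchronous standby for high availability
        "StorageType": "gp2",
    }
```

Choosing `MultiAZ=True` trades some cost for the automated failover that the session's availability discussion covers; on the EC2 path you would instead build this yourself, for example with SQL Server Always On availability groups.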
Uses, Considerations, and Recommendations for AWS – Scalar Decisions
From an information session on Amazon Web Services (AWS), looking at uses, considerations, and recommendations for leveraging AWS in your organization.
Topics covered:
- AWS Services Overview
- Some ideal use cases: Disaster Recovery, Backup and Archive, Test/Dev
- Data residency and security considerations
Migrating Enterprise Applications to AWS: Best Practices & Techniques (ENT303... – Amazon Web Services
This session discusses strategies, tools, and techniques for migrating enterprise software systems to AWS. We consider applications like Oracle E-Business Suite, SAP, PeopleSoft, JD Edwards, and Siebel. These applications are complex by themselves; they are frequently customized; they have many touch points on other systems in the enterprise; and they often have large associated databases. Nevertheless, running enterprise applications in the cloud affords powerful benefits. We identify success factors and best practices.
AWS Summit Stockholm 2014 – B2 – Migrating Enterprise Applications to AWS – Amazon Web Services
This session discusses strategies, tools, and techniques for migrating enterprise software systems to AWS. These applications are complex by themselves; they are frequently customized; they have many touch points on other systems in the enterprise; and they often have large associated databases. Nevertheless, running enterprise applications in the cloud affords powerful benefits. We identify success factors and best practices.
ARC205 Building Web-scale Applications Architectures with AWS – AWS re:Inven... – Amazon Web Services
As both new and established businesses work to increase their customer numbers, revenue, and relevance to the market, they are working to deliver software that scales larger than ever before. The challenge of being the "victim of your own success", whether from viral marketing, social media, or simply dramatic uptake of a new service, is something that troubles the minds of CIOs and engineers alike. This session focuses on ways to avoid creating technical debt during initial development, and shares well-established practices and approaches to building applications that can tolerate, and revel in, the challenges of scaling to web scale. Working through a range of architectural dimensions, patterns, and pithy examples, attendees will leave this session with useful ideas on how to design new applications, as well as the retrofitting that can be done to existing applications to enable them to scale on AWS.
Building a Just-in-Time Application Stack for Analysts – Avere Systems
Slide presentation from Webinar on February 17, 2016.
People in analytical roles are demanding more and more compute and storage to get their jobs done. Instead of building out infrastructure for a few employees or a department, systems engineers and IT managers can find value in creating a compute stack in the cloud to meet the fluctuating demand of their clients.
In this 45-minute webinar, you’ll learn:
- How to identify the right analytical workloads
- How to create a scalable compute environment using the cloud for analysts in under 10 minutes
- How to best manage costs associated with the cloud compute stack
- How to create dedicated client stacks with their own scratch space as well as general access to reference data
Health systems departments, research & development departments, and business analyst groups all face silos of these challenging, compute-intensive use cases. By learning how to quickly build this flexible workflow that can be scaled up and down (or off) instantly, you can support business objectives while efficiently managing costs.
An overview of running Oracle Database, Fusion Middleware and Oracle Applications on AWS. Covers licensing, pricing, support, security, networking, Amazon VPC, Amazon EC2, Amazon EBS, use cases, and customer successes.
Enterprises, mid-market companies, and SMBs all have one thing in common: their business applications are critical. Companies of all sizes are running SAP, Oracle, Exchange, and many other business applications in the cloud to simplify infrastructure management, deploy more quickly, and lower costs. AWS offers a reliable and flexible cloud infrastructure platform that enables customers to run any type of Windows- or Linux-based business application, from small departmental solutions to worldwide mission-critical production ERP systems, in a secure, scalable, and robust environment. Come along to this session to learn how large-scale systems like SAP, Oracle, Microsoft, and others are being used by enterprise customers of all shapes and sizes. In this session you will discover some of the challenges and approaches that will make you successful in deploying and operating these systems on AWS. This is a must-attend session for enterprise customers looking at moving material workloads into the cloud.
Speaker: Nam Je Cho, Solutions Architect, Amazon Web Services
Amazon Web Services (AWS) can make hosting scalable, highly available websites and web applications easier and less expensive for Enterprise Education customers. Join us for an informative webinar on the tools AWS provides to elastically scale your architecture and avoid underutilized resources, while reducing complexity with templates, partners, and tools that do much of the heavy lifting of creating and running a website for you.
In this session, we will discuss strategies, tools, and techniques for migrating and running off-the-shelf Oracle packages on AWS. We'll consider applications like Oracle eBusiness Suite, PeopleSoft, JD Edwards, Endeca, and Siebel. These applications are complex by themselves, they are frequently customized, they have many touch points on other systems in the enterprise, and they often have large associated databases. Therefore, they may not seem good candidates for the cloud at first look. Nevertheless, running enterprise applications in the cloud affords powerful benefits, and we'll identify the factors and best practices that most influence success.
In this session, you will learn best practices for identifying, assessing, selecting, and migrating your first workload to AWS. The next logical step is a large-scale "all in" migration approach to enable enterprises to become truly DevOps and Cloud First organizations. We will present the building blocks and programs for such large migrations, including the AWS Migration Readiness Assessment and the Migration Acceleration Program.
Speaker: Ekta Parashar
Enterprise Solution Architect, Amazon India
Understand how to architect an infrastructure to handle going from zero to millions of users. From leveraging highly scalable AWS services to making smart decisions on building out your application, you'll learn a number of best practices for scaling your infrastructure in the cloud.
Nuts and bolts of running a popular site in the AWS cloud - David Veksler
I will share how we develop and host a popular publishing platform in the cloud with a limited budget and technology team.
We'll cover architecture, including a variety of services at Amazon Web Services such as elastic load balancing, S3, Elastic Beanstalk, and RDS in the context of a real site.
We'll cover how we control costs with Spot and burstable instances and scale up with distributed caching.
Finally we'll discuss continuous deployment strategies for Windows and Linux-based cloud applications in the context of a distributed team using an agile process.
ENT305 Migrating Your Databases to AWS: Deep Dive on Amazon Relational Databa... - Amazon Web Services
Amazon RDS allows you to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity, automates time-consuming database administration tasks, and provides you with six familiar database engines to choose from: Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL and MariaDB. In this session, we will take a close look at the capabilities of Amazon RDS and explain how it works. We’ll also discuss the AWS Database Migration Service and AWS Schema Conversion Tool, which help you migrate databases and data warehouses with minimal downtime from on-premises and cloud environments to Amazon RDS and other Amazon services. Gain your freedom from expensive, proprietary databases while providing your applications with the fast performance, scalability, high availability, and compatibility they need.
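To make the "few clicks" concrete: launching a Multi-AZ database on Amazon RDS reduces to a single API call. The sketch below assembles the request with boto3; the identifier, instance class, and credentials are hypothetical placeholders, and the actual create call is left commented out because it requires AWS credentials.

```python
# Minimal sketch: parameters for launching a Multi-AZ PostgreSQL instance
# on Amazon RDS via boto3. Identifier, class, and password are placeholders.
def rds_request(db_id="demo-db", password="change-me"):
    """Build the parameter set for rds.create_db_instance()."""
    return {
        "DBInstanceIdentifier": db_id,
        "Engine": "postgres",             # one of the six supported engines
        "DBInstanceClass": "db.m5.large",
        "AllocatedStorage": 100,          # GiB; resizable later
        "MultiAZ": True,                  # synchronous standby in a second AZ
        "MasterUsername": "admin",
        "MasterUserPassword": password,
        "BackupRetentionPeriod": 7,       # automated backups, in days
    }

params = rds_request()
# import boto3
# boto3.client("rds").create_db_instance(**params)  # requires AWS credentials
print(params["Engine"], params["MultiAZ"])
```

Changing `MultiAZ` or `DBInstanceClass` later is an in-place modify operation, which is where the "resizable capacity" in the description comes from.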
Learn how AWS customers save money, time and effort by using AWS's backup and archive services. Organizations of all sizes rely on AWS services to durably safeguard their data off-premises at a surprisingly low cost. This session will illustrate backup and archive architectures that AWS customers are benefitting from today.
The AWS Private Equity organization utilizes the Recognized Cloud Transformation Leader (RCTL) program and the Transformation Advisor role to enable portfolio company executives to successfully execute a cloud or digital transformation: accelerating migrations and modernization, removing transformation impediments, and mitigating risk.
AWS PE Transformation Advisor program overview
Assigns a dedicated PE Transformation Advisor to the executive cloud sponsor (CxO or similar) for an 8-to-12-week engagement that can be further extended as needed. The PE Transformation Advisor aids the executive in value creation by removing transformation blockers, securing buy-in from the executive team, influencing the board, adapting business processes in support of cloud, and preparing the broader organization for the digital transformation.
During the engagement, the PE Transformation Advisor provides prescriptive guidance to define the transformation tenets and guiding principles, assist developing the business case, produce the cloud journey map, establish the Cloud Center of Excellence (CCoE), document KPIs, identify partners, and define the Cloud Operating Model (COM).
Organizing for faster innovation - People, process, culture, and technology - Tom Laszewski
Organizing for faster innovation through people, process, culture, and technology transformation. Best practices, lessons learned, and a prescriptive approach to evolving and disrupting a company's people, process, culture, and technology during a digital and cloud transformation.
Creating an Operating Model to enable a high frequency organization - Tom Laszewski
Establishing an appropriate cloud operating model is critical to your organization's successful adoption of cloud: delivering greater business agility, increasing the cloud migration return on investment, and delivering a more secure, performant, reliable, and cost-effective cloud computing environment. The impact of the cloud will be felt across your entire organization, including processes and people, not just information technology. It will significantly affect, and be affected by, your organizational culture and IT delivery structures. This session provides prescriptive guidance on the best approaches to evolving an operating model: from projects to products; from manual, process-intensive governance to a 'trust but verify' model; from long development cycles to continuous integration and deployment; and from silos between business and IT to a collaborative organizational structure with self-service processes and continuous improvement. The recommendations in the presentation are based on lessons learned, best practices, and anti-patterns from thousands of customers' cloud transformation journeys.
AWS Cloud Center of Excellence Quick Start Prescriptive Guidance - Tom Laszewski
This presentation is a practical playbook for defining, establishing, and implementing a Cloud Enablement Engine (CEE). It collates and summarizes the lessons learned and anti-patterns gathered from the CEE journeys successfully navigated at Amazon and other large enterprise companies. A lot has been written about the need to establish a CEE, the benefits of moving to a productization mindset, and the business value of tribes, guilds, and two-pizza teams. However, larger organizations are still struggling with a CEE 30-60-90 day plan, and the essential components of the CEE during its first six months in existence.
The prescriptive guidance in this presentation provides pragmatic and tactical advice for establishing a Cloud Enablement Engine (CEE) – also referred to as a Cloud Center of Excellence (CCoE) or Cloud Enablement Team. This presentation serves as a step-by-step guide for the initial setup activities, and the top ten best practices that have been extrapolated from working across a large number of customers. What not to do is as important as what to do. Therefore, the top ten anti-patterns are discussed.
A key focus of the CEE is transforming the IT organization from an on-premise operating model to a Cloud Operating Model (COM). The transformation to COM and the charter of a CEE are highly correlated and interconnected. During the nascent stage of the CEE, the focus of the CEE will be on the infrastructure components of a COM. This includes the operations, security & control, platform architecture & governance, and infrastructure provisioning & configuration management functions. AWS understands that enterprise (on-premises) operating models are based on ITIL. Therefore, the cloud transformation from an on-premises operating model to a COM will include mapping ITIL to a cloud, agile, and DevOps based capabilities and processes. Fortunately, ITIL 4.0 embraces DevOps, cloud, and agile.
AWS Technical Due Diligence Workshop Session Two - Tom Laszewski
Second session in the one-day Technical Due Diligence workshop. Overview of the AWS offerings, mechanisms, tools, and services that can be leveraged during a TDD. Review of the AWS playbooks and runbooks.
AWS Technical Due Diligence Workshop Session One - Tom Laszewski
First session in the one-day Technical Due Diligence workshop. Understand the AWS approach to TDD along with the common use cases and hypotheses. Covers the AWS TDD case studies and outputs from TDDs.
Once a technical due diligence has been completed, the real work happens after the acquisition has closed. This post-transaction value creation presentation details the roadmap, programs, offerings, and resources to develop a 100-day plan and beyond.
Private Equity Technical Due Diligence Value Creation - Tom Laszewski
Utilizing AWS to achieve value creation during technical due diligence. Covers the AWS tools, mechanisms, offerings, solutions, and services that are included in the AWS TDD playbooks and runbooks. The presentation covers the most common TDD use cases and hypotheses, along with case studies.
Cloud Enablement Engine Role Definition and Mapping - Tom Laszewski
Question: How do traditional roles map to cloud roles? As an operations person, what will I do when the cloud is deployed?
Answer: The following slides provide an example mapping of traditional roles to cloud roles. The content is somewhat generic and was initially intended for a larger global enterprise, but the roles, skills, and concepts may be helpful for discussion.
Private Equity Value Creation: Carve-Outs, Divestitures, and Mergers - Tom Laszewski
How to utilize AWS 'cloud in a box' offerings (AWS Quick Starts and solutions) to rapidly deploy and configure an AWS foundational solution. The session covers landing zones, security, databases, identity and access management, remote workers, and cloud operations.
AWS Technical Due Diligence Executive Overview - Tom Laszewski
Overview of the TDD process, roadmap, tools, offerings, playbooks, use cases, and case studies. Covers all the resources, assets, tools, and offerings AWS utilizes for a successful acquisition, merger, divestiture, or carve-out technical due diligence.
AWS Technical Due Diligence to Post-Transaction Execution for M&A - Tom Laszewski
Overview of the TDD and post-transaction process, roadmap, tools, offerings, playbooks, use cases, and case studies. Covers all the resources, assets, tools, and offerings AWS utilizes for successful technical due diligence and post-transaction execution across acquisitions, mergers, divestitures, and carve-outs (M&A activity).
Hybrid Cloud on AWS: Foundational Layers and AWS Services - Tom Laszewski
Networking, security, data integration, fleet management, and compute are foundational to instantiating and operating a hybrid or multi-cloud environment. This presentation describes a functional view built on these five foundational layers and outlines the AWS services that align to each.
Operating and Managing Hybrid Cloud on AWS - Tom Laszewski
Operating in a hybrid architecture is a necessary component of an enterprise cloud adoption journey. Security, provisioning, change management, and monitoring are all key aspects of managing any hybrid cloud environment. This session will cover the AWS Services, open source tools, and AWS partners that can provide enterprises with a secure, well-governed, performant, reliable, and well-operated hybrid cloud environment. Infrastructure and application continuous delivery and improvement solutions, along with best practices to automate hybrid cloud provisioning and operations activities will be covered.
AWS Cloud Adoption Framework and Workshops - Tom Laszewski
The presentation covers the AWS Cloud Adoption Framework (CAF). AWS CAF helps organizations accelerate their cloud adoption journey. The framework includes six perspectives: business, people, governance, security, operations, and platform. These six perspectives are used during CAF Envision, Alignment, and Cloud Capability Assessment workshops to explore the art of the possible, identify and mitigate organizational and technology impediments, and score an organization's cloud capabilities.
DevOps, CI/CD, cost management, and security on AWS - Tom Laszewski
DevOps pipelines – how should one choose between third-party tools (such as Terraform versus CloudFormation, or Jenkins versus CodeBuild and CodePipeline) and going all in on the AWS stack? What are companies doing, and what are the best practices?
Cost management – strategies, and the role that intermediaries such as Cloudreach can play in rolling out efficient cost strategies.
Security – industry-specific capabilities. The shared responsibility model is a good framework; depending on the industry, you sometimes need more in terms of access to AWS resources.
Hybrid Cloud on AWS: Provisioning, Operations, Management, and Monitoring - Tom Laszewski
How do I provision infrastructure and applications, manage systems, and operate and monitor a hybrid cloud on AWS? This is one of the first questions I get from enterprise customers as they start their cloud adoption journey. This presentation covers the tools, technologies, and AWS services that can be used to manage, operate, and monitor a hybrid cloud, including CI/CD in a hybrid cloud environment.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Connector Corner: Automate dynamic content and events by pushing a button - DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality - Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, what agile testing is, and finally what testing in DevOps is. We also ran a lovely workshop with the participants, exploring different ways to think about quality and testing in different parts of the DevOps infinity loop.
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... - Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes real work: vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, machine learning over just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains only materialize when the symbolic structures have an actual semantics. I give an operational definition of semantics as "predictable inference".
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Generating a custom Ruby SDK for your web service or Rails API using Smithy - g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Migrating enterprise workloads to AWS
1.
2.
3. Agenda
• Why Enterprises Choose AWS
• Enterprise Applications Architectures
• Seven design principles for AWS
• Best Practices
• Migration Approach
• Calculating Total Cost of Ownership (TCO)
• Customer Project: Migration lessons learned
• Next steps
4.
5. • No up-front capital expense
• Low cost
• Pay only for what you use
• Self-service infrastructure
• Easily scale up and down
• Improve agility & time-to-market
• Deploy
6.
7. Technology stack: on-premise solution → AWS equivalent
Network: VPN, MPLS → AWS VPC, VPN, AWS Direct Connect
Security: Firewalls, NACLs, routing tables, disk encryption, SSL, IDS, IPS → AWS Security Groups, AWS CloudHSM, NACLs, routing tables, disk encryption, SSL, IDS, IPS
Storage: DAS, SAN, NAS, SSD → AWS EBS, AWS S3, AWS EC2 instance storage (SSD), GlusterFS
Compute: Hardware, virtualization → AWS EC2
Content delivery: CDN solutions → AWS CloudFront
Databases: DB2, MS SQL Server, MySQL, Oracle, PostgreSQL, MongoDB, Couchbase → AWS RDS, AWS DynamoDB, DB2, MS SQL Server, MySQL, PostgreSQL, Oracle, MongoDB, Couchbase
Load balancing: Hardware and software load balancers, HA Proxy → AWS Elastic Load Balancer, software load balancers, HA Proxy
Scaling: Hardware and software clustering, Apache ZooKeeper → AWS Auto Scaling, software clustering, Apache ZooKeeper
Domain name services: DNS providers → AWS Route 53
8. Technology stack: on-premise solution → AWS equivalent
Analytics: Hadoop, Cassandra → AWS Elastic MapReduce, Hadoop, Cassandra
Data warehousing: Specialized hardware and software solutions → AWS Redshift
Messaging and workflow: Messaging and workflow software → AWS Simple Queue Service, AWS Simple Notification Service, AWS Simple Workflow Service
Caching: Memcached, SAP HANA → AWS ElastiCache, Memcached, SAP HANA
Archiving: Tape library, off-site tape storage → AWS Glacier
Email: Email software → AWS Simple Email Service
Identity management: LDAP → AWS IAM, LDAP
Deployment: Chef, Puppet → AWS AMIs, AWS CloudFormation, AWS OpsWorks, AWS Elastic Beanstalk, Chef, Puppet
Management and monitoring: CA, BMC, RightScale → AWS CloudWatch, CA, BMC, RightScale
9.
10.
11.
12. 1. Design for failure and nothing fails
2. Loose coupling sets you free
3. Implement elasticity
13. 4. Build security in every layer
5. Don’t fear constraints
6. Think parallel
7. Leverage different storage options
14. Design for failure
– Avoid single points of failure
– Assume everything fails and design backwards
• Goal: Applications should continue to function even if the underlying physical hardware fails or is removed/replaced.
[Diagram: App Server with automatic failover from a primary Database Server to a secondary Database Server]
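The primary/secondary failover in the diagram can also be sketched from the client's side: try the primary endpoint, and on failure fall through to the secondary. The endpoint names and the `connect` stand-in below are hypothetical; a real application would call its database driver instead.

```python
# Sketch of client-side failover matching the slide's diagram: the app
# server tries the primary database endpoint, then the secondary.
# "connect" and the endpoint names are hypothetical stand-ins.
def connect(endpoint, healthy):
    if not healthy.get(endpoint, False):
        raise ConnectionError(f"{endpoint} is down")
    return f"connected:{endpoint}"

def connect_with_failover(endpoints, healthy):
    last_error = None
    for ep in endpoints:                 # primary first, then secondary
        try:
            return connect(ep, healthy)
        except ConnectionError as err:
            last_error = err             # assume failed; try the next one
    raise last_error

# Primary has "failed"; the client transparently lands on the secondary.
status = {"db-primary": False, "db-secondary": True}
result = connect_with_failover(["db-primary", "db-secondary"], status)
print(result)  # connected:db-secondary
```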
15. Loose coupling sets you free
– Use a queue to pass messages between components
[Diagram: Web Servers, App Servers, and Video Processing Servers decoupled by queues: "Decouple tiers with a queue"]
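The decoupling pattern itself can be sketched with Python's standard-library queue (on AWS, Amazon SQS plays this role): the web tier only enqueues work and returns immediately, while the processing tier drains the queue at its own pace. The video-processing naming is illustrative only.

```python
# Queue-based decoupling, as in the slide: producers and consumers never
# call each other directly, only the queue. On AWS, SQS is the queue.
import queue
import threading

jobs = queue.Queue()
results = []

def web_tier():
    for video_id in range(5):            # enqueue work and return at once
        jobs.put(video_id)

def processing_tier():
    while True:
        video_id = jobs.get()
        if video_id is None:             # sentinel: no more work
            break
        results.append(f"transcoded-{video_id}")

worker = threading.Thread(target=processing_tier)
worker.start()
web_tier()
jobs.put(None)                           # signal shutdown
worker.join()
print(results)
```

Because neither tier holds a reference to the other, either side can be scaled, restarted, or replaced independently, which is exactly the freedom the slide title promises.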
16. Implement elasticity
– Elasticity is a fundamental property of the cloud
– Don't assume the health, availability, or fixed location of components
– Use designs that are resilient to reboot and re-launch
– Bootstrap your instances
• When an instance launches, it should ask "Who am I and what is my role?"
– Favor dynamic configuration
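One common way to answer "Who am I and what is my role?" is a user-data script that queries the EC2 instance metadata service and a role tag at boot. The sketch below assembles such a script; 169.254.169.254 is the real metadata endpoint, but the "role" tag name, the install commands, and the instance-profile permissions are assumptions for illustration.

```python
# Assemble a self-configuring user-data script. On boot the instance asks
# the metadata service for its own ID, reads a (hypothetical) "role" tag
# via the AWS CLI, and runs role-specific setup.
def build_user_data(role_tag="role"):
    return "\n".join([
        "#!/bin/bash",
        "iid=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)",
        f"role=$(aws ec2 describe-tags "
        f"--filters Name=resource-id,Values=$iid Name=key,Values={role_tag} "
        "--query 'Tags[0].Value' --output text)",
        'case "$role" in',
        "  web) yum install -y httpd && systemctl start httpd ;;",
        "  app) /opt/bootstrap/install-app.sh ;;",  # hypothetical script
        "esac",
    ])

script = build_user_data()
print(script.splitlines()[0])  # #!/bin/bash
```

Because the role is discovered at launch time rather than baked into the AMI, the same image stays resilient to reboot and re-launch, as the slide recommends.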
17. Build security in every layer
Security is a shared responsibility. You decide how to:
– Encrypt data in transit and at rest
– Enforce principle of least privilege
– Create distinct, restricted Security Groups for each application role
• Restrict external access via these security groups
– Use multi-factor authentication
18. Don’t fear constraints
– Need more RAM?
• Horizontal: Consider distributing load across machines or a shared cache
• Vertical: Stop and restart the instance
– Need better IOPS for database?
• Instead, consider multiple read replicas, sharding, or DB clustering
– Hardware failed or config got corrupted?
• "Rip and replace": simply toss bad instances and instantiate a replacement
19. Think parallel
– Experiment with parallel architectures
[Chart: one instance running for 4 hours vs. four instances running for 1 hour: same cost (i.e., 4 instance hours), but parallel is 4x faster]
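The chart's point, same total compute finishing in a fraction of the wall-clock time, can be demonstrated directly. In this sketch the "hour of work" is simulated with a short sleep and the four "instances" are four worker threads; the arithmetic stands in for a real analytical job.

```python
# Same total work, split across 4 workers: roughly 1/4 the wall-clock
# time of running the chunks serially. sleep() simulates the real work.
from concurrent.futures import ThreadPoolExecutor
import time

def process_chunk(chunk):
    time.sleep(0.1)                      # stand-in for an hour of work
    return sum(x * x for x in chunk)

data = list(range(1000))
chunks = [data[i::4] for i in range(4)]  # split across 4 "instances"

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process_chunk, chunks))
elapsed = time.perf_counter() - start

total = sum(partials)                    # same answer as a serial run
print(total)
```

Serially the four sleeps would take about 0.4 s; in parallel the run finishes in about 0.1 s, the 4x speedup from the slide at the same "instance-hour" cost.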
20.
21. Auto Scaling and Elasticity
"AWS enables Netflix to quickly deploy thousands of servers and terabytes of storage within minutes. Users can stream Netflix shows and movies from anywhere in the world, including on the web, on tablets, or on mobile devices such as iPhones."
From 40 EC2 instances to 5k instances after launching the Facebook application
22. High Availability
Within Amazon EC2, Airbnb is using Elastic Load Balancing, which automatically distributes incoming traffic between multiple Amazon EC2 instances.
HA using Elastic Load Balancer with Apache-WLS, Oracle WebLogic, and Oracle RAC in a multi-AZ configuration
23. Disaster Recovery
Washington Trust Bank and AWS Advanced Consulting Provider IT-Lifeline use the AWS cloud to cut disaster recovery costs, reduce overhead, and improve recovery time in a compliance-driven industry.
DiskAgent protects their healthcare industry customers against physical systems damage by storing backed-up records offsite, in multiple Amazon data centers.
24. VPC
• Use it: VPC is the default for new accounts
• Database in a private subnet
IDS/IPS
• Trend Micro, AlertLogic, Snort
• Host-based
• Conduct penetration tests: prior approval from AWS
VPN
• Redundant connections
• Consider two Customer Gateways
• Dynamic routing (BGP) over static (ASA)
Dedicated, secure connection
• Direct Connect: 1 Gbps or 10 Gbps
NAT
• Set up multi-AZ NAT
Failover
• ELB: Multi-AZ
• Route 53: Geo/region
26. Storage
• Use instance storage for temporary storage or database
EBS
• PIOPS (applies to I/O with a block size of 16KB)
• Stripe using RAID 0, 10, LVM, or ASM
• RAID 10 (can decrease performance)
• Snapshot often: single-volume DB
• 20 TB DB size (potential max): depends upon IOPS and instance type (1 Gbps or 10 Gbps)
File system
• ext3/4, XFS (less mature)
• Try different block sizes: start with 64K
Striping
• Stripe multiple volumes for more IOPS (e.g., 20 x 2,000 IOPS volumes in RAID 0 for 40,000 IOPS)
• ASM (Oracle) with external redundancy
• More difficult to snapshot: use OSB or a database backup solution
Tuning
• Maintain an average queue length of 1 for every 200 provisioned IOPS in a minute
• Pre-warm: $ dd if=/dev/md0 of=/dev/null
• fio, Oracle ORION
• Database compression
27. AMIS
• Use vendor provided
• Build your own AMI
EC2
• EBS-optimized, cluster compute,
and storage-optimized instances
• SSD-backed for high-performance
I/O : hi1.4xlarge has 2 TB of SSD
attached storage
• SSD-backed, high-memory instance for a
cached database using Oracle Smart
Flash Cache : cr1.8xlarge has 240 GB of
SSD plus 244 GB of memory and 88
ECUs
EBS
• Install software binaries on a separate
EBS volume
Bootstrapping
• User data/scripts
• CloudFormation
• Consider Chef, Puppet,
OpsWorks
• Turn off (stop) instances when not in use
https://s3.amazonaws.com/cloudformation-examples/BoostrappingApplicationsWithAWSCloudFormation.pdf
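As a minimal illustration of bootstrapping with user data (the script contents below are illustrative, not from the deck), the EC2 API expects user data to be base64-encoded when passed at launch:

```python
import base64

# EC2 user data is handed to the instance at launch and, on Amazon Linux,
# executed by cloud-init when it starts with a shebang. The RunInstances
# API expects it base64-encoded. The packages here are illustrative.

user_data = """#!/bin/bash
yum update -y
yum install -y httpd
service httpd start
"""

encoded = base64.b64encode(user_data.encode("utf-8")).decode("ascii")

# Round-trip check: the instance decodes exactly what we sent.
assert base64.b64decode(encoded).decode("utf-8") == user_data
```

Note the size limit mentioned in the speaker notes: user data is capped at 16 KB, so anything larger should be pulled from S3 by a small bootstrap script, or managed via CloudFormation, Chef, Puppet, or OpsWorks.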
28. Scaling
• Vertical scaling with EC2 : stop the instance and change the instance type
• Horizontal scaling for web and application servers : Auto Scaling
• Horizontal scaling for the database with read replicas and multi-AZ
• This will need to be configured using Oracle Active Data Guard, Oracle
GoldenGate, or third-party technology
• Amazon CloudWatch : detailed monitoring, custom metrics
• Amazon Route 53 : latency-based routing to route traffic to the region closest
to the user; requires replicated, sharded, or geo-dispersed databases
HA
• Elastic IPs and Elastic Network Interfaces (ENIs)
• Active-passive multi-AZ using Oracle Data Guard or other replication
solutions
• Active-Active multi-AZ using Oracle GoldenGate or other replication solutions
• Amazon Route 53 : Now supports health checks for multi-region HA
• ELB : Web and Application Server for multi-AZ HA. Health checks (HTML file)
to see if Oracle DB is up and running. Associate ENI / Elastic IP to new
Oracle DB.
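The CloudWatch-driven horizontal scaling described above boils down to a threshold decision; a minimal sketch (the thresholds are illustrative, not AWS defaults):

```python
# A minimal sketch of the scale-out/scale-in decision Auto Scaling makes
# from a CloudWatch metric such as average CPU across the fleet.
# The 70%/30% thresholds are illustrative policy values.

def scaling_action(avg_cpu_percent: float, high: float = 70.0, low: float = 30.0) -> int:
    """Return the fleet adjustment for an average-CPU alarm."""
    if avg_cpu_percent > high:
        return +1   # scale out: add an instance
    if avg_cpu_percent < low:
        return -1   # scale in: remove an instance
    return 0        # within band: no change

assert scaling_action(85) == +1
assert scaling_action(50) == 0
assert scaling_action(10) == -1
```

In practice this logic lives in an Auto Scaling policy triggered by a CloudWatch alarm, with cooldown periods to avoid thrashing.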
31. Questions to ask
Existing Applications → "No-brainer to move" Apps → Planned Phased Migration
• Is it a technology fit?
• Is there a pressing business need the migration would address?
• Is there an immediate or potentially big business impact the migration may have?
Examples
• Dev/Test applications
• Self-contained Web Applications
• Social Media Product Marketing Campaigns
• Customer Training Sites
• Video Portals (Transcoding and Hosting)
• Pre-sales Demo Portal
• Software Downloads
• Trial Applications
32. Proof of
concept will
answer tons
of questions
quickly
• Get your feet wet with Amazon
Web Services
– Learning AWS
– Build reference architecture
– Be aware of the security features
• Build a Prototype/Pilot
– Build support in your organization
– Validate the technology
– Test legacy software in the cloud
– Perform benchmarks and set
expectations
33. Plan
• Select apps
• Test platform
• Plan migration
Deploy
• Migrate data
• Migrate components
• Cutover
Optimize
• Embrace AWS services
• Re-factor architecture
34. Data Velocity Required vs. Data Size* (hours to days)
Transfer options, roughly by data size:
• GBs (one-time upload with constant delta updates) : transfer to S3 over the Internet
• TBs : UDP transfer software (e.g., Aspera, Tsunami, …), Attunity CloudBeam,
AWS Storage Gateway, Riverbed, or AWS Import/Export
* relative to internet bandwidth and latency
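A back-of-the-envelope transfer-time estimate helps pick a row in this chart (the 80% link-utilization factor is an assumption):

```python
# Estimate Internet transfer time, to decide between online transfer
# and AWS Import/Export. Assumes a sustained fraction of link bandwidth.

def transfer_hours(data_gb: float, bandwidth_mbps: float, utilization: float = 0.8) -> float:
    megabits = data_gb * 8 * 1024              # data size in megabits
    effective_mbps = bandwidth_mbps * utilization
    return megabits / effective_mbps / 3600    # seconds → hours

# 100 GB over a 100 Mbps link at 80% utilization: a few hours.
print(round(transfer_hours(100, 100), 1))        # → 2.8
# 10 TB over the same link: roughly 12 days → consider AWS Import/Export.
print(round(transfer_hours(10_240, 100) / 24, 1))  # → 12.1
```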
35. A phased migration: Forklift → Embrace Scalability → Optimize (diagram axes: Effort vs. Operational Burden)
Forklift
• May be the only option for some apps
• Run AWS like a virtual co-lo (low effort)
• Does not optimize for on-demand (over-provisioned)
Embrace AWS
• Minor modifications to improve cloud usage
• Automating servers can lower operational burden
• Leveraging more scalable storage
Optimize for AWS
• Re-design with AWS in mind (high effort)
• Embrace scalable services (reduce admin)
• Closer to fully utilized resources at all times
36. Forklift steps:
Match resources and build AMIs:
• Think about application needs, not server specs
• Build out custom AMIs for application roles
[Diagram: ELB fronting application roles, each mapped to a custom AMI on a
right-sized instance type (e.g., AMI-1 @ C1.Medium, AMI-2 @ M2.XLarge,
AMI-4 @ M1.Large, AMI-5 @ M2.2XLarge, AMI-6 @ M2.XLarge)]
Convert appliances:
• Map appliances to AWS services or virtual appliance
AMIs
Deploy supporting components:
• NAS replacements
• DNS
• Domain controllers
Secure the application
components:
• Use layered security groups to
replicate firewalls
37. [Diagram: ELB in front of a Web Tier auto-scaling group (web servers) and an
App Tier auto-scaling group (app servers), backed by a master database, network
filesystem, DNS, a config management server, and a domain controller]
Steps to Embrace AWS:
Rethink storage:
• Leverage S3 for scalable
storage
• Edge cache with CloudFront
• Consider RDS for HA RDBMS
Parallelize processing:
• Bootstrap AMIs for auto-discovery
• Pass in bootstrapping
parameters
• Leverage configuration
management tools for
automated build out
Scale out and in on demand:
• Use CloudWatch and Auto Scaling to auto-provision the
fleet
38. A Phased Migration to AWS - Optimize
[Diagram: Web Tier and App Tier auto-scaling groups decoupled via SQS, with EMR
for processing, Route 53 for DNS, plus a config management server, network
filesystem, and domain controller]
Steps to Optimize for AWS:
Re-think storage:
• Break up datasets across storage solutions based on best fit and scalability
Parallelize processing:
• Spread load across multiple resources
• Decouple components for parallel processing
• Use Spot where possible to reduce costs
Embrace scalable on-demand services:
• Scale out systems with minimal effort
• Route 53
• SES, SQS, SNS
• …
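Decoupling components with SQS follows a producer/consumer shape; here is a self-contained sketch using Python's `queue.Queue` as a stand-in for SQS (the handler and job shape are illustrative):

```python
from queue import Queue

# Decoupling web and worker tiers: the web tier enqueues work and returns
# immediately; workers drain the queue at their own pace. queue.Queue
# stands in for SQS here so the sketch runs without AWS credentials.

jobs = Queue()

def web_handler(order_id: int) -> str:
    jobs.put({"order": order_id})   # fire-and-forget, like sqs.send_message
    return "accepted"

def worker_drain() -> list:
    processed = []
    while not jobs.empty():
        processed.append(jobs.get()["order"])   # like receive + delete_message
    return processed

web_handler(1)
web_handler(2)
assert worker_drain() == [1, 2]
```

Because the tiers only share the queue, each side can scale (or fail) independently, which is the point of the "decouple components for parallel processing" step above.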
39.
40. #1 Start with a use case or an application – compare apples to apples, capacity
utilization, networking, availability, peak to average, DR costs, power etc.
#2 Take all the fixed costs into consideration
(Don’t forget administration, maintenance and redundancy costs)
#3 Use Updated Pricing (compute, storage and bandwidth)
Price cuts, Tiered Pricing and Volume Discounts
#4 Use variable capacity & reserved instances where they fit the business needs
#5 Intangible Costs – Take a closer look at what is built in with AWS –
security, elasticity, innovation, flexibility
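Rule #4's on-demand vs. reserved comparison can be sketched as a simple three-year cost model (all prices below are placeholders, not real AWS rates):

```python
# Three-year total cost for one instance: reserved pricing trades an
# upfront payment for a lower hourly rate, so it wins at high utilization.
# The upfront and hourly figures here are illustrative placeholders.

HOURS_PER_YEAR = 8766   # accounts for leap years

def three_year_cost(upfront: float, hourly: float, utilization: float = 1.0) -> float:
    return upfront + hourly * HOURS_PER_YEAR * 3 * utilization

on_demand = three_year_cost(upfront=0.0,    hourly=0.10)
reserved  = three_year_cost(upfront=1000.0, hourly=0.03)

assert reserved < on_demand   # at full utilization, the reserved instance wins
```

For spiky workloads, rerun the comparison at the actual utilization fraction; below some break-even point, on-demand (variable capacity) is cheaper, which is exactly the "where they fit the business needs" caveat in rule #4.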
41. DOs
DON’Ts
3 or 5 Year Amortization
Use 3-Year Heavy RIs or Fixed RIs
Use Volume RI Discounts
Ratios (VM:Physical, Servers:Racks, People:Servers)
Mention Tiered Pricing
(Less expensive at every Tier : network IO, storage)
Cost Benefits of Automation (Auto
scaling, APIs, Cloud Formation, OpsWorks, Trusted
Advisor, Optimization)
BONUS
43. Time from ordering to procurement
DOs
DON’Ts
(Releasing early = Increased Revenue)
Cost of "capacity on shelf" (top of step)
Incremental cost of adding an on-premises
server when physical space is maxed out
Real cost of resource shortfalls (bottom of step)
Cost of disappointed or lost customers when
unable to scale fast enough
BONUS
44.
45. • Trusted Advisor: Draws upon best practices learned from AWS’
aggregated operational history of serving hundreds of thousands of
AWS customers. The AWS Trusted Advisor inspects your AWS
environment and makes recommendations when opportunities exist to
save money, improve system performance, or close security gaps.
• Apptio: Leader in technology business management (TBM), a new
category and discipline backed by global IT leaders that helps you
understand the cost, quality, and value of the services you provide.
• CloudHealth: Delivers business insight for your cloud ecosystem.
Designed for management and executive teams to optimize AWS
performance and costs.
48. [Diagram: production architecture. A DNS provider (Route 53, DNSMadeEasy)
routes Internet traffic to a pair of Apache + HAProxy instances, which front an
auto-scaling group of JBoss nodes (1..n). The primary Oracle 11g database in one
Availability Zone ships redo logs to an active standby Oracle 11g database in a
second Availability Zone. Windows downloader servers pull data from a RETS
system over a 3rd-party protocol into PREPROC Oracle 11g databases in each zone.
S3 buckets hold the application code and daily database backups; EBS snapshots
provide volume backups.]
49. • Choose great partners
• Understand the cloud capabilities trajectory (rapid pace of
innovation)
• Have a strong methodology
• Implement rich and detailed monitoring
• Plan for, and perform, as many launch rehearsals as possible
• EBS provisioned IOPS works as promised
• AWS continues to rapidly improve services (4K IOPS now
available) and reduce costs
• Multi-AZ implementation
• Rehearsed DB restorations
50. • The cloud-based system operates as expected in terms
of performance and cost
• Cloud costs as per our projection (with the use of
reserved instances)
• Project delivered on budget
• Operational staff requirements reduced
• Incidentally, physical infrastructure failed on 07/10/13 –
would have resulted in a total service outage
• Lower overall incident rate
• Application and storage performance highly consistent
• Infrastructure now a selling point for the business
51. Here are some additional resources:
• Get started with a free trial
– http://aws.amazon.com/free
• White papers
– http://aws.amazon.com/whitepapers/
• Reference Architectures
– http://aws.amazon.com/architecture/
• Enterprise on AWS
– http://aws.amazon.com/enterprise-it/
• Executive level Overview : Extending Your Infrastructure to the AWS Cloud (4 minutes)
– http://www.youtube.com/watch?v=CsGqu5L_PFI
• Simple Monthly Pricing Calculator
– http://calculator.s3.amazonaws.com/calc5.html
• TCO Calculator for Web Applications
– http://aws.amazon.com/tco-calculator/
tomlasz@amazon.com
Editor's Notes
Module Objectives: By the end of this training you will be able to do the following: identify the Oracle and AWS alliance timeline; describe how to identify opportunities that can be solved by AWS products and services and what other customers have done before; verify some common best practices using Oracle and AWS products and services; describe the support and licensing policies and other online resources.
Now that you know some of the main problems our customers are solving on AWS, we’d like to talk a bit about why they choose AWS cloud
Cloud computing is a better way to run your business. The cloud helps companies of all sizes become more agile. Instead of running your applications yourself, you can run them on the cloud, where IT infrastructure is offered as a service, like a utility. With the cloud, your company saves money: there are no up-front capital expenses as you don't have to buy hardware for your projects. The massive scale and fast pace of innovation of the cloud drive the costs down for you. In the cloud, you pay only for what you use, just like electricity. The cloud can also help your company save time and improve agility: it's faster to get started, and you can build new environments in minutes as you don't need to wait for new servers to arrive. The elastic nature of the cloud makes it easy to scale up and down as needed. At the end of the day you have more resources left for innovation, which allows you to focus on projects that can really impact your business, like building and deploying more applications. "With the high growth nature of our business, we were looking for a cloud solution to enable us to scale fast. Think twice before buying your next server. Cloud computing is the way forward." - Sami Lababidi, CTO, Playfish
Amazon Web Services provides highly scalable computing infrastructure that enables organizations around the world to requisition compute power, storage, and other on-demand services in the cloud. These services are available on demand so a customer doesn’t need to think about controlling them, maintaining them or even where they are located. Our approach has always been to be a customer focused company. We constantly look to develop services in line with the needs of our customers to make sure they get the flexibility and usability out of the service that they need to be successful.
Without getting into the industry debate about public vs. private cloud it’s clear that most cloud benefits cannot be realized with on-premise virtualization technologies. In the on-premise virtualization model, you often have to buy expensive hardware and software which virtually eliminates the cost benefits of cloud computing. Although on-premise virtualization allows you to quickly provision new servers, your ability to scale up is limited to your physical infrastructure. You still need to buy physical servers to grow. If you want to scale down you won’t see significant cost-savings as you already paid for the hardware. These limitations of the on-premise virtualization model impact your ability to innovate fast and free up money to invest in new projects.NAS is file based, SAN is block based.Short for Multiprotocol Label Switching, an IETF initiative that integrates Layer 2 information about network links (bandwidth, latency, utilization) into Layer 3 (IP) within a particular autonomous system--or ISP--in order to simplify and improve IP-packet exchange.MPLS gives network operators a great deal of flexibility to divert and route traffic around link failures, congestion, and bottlenecks.
Many architecture diagrams have all the latest and greatest services in them along with a fully scalable, available, loosely coupled, fault tolerant, and multi-tier design. In some cases, customers are moving a very basic implementation with 5 to 20 users. This is the case for the architecture shown above. It is an Oracle PeopleSoft implementation with minimal availability and DR requirements. It is a light weight and low cost solution for hosting PeopleSoft on AWS. The things that stand out about the architecture are: 1. No load balancing as there are only 5 concurrent online users. 2. No long term archiving as there are no regularity compliance needs. 3. No auto scaling for application tier as the application server can be recovered manually using the Amazon EC2 instance snapshots. 4. No automatic HA/multi-AZ for database tier as RDS backups can be used to recover the Oracle database. 5. No session recover as there are limited online transactions and the users can resubmit a failed session.PeopleSoft is hosted on an Amazon EC2 Instance. This is an Amazon Elastic Block Storage (EBS) based Amazon EC2 large Instance with 7.5 GB of memory and 4 Amazon EC2 Compute Units. The database is hosted on an Amazon RDS Oracle Instance. This is an Amazon EBS based Amazon RDS large Instance with 7.5 GB of memory and 4 Amazon EC2 Compute Units. Amazon RDS is backed up automatically. The frequency of the backups can be set automatically. A backup snapshot can be take at anytime but I/O will be suspended for a few minutes unless multi-AZ is set for Amazon RDS. Amazon EBS Snapshots will be used for Application Server high availability and potentially disaster recovery. The snapshots can be located in the same region in a different AZ or snapshot to another region for additional protection. AWS spot instances, spare Amazon EC2 instances that you bid on, can be used when there are extreme large batch files to process and load into the database.
On the other end of the spectrum from the minimal PeopleSoft configuration is highly available and scalable Oracle E-Business Suite implementation. These implementations can be complex and expensive. There are typically dense peak periods and wild swings in traffic patterns result in low utilization rates of expensive hardware. Amazon Web Services provides the reliable, scalable, secure, and high-performance infrastructure required for Oracle E-Business Suite while enabling an elastic, scale out and scale down infrastructure to match IT costs in real time as customer traffic fluctuate.The database server is a High-Memory Quadruple Extra Large Instance with 68.4 GB of memory and 8 virtual cores,26 EC2 Compute Units. The application server instances are also high memory as a minimum of 6 GB of memory is recommended for Oracle E-Business Suite. We will use the High-CPU extra large instances which have 7 GB of memory and 8 virtual cores. The HTTP Servers can be High-CPU Medium instances with 1.7 GB of memory and 2 virtual cores. The user's DNS requests are served by Amazon Route 53, a highly available Domain Name System (DNS) service. Network traffic is routed to infrastructure running in Amazon Web Services. The HTTP requests are first handled by the Elastic Load Balancing, which automatically distributes incoming application traffic across multiple Amazon EC2 instances across AZs. It enables even greater fault tolerance in your applications, seamlessly providing the amount of load balancing capacity needed in response to incoming application traffic. The Oracle Web, application and database servers are deployed on Amazon EC2 instances. This will be a custom AMIusing Oracle Enterprise Linux 5.3 and Oracle E-Business Suite 12.1.3. Amazon Spot Instances or Auto Scaling can be used to support batch processing.Web and application servers are deployed in an Auto Scaling group. Auto Scaling automatically adjusts your capacity according to conditions you define. 
This ensures that the number of Amazon EC2 instances increases seamlessly during demand spikes. Oracle database backups and the batch flat files for integration with the corporate data center are stored on Amazon S3. The storage volumes for the application servers will be standard Amazon EBS volumes. The Oracle database storage volumes will be Amazon EBS PIOPS volumes, which provide up to 1,000 IOPS per volume; these will be striped using Oracle ASM. Spot instances can be used to handle large batch loads.
6. IDS : An intrusion detection system (IDS) is a device or software application that monitors network or system activities for malicious activities or policy violations and produces reports to a management station. Some systems may attempt to stop an intrusion attempt but this is neither required nor expected of a monitoring system.7. IPS : Intrusion prevention systems (IPS), also known as intrusion detection and prevention systems (IDPS), are network security appliances that monitor network and/or system activities for malicious activity. The main functions of intrusion prevention systems are to identify malicious activity, log information about this activity, attempt to block/stop it, and report it. Intrusion prevention systems are considered extensions of intrusion detection systems because they both monitor network traffic and/or system activities for malicious activity.A host-based intrusion detection system (HIDS) is an intrusion detection system that monitors and analyzes the internals of a computing system, and in some cases the network packets on its network interfaces (just like an NIDS). A host-based IDS monitors all or parts of the dynamic behavior and the state of a computer system. HIDS was first designed for the mainframe. HIDS uses sensors (agents) located on each host. These host-based agents, which are sometimes referred to as sensors (or agents), would typically be installed on a machine that is deemed to be susceptible to possible attacks. The term “host” refers to an individual computer/virtual host. This means that separate sensor would be needed for every machine/virtual host. Sensors/agents work by collecting data about events taking place on the system being monitored. This data is recorded by operating system in audit trails. Therefore, HIDS is very log intensive.Network-based intrusion detection systems offer a different approach. NIDS collects information from the network itself rather than from each separate host. 
They operate essentially based on a “wiretapping concept" (network taps). Information is collected from the network traffic stream, as data travels on the network. The intrusion detection system checks for attacks or irregular behavior by inspecting the contents and header information of all the packets moving across the network. The network sensors come equipped with “attack signatures” that are rules on what will constitute an attack, and most network-based systems allow advanced users to define their own signatures. this method is also known as packet sniffing, and allows the sensor to identify hostile traffic.I still don't believe that we are injecting a 0/0 route, but I haven't personally tried setting up a no-BGP tunnel to an ASA, I will try and find one to test and reach out to the VPC team to ask. On the HIPS/HIDS question, the typical FUD is around additional resources being used by the HIPS agent, aka Amazon wants you to run HIPS so you need to run more instances (and pay more $) because the IPS agent will use a bunch of resources. In fact the HIPS solution we recommend, Trend Micro Deep Security, is really lightweight because it only loads the signatures that are required for that instance based on the software and OS that is running plus it has the advantage of being able to stop attacks as well as reducing false positives since the signature set is automatically tuned for that particular instance. This is a huge benefit in my opinion because typical NIDS create a crapton of noise and thus typically no one ever looks the output, resulting in a lower security posture in many cases. Also if they really want NIDS the Alert Logic Threat Manager product is also fairly lightweight, though it does impact network performance, and since few instances are really ever 100% network bound the additional bandwidth has a negligible impact. CISCO ASA and SonicWall dedicated device for AWS VPC. 
Configure VPN on AWS side it generates an ACL that tunnel is requesting needs to be 0.0.0.0/0 on both device then all traffic on that device will only go to AWS. BGP is available this is not an issue. Only an issue when using ASA (specific routes).Migrate R5 Demo ApplicationWhat is required to be Active/Active : How to use shopping cart session data (DynamoDB), AZ to AZ using ELB, Auto Scaling, Route 53. Database only running in one AZ. How do they manage?· How should specific application design be modified to utilize AWS such as shared data, shopping carts and content delivery (S3) · Requires Application architect resource to provide direction to the THG development team to modify application code to be Active/Active
Physical SecurityAmazon has many years of experience in designing, constructing, and operating large-scale datacenters. This experience has been applied to the AWS platform and infrastructure. AWS datacenters are housed in nondescript facilities. Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access datacenter floors. All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff. AWS only provides datacenter access and information to employees and contractors who have a legitimate business need for such privileges. When an employee no longer has a business need for these privileges, his or her access is immediately revoked, even if they continue to be an employee of Amazon or Amazon Web Services. 
All physical access to datacenters by AWS employees is logged and audited routinely.Network SecurityDistributed Denial of Service (DDoS)Standard mitigation techniques in effectMan in the Middle (MITM)All API endpoints protected by SSLIP SpoofingProhibited at host OS levelNetwork SecurityUnauthorized Port ScanningViolation of TOSDetected, stopped and blockedPacket SniffingPromiscuous mode ineffectiveProtection at hypervisor levelStorage Device DecommissioningUses techniques from:DoD 5220.22-M (“National Industrial Security Program Operating Manual “)NIST 800-88 (“Guidelines for Media Sanitization”)Ultimately, all devices are:degaussedphysically destroyedVirtual Memory and Local DiskProprietary disk management prevents one instance from reading disk contents of anotherDisk is wiped upon creationDisks can be encrypted by customerAWS Third-Party Attestations, Reports, and CertificationsAWS EnvironmentService Organization Controls (SOC) ReportsSOC 1 Type II (SSAE 16/ISAE 3402/formerly SAS70)SOC 2 Type IISOC 3Payment Card Industry Data Security Standard (PCI DSS) Level 1 CertificationISO 27001 CertificationFedRAMPSMDIACAP and FISMAITARFIPS 140-2Additional information available at https://aws.amazon.com/compliance/. Customers have deployed various compliant applications:Sarbanes-Oxley (SOX) HIPAA (healthcare)FedRAMPSM (US Public Sector)FISMA (US Public Sector)ITAR (US Public Sector)DIACAP MAC III Sensitive IATO
Oracle ASM disk groups provide three types of redundancy: normal, high, and external. With normal and high redundancy, files are replicated within the disk group. With external redundancy, ASM does not provide any redundancy for the disk group. When creating setting up ASM for a group of volumes, we recommend using external redundancy since Amazon EBS volumes are already redundant within an availability zone.Oracle ASM best practices like having different disk groups for data and log files, work and recovery areas, also apply in Amazon EBS.Because this architecture is targeted at a medium-sized enterprise class database, we recommend using fewer than 10 total volumes. To provide a benefit, a provisioned IOPS volume must maintain an average queue length (rounded up to the nearest whole number) of 1 for every 200 provisioned IOPS per minute. If you set the queue length to less than 1 per 200 IOPS provisioned, your volume will not consistently deliver the IOPS that you've provisioned. Setting the queue length too far above the recommended setting won't affect the IOPS your volume delivers, however per-request latencies will increase. For a Provisioned IOPS volume of 500, the queue length average must be 3. If the average queue length is less than 3 for this volume, you aren't consistently sending enough I/O requests.Instance StoreZero network overhead; local, direct attached resource.No network variabilityNot optimized for random I/OGenerally better for sequential I/ORoot volume and data volume are lost on physical disk failure, stopping, or terminating of instanceIdeal for storing temporary data like buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.Maintain a number of pending I/O requests to get the most out of your Provisioned IOPS volume. 
The volumes must maintain an average queue length of 1 (rounded up to the nearest whole number) for every 200 provisioned IOPS in a minute Maintain a queue depth of 10 for a 2,000 Provisioned IOPS volumeMaintain a queue depth of 3 for a 500 Provisioned IOPS volumeExample: a 2000 Provisioned IOPS volume can handle:2000 16KB read/write per second, or 1000 32KB read/write per second, or 500 64KB read/write per second You will get consistent 32 MB/sec throughput (with 16KB or higher IOs)Perform an index creation action and sends I/O of 32K, IOPS becomes 1000, you still get 32MB/sec throughputOn best effort, you may get up to 40 MB/sec throughput fioLinux, WindowsFor benchmarking I/O performance. (Note that fio has a dependency on libaio-devel.)Oracle ORIONLinux, WindowsFor calibrating the I/O performance of storage systems to be used with Oracle databases.SQLIOWindowsFor calibrating the I/O performance of storage systems to be used with Microsoft SQL Server.We like ext3/4, but we love XFSHigh performance, consistentRobust and lots of options for tweaking/adjusting as neededOur favorite mount options: (your mileage may vary)inode64, noatime, nodiratime, attr2, nobarrier, logbufs=8, logbsize=256k, osyncisdsync, nobootwait, noautoYields great performance, reduces unnecessary writes, stableWe like ZFS a lot too, but we want to see more runtime on linux firstBut FreeBSD/ZFS would be a fine choiceHowever: test your workload!File systems behave differently under different workloadsAn EC2 instance comes with a certain amount of “local” storage, which is ephemeral. Any data placed on those devices will not be available after that instance is terminated by the customer, or if the underlying hardware fails which would cause an instance restart to happen on a different server. This characteristic makes instance storage ill-suited for database persistent storage. 
AWS offers a storage service called Amazon EBS (Elastic Block Storage), which provides persistent block-level storage volumes. Amazon EBS volumes are off-instance storage that persists independently from the life of an instance. Amazon EBS volumes are designed to be highly available and reliable. Amazon EBS volume data is replicated across multiple servers in an Availability Zone (datacenter) to prevent the loss of data from the failure of any single component. For all these reasons, we recommend to use EBS for data files, log files and for the flash recovery area. Using ephemeral storage intelligently can boot performance. This can be used for many kind of temp files and regularly backup static files.For high I/O workloads, an alternative to Provisioned IOPS EBS volumes is to use High I/O instances, which contain SSD drives as internal storage and address the most demanding database workloads. The High I/O Quadruple Extra Large instance can provide up to 120,000 random read IOPS and 85,000 random write IOPS. The High Memory Cluster Eight Extra Large Instance offers 244 GB of memory in addition to 240 GB of local SSD storage. Note however that this SSD storage is internal to the instance and will be lost if the instance is stopped or if the underlying hardware fails. When using this type of storage for databases, you should make sure that you have a solid strategy to avoid loss of data, for example by frequently backing up your data to Amazon S3. In addition to storage performance, High I/O and High Memory Cluster Instances also have very high I/O performance via 10 Gigabit Ethernet, which allows for increased EBS performance.EBS Optimized M3.2Xlarge instance has 1 Gb/s bandwidth dedicated to EBS, more than 12 PIOPS volumes at 500 IOPS each will saturate the 1 GB/s network16 KB per IO = up to 64 MB/sec. 
It can burst up to 40 MB/sec on best effort basis.High-performance SSD optionhi1.4xlarge EC2 instance type(2) x 1TB SSD local to instance~120,000 random read IOPS (4 KB blocks)~10,000-85,000 random write IOPS (4 KB blocks)
AMIS : You need to use an AMI (Amazon Machine Image) to start an EC2 instance. There are a lot of options. We recommend using the AMIs that are published by Oracle, available at http://aws.amazon.com/amis/Oracle. There are AMIs containing Oracle Enterprise Linux and Oracle database 11g release 2 with the following versions: Standard Edition One, Standard Edition and Enterprise Edition. You get the benefit of having a fully pre-installed Oracle database. Alternatively, our customers can start an EC2 instance running the operating system of your choice, and install Oracle manually, just like they would do on an internal server at their company . As the number of Oracle supplied AMIs have need kept up with demand and as Oracle has not been providing AMIs for the latest and greatest releases, it might be a good idea to give options to the users.Sizing: The amount of CPU and memory, as well as the network bandwidth available to the database depends on the type of instance on which it is deployed. If migrating an existing database from on-prem to EC2, you can pick the closest instance type and use that as the starting point and then monitor the performance to determine whether it is a good match or if you need to pick a bigger/smaller instance type.When running constant-on high-performance databases, it is best to choose the high-memory instance class as this allows you to maximize the amount of memory available to the SGA of the database. Larger instance types may also have the added benefit of providing higher throughput to the attached EBS volumes. Mention advantages of ne CC and Hi I/O instances.Instance Type: Increasing the performance of a database requires an understanding of which of the server’s resources is the performance constraint. If the database performance is limited by CPU or memory, users can scale up the memory, compute, and network resources by choosing a larger instance type. 
The three architectures we've discussed cover most Oracle database use cases on the AWS platform. In the rare case that you run an OLTP application whose database needs very high IOPS, in the range of 100,000-200,000, this architecture attains those IOPS with local SSD-based volumes on the Amazon EC2 instance itself. Because these are ephemeral disks, there is the potential to lose the entire database if the instance fails. To prevent data loss and ensure reliability, this architecture employs a second instance in the same Availability Zone and uses Oracle Data Guard to replicate data to it from the primary instance.

We may also want to introduce the Oracle Flash Cache feature to extend database performance on high-memory instance types with SSD disks. In short, Oracle Flash Cache on Oracle 11g can extend the database buffer cache to 240 GB of SSD on top of the existing 240 GB of RAM. This is useful for high-memory database requirements and also for in-memory database requirements.

For simple bootstrapping, user data text/scripts may be adequate. Keep in mind that user data is limited to 16 KB. s3cmd is often used to stage the bootstrap scripts in S3; more on this can be found here:
http://s3tools.org/s3cmd
https://github.com/s3tools/s3cmd
A very good document on using user data, CloudFormation, Chef, Puppet, and other tools to bootstrap EC2 instances can be found here:
https://s3.amazonaws.com/cloudformation-examples/BoostrappingApplicationsWithAWSCloudFormation.pdf
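As a quick guard against the 16 KB user-data limit mentioned above, a bootstrap script can be validated and base64-encoded (the form the EC2 API expects) before launch. A minimal sketch; the limit constant reflects the 16 KB figure cited in these notes:

```python
import base64

USER_DATA_LIMIT = 16 * 1024  # the 16 KB user-data limit cited above

def prepare_user_data(script: str) -> str:
    """Validate a bootstrap script against the size limit and return it
    base64-encoded, as the EC2 RunInstances API expects."""
    raw = script.encode("utf-8")
    if len(raw) > USER_DATA_LIMIT:
        raise ValueError(
            f"user data is {len(raw)} bytes; limit is {USER_DATA_LIMIT}. "
            "Move the logic into a script fetched from S3 instead."
        )
    return base64.b64encode(raw).decode("ascii")

encoded = prepare_user_data("#!/bin/bash\nyum install -y s3cmd\n")
print(encoded)
```

If a script outgrows the limit, the usual pattern (per the bootstrapping whitepaper linked above) is to keep only a small fetch-and-exec stub in user data and pull the real script from S3.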
Use Route 53 to manage the Oracle database endpoints seen by applications; this makes it easier to maintain HA in an environment where the Oracle instances themselves may be transient.

Vertical scaling: For many customers, increasing the performance of a single DB instance is the easiest way to increase the performance of their application overall. In the Amazon EC2 or Amazon RDS environments, you can simply stop an instance, increase the instance size, and restart it. This is particularly practical if you have a set maintenance window and can tolerate system downtime. This technique is often referred to as scaling up.

Advanced setups can benefit from the elastic nature of Amazon Web Services. By monitoring the usage of the primary database with Amazon CloudWatch, you can receive notifications indicating that a heavy-load threshold has been met or exceeded. In this situation, you can create new standby databases on demand to lower the load on the primary. Once the heavy usage period is over, the standby instances and the resources they consume can be disposed of. Note that Data Guard can be used only with Enterprise Edition; many third-party solutions (such as SharePlex and Dbvisit) provide the same functionality for Standard Edition and Standard Edition One, and it would be a good idea to mention those too.

Active-active replication: Commercially available active-active database replication technologies can also be used to boost the overall throughput of an application. This is especially useful if there is a way to divide the workload between multiple DB instances such that, even when they share the same schema, their updates are mostly exclusive of each other. For instance, customer orders could be routed by customer location, with all US-based orders going into one database and non-US orders going into a second database.
However, the application would need to handle conflict-resolution scenarios; for instance, if a total count of orders is maintained, it needs to be updated outside of these replicated databases. It would also be good to explain multi-master setups. Oracle GoldenGate can be used for this purpose, as can Streams; Oracle is emphasizing GoldenGate on its roadmap, so it would be a good idea to cover that too.

AWS-specific tactics for implementing HA best practices:
1. Fail over gracefully using Elastic IPs: an Elastic IP is a static IP that is dynamically re-mappable. You can quickly remap it and fail over to another set of servers so that your traffic is routed to the new servers. This works well when upgrading from old to new versions, or in case of hardware failures.
2. Utilize multiple Availability Zones: Availability Zones are conceptually like logical datacenters. By deploying your architecture across multiple Availability Zones, you can ensure high availability. Use Amazon RDS Multi-AZ deployment functionality to automatically replicate database updates across multiple Availability Zones.
3. Maintain an Amazon Machine Image so that you can restore and clone environments easily in a different Availability Zone. Maintain multiple database slaves across Availability Zones and set up hot replication.
4. Utilize Amazon CloudWatch (or various real-time open-source monitoring tools) to get more visibility, and take appropriate action in case of hardware failure or performance degradation. Set up an Auto Scaling group to maintain a fixed fleet size so that unhealthy Amazon EC2 instances are replaced with new ones.
5. Utilize Amazon EBS and set up cron jobs so that incremental snapshots are automatically uploaded to Amazon S3 and data is persisted independently of your instances.
6. Utilize Amazon RDS and set the retention period for backups, so that it can perform automated backups.

This implementation sets up Data Guard for Fast-Start Failover, so that failover to the standby instance can be achieved quickly. In this architecture the primary instance uses an Elastic Network Interface (ENI), which can be leveraged for an even faster failover by swapping the ENI from the primary instance to the standby instance, because both instances are in the same Availability Zone. This requires a third, observer instance to monitor the primary instance and swap the ENI in case of a failure.

Oracle Active Data Guard is an Oracle Database add-on that allows you to set up standby databases that can be open for read-only requests while continuing to apply transactions from the primary database. The standby databases can be used as read replicas of your primary database, and the replication between the primary and the standby databases can be configured to be synchronous. This allows you to scale your database layer horizontally by adding read replicas and to offload read-only queries from the primary database. This setup is often valuable because most applications generate more reads to the database than writes. Also, read-heavy clients such as business intelligence applications can be run against a standby instance with no impact on the primary production database.

You can use Active Data Guard to build an elastic database infrastructure. By monitoring the usage of the primary database with Amazon CloudWatch, you can receive notifications indicating that a heavy-load threshold has been met or exceeded. In this situation, you can create new standby databases on demand to lower the load on the primary.
Once this heavy usage period is over, the standby instances and the resources they consume can be disposed of.

Note: Oracle Active Data Guard is only available for Oracle Database Enterprise Edition, not for Standard Edition or Standard Edition One.

It is also possible to use active-active replication to boost performance. In this scenario, you create one or more database replicas that can be both written to and read from, in effect implementing a distributed database in which all replicas are synchronized. These technologies are covered in the "High Availability" section.
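The elastic standby pattern described above (add standbys under heavy load, retire them when load subsides) can be sketched as plain decision logic. The thresholds and the metric are illustrative assumptions; in practice an Amazon CloudWatch alarm on the primary would drive the decision, and the actions would create or terminate Data Guard standby instances.

```python
# Hysteresis-based scale decision for read standbys -- a sketch only.
# Thresholds are hypothetical; in practice a CloudWatch alarm on the
# primary (e.g. CPUUtilization) triggers the decision.
SCALE_OUT_AT = 75.0   # add a standby when primary CPU exceeds this
SCALE_IN_AT  = 30.0   # retire a standby when CPU falls below this
MAX_STANDBYS = 4
MIN_STANDBYS = 1      # keep at least one standby for availability

def desired_standbys(current: int, primary_cpu_pct: float) -> int:
    """Return how many standbys we should run given the primary's load."""
    if primary_cpu_pct > SCALE_OUT_AT and current < MAX_STANDBYS:
        return current + 1
    if primary_cpu_pct < SCALE_IN_AT and current > MIN_STANDBYS:
        return current - 1
    return current

print(desired_standbys(1, 90.0))  # heavy load: add a standby
print(desired_standbys(2, 20.0))  # load subsided: retire one
print(desired_standbys(1, 50.0))  # within band: no change
```

The gap between the two thresholds prevents thrashing (repeatedly adding and removing a standby around a single cutoff).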
The Blueprint offers a step-by-step approach to cloud migration and has proven successful. When customers follow this blueprint and focus on creating a proof of concept, they immediately see value in their proof-of-concept projects and see tremendous potential in the AWS cloud. After they move their first application to the cloud, they get new ideas and want to move more applications into the cloud.
Applications that are very interesting, easy to experiment with, simple sel
We have noticed that some of the SMBs and startup companies in our ecosystem skipped the classification and other stages I discussed above and dove right into a proof of concept. There is no doubt that a proof of concept will answer many questions very quickly. During the proof of concept it is important to get your feet wet with Amazon Web Services and get trained by Amazon (we have AWS University and have launched a training course in Seattle). Andy started multiple projects in parallel and regularly focused on proofs of concept.
Talk about relative costs, but highlight that this is about getting data there fast. Rectangles, not ovals. The borderline is in size (GB vs. TB) and speed (hours vs. days). For backup, you can use Storage Gateway if moving less than 5 TB a day, which is its maximum (you also need backup software to get data from disk to the Storage Gateway). Riverbed is a great solution, offering 2 TB an hour with no backup storage needed; CommVault is another.
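To find that GB-vs-TB, hours-vs-days borderline, transfer time over a network link can be estimated from data size and sustained bandwidth. A simple sketch; the bandwidth and efficiency figures are assumptions for illustration, not AWS quotes:

```python
def transfer_hours(data_gb: float, mbps: float, efficiency: float = 0.8) -> float:
    """Estimated hours to move data_gb gigabytes over an mbps link.

    `efficiency` discounts protocol overhead and contention (assumed 80%).
    """
    bits = data_gb * 8 * 1000**3               # decimal GB to bits
    seconds = bits / (mbps * 1000**2 * efficiency)
    return seconds / 3600

# 500 GB over a 100 Mbps line: about 14 hours -- feasible over a weekend.
print(round(transfer_hours(500, 100), 1))
# 50 TB over the same line: roughly two months -- ship disks instead.
print(round(transfer_hours(50_000, 100) / 24, 1), "days")
```

When the estimate lands in days or weeks, physical shipment or a WAN-acceleration product (like the Riverbed option above) wins.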
Add more lines for operating costs and flexibility
Add SecurityGroup definitions.

Storage area network (SAN): access to a SAN is at the block level. NAS is, in practice, an array of hard disk drives with a network interface; NAS volumes are treated by network users as shared network resources, and access to NAS-stored data is provided at the file level. A NAS (see Figure 2) is typically composed of networked file servers that use Ethernet and TCP/IP, handling data at the file level. You attach NAS devices to an existing TCP/IP network (usually Ethernet) to add storage.

A simple way to remember the difference between SAN and NAS is to think about how each technology is implemented. NAS is commonly found in server farms (application servers, e-mail servers, and so on), where increasing storage volume is as easy as attaching another system to the network. A SAN is usually deployed for e-commerce applications, data backup, and other cases in which large amounts of data must be stored and transmitted over a network; a SAN lets you offload such high-volume traffic, sparing your Ethernet network from congestion.

High Performance Computing (HPC):
http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ConfigWindowsHPC.html
http://aws.amazon.com/hpc-applications/

Network-attached storage (NAS) is file-level computer data storage connected to a computer network, providing data access to a heterogeneous group of clients. NAS not only operates as a file server but is specialized for this task by its hardware, software, or the configuration of those elements.
NAS is often manufactured as a computer appliance, a specialized computer built from the ground up for storing and serving files, rather than a general-purpose computer used for the role. As of 2010, NAS devices were gaining popularity as a convenient method of sharing files among multiple computers. Potential benefits of network-attached storage, compared to file servers, include faster data access, easier administration, and simple configuration.

NAS systems are networked appliances containing one or more hard drives, often arranged into logical, redundant storage containers or RAID. Network-attached storage removes the responsibility of file serving from other servers on the network. They typically provide access to files using network file-sharing protocols such as NFS, SMB/CIFS, or AFP.

NAS vs. SAN: NAS provides both storage and a file system. This is often contrasted with a SAN (Storage Area Network), which provides only block-based storage and leaves file-system concerns on the client side. SAN protocols include Fibre Channel, iSCSI, ATA over Ethernet (AoE), and HyperSCSI. One way to loosely conceptualize the difference between a NAS and a SAN is that a NAS appears to the client OS as a file server (the client can map network drives to shares on that server), whereas a disk available through a SAN still appears to the client OS as a disk, visible in disk and volume management utilities alongside the client's local disks, and available to be formatted with a file system and mounted. Despite their differences, SAN and NAS are not mutually exclusive and may be combined as a SAN-NAS hybrid, offering both file-level protocols (NAS) and block-level protocols (SAN) from the same system. An example of this is Openfiler, a free software product running on Linux-based systems.
A shared-disk file system can also be run on top of a SAN to provide file-system services.

We provide an Amazon DNS server. To use your own DNS server, update the DHCP options set for your VPC (see DHCP Options Sets). For an EC2 instance to be publicly accessible, it must have a public IP address, a DNS hostname, and DNS resolution enabled.
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-dns.html
What period are you amortizing hardware across? Are you using the same RI term? Are you comparing against heavy-utilization RIs?
How much buffer capacity are you planning to carry? If small, what is your plan if you need to add more? What if you need less capacity? What is your plan to scale costs down?
Are you taking labor into account? What about maintenance (broken disks, patching hosts, servers going offline, etc.)?
What are you assuming for network gear? What if you need to scale beyond a single rack?
What about availability? Are you accounting for 2N power? If not, what happens when you have a power issue in your rack?
What is your bandwidth peak-to-average ratio?
Have you modeled AWS lowering prices over time? Your purchased gear will never get cheaper, and hosting (power and cooling) is not getting cheaper.
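To make the amortization question concrete, here is a toy comparison of a purchased server's effective monthly cost against an on-demand cloud instance. Every figure (hardware price, labor, power, amortization period, hourly rate) is a hypothetical assumption for illustration, not real pricing.

```python
# Toy TCO comparison -- all figures are hypothetical assumptions.
def on_prem_monthly(hw_cost, years, monthly_labor, monthly_power):
    """Amortized monthly cost of owned hardware plus operating costs."""
    return hw_cost / (years * 12) + monthly_labor + monthly_power

def cloud_monthly(hourly_rate, utilization=1.0):
    """Monthly on-demand cost; scale by utilization if you can stop it."""
    return hourly_rate * 730 * utilization  # ~730 hours per month

server = on_prem_monthly(hw_cost=9_000, years=3, monthly_labor=150, monthly_power=80)
cloud = cloud_monthly(hourly_rate=0.50)            # runs 24x7
cloud_half = cloud_monthly(0.50, utilization=0.5)  # stopped half the time

print(f"on-prem: ${server:.0f}/mo, cloud 24x7: ${cloud:.0f}/mo, "
      f"cloud 50%: ${cloud_half:.0f}/mo")
```

The utilization knob is the key point of the questions above: owned gear costs the same whether it is busy or idle, while cloud capacity can be scaled down.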
Smarter Agent powers more highly rated and downloaded real-estate app titles in the Android, iPhone, and BlackBerry marketplaces than any other company in the real-estate vertical. This includes the #1 and #2 most downloaded and highest-rated large-franchisor apps, the most downloaded and highest-rated independent brokerage office app, and many of the top downloaded Multiple Listing Service (MLS) apps.
Collect as many metrics as you can manage: OS-level, database, and application metrics. Time long-running activities, and clearly note runtimes in the launch plan.
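Timing long-running activities, as the note above suggests, can be done with a small helper that records each step's runtime for the launch plan. A minimal sketch (the step name is illustrative):

```python
import time
from contextlib import contextmanager

runtimes = {}  # step name -> elapsed seconds, for the launch plan

@contextmanager
def timed(step):
    """Record how long the wrapped block of work takes under `step`."""
    start = time.monotonic()
    try:
        yield
    finally:
        runtimes[step] = time.monotonic() - start

with timed("import_schema"):
    time.sleep(0.1)  # stand-in for a long-running migration step

print(f"import_schema took {runtimes['import_schema']:.2f}s")
```

Dumping `runtimes` at the end of a rehearsal run gives the per-step durations to note in the launch plan.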
Module objectives. By the end of this training you will be able to do the following:
- Identify the Oracle and AWS alliance timeline.
- Describe how to identify opportunities that can be solved by AWS products and services, and what other customers have done before.
- Verify some common best practices for using Oracle and AWS products and services.
- Describe the support and licensing policies and other online resources.