The document discusses dynamic infrastructure and keeping applications running at scale in the cloud. It begins with an introduction to the speaker, Lee Atchison, and his background in cloud computing. It then discusses the challenges of maintaining application availability, both obvious ones, such as outages, and more subtle ones, such as performance degradation. The rest of the document covers strategies for monitoring applications in dynamic cloud environments, approaches for migrating applications to the cloud, and general strategies for successful cloud adoption.
The New Normal: Benefits of Cloud Computing and Defining your IT Strategy - Amazon Web Services
The standard business model is rapidly changing. Companies used to be built for the long haul. But now, success is powered by rapid-paced innovation and the ability to get disruptive products to market first.
You’re used to balancing resources between keeping things running and the development of new initiatives. But merely keeping the lights on doesn't differentiate you from your competitors.
Chris Munns takes us on a journey to innovation. He presents AWS' latest and greatest announcements, with a particular focus on serverless (AWS Lambda) and automation (AWS Step Functions). Presented in Montreal at the AWS Innovate event.
This document discusses AWS services that can be used for various enterprise workloads. It provides an overview of how companies like NTT DOCOMO, Kellogg, and Hess have used AWS for applications, databases, disaster recovery, and development/testing. It also discusses the benefits of AWS including scalability, availability, security and cost savings. The document aims to provide information on how AWS can help enterprises deploy business critical applications.
AWS Enterprise Summit London 2013 - Andy Jassy - AWS Keynote - Amazon Web Services
1) The document discusses Amazon Web Services (AWS) and how it provides cloud computing services for enterprises.
2) It outlines the broad range of computing, storage, database, analytics and other services AWS offers, as well as its global infrastructure footprint and certifications.
3) The document argues that AWS offers lower long term costs compared to building private data centers, citing its economies of scale, pricing models and security capabilities.
Understand the core concepts of Cloud Computing. Whether you want to run applications that share photos to millions of mobile users or you’re supporting the critical operations of your business, a cloud services platform provides rapid access to flexible and low cost IT resources.
This document discusses different architectural approaches that startups can use when deploying workloads on AWS. It summarizes virtual machine-based n-tier architectures, container-based architectures using ECS, and serverless architectures using Lambda. It also discusses how these architectures affect cost, performance, reliability, and other factors. The document recommends letting development teams choose the right tools for their needs and adopting a microservices approach to scale complexity over time.
"
Enterprises today are relying on the cloud to improve security, be more agile, and save money. By moving desktops to the cloud, companies are realizing many of these same benefits for end user computing. Amazon WorkSpaces is a secure, managed Windows desktop service running on the AWS cloud, and offers an enhanced security posture, a compelling user experience for a modern mobile workforce, and the flexibility to scale globally. In this session, you'll learn more about how Amazon WorkSpaces can benefit your organization. The session will also address how Amazon WorkSpaces integrates with your existing IT infrastructure, covering identity and access management, network access and design, and end user applications. This session is for enterprise IT professionals interested in learning more about Amazon WorkSpaces in an enterprise environment.
"
Dell EMC: Protect Your Workloads on AWS With Increased Scale & Performance - Amazon Web Services
Learn how Dell EMC is helping innovative enterprise companies, like DXC, provide customers with comprehensive protection, no matter where their data lives. As a leading independent, end-to-end IT services and solutions company, DXC needed a solution that could offer the performance needed to meet strict data protection SLAs both on premises and on the AWS Cloud. Dell EMC's unique cloud-optimized data protection architecture provided superior performance and scalability at the lowest total cost of ownership. Find out how you too can experience the benefits of Dell EMC data protection on the AWS Cloud.
Real-World Cases of Mass Migration to the Cloud: Tangible and Intangible Benefits - Amazon Web Services
The document discusses mass migration to the cloud, including common triggers for migration, stages of adoption, benefits both tangible and intangible, examples of real migrations, patterns of migration, and how to plan and execute a mass migration. It provides details on readiness assessments, executing application migrations, partners that can assist, and available migration tools.
Seeing More Clearly: How Essilor Overcame 3 Common Cloud Security Challenges ... - Amazon Web Services
IT security teams are increasingly pressured to accomplish more, with fewer resources. Trend Micro Deep Security helps organizations understand and overcome their most common cloud security challenges, without having to expand their cloud tool set. Join the upcoming webinar to learn how Essilor, a world leader in the design and manufacturing of corrective lenses, has enabled their IT teams to apply, maintain and scale security across their AWS environments by overcoming these common challenges in cloud migrations.
We will discuss how Essilor managed, and overcame, the pace of change when adopting a cloud environment, the transformation of their traditional IT security roles, and how they chose the right security tools and technology to achieve their business goals.
The document provides an overview of best practices for getting started with AWS. It recommends choosing development and testing as a first use case. It also discusses laying foundations such as creating account structures, enabling billing reports, deciding on key management strategies, using IAM groups and roles, and focusing on security. The document recommends leveraging AWS services rather than software, optimizing costs, using tools and frameworks, and getting support.
Andy Jassy Illuminates Amazon Web Services - Michael Skok
Andy Jassy, senior vice president of Amazon Web Services, provides an overview of AWS at the May 8, 2013 Startup Secrets session at Harvard innovation lab.
ENT201 A Tale of Two Pizzas: Accelerating Software Delivery with AWS Develope... - Amazon Web Services
Software release cycles are now measured in days instead of months. Cutting-edge companies are continuously delivering high-quality software at a fast pace. In this session, we will cover how you can begin your DevOps journey by sharing best practices and tools used by the "two pizza" engineering teams at Amazon. We will showcase how you can accelerate developer productivity by implementing continuous integration and delivery workflows. We will also introduce AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy, the services inspired by Amazon's internal developer tools and DevOps practice.
This document discusses how cloud computing is transforming enterprise IT by allowing companies to focus on their core business. It provides an overview of traditional on-premises IT structures and how companies are migrating to cloud-first models using AWS. The summary discusses establishing a Cloud Center of Excellence to lead the migration effort and building hybrid cloud architectures to break dependencies on legacy systems over time.
Understand the core concepts of Cloud Computing. Whether you want to run applications that share photos to millions of mobile users or you’re supporting the critical operations of your business, a cloud services platform provides rapid access to flexible and low cost IT resources.
AWS Summit 2014 Melbourne - Breakout 5
Cloud computing gives you a number of advantages, such as being able to scale your application on demand. As a new business looking to use the cloud, you inevitably ask yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We will show you how to best combine different AWS services, make smarter decisions for architecting your application, and best practices for scaling your infrastructure in the cloud.
Presenter: Craig Dickson, Solutions Architect, Amazon Web Services
DevOps on AWS: Deep Dive on Continuous Delivery and the AWS Developer Tools - Amazon Web Services
Today’s cutting-edge companies have software release cycles measured in days instead of months. This agility is enabled by the DevOps practice of continuous delivery, which automates building, testing, and deploying all code changes. This automation helps you catch bugs sooner and accelerates developer productivity. In this session, we’ll share the processes that Amazon’s engineers use to practice DevOps and discuss how you can bring these processes to your company by using a new set of AWS tools (AWS CodePipeline and AWS CodeDeploy). These services were inspired by Amazon's own internal developer tools and DevOps culture.
This document discusses implementing Windows workloads on AWS. It provides a brief history of Windows support on AWS since 2008. It describes how line of business applications and corporate applications can be deployed on AWS through self-managed EC2 instances or managed RDS. It discusses why customers choose AWS for security, availability, performance, familiar environment, cost effectiveness and licensing options. The document concludes with next steps to get started with Windows workloads on AWS.
Presentation material from the AWS Enterprise Summit held on October 29, 2014. A talk by Markku Lepisto, APAC Principal Technology Evangelist at Amazon Web Services.
Talk summary: Cloud computing is rapidly changing the way enterprises consume and deliver IT services. Large enterprises generally recognize the value of cloud computing, but many are not sure how to evaluate and adopt it in a way that fits their business. This session examines various strategies for cloud adoption and the stages involved.
Keeping Security In-Step with Your Application Demand Curve - Amazon Web Services
Protecting dynamically scaled cloud compute resources can be challenging, especially for organizations that lack the time or money it takes to maintain dynamic security. Fortinet’s auto scaling security solution addresses this issue by providing the resources to help with deployment in order to optimize organizations’ AWS networks. Join the upcoming webinar hosted by Fortinet and AWS to learn how to leverage Fortinet for auto scaling complex security policies in your Amazon VPC. Fortinet has a broad set of capabilities that, when combined with AWS services, creates a truly complete security architecture.
AWS re:Invent 2016: Governance Strategies for Cloud Transformation (WWPS302) - Amazon Web Services
This document provides an overview of cloud governance strategies for cloud transformation. It defines cloud governance and discusses the benefits of governance. It also discusses the role of a Cloud Center of Excellence and describes common stages of cloud governance maturity. The presentation provides examples from Monash University and University of Maryland on their cloud governance approaches and lessons learned. It concludes with a question and answer section.
Best Practices in Planning a Large-Scale Migration to AWS - May 2017 AWS Onli... - Amazon Web Services
Learning Objectives:
- Understand what encompasses a large-scale migration and the key business drivers for this change
- Learn the stages of adopting the AWS Cloud and key activities to complete before considering a large-scale migration
- Learn how to analyze your application portfolio and classify it against common migration patterns
- Discover the tools and techniques to help streamline your migration activities
- Learn program management and governance techniques to ensure success
Many businesses have a large portfolio of existing applications running on-premises today and are interested in moving those workloads to AWS in order to achieve cost savings and enable business agility. Planning a large-scale migration to the cloud takes time and effort, as well as expertise and tools, to ensure success along the way. AWS has developed a framework to help customers plan and execute large-scale migration programs, consisting of a comprehensive methodology, a set of tools, and partners with deep subject expertise. In this tech talk, you will learn about foundational milestones to achieve in your migration journey, how to analyze your application portfolio, how to plan and execute your migration project, and how to enable your organization to operate on the cloud. This framework leverages our experiences and best practices in assisting organizations around the world with their migration programs.
Understanding AWS Managed Database and Analytics Services | AWS Public Sector... - Amazon Web Services
The world is creating more data in more ways than ever before. The average internet user in 2017 generates 1.5GB of data per day, with the rate doubling every 18 months. A single autonomous vehicle can generate 4TB per day. Each smart manufacturing plant generates 1PB per day. Storing, managing, and analyzing this data requires integrated database and analytic services that provide reliability and security at scale. AWS offers a range of managed data services that let customers focus on making data useful, including Amazon Aurora, RDS, DynamoDB, Redshift, Spectrum, ElastiCache, Kinesis, EMR, Elasticsearch Service, and Glue. In this session, we discuss these services, share our vision for innovation, and show how our customers use these services today. Learn More: https://aws.amazon.com/government-education/
The document provides an agenda for the AWS Summit in July 2016. It includes details of the keynote speaker and times for lunch, breaks and networking reception. It also lists Amazon Web Services offerings in the UK and Ireland such as solutions architects, account managers and technical support. The document promotes the event sponsors and thanks attendees.
Best Practices Scaling Web Application Up to Your First 10 Million Users - Amazon Web Services
If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Cloud computing gives you a number of advantages, such as the ability to scale your web application on demand. Join us in this webinar to understand best practices for scaling your resources from zero to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
The AWS Workshop Series Online is a series of live webinars designed for IT professionals who are looking to leverage the AWS Cloud to build and transform their business, are new to the AWS Cloud, or are looking to further expand their skills and expertise. In this series, we will cover 'Introduction to Cloud Computing with Amazon Web Services'.
Microservices? Dynamic Infrastructure? - Adventures in Keeping Your Applicati... - Amazon Web Services
Keeping an application running at scale can be a daunting task. When do you need to add more capacity? Larger databases? Additional instances? These questions get harder as the complexity of your application grows. Microservice-based architectures and cloud-based dynamic infrastructures are technologies that help you keep your application running with high availability, even during times of extreme scaling. We will discuss some of the best practices we’ve learned working with New Relic customers on how you can manage your applications running at scale, and how technologies such as microservices and dynamic infrastructure can help you with this challenge.
Speaker: Lee Atchison, Senior Director, Strategic Architecture, New Relic
ENT310 Microservices? Dynamic Infrastructure? - Adventures in Keeping Your Ap... - Amazon Web Services
Keeping an application running at scale can be a daunting task. When do you need to add more capacity? Larger databases? Additional servers? These questions get harder as the complexity of your application grows. Microservice based architectures and cloud-based dynamic infrastructures are technologies that help you keep your application running with high availability, even during times of extreme scaling. We will discuss some of the best practices we’ve learned working with New Relic customers on how you can manage your applications running at scale, and how technologies such as microservices and dynamic infrastructure can help you with this challenge. This session is brought to you by AWS Summit San Francisco Platinum Sponsor New Relic.
Microservices? Dynamic Infrastructure? - Adventures in Keeping Your Applicati...New Relic
Presented by Lee Atchison at the Amazon Web Services Summit in San Francisco on April 18, 2017.
Keeping an application running at scale can be a daunting task. When do you need to add more capacity? Larger databases? Additional servers? These questions get harder as the complexity of your application grows. Microservice based architectures and cloud-based dynamic infrastructures are technologies that help you keep your application running with high availability, even during times of extreme scaling. We will discuss some of the best practices we’ve learned working with New Relic customers on how you can manage your applications running at scale, and how technologies such as microservices and dynamic infrastructure can help you with this challenge.
ENT317 Dynamic Infrastructure? Migrating? Adventures in Keeping Your Applicat...Amazon Web Services
"Keeping an application running at scale can be a daunting task. When do you need to add more capacity? Larger databases? Additional servers? These questions get harder as the complexity of your application grows. Cloud-based dynamic infrastructures can help you keep your application running with high availability, even during times of extreme scaling. We will discuss some of the best practices we’ve learned working with New Relic customers on how you can manage your applications running at scale, and how technologies such as dynamic infrastructure can help you with this challenge. Joining us on stage will be Appboy, the global leader in lifecycle engagement technology, to discuss their experiences with dynamic infrastructure and the cloud and how it has impacted their ability to scale.
This session is brought to you by AWS Summit New York City sponsor, New Relic."
ENT317 Migrating with Morningstar: The Path To Dynamic CloudAmazon Web Services
Keeping an application running at scale in the cloud is fundamentally different than keeping your applications running in your own data centers. Cloud technologies are different, the way you scale is different, the way you troubleshoot is different, and the monitoring you need is different. From static compute to dynamic autoscaling to serverless services and microservices, combined with the demands of creating new digital businesses, cloud services provide new opportunities and challenges. In this session, New Relic’s Lee Atchison and Morningstar, a global investment research company, will discuss how differences in cloud technologies impact monitoring and architectural strategies when migrating and scaling applications on AWS.
This session is brought to you by AWS Summit Chicago sponsor, New Relic.
Application Architecture Summit - Monitoring the Dynamic Cloud New Relic
How do you apply modern application architecture to your digital business? Hear from New Relic's Sr Director, Strategic Architecture, Lee Atchison, at the Application Architecture Summit. Learn more here: https://newrelic.com/partner/aws
11 Ways Microservices & Dynamic Clouds Break Your MonitoringAbner Germanow
Every software team has its moments of truth. How does this impact the way agile developers, site reliability engineers, and IT operations teams work together? We'll break down the intricacies of modern monitoring and show you what to look for, particularly when monitoring microservices and dynamic clouds. With examples from New Relic customers, you'll learn what to look out for when preparing to conquer your digital moments of truth, master microservices, use cloud services for autoscaling, and get your teams to work together. I also added a quick bit on evaluating the security of a cloud service provider before you engage your infosec team.
Startups benefit greatly from using cloud computing infrastructure over traditional systems. The cloud provides rapid deployment and scalability with lower upfront costs. It allows startups to focus on their product instead of worrying about infrastructure. Animoto successfully scaled their system from 25,000 to 250,000 users in just 3 days by leveraging the elastic capabilities of the cloud. Other startups like EnTrip have also moved to the cloud to handle heavy processing loads and database searches needed to power their online travel applications. Cloud platforms and services help startups manage scalable, redundant infrastructure so they can grow quickly.
1) Cloud computing allows you to pay for infrastructure as needed rather than upfront, which can lower costs. AWS passes these savings to customers in the form of low prices.
2) AWS provides a variety of compute, storage, database, analytics and other services that can be used to build applications. Popular services include EC2, S3, DynamoDB, and EMR.
3) There are a number of strategies for using AWS, such as using it for development/testing, building new apps, augmenting existing apps, hybrid apps, and full migration. Existing tools can often be used to manage AWS resources.
1. The document discusses cloud computing and Amazon Web Services (AWS). It describes the benefits of cloud computing like pay for only what you use, lower costs, ability to scale easily.
2. It then explains AWS products like compute, storage, database and analytics services that can be used to build applications. It provides examples of how companies use AWS.
3. The document concludes by suggesting strategies for using AWS, from using it for development to fully migrating to the cloud, and encourages the reader to try AWS free tier and contact support.
SRV205 Architectures and Strategies for Building Modern Applications on AWSAmazon Web Services
Rapid growth of technology and tooling in the cloud has enabled us to build modern applications that are more secure, scalable, and focused on our business. In this session, we cover the key compute primitives that enable us to accelerate towards building and running modern, cloud-native applications. We highlight what we’ve learned from customers running applications with AWS Lambda and AWS Fargate, two modern compute technologies for running applications in the cloud. In addition, we cover architecture patterns of modern application, key primitives required for building modern systems, steps you can take to start building and monitoring modern applications today, and secrets to fearlessly going faster and farther in the cloud.
ARC207_Monitoring Performance of Enterprise Applications on AWSAmazon Web Services
"Applications running in a typical data center are static entities. But applications aren't static in the cloud. Dynamic scaling and resource allocation is the norm on AWS. Technologies such as Amazon EC2, AWS Lambda, and Auto Scaling provide flexibility in building dynamic applications and with this flexibility comes an opportunity to learn how an enterprise application functions optimally.
New Relic helps manage these applications without sacrificing simplicity.
In this session, we discuss changes in monitoring dynamic cloud resources. We'll share best practices we’ve learned working with New Relic customers on managing applications running in this environment to understand and optimize how they are performing.
Session sponsored by New Relic"
AWS Summit - Chicago 2016 - New Relic - Monitoring the Dynamic CloudLee Atchison
Lee Atchison gave a presentation on monitoring dynamic cloud environments. He explained that cloud resources are now highly dynamic, with containers starting and stopping within minutes. This requires monitoring not just servers but the entire lifecycle of cloud components. Both operations and development teams are impacted by this change, as cloud architecture is now integral to application design. Traditional monitoring is insufficient - tools are needed that provide full stack visibility across servers, applications, and provisioning in dynamic cloud environments.
Webinar - Life's Too Short for Cloud without AnalyticsLee Atchison
The document discusses monitoring applications in dynamic cloud environments. It notes that cloud infrastructure is monitored by services like CloudWatch, but these don't provide visibility into application performance. New Relic is described as monitoring both the server infrastructure and applications to provide a more complete view. The document also discusses how applications are becoming more dynamic with microservices and containers that have very short lifecycles, making them challenging to monitor using traditional approaches.
Ingest, Transform & Visualize w Amazon Web ServicesBigDataCamp
This document outlines a presentation about building flexible data lake architectures on AWS. It discusses challenges with traditional data warehousing and siloed data sources. A data lake architecture using AWS services like S3, Athena, and Glue is presented as a way to ingest and analyze all data in one centralized location. The presentation also covers real-time analytics platforms using Kinesis and batch vs streaming data processing. It concludes with a demo of visualizing data in S3 using Athena and Amazon QuickSight.
This document discusses modern application architectures on AWS. It covers key concepts like containers, serverless computing using AWS Lambda, and managed Kubernetes with Amazon EKS. Specific services are highlighted, like Amazon ECS for container orchestration and Amazon Fargate for serverless containers without managing infrastructure. Case studies are presented on companies like FINRA and McDonald's using these architectures on AWS for speed, scale, and cost efficiency. The principles of cloud native applications are also summarized, focusing on pay-as-you-go models, self-service, elasticity, and other advantages over traditional data center architectures.
Migrating Microsoft Applications to AWS like an Expert - AWS Summit Sydney 2018Amazon Web Services
Migrating Microsoft Applications to AWS like an Expert
With Microsoft applications making up 60% of most enterprise workloads, this session will illustrate mechanisms to migrate to AWS, ensuring your migration strategy is built on a strong, security-focused foundation while removing undifferentiated heavy lifting. Join us on a journey of migrating the Unicornshop.lol core infrastructure to AWS. This includes business productivity systems, Microsoft SQL server, and a number of .NET applications that require seamless migration with minimal downtime. We will walk through the journey of creating our landing zone and fully automated compliance controls before embarking on our migration journey together!
Danny Jenkins, Solutions Architect, Amazon Web Services
Why Scale Matters and How the Cloud is Really Different (at scale)Amazon Web Services
This document discusses how various companies scale their services and applications on AWS to handle large user loads and data volumes. It provides examples of Animoto handling over 1 billion files saved per day and Airbnb having over 9 million guests. It then outlines an approach for scaling an application from 1 user to millions by starting with EC2 instances, adding services like S3, DynamoDB, ElastiCache and auto-scaling groups. The document emphasizes using AWS managed services to avoid re-inventing solutions for tasks like queuing, storage and databases.
The document discusses security at scale on AWS. It covers several topics:
- AWS security controls including over 70 services, 7,710 audit artifacts and 3,030 audit requirements.
- How AWS handles security at scale through automation, ubiquitous logging and encryption, and rapid detection and response times of under 10 minutes on average.
- AWS services that can help with security including IAM, CloudTrail, GuardDuty, and AWS Config rules.
- Reference architectures that show how to scale infrastructure securely including using multiple availability zones and services like Route 53, S3, CloudFront, and Lambda.
Risk Management - Avoiding Availability Disasters in Service-based ApplicationsLee Atchison
Bringing down an application is easy. All it takes is the failure of a single service and the entire set of services that make up the application can come crashing down like a house of cards. Just one minor error from a non-critical service can be disastrous to the entire application. There are, of course, many ways to prevent dependent services from failing. However, adding extra resiliency in non-critical services also adds complexity and cost, and sometimes it is not needed.
Application availability is best served by focusing your energies and processes on your most critical systems while working to minimize the impact of non-critical systems. Service Tiers are a way to accomplish this.
In this talk, we will learn what service tiers are and how they can be applied to service based applications. Then we will show how to utilize service tiers to keep your application available and functioning as designed. We will use example service definitions to illustrate how service tiers can help you keep your application working.
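As a rough, self-contained illustration of the service-tier idea described above (the tier names, numbers, and policies here are hypothetical, not taken from the talk), a team might classify each service by criticality and derive its incident-response policy from the tier:

```python
# Hypothetical service tiers: tier 1 is customer-critical; higher tiers
# tolerate progressively more downtime. All names/values are illustrative.

TIER_POLICY = {
    1: {"page_oncall": True,  "max_allowed_downtime_min": 5},
    2: {"page_oncall": True,  "max_allowed_downtime_min": 30},
    3: {"page_oncall": False, "max_allowed_downtime_min": 240},
    4: {"page_oncall": False, "max_allowed_downtime_min": 1440},
}

SERVICE_TIERS = {
    "checkout": 1,        # failure loses revenue immediately
    "search": 2,          # degraded experience, app still usable
    "recommendations": 3, # nice-to-have feature
    "nightly-reports": 4, # internal batch work
}

def response_policy(service: str) -> dict:
    """Return the incident-response policy for a service's tier.
    Unknown services are conservatively treated as tier 1 (critical)."""
    tier = SERVICE_TIERS.get(service, 1)
    return TIER_POLICY[tier]
```

The point of the mapping is exactly what the abstract argues: energy and process go to tier 1, while a tier 3 failure never pages anyone at 3 a.m.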
The document discusses how modern applications require modern monitoring and processes to keep performing. It notes that modern applications operate on dynamic cloud infrastructures with constant change, requiring monitoring of business success, application performance, and customer experience. It emphasizes managing risk by understanding and mitigating it rather than trying to eliminate it entirely. It also discusses how DevOps is a cultural change involving team-level responsibility and ownership. The presentation aims to explain how instrumentation, infrastructure management, risk management, and DevOps culture can help keep modern applications running effectively.
Temperature probes monitoring crops? Micro drones monitoring wind speed in the atmosphere? You don’t have to turn to these novel uses to see edge computing in action, look no further than the Point of Sale device at your local grocery store or the app on your mobile phone that is letting you order a cup of coffee.
Edge computing is all about taking the specific timing-sensitive parts of your application and moving them closer to where they are needed…whether that need is an end user or a source of interesting data, it’s all the same thing.
What really is the edge and how do we deal with it? How do we decide what computing should occur at the edge and what computing should occur in the cloud? How do you verify that your application is doing what it is expected to do? How do you know if you are meeting your performance expectations in the edge? How do you keep visibility in your entire application, whether it’s in the cloud or at the edge?
Keeping Modern Applications PerformingLee Atchison
It’s your big day, the day of the year your company either makes it or breaks it. Your customers expect your system to work, always. Excuses are unacceptable.
To meet this challenge, your application must use modern tools and techniques. Serverless, containers, and cloud technologies work together with new DevOps processes and risk-management concepts to build a dynamic, highly scalable, highly available application that meets your customers’ needs.
And central to all of this is the modern analytics necessary to determine how your system is running and what you need to do to keep it running...at scale.
Your customers demand modern applications, and modern applications demand modern tools and modern analytics.
Are you ready to meet these modern challenges?
Architecting for scale - dynamic infrastructure and the cloudLee Atchison
The document discusses dynamic infrastructure and how cloud technologies enable scaling and availability. It describes how a dynamic infrastructure allows applications to allocate and consume resources on demand. It provides examples of how Docker containers can scale dynamically and how cloud technologies like EC2 auto scaling support this. Finally, it outlines progressive stages companies go through in adopting cloud technologies from initial experimentation to fully mandating cloud usage.
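The allocate-on-demand behavior described above boils down to a scaling decision: given observed load, how many instances should the fleet have? A minimal sketch of a target-tracking style decision (a simplification of what EC2 Auto Scaling does internally; the function and defaults are illustrative, not the AWS API):

```python
import math

def desired_capacity(current_instances: int, observed_cpu_pct: float,
                     target_cpu_pct: float = 50.0,
                     min_size: int = 2, max_size: int = 20) -> int:
    """Target-tracking sketch: resize the fleet so that average CPU
    utilization moves toward the target, clamped to fleet bounds."""
    if observed_cpu_pct <= 0:
        return min_size
    # If CPU is at 150% of target, we need ~1.5x the instances, and so on.
    raw = current_instances * (observed_cpu_pct / target_cpu_pct)
    return max(min_size, min(max_size, math.ceil(raw)))
```

For example, a 4-instance fleet averaging 75% CPU against a 50% target scales out to 6 instances, while a lightly loaded fleet shrinks back toward the minimum.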
Migrating to the Cloud - What to do when things go sidewaysLee Atchison
The document discusses best practices for migrating applications to the cloud. It recommends instrumenting applications early in the migration process to gain visibility and identify issues. A methodical approach is suggested that involves planning the strategy, priorities, and baseline metrics upfront. The migration should then be executed gradually with validation checks to ensure performance and functionality are maintained. Ongoing monitoring is also important after migration to account for the dynamic cloud environment.
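The "baseline metrics upfront, then validate" approach above can be sketched as a simple comparison: record per-endpoint latency before the migration, then flag anything that regressed beyond a tolerance afterward. A hypothetical helper (the endpoints, units, and tolerance are illustrative):

```python
def migration_regressions(baseline_ms: dict, observed_ms: dict,
                          tolerance: float = 0.10) -> list:
    """Return endpoints whose post-migration latency exceeds the
    pre-migration baseline by more than `tolerance` (default 10%).
    Endpoints missing from the observed data are also flagged."""
    regressed = []
    for endpoint, base in baseline_ms.items():
        after = observed_ms.get(endpoint)
        if after is None or after > base * (1 + tolerance):
            regressed.append(endpoint)
    return regressed
```

Run against the baseline captured before cutover, this kind of check is what turns "gradual migration with validation" from a slogan into a gate you can automate.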
Monitoring the Dynamic Nature of Cloud ComputingLee Atchison
The document discusses the challenges of monitoring dynamic cloud applications where resources are constantly changing. Traditional monitoring of servers is not sufficient, as resources may not exist for long periods. Effective monitoring requires tracking how resources are provisioned and utilized over time, as well as both static and dynamic monitoring from the application to infrastructure layers. This allows visibility into how dynamic resources are working and being used.
Future Stack NY - Monitoring the Dynamic Nature of the CloudLee Atchison
1) The document discusses how Docker and cloud computing allow applications to be more dynamic and take advantage of ephemeral resources.
2) It notes that in the cloud, resources can be provisioned and deprovisioned quickly, unlike traditional data centers, allowing applications to scale up and down easily.
3) Monitoring dynamic cloud environments poses unique challenges because infrastructure components like containers may have extremely short lifecycles, appearing and disappearing rapidly, requiring monitoring tools that can track ephemeral resources and their lifecycles.
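Tracking ephemeral resources, as point 3 describes, means folding a stream of lifecycle events into per-resource lifetimes rather than polling long-lived servers. A minimal sketch (the event format is hypothetical; real systems would consume something like `docker events` or an orchestrator's API):

```python
def container_lifetimes(events):
    """Fold (timestamp_s, container_id, action) events, where action is
    "start" or "stop", into per-container lifetimes in seconds.
    Containers that are still running map to None."""
    started = {}
    lifetimes = {}
    for ts, cid, action in events:
        if action == "start":
            started[cid] = ts
            lifetimes[cid] = None  # running until we see a stop
        elif action == "stop" and cid in started:
            lifetimes[cid] = ts - started.pop(cid)
    return lifetimes
```

Even this toy version shows why server-centric monitoring fails here: a container that lived 30 seconds never appears in an hourly host inventory, but its lifecycle events are still fully recoverable.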
Velocity - cloudy with a chance of scalingLee Atchison
The document discusses techniques for achieving high availability in cloud applications. It provides an overview of key concepts like maintaining redundancy, handling failures, and ensuring recovery plans are robust. Examples are given to illustrate the importance of anticipating different failure modes and dependencies to "stay two mistakes high." The space shuttle software system is presented as an example of a highly redundant and recoverable system through its use of multiple independent computing units and deadlock handling.
Cloud Expo (Keynote) - Static vs DynamicLee Atchison
The document discusses how cloud computing provides a "better data center" that allows for faster provisioning of resources and improved application availability through redundancy. It also describes how the cloud can function as a "dynamic tool" that allows applications to dynamically allocate and deallocate resources as needed. Effective monitoring of cloud applications requires solutions like New Relic that can monitor application performance in addition to lower-level infrastructure metrics provided by AWS CloudWatch. Together these solutions provide full-stack visibility of dynamic cloud environments.
This document discusses the importance of planning for failures when building highly available, scalable applications. It uses the analogy of "flying two mistakes high" when piloting radio controlled planes to emphasize that systems should be designed to handle at least two failures without crashing. The document provides examples of how extra capacity is needed to maintain availability during failures like node outages, rolling upgrades, and unknown dependencies between infrastructure components. It stresses the need to thoroughly analyze all potential failure modes and ensure recovery plans are robust enough to handle compounding issues.
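The "two mistakes high" capacity argument above has a simple arithmetic core: size the fleet for peak load, then add enough headroom to absorb simultaneous failures. A hypothetical N+M sizing helper (names and numbers are illustrative):

```python
import math

def fleet_size_for(peak_load_rps: float, per_node_rps: float,
                   failures_to_survive: int = 2) -> int:
    """N+M sizing sketch: enough nodes to carry peak load even after
    `failures_to_survive` simultaneous node losses — i.e., staying
    "two mistakes high" by default."""
    needed = math.ceil(peak_load_rps / per_node_rps)
    return needed + failures_to_survive
```

So a service that needs 4 nodes at peak runs 6, and a rolling upgrade (one node down by design) still leaves one mistake's worth of margin.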
AWS Summit Sydney: Life’s Too Short...for Cloud without AnalyticsLee Atchison
The document discusses monitoring applications in dynamic cloud environments. It notes that traditional server monitoring is insufficient for dynamic cloud applications that use technologies like Docker containers and AWS Lambda. It advocates monitoring the full stack, from code to AWS services, to gain accountability. New Relic monitoring is presented as enabling this type of full stack visibility for applications using dynamic cloud technologies. Monitoring needs to focus on application performance and lifecycles rather than just servers. The rate of change is increasing, so past monitoring approaches will not work in the future.
5 keys to high availability applicationsLee Atchison
The document discusses 5 keys to building high availability web applications: 1) develop applications with availability in mind by anticipating failures, 2) always plan for scaling to increasing traffic, 3) mitigate risks through redundancy, fallback mechanisms, and rapid failure detection, 4) monitor applications to establish baselines and detect anomalies, and 5) ensure responsive availability through incident response processes, alerting, and escalation procedures.
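Key 3 above (redundancy, fallback mechanisms, rapid failure detection) is commonly implemented as retry-with-backoff plus graceful degradation. A minimal sketch, with illustrative names and defaults:

```python
import time

def call_with_fallback(primary, fallback, attempts: int = 3,
                       base_delay_s: float = 0.1):
    """Mitigation sketch: retry a flaky dependency with exponential
    backoff; if it never succeeds, degrade gracefully to a fallback
    (e.g., cached or default data) instead of failing the request."""
    for attempt in range(attempts):
        try:
            return primary()
        except Exception:
            if attempt < attempts - 1:
                time.sleep(base_delay_s * (2 ** attempt))
    return fallback()
```

The design choice worth noting: the fallback makes failure a degraded answer rather than an error page, which is exactly the "anticipate failures" mindset of key 1.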
This document discusses strategies for cloud adoption. It outlines typical progressions that companies follow when adopting the cloud, from experimenting with non-critical services to fully mandating cloud usage. It also discusses parallel progressions that application teams follow, from using peripheral cloud services to building applications committed to unique cloud capabilities. The document emphasizes that different companies and applications will progress at different speeds and have different needs. It provides strategies for successful cloud adoption, including understanding one's culture and needs, monitoring adoption, and driving cultural change. It also discusses how AWS CloudWatch and New Relic can work together to provide monitoring of infrastructure and applications in the cloud.
2. Who am I?
30 years in industry
5 in New Relic (Architect Lead, Cloud, Service Migration)
7 in Amazon Retail & AWS (built first AppStore, AWS Elastic Beanstalk)
Specializes in:
• Cloud computing
• Services & microservices
• Scalability, availability
leeatchison@leeatchison
Senior Director Strategic Architecture
9. Keeping Your App Running…At Scale
Availability… …is more than you think it is.
10. Does this sound like something you’ve heard recently…
…an overheard Ops conversation...
11. The conversation…
“We were wondering how changing a setting on our MySQL database might impact our performance…
12. The conversation…
… but we were worried that the change may cause our production database to fail…”
13. The “scary” overheard conversation…
“… Since we didn’t want to bring down production, we decided to make the change to our backup (replica, hot standby) database instead…
14. The “scary” overheard conversation…
… After all, it wasn’t being used for anything at the moment.”
15. The “scary” overheard conversation…
Until, of course, the backup was needed…
16. The “scary” overheard conversation…
This was a true story.
25. Need Data at Every Level
Browser / Mobile
Typical server / Amazon EC2 instance:
• Application & application microservices
• Server OS
• Hardware (virtual)
26. Low Level Monitoring
Amazon CloudWatch (via the AWS Console) monitors:
• EC2 instance
• Virtualization
• Hardware (CPU / Disk / Networking)
Doesn’t know about:
• Server OS (memory / filesystem, processes, configuration)
• Application (latency, error rates)
27. Infrastructure / Application Monitoring
New Relic Infrastructure monitors (server):
• How the OS is performing
• Configuration changes
• Processes
• Hardware
New Relic Application Monitoring monitors (application):
• App health
• App performance
• Microservices
Doesn’t know about:
• Virtualization
28. Full Stack Monitoring
New Relic monitors the application and server; CloudWatch monitors the AWS layer; integrations bring both into shared dashboards.
AWS / CloudWatch:
• Visibility into virtualization
• CPU / Disk / Networking
• 14 AWS services
APM & Infrastructure:
• CPU / Disk / Networking
• Memory / Filesystem
• Processes (infrastructure components, configuration inventory)
• Application / microservices: latency, error rates, app insights
35. Cloud as a “Better Data Center”
• Resources are allocated to uses, just like in a data center
• Provisioning process is faster
• Lifetime of components is relatively long
• Capacity planning is still important and still applies
36. Why use a “Better Data Center”?
• Add new capacity (faster)
• Improve application availability (redundancy)
• Compliance
38. Cloud as a “Dynamic Tool for Dynamic Apps”
• Use only the resources you need
• Allocate / de-allocate resources on the fly
• Resource allocation is an integral part of your application architecture
39. Dynamic Cloud
Resources are: allocated, consumed, de-allocated.
Application in charge: the application is aware of, and controls, traditional Ops resources.
41. Dynamic Usage Example…
[Chart: Docker container age by minute and hour; container count per age bucket. 11% of containers live under one minute.]
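The chart's headline number can be reproduced from raw lifetime data. A minimal Python sketch (the lifetimes below are synthetic; the 11%-under-one-minute figure comes from the slide, not from real data here):

```python
# Sketch: what fraction of container lifetimes fall under one minute?
# Synthetic data for illustration; real data would come from your
# monitoring system's container start/stop events.

def fraction_under_one_minute(lifetimes_seconds):
    """Return the share of containers that lived less than 60 seconds."""
    if not lifetimes_seconds:
        return 0.0
    short = sum(1 for age in lifetimes_seconds if age < 60)
    return short / len(lifetimes_seconds)

if __name__ == "__main__":
    # 11 of 100 synthetic containers die within a minute
    ages = [30] * 11 + [3600] * 89
    print(f"{fraction_under_one_minute(ages):.0%} under one minute")
```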
42. Dynamic Cloud Technologies
Dynamic Cloud is about scaling and availability:
• EC2 Auto Scaling
• Mobile / IoT
• Dynamic routing
• Load balancing
• Queues and notifications
• Docker
43. Dynamic Cloud Enables Better Applications Faster
Traditional Data Center: good. Cloud Data Center: better. Dynamic Cloud: best.
The way you’ve done things in the past won’t work in the future.
44. Dynamic Cloud
Things happen faster because of…
• EC2: a server running applications / processes
• Docker: a process running a command
• Lambda: a function performing a task or operation
45. Microcomputing & AWS Lambda
• Highly dynamic
• Incredibly scalable
• No infrastructure to provision
• Massively shared infrastructure
Also known as:
• Functions as a Service (FaaS)
• Compute as a Service (CaaS)
• Serverless
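A Lambda function is, at its core, just a handler that receives an event and returns a result, with no server to provision. A minimal Python sketch (the event shape imitates an S3 object-created notification; the bucket and key names are invented, and we invoke the handler locally rather than in AWS):

```python
# Minimal AWS Lambda-style handler. In AWS, Lambda calls
# handler(event, context); here we call it locally with a fake
# S3-style event to show the shape.

def handler(event, context=None):
    """React to S3 object-created records carried in the event."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real code would fetch/transform the object here.
        processed.append(f"{bucket}/{key}")
    return {"processed": processed}

if __name__ == "__main__":
    fake_event = {
        "Records": [
            {"s3": {"bucket": {"name": "demo-bucket"},
                    "object": {"key": "uploads/photo.jpg"}}}
        ]
    }
    print(handler(fake_event))
```

Each concurrent trigger gets its own handler invocation, which is what gives Lambda its automatic scaling.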
51. Dynamic Cloud has unique monitoring requirements…
How do I track what the dynamic cloud is doing for me (or to me)?
52. What is a Dynamic Cloud Application?
You are responsible for the parts you care about:
• Application & application microservices
Let the cloud manage the rest:
• Infrastructure
• Allocation / provisioning
• Scaling
53. Monitoring Dynamic Cloud Applications
The AWS Console and CloudWatch monitor the cloud infrastructure beneath the application stack.
54. Monitoring Dynamic Cloud Applications
Application performance: New Relic Application and Infrastructure Monitoring. AWS infrastructure: CloudWatch and the AWS Console. Integrations feed both into shared dashboards.
55. Monitoring Dynamic Cloud Applications
New Relic monitors the application stack; CloudWatch & AWS monitor the infrastructure beneath it. Integrations tie the two together in shared dashboards.
56. How do you monitor this?
How do you monitor the provisioning layer itself, where resources come and go?
57. Where did it go? It was just here!!
The thing you monitored 10 minutes ago… ...doesn’t exist anymore!?
58. Monitoring the Dynamic Cloud
• Monitor the cloud components themselves
• Monitor the lifecycle of the cloud components
Very different from monitoring traditional data center components.
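Monitoring the lifecycle means recording when each resource existed, not just what exists right now, so you can later ask “what was running when the problem happened?” A toy sketch (the ledger class, resource names, and timestamps are all invented for illustration):

```python
# Toy lifecycle ledger: record create/terminate times for transient
# resources, then query what was alive at any past instant - even for
# resources that no longer exist.

class LifecycleLedger:
    def __init__(self):
        self._resources = {}  # id -> (created_at, terminated_at or None)

    def created(self, rid, t):
        self._resources[rid] = (t, None)

    def terminated(self, rid, t):
        created_at, _ = self._resources[rid]
        self._resources[rid] = (created_at, t)

    def alive_at(self, t):
        """Resources that existed at time t, even if long gone now."""
        return sorted(
            rid for rid, (start, end) in self._resources.items()
            if start <= t and (end is None or t < end)
        )

if __name__ == "__main__":
    ledger = LifecycleLedger()
    ledger.created("container-a", t=100)
    ledger.created("container-b", t=120)
    ledger.terminated("container-a", t=150)  # lived only 50 seconds
    print(ledger.alive_at(130))  # both containers were alive
    print(ledger.alive_at(200))  # only container-b remains
```

The point of the sketch: diagnosing an incident from a few minutes ago requires this historical view, because the resources in use then are not the resources in use now.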
61. Changing World
Now: a DYNAMIC world for Dev and Ops.
• We know: change is inevitable
• We must: embrace and drive change
• Enabling: quicker growth, more reliable growth
62. Keeping Your App Running…At Scale
Dynamic Cloud… …makes availability happen.
Migration… …how do I get my app to the cloud?
67. Progressions in Cloud Adoption: Experiment
“What is this cloud thing?”
• Non-invasive, safe technologies: S3; perhaps CloudFront, SQS, SES
• Stay away from EC2 / servers
• Security: easy as one-offs
• No “policies” implemented yet
• “Just seeing what this is all about”
69. Progressions in Cloud Adoption: Secure the Cloud
“Can we trust the cloud?”
• IAM (credentials)
• VPC (secure network)
• AWS Direct Connect (just another data center)
• Cloud policies begin to be formed
• All parts of the company are now involved
• A critical evolution point
71. Progressions in Cloud Adoption: Enable Servers, Enable SaaS
“The cloud seems to work pretty well…”
• EC2: basic “data center migration”; just another server type available
• Multiple AZs / regions: part of a multi-datacenter resiliency strategy
• Independently: SaaS usage increases (non-critical or internal uses first)
77. Progressions in Cloud Adoption: Mandate Cloud Usage
“Why do we need our own data centers?”
• Cloud as a data center replacement
• Company is now “all in” with cloud (Netflix…)
78. Progressions in Cloud Adoption: The steps aren’t easy…
• What is the cloud?
• Can we trust the cloud?
• The cloud works pretty well…
• Dynamic Cloud becomes a thing…
• Dynamic Cloud is deeply ingrained…
• Why do we need our own data centers?
79. Progressions in Cloud Adoption
• Experiment
• Secure the Cloud
• Enable Servers, Enable SaaS
• Enable Value-Added Services
• Enable Unique Services
• Mandate Cloud Usage
Different companies, different speeds, different needs.
86. Adoption Success Strategies
• Understand where your culture is
• Understand your needs
• Consciously plan your acceptance
• Drive your cultural change to your desired level
• Monitor your adoption
87. Monitor Your Adoption: Before Migration
• Baseline your application (servers, databases, caches, applications, microservices)
• Determine your steady state
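Determining a steady state can start as simply as recording a per-metric baseline (mean and spread) and flagging later samples that fall outside it. A hedged sketch (the three-sigma threshold and all numbers are illustrative, not any particular monitoring product's method):

```python
# Sketch: baseline a metric pre-migration, then flag post-migration
# samples that deviate from steady state. Pure illustration; real
# baselining would draw on your monitoring tool's historical data.
from statistics import mean, stdev

def baseline(samples):
    """Summarize pre-migration samples as a steady-state baseline."""
    return {"mean": mean(samples), "stdev": stdev(samples)}

def deviates(sample, base, n_sigma=3.0):
    """True if a sample falls outside n_sigma standard deviations."""
    return abs(sample - base["mean"]) > n_sigma * base["stdev"]

if __name__ == "__main__":
    pre_migration_latency_ms = [98, 102, 101, 99, 100, 103, 97, 100]
    base = baseline(pre_migration_latency_ms)
    for sample in [101, 105, 250]:
        status = "DEVIATION" if deviates(sample, base) else "ok"
        print(f"{sample} ms: {status}")
```

As the next slide notes, every deviation found this way should be understood and solved before moving on.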
88. Monitor Your Adoption: During Migration
• Incorporate the cloud’s internal monitoring
• Continue application monitoring
Understand and solve all deviations from steady state…
89. The Biggest Role Monitoring Plays in Migration
• Pre-migration: feasibility & benchmarking
• Post-migration & during optimization: performance
90. Monitor Your Adoption: Continue Monitoring…
• Infrastructure is now out of your control
• Some cloud-specific concerns (EC2 instance failures, instance degradation)
• Dynamic technologies impact our applications: understand the application impact
• Ongoing application & infrastructure monitoring is essential
91. Fairfax Media Limited is a leading multi-platform media company in Australasia, reaching 10.6 million Australians and 2.9 million New Zealanders. (Media / Entertainment)
“Because we monitored our on-premises systems with New Relic before we migrated them to Amazon Web Services, we were able to identify potential issues and fix them during the migration process.”
- Cheesun Choong, Head of Product Platforms
Results:
• Reduced diagnosis time from hours to minutes
• Migrated to AWS with confidence
• Identified underutilized servers to save money
92. Keeping Your App Running…At Scale
Dynamic Cloud… …makes availability happen.
Migration… …how do I get my app to the cloud?
Availability… …is more than you think it is.
Monitor your application and infrastructure.
93. Monitoring just the server
An EC2 instance (application & application microservices, server OS, virtual hardware) monitored via the AWS Console and CloudWatch.
Worked when the rate of change was low…
95. Full Stack Monitoring
You need:
• Top to bottom monitoring…
• Full stack accountability...
• Dynamic infrastructure control...
New Relic Application and Infrastructure Monitoring, feeding shared dashboards, cover the full dynamic stack.
96. Digital Fan Experience for Major League Baseball
“New Relic empowers our developers to experiment and work fast without compromising on the quality of the MLB fan experience.”
– Sean Curtis, Senior Vice President of Engineering
99. Change is speeding up
Traditional Data Center: good. Cloud Data Center: better. Dynamic Cloud: best.
Dynamic Cloud enables better applications faster. The way you’ve done things in the past won’t work in the future.
100. Full Stack Monitoring
New Relic Application and Infrastructure Monitoring, feeding shared dashboards, across the full stack: browser / mobile, application & microservices, provisioning, and the underlying server layers.
101. Thank you
Lee Atchison ∙ Senior Director Strategic Architecture, New Relic
Architecting for Scale, by Lee Atchison. Published by O’Reilly Media.
www.architectingforscale.com
leeatchison@leeatchison
102. This document and the information herein (including any information that may be incorporated by reference) is provided for informational
purposes only and should not be construed as an offer, commitment, promise or obligation on behalf of New Relic, Inc. (“New Relic”) to sell
securities or deliver any product, material, code, functionality, or other feature. Any information provided hereby is proprietary to New Relic and
may not be replicated or disclosed without New Relic’s express written permission.
Such information may contain forward-looking statements within the meaning of federal securities laws. Any statement that is not a historical fact
or refers to expectations, projections, future plans, objectives, estimates, goals, or other characterizations of future events is a forward-looking
statement. These forward-looking statements can often be identified as such because the context of the statement will include words such as
“believes,” “anticipates,” “expects” or words of similar import.
Actual results may differ materially from those expressed in these forward-looking statements, which speak only as of the date hereof, and are
subject to change at any time without notice. Existing and prospective investors, customers and other third parties transacting business with New
Relic are cautioned not to place undue reliance on this forward-looking information. The achievement or success of the matters covered by such
forward-looking statements are based on New Relic’s current assumptions, expectations, and beliefs and are subject to substantial risks,
uncertainties, assumptions, and changes in circumstances that may cause the actual results, performance, or achievements to differ materially
from those expressed or implied in any forward-looking statement. Further information on factors that could affect such forward-looking
statements is included in the filings we make with the SEC from time to time. Copies of these documents may be obtained by visiting New Relic’s
Investor Relations website at http://ir.newrelic.com or the SEC’s website at www.sec.gov.
New Relic assumes no obligation and does not intend to update these forward-looking statements, except as required by law. New Relic makes no
warranties, expressed or implied, in this document or otherwise, with respect to the information provided.
Safe Harbor
Editor's Notes
Dynamic Infrastructure and The Cloud: Adventures in Keeping Your Application Running…at Scale
AWS Summit - Sydney, Australia
Lee Atchison ∙ Senior Director Strategic Architecture at New Relic, Inc.
I’d like to tell you a story. Does this story sound familiar to you?
It’s Sunday.
The day of the big game.
You’ve invited 20 of your closest friends over to watch the game on your new 300” ultra max TV.
Everyone has come, your house is full of snacks and beer. Everyone is laughing. The game is about to start.
And…
…the lights go out……the TV goes dark……the game, for you and your friends, is over.
Obviously disappointed, you wonder: what happened?
You decide to pick up the phone and call the local power company.
The representative, unsympathetically, says: “We’re sorry, but we only guarantee 95% availability of our power grid.”
They could not understand why you were complaining, after all you had power “most of the time”.
Why is availability important?
* Because your customers expect your service to work…all the time.
* Anything less than 100% availability can be catastrophic to your business.
A hope and a prayer…
Laugh at it, but more people do this than you might expect.
Keeping your application running is possible. I will discuss three points to making it happen.
First, availability…is more than you think it is...
I want to tell you about an overheard OPs conversation. I want you to tell me if this sounds like something you’ve heard yourself in your OPs organizations…
We were wondering how changing a setting on our MySQL database might impact our performance…
… but we were worried that the change may cause our production database to fail…
… Since we didn’t want to bring down production, we decided to make the change to our backup (replica) database instead…
… After all, it wasn’t being used for anything at the moment.
Until…of course...the backup was needed...
Does this story sound familiar? This exact story is a true story, and unfortunately is not uncommon.
Availability issues such as I described here may seem obvious…but many are much more subtle. For example...
Imagine we are an e-commerce website. We’ve got a mobile app that can purchase items in our shop. Bob uses his phone, buys something, and it takes 300ms. That’s great! Sally logs in, buys something, but the database is slow. It takes much longer. She is not a happy customer.
Availability is not just whether a page responds, but how long it takes to respond.
The customer doesn’t care why a problem occurred, they don’t care why your app is slow. If it doesn’t meet their expectations at a time they expect, nothing else matters…
But keeping your application available can be tough. It may be fuzzy. Performance may be good for some users, and bad for others. But, can you even detect this, or do you just show that, on average, your site is doing fine?
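A tiny example of why averaged or sampled data hides this fuzziness: with 2% of requests very slow, the mean barely moves while the 99th percentile exposes the tail (all numbers are invented for illustration):

```python
# Sketch: why averages mislead. 2% of requests are very slow; the mean
# barely moves, but the p99 exposes the problem.
from statistics import mean

def percentile(values, pct):
    """Nearest-rank percentile (pct in 0..100)."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

if __name__ == "__main__":
    # 980 fast requests at ~300 ms, 20 slow ones at 5 seconds
    latencies_ms = [300] * 980 + [5000] * 20
    print(f"mean: {mean(latencies_ms):.0f} ms")        # looks fine
    print(f"p99:  {percentile(latencies_ms, 99)} ms")  # reveals the tail
```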
The real answer to how your application is doing is not a hope and a wish. It’s in the details. It’s in the data.
Modern application monitoring can’t be done by simply looking from the outside in. It can’t be done with averaged or sampled data. You must collect data from all areas of your application, and from all transactions. You must collect tons and tons of data.
---
In fact, you typically need to collect more monitoring data than data that is within your application. And it grows continuously, every day, every second. Everything that anyone does on your application, generates performance data.
If anybody is using your application, you must collect data about exactly how they are using it and how the infrastructure behind it works together. All of it is important.
All parts of your application, from your servers through your apps, to the business outcomes they represent, all generate data that you must analyze together.
So, you expect your site to be up. And when it is down, what do you do? Do you look to attach blame? No, you want to find the problem.
You want to know what happened.
To know what happened, we need data. We need data from every level of our application. Here is a typical, simple web application. It consists of an application and some services, servers running an operating system, and the virtual hardware that all of that runs on. Parts of it may also run in our customers’ browsers, or in their mobile applications.
Often people think that all they need is low level virtual hardware monitoring. They monitor their instances using tools like CloudWatch. But CloudWatch provides a very limited view of the world. You get virtual hardware level information, but that’s about it. You don’t even get information about the operating system, memory, processes, or system configuration. And you absolutely get no information about your application.
To know how your application is really performing. You need an application performance monitoring tool. You also need to know how the rest of your infrastructure is running (the operating system for instance). You also need to know how your remote application, such as those running on mobile devices or your customer’s browsers are running.
To monitor the application, you need full stack performance monitoring.
Because if you don’t monitor the data you need, at the time you need it, you’ll:
1) Waste time firefighting, 2) Point fingers meaninglessly across teams, 3) Lose money, 4) Make customers unhappy, 5) …and unhappy customers tell other people…
You also need the right data. You need to know how your application is performing, to answer questions as simple as, “Am I actually open for business?”. But you also want to know how easy it is for your customers to make use of your application. What is their experience? And you need to know how your business is doing.
You need to monitor the right components…and you need to monitor the right data.
Success involves all three types of analytics. Is the software working? Is it meeting the customer’s needs? Is it meeting your business needs? All of these three things are interconnected.
Because, avoiding this is critical to every business.
Point 2, there are technologies that can help you keep your application running…technologies such as the dynamic cloud. How do I mean? Let’s take a look.
How can the cloud help? Well, it turns out that there are two fundamental ways people make use of the cloud. The first is to use the cloud as a “Better Data Center”. The second is to use the “Dynamic Nature” of the cloud to build better apps faster. I’m going to talk about each of these methods.
Let’s first look at using the cloud as a “Better Data Center”.
What do I mean by using the cloud as a “Better Data Center”? I mean:
* Resources are allocated to uses, just like in a regular data center
* The provisioning process for new resources, though, is significantly faster
* The lifetime of the resources you create is relatively long…usually measured in days, weeks, months, or years
* However, even with a faster provisioning process, traditional “capacity planning” is still important and still applies
Why would we want to use the cloud simply as a “better data center”? What are the benefits to us building applications? Since we can add new capacity faster, we can build and scale our applications easier in the cloud. In addition to adding servers easier and quicker, we can add entire new data centers easier, which can improve our application availability and redundancy. Additionally, this ability to add additional data centers can improve our compliance, especially when it comes to things like EU Safe Harbor laws.
So, now, let’s switch to talking about using the cloud in a dynamic environment.
What do I mean by using the cloud as a “dynamic tool for dynamic applications”? I mean:
* Use only the resources you need
* Allocate and deallocate resources on the fly
* Resource allocation becomes an integral part of your application architecture
In a dynamic application, resources are allocated, consumed, and deallocated on the fly. And the application is aware of and is controlling this management of resources. The application is essentially performing traditional OPs resource management tasks.
New Relic did an analysis recently about how our customers are making use of Docker. The question we wanted to answer was, how long do Docker containers live? This diagram shows the answer to that question. The horizontal axis is the number of hours a Docker container has lived for, and the vertical axis is the number of containers in that time bucket. As you can see, there is a long tail, with some Docker containers running for well over a year. However, there is a huge number of Docker containers that run for less than one hour. In fact, if we zoom in on just that one hour time period…
we can see that many Docker containers actually run for less than one minute! Over 11% of all Docker containers we run will run for less than 60 seconds.
This is some customer’s application or service, some business logic, that starts up, runs, and shuts down all within 60 seconds. This is very rapid. These are containers that are launched only for a specific business purpose and are terminated when that purpose is completed. This is what we mean by dynamic infrastructure.
And there are lots of different cloud technologies that can be used in this dynamic manner…from queues to routing to auto scaled EC2 instances. Many resources in the cloud can be used in this dynamic fashion.
The dynamic cloud allows you to build better applications, faster. The way you’ve done things in the past won’t work in the future.
Change happens faster in the cloud. This is because of dynamic servers, dynamic infrastructure, and, more recently, the cloud is even more dynamic due to technologies such as AWS Lambda.
What is Lambda? Lambda is one of many technologies that implement what’s called “Functions as a Service” or “Compute as a Service”. You might also know it as “Serverless”, but that is not as accurate a description of it. Lambda allows creating microcomputing environments. This allows creating highly dynamic and incredibly scalable functions that can be executed without the need to provision any infrastructure whatsoever. They provide automatic scaling using a massively shared infrastructure.
In a nutshell, AWS Lambda simply takes an event from some AWS resource. This is called the “trigger”. This event can be something like an object being updated in an S3 bucket…or a database update in DynamoDB, or a call to an API Gateway. Some sort of event within the AWS ecosystem.
Lambda takes that event and creates an instance of a Lambda function, on the fly, that can process that event.
The processing is usually a very simple action...something like updating another object in S3, or responding to the API Gateway request...whatever action the lambda script was designed to execute in response to that trigger.
Any number of triggers can occur as fast as possible, and multiple instances of the lambda function will automatically be created to handle all of the concurrent events, instantly scaling the function to as many instances as is necessary to handle all events as quickly as possible. This automatic scaling is designed to be transparent to everyone, including the customer who created the script. This is the definition of near infinite scaling.
Building dynamic infrastructures in the cloud allows you to scale your applications better. It also allows you to make changes to your application faster and easier. Both of these ultimately result in higher availability…
But only if you know what your application is actually doing…
This brings up an interesting concern. In a dynamic cloud, you have dynamic resources. Resources that are coming and going fast. Instances are starting and stopping. Containers are coming and going. And functions are executing and terminating.
If resources are coming and going so fast, how can you monitor them? How do you monitor a dynamic application in a dynamic cloud?
Here is an example of a dynamic application. It looks much like the static application. It might have more services and microservices that compose the application, this is typical of a more modern application.
We still have AWS CloudWatch monitoring the low level cloud infrastructure.
And we still have traditional application performance monitoring that monitors the static nature of the application components.
Overall, this provides **almost** top to bottom monitoring of the entire application.
But what about this piece? How do you monitor the provisioning process itself? Given that resources are coming and going regularly, how do you monitor that?
How do you monitor components that are there one moment, but less than 60 seconds later, they are gone?
Remember the docker information…
It turns out that monitoring a dynamic application in a dynamic cloud is very different than monitoring traditional data center components.
You must of course still monitor each of the cloud components themselves…each of the services and resources and components that make up your application.
But you also must monitor the lifecycle of the cloud components. This is because it matters not only **that** a resource was used, it matters **when** that resource was used. Because just looking at the resources running right now is inadequate when trying to diagnose a problem from even a few minutes ago. The resources that were in use when the problem occurred are **not** the same resources in use now.
So, in the old world, your operations team was comfortable. They knew the resources they controlled, they created them, they managed them. All was simple and manageable.
But in this new world, resources are created and destroyed dynamically. The world of the operations team can no longer be as simple as tracking resources on a spreadsheet. The resources they are responsible for are dynamic and transient. Their world has gotten a lot more complicated.
This change is inevitable. The change is needed because our customers are expecting more and more from our applications. The change is needed because our customers are expecting better and more reliable performance from our applications. The change is inevitable because to meet the needs of our customers, our organizations must grow quickly and build applications that are more reliable than ever before.
The cloud helps achieve this, and this more and more the reason why moving to the cloud is so important for us.
The third point, is getting to the cloud. Migrating to the cloud is easy, right?
How do we move to the cloud? Often, we start our migration to the cloud with lofty expectations. But we find out that moving to the cloud isn’t necessarily as easy as we would like it to be. Problems occur. The cloud doesn’t meet the expectations that were promised to us. There is pressure to declare “victory” before we are ready. Promised performance gains are not occurring. Costs run out of control. And schedules just don’t matter anymore. How can we meet our promises to our stakeholders if we can’t get the cloud to do what we want it to do? Most companies moving to the cloud struggle with this. Some struggle more than others. Some fail to overcome the struggle.
But moving to the cloud does not have to be scary or dangerous. It can be done safely, but you must be willing to learn as you go. Learn and adapt the cloud to meet your company’s needs, and learn and adapt your expectations to the reality of what the cloud can offer.
Let’s take a look at how most enterprises figure out how to migrate to the cloud. There are six *typical* steps that most companies take to move to the cloud.
They don’t all use all the steps. Some stop part way up the path.
Some skip steps.
But this is typical…
Let’s look at each of these in turn.
Let’s start with “Experiment”.
This is the first, tentative step into the cloud. It involves using safe technologies. Technologies that we can use in simple and subtle ways in parts of our applications that may be less critical.
There are no cloud policies created. We just build one off implementations to see how the cloud can fit into our needs.
Most companies have at least started on this step.
After you’ve done some basic “feet wetting” in the cloud, security typically becomes a concern.
Critical evolution point in the company’s culture
…all disciplines in the company are involved (Legal, Finance, Security)
…companies that can’t get past this point, can’t be successful in the cloud
Once policies are in place and the cloud can be trusted…you start using other features the cloud has to offer.
Three choices:
...1) Put some workloads in the cloud, some in your own data center
...2) Resiliency - additional data center(s)
...3) Move applications to the cloud, out of existing data centers
Independently: SaaS usage increases (internal apps first)
Now the cloud is important to you, so you start to see what else the cloud can do for you.
”Managed Services”
Now, we start looking at cloud native services…services only available in the cloud.
Point of commitment…now dependent on the cloud
So now we are committed to the cloud…now comes the last step. Mandated use.
Mandate use of the cloud
>>>Typically wanting to get out of the data center business
Netflix, etc
The steps aren’t easy…
But ultimately, these are the steps involved.
Different companies go thru these steps at different speeds.
Different companies find the right “stopping point” that matches their needs
While these are the steps our *company* may go thru.
As we build new and migrate existing applications, our applications go thru a similar learning process…
How can a given application take advantage of the cloud?
This adoption may happen faster or slower for different types of applications.
Let’s take a look at these as two different axes on a chart.
Corporate adoption process on the left, application adoption process on the bottom
Another way to look at this: based on application types and requirements...
So we can see we are more likely to use the “newer” technologies, such as Lambda, in new applications. But we are much less willing to use these technologies in our more business critical applications.
There exists a sweet spot…
>Corporate adoption is strong, but not “mandated”
>Application adoption is strong, but not “committed”
*This is the destination for a lot of companies and applications
Very near some of the common, core AWS services
So, that’s all great data. I know I need to move to the cloud to keep my company moving forward. But what about the nuts and bolts? What should *I* do to be successful in moving to the cloud?
How can I make sure a cloud migration is successful?
Understand where your culture is
Risk tolerance, Cloud commitment, Expertise
Understand your needs
Redundancy? Cost? New Opportunity?
Consciously plan your acceptance
What level are you?
What level do you need to be?
Drive your culture to where you feel you need to be
Monitor your adoption
Before migration
Baseline application
Servers
Databases
Caches
Applications
Microservices
Determine your steady state
Important before you migrate!
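The baselining step above can be sketched in a few lines. A minimal example (metric names and sample values are hypothetical), assuming steady state is captured as a per-metric mean and standard deviation:

```python
from statistics import mean, stdev

def steady_state(samples: dict) -> dict:
    """Reduce pre-migration metric samples to a (mean, stdev) baseline per metric."""
    return {name: (mean(vals), stdev(vals)) for name, vals in samples.items()}

# Hypothetical pre-migration samples for one service:
baseline = steady_state({
    "p95_latency_ms": [120.0, 118.0, 125.0, 121.0],
    "cpu_percent":    [42.0, 45.0, 40.0, 43.0],
})
```

In practice the samples would come from your monitoring tool (server, database, cache, and application metrics), collected over a long enough window to capture daily and weekly cycles.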
During migration
Incorporate the cloud’s internal monitoring
…provides cloud-specific infrastructure monitoring
…AWS CloudWatch
Continue application monitoring
*Here, looking for performance deviations from steady state
Track down & explain all deviations before moving on
Understand all deviations from norm
Solve problematic deviations/problems
Deviations in performance before and after migration give us clues to migration-related issues
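One way to sketch this deviation check (the threshold `k` is an assumption; tune it to your own tolerance): flag a post-migration reading when it falls more than k standard deviations from the pre-migration baseline.

```python
def deviates(value: float, baseline_mean: float, baseline_std: float, k: float = 3.0) -> bool:
    """Flag a post-migration reading more than k standard deviations from baseline."""
    return abs(value - baseline_mean) > k * baseline_std

# Hypothetical: pre-migration p95 latency averaged 121 ms with a stdev of 3 ms.
deviates(140.0, 121.0, 3.0)   # flagged: track down and explain before moving on
deviates(123.0, 121.0, 3.0)   # within the noise of steady state
```

Every flagged metric should be explained (expected change or real problem) before the migration proceeds.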
Continue monitoring post migration
Should understand: The infrastructure is now out of your control…you need to keep an eye on it
Cloud infrastructure changes can impact your application…you need to keep an eye on it
There are some cloud specific concerns:
EC2 instance failures
Must become a greater part of your availability plans
Often impacts other AWS systems as well
Instance degradation (more common than you’d think)
Ongoing application & infrastructure monitoring is essential
APM, Insights, Browser, Synthetics
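Instance degradation of the kind mentioned above is often caught by synthetic checks. A minimal sketch (the thresholds and probe-result format are hypothetical, not a New Relic API): each probe returns a latency in milliseconds, or None on timeout.

```python
def classify_instance(probe_latencies_ms: list, degraded_over_ms: float = 500.0) -> str:
    """Classify an instance from a window of synthetic-probe results.

    None entries are probes that timed out."""
    observed = [l for l in probe_latencies_ms if l is not None]
    if not observed:
        return "failed"    # every probe timed out: treat as an instance failure
    if len(observed) < len(probe_latencies_ms) or max(observed) > degraded_over_ms:
        return "degraded"  # intermittent timeouts or slow responses
    return "healthy"
```

In practice you would feed this from your synthetic monitoring results, page on "failed", and investigate "degraded" instances before they become outages.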
So, that’s the third point in keeping your application running at scale…successful cloud migration.
Together, these three points can keep your application highly available and running at scale.
And underlying all three is monitoring your application and your infrastructure.
It used to be, long ago, that all it took to make sure an application was running was to look at the server. Did the amount of CPU or memory utilization change recently? If it did, there might be a problem. Everything was static, everything was smooth. Everything was constant. A change indicated a problem.
But in this new world, resources are created and destroyed dynamically. The world of the operations team can no longer be as simple as tracking resources on a spreadsheet. The resources they are responsible for are dynamic and transient. Their world has gotten a lot more complicated.
In order to monitor your dynamic applications in the dynamic cloud, you must monitor all aspects of your application, top to bottom, using a full stack monitoring solution, a solution such as New Relic.
Dynamic applications require dynamic scaling and use of dynamic technologies.
(how many streams during each day?)
Our customers won’t stand by waiting for us to solve availability problems.
And panic is not the solution. Nor is blame.
The dynamic cloud has caused significant change to our world. Our world has sped up, and the rate of change in application development has increased. The cloud alone has sped things up, and the dynamic cloud has sped things up even more. The way you’ve done things in the past just won’t work in the future.
This is good…but it is also scary.