Temperature probes monitoring crops? Micro drones measuring wind speed in the atmosphere? You don't have to turn to these novel uses to see edge computing in action; look no further than the point-of-sale device at your local grocery store or the app on your mobile phone that lets you order a cup of coffee.
Edge computing is all about taking the timing-sensitive parts of your application and moving them closer to where they are needed. Whether that need is an end user or a source of interesting data, it's all the same thing.
What really is the edge, and how do we deal with it? How do we decide what computing should occur at the edge and what should occur in the cloud? How do you verify that your application is doing what it is expected to do? How do you know whether you are meeting your performance expectations at the edge? How do you maintain visibility into your entire application, whether it runs in the cloud or at the edge?
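The point-of-sale and coffee-ordering examples above come down to running small pieces of timing-sensitive logic close to the requester instead of in a central region. As a hedged illustration (not from the talk itself), here is a minimal Lambda@Edge-style CloudFront viewer-request handler in Python; the country header follows CloudFront's documented viewer-request event shape, while the `/eu` origin path and country list are illustrative assumptions:

```python
# Minimal sketch of edge logic: a CloudFront viewer-request handler
# (Lambda@Edge style) that makes a latency-sensitive routing decision
# at the edge, avoiding a round trip to the application's home region.
# The "cloudfront-viewer-country" header is populated by CloudFront
# when the distribution is configured to forward it.

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})

    # Route EU viewers to a regional origin path (illustrative rule).
    country = headers.get("cloudfront-viewer-country", [{}])[0].get("value")
    if country in ("DE", "FR", "NL"):
        request["uri"] = "/eu" + request["uri"]

    return request
```

A request from Germany for `/order` would be rewritten to `/eu/order` before it ever leaves the edge location; requests without the header pass through unchanged.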
The document discusses how scale enables security at Amazon Web Services. It notes that at AWS's level of scale, with a trillion requests per year, even extremely small failure rates mean multiple failures per day. However, turning problems into standardized issues allows for aggregation of solutions and increased investment, leading to large teams that can deliver security benefits for customers and the industry. Scale also enables approaches like network intrusion detection and IoT security.
[NEW LAUNCH!] Introduction to AWS Global Accelerator (NET330) - AWS re:Invent... - Amazon Web Services
This session introduces AWS Global Accelerator, a new global service that enables you to optimally route traffic to your multi-regional endpoints via static Anycast IP addresses announced from the expansive AWS edge network. This session walks through the various features and customer use cases for Global Accelerator. Several example use cases demonstrate how you can use Global Accelerator to achieve near-zero application downtime and reduce latency for your global applications. We walk you through the architecture and include a demo of the workflow. Attend this session if you are looking for ways to accelerate the performance of your global applications, achieve high availability for your mission-critical applications, or easily manage multiple IP addresses through a static Anycast IP that fronts your applications.
Run Production Workloads on Spot, Save up to 90% (CMP306-R1) - AWS re:Invent... - Amazon Web Services
Amazon EC2 Spot Instances let you use spare EC2 computing capacity at discounts of up to 90% compared to On-Demand prices. In this session, learn how to effectively harness Spot Instances for production workloads. We explore application requirements for using Spot Instances, best practices learned from thousands of customers, and the services that make them easy to use. Finally, we run through practical examples of how to use Spot for the most common production workloads, the common pitfalls customers run into, and how to avoid them.
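As a rough sketch of what requesting Spot capacity looks like in practice (not part of the session materials), the helper below builds `run_instances` parameters for a one-time Spot request with boto3; the AMI ID, instance type, and price ceiling in the usage comment are placeholders:

```python
# Sketch: building boto3 run_instances kwargs for a one-time EC2 Spot
# request. InstanceMarketOptions is the documented way to request Spot
# capacity through run_instances.

def spot_launch_params(ami_id, instance_type, max_price=None):
    """Return run_instances kwargs for a one-time Spot request."""
    spot_options = {
        "SpotInstanceType": "one-time",
        "InstanceInterruptionBehavior": "terminate",
    }
    if max_price is not None:
        # Optional price ceiling; if omitted, you pay the current Spot
        # price, capped at the On-Demand rate.
        spot_options["MaxPrice"] = str(max_price)
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "InstanceMarketOptions": {
            "MarketType": "spot",
            "SpotOptions": spot_options,
        },
    }

# Usage (placeholder AMI ID; requires AWS credentials):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.run_instances(**spot_launch_params("ami-0abcdef1234567890", "m5.large"))
```

Separating parameter construction from the API call keeps the Spot-specific settings testable without touching a live account.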
As a second-year developer who launched my first server in the AWS Seoul region, I share the process and lessons of moving a monolithic application to a microservices architecture with the help of AWS Fargate and other managed cloud services. MyMusicTaste also shares how it deployed its service across regions worldwide and how it easily handled common single-page application (SPA) issues such as SEO and CORS using Lambda@Edge.
Zendesk: Building a World-Class Cloud Center of Excellence (ENT309-S) - AWS r... - Amazon Web Services
When it comes to doing the cloud right, no one size fits all. Yet sometimes organizations become distracted by the day-to-day management of cost, security, and overall operations. They can lose sight of the reasons they chose to embrace the cloud in the first place. How can you possibly manage it all and stay focused on the business outcomes that are most important? In this session, learn how forming a Cloud Center of Excellence (CCoE) has become an increasingly common way to address many of these challenges. When implemented well, the CCoE acts as a bridge, connecting all departments that use, measure, or fund your cloud operation. This session is brought to you by AWS partner, CloudHealth Technologies.
Globalizing Player Accounts at Riot Games While Maintaining Availability (ARC... - Amazon Web Services
The Player Accounts team at Riot Games needed to consolidate the player account infrastructure and provide a single, global accounts system for the League of Legends player base. To do this, they migrated hundreds of millions of player accounts into a consolidated, globally replicated composite database cluster in AWS. This provided higher fault tolerance and lower-latency access to account data. In this talk, we discuss the effort to migrate eight disparate database clusters into AWS as a single composite database cluster replicated in four different AWS regions, provisioned with Terraform, and managed and operated with Ansible.
Advanced Serverless Data Processing (GPSWS406) - AWS re:Invent 2018 - Amazon Web Services
In this hands-on workshop, you learn best practices and architectural patterns for building streaming data processing pipelines without servers. Using Amazon Kinesis, AWS Lambda, and other services, you have the opportunity to build, deploy, and monitor an application to ingest and process high-velocity data at scale. This advanced workshop assumes that you have experience writing Lambda functions and understand the basics of the AWS serverless platform, so come ready to dive into the deep end. Bring your laptop with a full keyboard. We provide a sandbox AWS account for you to use during the workshop.
The document appears to be a technical paper on graph databases and Amazon Neptune. It discusses challenges in building applications with highly connected data and how Neptune can help by allowing storage and querying of graph data through traversal of relationships between entities. It provides examples of using Neptune to represent social network data and querying graph data through Gremlin.
2018 AWS DevDay Seoul community track - When a Developer Who Knows Nothing About Data Centers Meets MSA - Jueun Ahn
The document discusses microservices architecture (MSA) and related topics like container orchestration, Fargate, service discovery, CI/CD, secret management, X-ray, single-page applications (SPA), and Lambda@Edge. It covers these topics over multiple slides with copyright notices at the start of each. The agenda lists MSA, orchestration, service discovery, CI/CD, secret management, and SPA. The conclusion thanks attendees and asks them to participate in a post-session survey for a gift and to leave comments on social media.
Autonomous DevSecOps: Five Steps to a Self-Driving Cloud (ENT214-S) - AWS re:...Amazon Web Services
In this session, we outline the five levels of cloud operations automation, providing a clear path and maturity model for achieving security, compliance, and architecture best practices. Using real-world case studies from Fortune 100 enterprises, we demonstrate how secure AWS Landing Zones and policy-based, automated guardrails accelerate the safe migration and ongoing operation of hundreds of enterprise applications, putting your team on the road to DevSecOps maturity. This session is brought to you by AWS partner, Turbot HQ, Inc.
Cloud Ops Engineer: A Day in the Life (ENT312-R1) - AWS re:Invent 2018 - Amazon Web Services
Are you an expert data center operations engineer looking to sharpen your AWS skills? Are you an IT operations manager looking to speed up your team's cloud learning curve for operating in a hybrid cloud environment? Are you a DevOps engineer looking to grow your operations experience? This session follows two AWS operations experts throughout their day as they solve real-life problems in complex, enterprise hybrid cloud AWS environments. Expect to learn actionable hacks and tricks that you won't get in standard training classes, practical advice for solving common and not-so-common issues, and insights into the top things our experts wish they knew when they were getting started with AWS.
Meet Preston, and Explore Your Digital Twin in Virtual Reality (GPSTEC321) - ... - Amazon Web Services
Take a journey with Preston, an Amazon Sumerian host, to diagnose and explore a 3D printed jet engine in virtual reality. Preston follows your commands to control and explore different parts of the engine. Preston can also show you the future state of your machine in virtual reality and provide recommendations by analyzing data collected from a physical jet turbine engine using IoT sensors. Learn how to build your own virtual assistant with Amazon AI services, AWS IoT services, and Amazon Sumerian virtual reality.
About the event:
AWS Transformation Day is designed for enterprise organizations migrating to the cloud to become more responsive, agile and innovative, while staying secure and compliant. Join us for this one-day event and we’ll share our experiences of helping enterprise customers accelerate the pace of migration and adoption of strategic services.
Who should attend?
This event is recommended for IT and business leaders who are looking to create sustainable benefits and a competitive advantage by using the AWS Cloud: CIOs, CTOs, CISOs, CDOs, CFOs, IT leaders and IT professionals, enterprise developers, business decision makers, and finance executives.
The document appears to be a presentation from Amazon Web Services about serverless application development on AWS. It discusses various AWS services for building serverless applications like Lambda, API Gateway, DynamoDB, S3, and Step Functions. It provides examples of creating serverless APIs with Lambda and API Gateway and deploying serverless applications using the Serverless Application Model.
[REPEAT 1] Create and Publish AR, VR, and 3D Applications Using Amazon Sumeri... - Amazon Web Services
In this session, learn how Amazon Sumerian can help you create and run virtual reality (VR), augmented reality (AR), and 3D applications quickly and easily without requiring any specialized programming or 3D graphics expertise. Learn how you can use Sumerian to build highly immersive and interactive scenes that run on popular hardware, such as Oculus Go, Oculus Rift, Google Daydream, and HTC Vive as well as on Android and iOS mobile devices.
How to Automate Security Learning at Scale (ANT335-S) - AWS re:Invent 2018 - Amazon Web Services
The mountain of data generated by your deployment can be a valuable source of insight for your security practice IF you take advantage of some key tools on the AWS Cloud. In this session, learn how to build an analytics process that uses security tools, whether from AWS or the community, to create a continuous feedback loop to maintain and improve the security of your deployments. Additionally, learn when to use AWS security services and other tools in your deployment and how to manage the output, thus creating an analytics workflow alongside a feedback loop in order to streamline the entire process. The end result is an automated feedback loop aimed at making sure that your deployment is doing what you intend ... and only what you intend. This session is brought to you by AWS partner, Trend Micro.
Monitoring Serverless Applications (SRV303-S) - AWS re:Invent 2018 - Amazon Web Services
Serverless brings many advantages to software development, but it introduces new monitoring challenges as well. Isolated telemetry on individual functions might not provide enough visibility, and instrumentation in a world where 100 ms of extra execution time could cost thousands of dollars might prove prohibitive. In this session, we explore how New Relic enables full observability of the serverless stack, including its executing context, with minimal impact in performance. Learn from customer case studies and real-world examples. This session is brought to you by AWS partner, New Relic.
AWS IoT: enabling responsible water use - AWS Summit Cape Town 2018 - Amazon Web Services
Speaker: Clive Charlton, AWS
Level: 300
Day zero was the term coined by the City of Cape Town for the day they would have to turn the water off for citizens, potentially making Cape Town the first major city to run out of water. Thankfully day zero was avoided in 2018 but the danger still lingers. Learn how AWS empowers citizens to use water responsibly using AWS IoT, AWS Lambda, Amazon DynamoDB, Amazon S3, Amazon Cognito.
Predictive Scaling for More Responsive Applications (API330) - AWS re:Invent... - Amazon Web Services
Get a jump on traffic surges with Predictive Auto Scaling. AWS Auto Scaling now responds more quickly by analyzing past traffic trends. The new predictive capability looks at your incoming load and forecasts future demand. Not only can you see ahead of time when and how your resources will scale; your resources are also made available before they are needed, enabling faster, more responsive applications. Come learn how Genesys uses predictive scaling to scale the infrastructure that runs its popular contact center solution, PureCloud, worldwide.
Role of Central Teams in DevOps Organizations (DEV370) - AWS re:Invent 2018 - Amazon Web Services
You've migrated your business to the cloud. You've embraced DevOps. All your engineering teams operate the systems they write. You don't need central teams any longer ... or do you? In this talk, we discuss how Netflix keeps its product teams loosely coupled while maximizing the productivity and velocity that healthy central teams provide.
How AI is Reimagining Software, Environments, Apps, & Programmatic Interfaces... - Amazon Web Services
AI is at the center of a shift in how applications are developed and how users interact with data. Code that writes itself, voice enablement, computer vision, and activation by presence are some of the ways human-to-computer interfaces are evolving. In this chalk talk, we explore this evolution and show you how AWS AI services have impacted specific industries, such as manufacturing and development. Additionally, participants have an opportunity to get hands-on with an interactive art display powered by AWS AI services. Our goal is to show you how AI can help you reimagine your applications, services, interfaces, and business.
Internet of Things and Machine Learning: The Main Use Cases - Amazon Web Services
In this session, we take a deep look at the main use cases of organizations and companies that have made the Internet of Things and machine learning central to their daily activities and processes. We will see how these companies achieved greater operational efficiency and productivity, analyzing each use case in terms of business challenges, success metrics, return on investment (ROI), resources, and skills.
by Ben Moore, Sr. Product Manager, AWS
Now it is your turn, take the information you’ve learned over the day and create your own AR/VR scene. Amazon Sumerian team members will be available to answer your questions as you build your own scenes.
Are you a developer, operator, or business stakeholder who is banging your head against the wall because you cannot seem to deliver apps that make everyone happy? Have you thrown more tools, containers, or agile methods at this challenge than you can count? Have you tried adopting this thing called "DevOps" for your app delivery problems but still cannot fix your IT mayhem? Then this session is for you. Learn how putting people and process first can pave the way to using the cloud for modern application delivery. With AWS and Red Hat, developers and IT operations get an optimal platform and cloud environment for developing, deploying, and managing traditional and cloud-native applications with a streamlined DevOps process to end the headbanging.
Keynote: What Transformation Really Means for the Enterprise - Virtual Transf... - Amazon Web Services
AWS Transformation Day is designed for enterprise organizations looking to make the move to the cloud in order to become more responsive, agile and innovative, while still staying secure and compliant. Join us for this virtual event and we'll share our experiences of helping enterprise customers accelerate the pace of migration and adoption of strategic services.
We recommend this event for IT and business leaders who are looking to create sustainable benefits and a competitive advantage by using the AWS Cloud.
Speaker: Sandy Carter
At AWS, security is job zero. AWS has worked with global enterprises to meet their respective security requirements and has developed a broad portfolio of services to help customers run highly secure workloads in the cloud. This session will describe how Amazon has been managing security of the cloud at hyper-scale and adding new capabilities that help secure customer applications and data such as Inspector, GuardDuty, and Macie. Leave this session with a better understanding of how these services operate and how easy it is to integrate them into your secure cloud environment.
Presenter: Kurt Gray, Global Account Solutions Architect, AWS
Automating Compliance on AWS (HLC302-S-i) - AWS re:Invent 2018 - Amazon Web Services
Maintaining a compliant environment is critical for regulated industries such as healthcare, but with the advent of GDPR and other regional data privacy frameworks, compliance is becoming just another cost of doing business. In this session, we dive deep into how Cloudticity built the Cloudticity Oxygen managed services framework as an example of what a compliance framework looks like and how maintaining a compliant posture—and being able to prove that to auditors and regulators—can and should be a native part of your infrastructure. Compliance doesn't need to be something you add on. It should be deeply ingrained in your environment. Learn specific AWS services and techniques to track and maintain compliance in a fully automated manner directly from Cloudticity's founder. This session is brought to you by AWS partner, Cloudticity.
AWS Direct Connect: Deep Dive (NET403) - AWS re:Invent 2018 - Amazon Web Services
AWS Direct Connect provides a more consistent network experience for accessing your AWS resources, typically with greater bandwidth and reduced network costs. This session dives deep into the features of AWS Direct Connect, including public and private virtual Interfaces, Direct Connect Gateway, global access, local preference communities, and more.
Risk Management - Avoiding Availability Disasters in Service-based Applications - Lee Atchison
Bringing down an application is easy. All it takes is the failure of a single service and the entire set of services that make up the application can come crashing down like a house of cards. Just one minor error from a non-critical service can be disastrous to the entire application. There are, of course, many ways to prevent dependent services from failing. However, adding extra resiliency in non-critical services also adds complexity and cost, and sometimes it is not needed.
Application availability is best served by focusing your energies and processes on your most critical systems while working to minimize the impact of non-critical systems. Service Tiers are a way to accomplish this.
In this talk, we will learn what service tiers are and how they can be applied to service based applications. Then we will show how to utilize service tiers to keep your application available and functioning as designed. We will use example service definitions to illustrate how service tiers can help you keep your application working.
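The service-tier idea described above can be sketched in a few lines: failures in critical dependencies propagate to the caller, while non-critical dependencies degrade gracefully to a fallback. A minimal illustration, with the tier numbering and fallback values chosen purely for the example (not taken from the talk's own service definitions):

```python
# Sketch of service tiers: a tier-1 (critical) dependency failure must
# surface, while a tier-3 (non-critical) dependency failure degrades
# to a fallback so the overall application stays available.

CRITICAL = 1       # e.g., payment processing
NON_CRITICAL = 3   # e.g., recommendations widget

def call_with_tier(dependency, tier, fallback=None):
    """Invoke a dependency, degrading instead of failing for low tiers."""
    try:
        return dependency()
    except Exception:
        if tier == CRITICAL:
            raise  # a tier-1 failure must propagate to the caller
        return fallback  # non-critical: serve a degraded response
```

For example, a failing recommendations service called at `NON_CRITICAL` with `fallback=[]` renders an empty widget instead of taking the page down, while the same failure in a `CRITICAL` payment call still raises.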
The document discusses how modern applications require modern monitoring and processes to keep performing well. It notes that modern applications run on dynamic cloud infrastructures with constant change, requiring monitoring of business success, application performance, and customer experience. It emphasizes managing risk by understanding and mitigating risks rather than trying to eliminate them entirely. It also discusses how DevOps is a cultural change involving team-level responsibility and ownership. The presentation aims to explain how instrumentation, infrastructure management, risk management, and a DevOps culture can help keep modern applications running effectively.
Keynote: What Transformation Really Means for the Enterprise - Virtual Transf...Amazon Web Services
AWS Transformation Day is designed for enterprise organizations looking to make the move to the cloud in order to become more responsive, agile and innovative, while still staying secure and compliant. Join us for this virtual event and we'll share our experiences of helping enterprise customers accelerate the pace of migration and adoption of strategic services.
We recommend this event for IT and business leaders who are looking to create sustainable benefits and a competitive advantage by using the AWS Cloud.
Speaker: Sandy Carter
At AWS, security is job zero. AWS has worked with global enterprises to meet their respective security requirements and has developed a broad portfolio of services to help customers run highly secure workloads in the cloud. This session will describe how Amazon has been managing security of the cloud at hyper-scale and adding new capabilities that help secure customer applications and data such as Inspector, GuardDuty, and Macie. Leave this session with a better understanding of how these services operate and how easy it is to integrate them into your secure cloud environment.
Presenter: Kurt Gray, Global Account Solutions Architect, AWS
Automating Compliance on AWS (HLC302-S-i) - AWS re:Invent 2018Amazon Web Services
Maintaining a compliant environment is critical for regulated industries such as healthcare, but with the advent of GDPR and other regional data privacy frameworks, compliance is becoming just another cost of doing business. In this session, we dive deep into how Cloudticity built the Cloudticity Oxygen managed services framework as an example of what a compliance framework looks like and how maintaining a compliant posture—and being able to prove that to auditors and regulators—can and should be a native part of your infrastructure. Compliance doesn't need to be something you add on. It should be deeply ingrained in your environment. Learn specific AWS services and techniques to track and maintain compliance in a fully automated manner directly from Cloudticity's founder. This session is brought to you by AWS partner, Cloudticity.
AWS Direct Connect: Deep Dive (NET403) - AWS re:Invent 2018Amazon Web Services
AWS Direct Connect provides a more consistent network experience for accessing your AWS resources, typically with greater bandwidth and reduced network costs. This session dives deep into the features of AWS Direct Connect, including public and private virtual Interfaces, Direct Connect Gateway, global access, local preference communities, and more.
Risk Management - Avoiding Availability Disasters in Service-based ApplicationsLee Atchison
Bringing down an application is easy. All it takes is the failure of a single service and the entire set of services that make up the application can come crashing down like a house of cards. Just one minor error from a non-critical service can be disastrous to the entire application. There are, of course, many ways to prevent dependent services from failing. However, adding extra resiliency in non-critical services also adds complexity and cost, and sometimes it is not needed.
Application availability is best served by focusing your energies and processes on your most critical systems while working to minimize the impact of non-critical systems. Service Tiers are a way to accomplish this.
In this talk, we will learn what service tiers are and how they can be applied to service based applications. Then we will show how to utilize service tiers to keep your application available and functioning as designed. We will use example service definitions to illustrate how service tiers can help you keep your application working.
The document discusses how modern applications require modern monitoring and processes to stay performing. It notes that modern applications operate on dynamic cloud infrastructures with constant changes, requiring monitoring of business success, application performance, and customer experience. It emphasizes the importance of managing risk through understanding and mitigating risks rather than removing risks. It also discusses how DevOps is a cultural change involving team-level responsibility and ownership. The presentation aims to explain how instrumentation, infrastructure management, risk management, and DevOps culture can help keep modern applications running effectively.
Keeping Modern Applications PerformingLee Atchison
It’s your big day, the day of the year your company either makes it, or breaks it. Your customers expect your system to work, always. Excuses are unacceptable.
To meet this new challenge, your application must use modern tools and techniques. Serverless, containers, and cloud technologies are working with new DevOps processes and risk management concepts in order to build a dynamic, highly scalable, highly available application that meets your customers needs.
And central to all of this is the modern analytics necessary to determine how your system is running and what you need to do to keep it running...at scale.
Your customers demand modern applications, and modern applications demand modern tools and modern analytics.
Are you ready to meet these modern challenges?
Architecting for scale - dynamic infrastructure and the cloudLee Atchison
The document discusses dynamic infrastructure and how cloud technologies enable scaling and availability. It describes how a dynamic infrastructure allows applications to allocate and consume resources on demand. It provides examples of how Docker containers can scale dynamically and how cloud technologies like EC2 auto scaling support this. Finally, it outlines progressive stages companies go through in adopting cloud technologies from initial experimentation to fully mandating cloud usage.
Migrating to the Cloud - What to do when things go sidewaysLee Atchison
The document discusses best practices for migrating applications to the cloud. It recommends instrumenting applications early in the migration process to gain visibility and identify issues. A methodical approach is suggested that involves planning the strategy, priorities, and baseline metrics upfront. The migration should then be executed gradually with validation checks to ensure performance and functionality are maintained. Ongoing monitoring is also important after migration to account for the dynamic cloud environment.
Monitoring the Dynamic Nature of Cloud ComputingLee Atchison
The document discusses the challenges of monitoring dynamic cloud applications where resources are constantly changing. Traditional monitoring of servers is not sufficient, as resources may not exist for long periods. Effective monitoring requires tracking how resources are provisioned and utilized over time, as well as both static and dynamic monitoring from the application to infrastructure layers. This allows visibility into how dynamic resources are working and being used.
The document discusses dynamic infrastructure and keeping applications running at scale in the cloud. It begins with an introduction of the speaker, Lee Atchison, and his background in cloud computing. It then discusses various challenges of maintaining application availability, both obvious challenges like outages as well as more subtle challenges like performance degradation. The rest of the document discusses strategies for monitoring applications in dynamic cloud environments, approaches for migrating applications to the cloud, and general strategies for successful cloud adoption.
Future Stack NY - Monitoring the Dynamic Nature of the CloudLee Atchison
1) The document discusses how Docker and cloud computing allow applications to be more dynamic and take advantage of ephemeral resources.
2) It notes that in the cloud, resources can be provisioned and deprovisioned quickly, unlike traditional data centers, allowing applications to scale up and down easily.
3) Monitoring dynamic cloud environments poses unique challenges because infrastructure components like containers may have extremely short lifecycles, appearing and disappearing rapidly, requiring monitoring tools that can track ephemeral resources and their lifecycles.
Velocity - cloudy with a chance of scalingLee Atchison
The document discusses techniques for achieving high availability in cloud applications. It provides an overview of key concepts like maintaining redundancy, handling failures, and ensuring recovery plans are robust. Examples are given to illustrate the importance of anticipating different failure modes and dependencies to "stay two mistakes high." The space shuttle software system is presented as an example of a highly redundant and recoverable system through its use of multiple independent computing units and deadlock handling.
Cloud Expo (Keynote) - Static vs DynamicLee Atchison
The document discusses how cloud computing provides a "better data center" that allows for faster provisioning of resources and improved application availability through redundancy. It also describes how the cloud can function as a "dynamic tool" that allows applications to dynamically allocate and deallocate resources as needed. Effective monitoring of cloud applications requires solutions like New Relic that can monitor application performance in addition to lower-level infrastructure metrics provided by AWS CloudWatch. Together these solutions provide full-stack visibility of dynamic cloud environments.
This document discusses the importance of planning for failures when building highly available, scalable applications. It uses the analogy of "flying two mistakes high" when piloting radio controlled planes to emphasize that systems should be designed to handle at least two failures without crashing. The document provides examples of how extra capacity is needed to maintain availability during failures like node outages, rolling upgrades, and unknown dependencies between infrastructure components. It stresses the need to thoroughly analyze all potential failure modes and ensure recovery plans are robust enough to handle compounding issues.
AWS Summit Sydney: Life’s Too Short...for Cloud without AnalyticsLee Atchison
The document discusses monitoring applications in dynamic cloud environments. It notes that traditional server monitoring is insufficient for dynamic cloud applications that use technologies like Docker containers and AWS Lambda. It advocates monitoring the full stack, from code to AWS services, to gain accountability. New Relic monitoring is presented as enabling this type of full stack visibility for applications using dynamic cloud technologies. Monitoring needs to focus on application performance and lifecycles rather than just servers. The rate of change is increasing, so past monitoring approaches will not work in the future.
AWS Summit - Chicago 2016 - New Relic - Monitoring the Dynamic CloudLee Atchison
Lee Atchison gave a presentation on monitoring dynamic cloud environments. He explained that cloud resources are now highly dynamic, with containers starting and stopping within minutes. This requires monitoring not just servers but the entire lifecycle of cloud components. Both operations and development teams are impacted by this change, as cloud architecture is now integral to application design. Traditional monitoring is insufficient - tools are needed that provide full stack visibility across servers, applications, and provisioning in dynamic cloud environments.
Webinar - Life's Too Short for Cloud without AnalyticsLee Atchison
The document discusses monitoring applications in dynamic cloud environments. It notes that cloud infrastructure is monitored by services like CloudWatch, but these don't provide visibility into application performance. New Relic is described as monitoring both the server infrastructure and applications to provide a more complete view. The document also discusses how applications are becoming more dynamic with microservices and containers that have very short lifecycles, making them challenging to monitor using traditional approaches.
5 keys to high availability applicationsLee Atchison
The document discusses 5 keys to building high availability web applications: 1) develop applications with availability in mind by anticipating failures, 2) always plan for scaling to increasing traffic, 3) mitigate risks through redundancy, fallback mechanisms, and rapid failure detection, 4) monitor applications to establish baselines and detect anomalies, and 5) ensure responsive availability through incident response processes, alerting, and escalation procedures.
This document discusses strategies for cloud adoption. It outlines typical progressions that companies follow when adopting the cloud, from experimenting with non-critical services to fully mandating cloud usage. It also discusses parallel progressions that application teams follow, from using peripheral cloud services to building applications committed to unique cloud capabilities. The document emphasizes that different companies and applications will progress at different speeds and have different needs. It provides strategies for successful cloud adoption, including understanding one's culture and needs, monitoring adoption, and driving cultural change. It also discusses how AWS CloudWatch and New Relic can work together to provide monitoring of infrastructure and applications in the cloud.
The edge is monitoring weather and drought conditions on a farm, to ensure optimal crop production.
The edge is an automated drone, flying solo, taking photographs or gathering environmental or geographical data.
The edge is a semi truck, transmitting information about where it is, its load, and its operating condition to a central transportation system.
The edge is a smart home appliance that knows when you are running low on something and helps you order more.
The edge is a smart home monitor that keeps us safe, such as by shutting off a stove when a fire is detected.
All of these are examples of edge computing. And they are all novel uses in and of themselves.
They are often what we think of when we think of edge computing.
But what exactly is edge computing?
Edge computing is taking part of your application, and moving it closer to where the action is.
By "the action," I mean the source of interesting data you want to process, the end user of the application, or a system being controlled.
This is what edge computing is all about.
Edge computing is, quite simply…
…putting computation where it belongs.
So, when we are monitoring drought conditions on a farm, we are gathering tons of data from far reaching locations.
And when we are talking about an automated drone, we are talking about keeping it in the air and free from the impact of wind and weather, without a human involved.
And when we are talking about a semi truck, it's gathering useful information such as where it is located, whether it is moving at a safe speed, how much fuel it is using, and the condition of its cargo. All automated…
And for home automation, it's the intelligence to understand when something dangerous is happening and to take action to keep it from getting worse.
These are all great uses of edge computing, but these are mostly outside of our everyday experiences. We don’t yet see automated drones flying overhead, nor do we see the impact of micro weather reports on farming.
But edge computing is a lot closer to us today than you might think. You don’t need to go this far in order to see edge computing in action.
All you need to do is go to your local grocery store… the scanner is gathering data for the Point of Sale machine to determine how much you owe, before sending the results to the cloud.
Or the FedEx agent that is keeping track of your package so you know where it is, and when it’s arriving.
Or even closer and more personal, when you order a cup of coffee from your smart phone before walking into Starbucks or Dunkin to get it.
In all these cases, you are using an edge application.
Or every day, every single day, when you read your email in a smart web client in your browser.
Yes, that's edge computing as well.
All of these are edge devices, and all of them are examples of edge computing. Whether you are talking about the autonomous drone, the micro climate weather sensors, or your email inbox and mobile applications. All of these are examples of edge computing.
The edge is nothing to fear, and it is nothing new or complex. The edge has been with us for a very long time, and it is normal application development as we know it today.
So, if all of these things are examples of edge computing.
What exactly makes the edge, the edge?
Why is some computation edge computing and some of it is cloud computing?
The whole purpose of edge computing is to put time sensitive operations closer to where they are needed. It’s about controlling the drone to keep it flying safely. It’s about keeping your browser email application responsive. It’s about keeping home safety systems working even if they aren’t well connected to the cloud. And it’s about keeping your mobile application interacting with you in a timely manner.
This is opposed to the centralized computation that is typical in normal cloud computing.
This centralized computation is where data collection and analysis can be done. It's where order processing occurs. It's where communications with other people and systems happens.
Edge computing is all about putting computation where it should be to operate efficiently…
…as opposed to where it’s convenient for developers and operators.
Because, putting computation out into the edge is harder and riskier than keeping it together in the cloud. So, when we put computation at the edge, we should do it for good reasons.
So, how do we decide whether to put some computation in the cloud or at the edge?
Well, to demonstrate, let’s look at a fun and modern example. An example where both cloud and edge computing are necessary for the application to be successful. And an example that is getting a lot of attention today.
Let’s talk about a driverless car.
A driverless car is a unique beast. It has lots of sensors and lots of controls. It has sensors to detect where obstacles might be located and where the road is located. It’s got cameras to detect if that blob in front of you is the car you are following, or a human crossing the street, or a road closed barrier. Or a ball that just might be chased by a small child…
It has controls that make the car perform. It has controls for steering, for braking, and for applying power. But it also has controls and sensors for monitoring the health of the car itself. Is the motor operating efficiently? Is the passenger compartment comfortable? Should we deploy an airbag right now?
Cameras and sensors. Steering and control. Engine health and passenger health. Passenger safety and community safety. Some of this computation has to occur in the car itself, but some of it can occur in the cloud. Which is which?
Some things are natural to perform in the car itself, and some are in fact mandatory to perform in the car.
Image recognition (is that a person or another car near me?) and threat detection (is that person running in front of me, or is that car ahead applying its brakes?).
Road management (where is the edge of the road? Is that a stop sign?) and collision control (quick, brake! Swerve right!).
All of these are time-sensitive calculations that must occur, and occur on time. This processing cannot go offline due to a bad internet connection. It must always be available.
It is computation that must occur in the car itself. This is edge computing for the driverless car.
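As a rough sketch of what "must occur in the car" means in practice (all function names and numbers here are invented for illustration, not taken from any real autonomous-driving SDK), the time-critical loop might look like this:

```python
import time

# Hypothetical sketch of the time-sensitive control loop that must run
# on the vehicle itself. The sensor read and threat check are stubbed
# out; in a real system they would call into the perception stack.

LOOP_BUDGET_S = 0.05  # e.g. a 50 ms deadline per control cycle

def read_sensors():
    # Stub: return one frame of sensor data.
    return {"obstacle_distance_m": 12.0, "speed_mps": 15.0}

def detect_threat(frame):
    # Stub: brake if an obstacle is inside our stopping distance,
    # assuming roughly 7 m/s^2 of braking deceleration.
    stopping_distance = frame["speed_mps"] ** 2 / (2 * 7.0)
    return frame["obstacle_distance_m"] < stopping_distance

def control_cycle():
    start = time.monotonic()
    frame = read_sensors()
    action = "BRAKE" if detect_threat(frame) else "CRUISE"
    elapsed = time.monotonic() - start
    # The whole point of putting this at the edge: the decision must
    # complete inside the deadline, with no network round trip.
    assert elapsed < LOOP_BUDGET_S
    return action
```

The key property is that nothing in the loop depends on a network call; the deadline check is local, which is exactly why this computation lives at the edge.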
But there is other computation that the car needs that can and should occur in the cloud.
How do I get from point A to point B? What’s the most optimal route?
Is there road construction or a changed road?
Is there traffic on this route that makes taking another route more preferable?
Can we tune a setting in the car to make it operate more efficiently and, perhaps, save fuel?
Speaking of fuel, do we need to get gas? Where is the nearest gas station? Where is the nearest maintenance facility?
How do we manage a fleet of cars, handle upgrades, and track usage of the cars and by whom? (Why own a car when you can borrow one that's nearby… the ultimate Uber.)
These things are computation that can and should occur in the cloud. They typically need access to centralized data (such as maps and traffic information), and need to correlate lots of information from other sources to complete the computation.
And, even more importantly, the computation is not highly time sensitive.
These are important distinctions to remember.
But how does the computation itself differ between the edge and the cloud?
Well, computation in the edge is typically harder to manage than computation that occurs in the cloud.
Think about upgrading software, diagnosing a problem with the software, or monitoring how it is performing.
All of these are easier when the software is centralized, and harder when it is distributed and remote.
Software scaling is *different* in the edge than it is in the cloud.
Software scaling is important to both cases, but it is very different.
Edge software typically runs thousands or millions of instances in a highly distributed manner, but each instance is typically only doing one thing or managing one device.
Cloud software typically runs a few instances (yes on multiple servers, but fewer in general), but each instance is typically doing actions for thousands of users.
Edge software requires managing thousands or millions of instances running in thousands or millions of locations. Cloud software requires a small number of instances in a small number of locations.
For the edge, load per instance stays flat as the number of users increases. For the cloud, load scales upward as the number of users increases.
For the edge, management difficulty scales upwards as the number of users increases. For the cloud, management of an application isn’t drastically different based on the number of users.
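To make that contrast concrete, here is a toy model of the two scaling curves. The numbers are made up purely for demonstration; they are not measurements.

```python
# Illustrative only: a toy model of the scaling contrast between edge
# and cloud software. All numbers are invented for demonstration.

def edge_instances(users, devices_per_user=1):
    # Edge: roughly one instance per device, so the *fleet size* grows
    # linearly with users while each instance's load stays flat.
    return users * devices_per_user

def cloud_instances(users, users_per_instance=5000):
    # Cloud: each instance serves thousands of users, so the instance
    # count grows far more slowly. Ceiling division via negation.
    return max(1, -(-users // users_per_instance))
```

A million users means a million edge instances to manage, but only a couple hundred cloud instances: the management burden, not the per-instance load, is what explodes at the edge.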
Edge has certain advantages, namely:
It provides more time sensitive processing. This is something I mentioned previously.
It provides higher responsiveness to stimulus.
It allows a reduced reliance on network connectivity, which increases reliability in the face of unknown connectivity.
It allows processing to be dedicated to a single specific task.
But edge computing has its challenges, mostly due to the large number of nodes required and how distributed geographically they can be:
Managing deployments across a fleet of edge devices can be very challenging. Whether that’s cell phones or flying drones, it’s still a problem.
Monitoring usage and analyzing how the software is performing is harder.
Debugging problems remotely is difficult.
Understanding when something is going wrong at a system level, versus a single node level, is much more difficult.
So, what criteria do you use to determine edge vs cloud?
There are several:
When computation is timing specific or highly sensitive to delays, use edge.
When you need high responsiveness, consider edge.
But when you need a significant amount of CPU and your use of it is quite bursty and unpredictable, use the cloud.
When you are highly sensitive to network connectivity issues, consider the edge.
If you need access to more global data and less individualized data (traffic patterns vs current car speed), use the cloud.
But, everything else aside, unless you have a compelling reason to use the edge, then use the cloud. When all else is equal, use the cloud.
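These rules of thumb can be written down as a simple decision function. This is a hedged sketch that just mirrors the criteria above; real placement decisions involve more factors than four booleans.

```python
# A sketch encoding the edge-vs-cloud rules of thumb from this talk.
# The parameter names are invented for illustration.

def place_computation(latency_sensitive=False,
                      needs_offline_operation=False,
                      bursty_cpu=False,
                      needs_global_data=False):
    # Timing-sensitive work, or work that must survive a lost network
    # connection, belongs at the edge.
    if latency_sensitive or needs_offline_operation:
        return "edge"
    # Bursty CPU demand and global data access favor the cloud.
    if bursty_cpu or needs_global_data:
        return "cloud"
    # When all else is equal, default to the cloud.
    return "cloud"
```

For the driverless car: collision control is `place_computation(latency_sensitive=True)`, which lands at the edge; route planning is `place_computation(needs_global_data=True)`, which lands in the cloud.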
Why use the cloud instead of the edge? Well…
The edge is harder to manage
The edge is harder to upgrade
The edge has version management issues (are we sure all nodes are running the same version of our software?)
The edge has variable and unique provisioning issues (we’ll talk more about that shortly)
The edge makes monitoring and managing software harder and more complicated
So, edge is more challenging and harder to manage. How can we be successful in using edge computing effectively?
There are eight keys to being successful in building edge computing into your application. They are all simple but very valuable pieces of advice for success in the edge.
We’re going to talk about each in turn.
#1 Be smart on what is in the cloud vs the edge
This is a continuation of what I said earlier. Make sure you make an *active* decision about whether to use the edge or the cloud for your computation and storage.
Remember what the edge is good for, and remember what the cloud is good for.
And remember the disadvantages the edge has over the cloud. When in doubt, use the cloud. Only use the edge for computation that is best optimized for the edge.
#2, Don’t throw away DevOps principles in the edge.
It’s easy to discount DevOps principles when thinking about edge computing. You hear comments like this: “Edge computing is highly specialized computing”, and “New processes and procedures are needed for the edge”. These are common messages.
But remember what DevOps is all about. DevOps is about 1) Ownership and accountability, 2) Distributed decision making, and most importantly 3) People, processes, and tools.
The processes may change, and the tools may change, but there will still be processes and tools and the people are the same. DevOps works well even in the edge.
#3 Nail highly distributed deployments.
Often, when building an application, we don’t think enough about how we will deploy it in a highly automated way. We say “we can fix this later”. But while automated and repeatable deployments are critical for all applications, they are significantly more important for edge applications, due to the remote nature and the huge number of nodes involved.
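One way to make fleet deployments automated and repeatable is to roll out in small batches and abort early if too many nodes fail. Here is a minimal sketch of that idea; the function and parameter names (`rollout`, `deploy_fn`, `max_failures`) are hypothetical, not part of any particular tool:

```python
# Minimal sketch of a batched fleet rollout (all names hypothetical).
# Deploys a version to edge nodes in small batches so a bad build can
# be caught early instead of reaching the whole fleet.

def rollout(nodes, version, deploy_fn, batch_size=10, max_failures=2):
    """Deploy `version` to `nodes` in batches; abort if too many nodes fail.

    deploy_fn(node, version) -> bool stands in for whatever mechanism
    actually pushes software to a device.
    """
    failed = []
    for i in range(0, len(nodes), batch_size):
        batch = nodes[i:i + batch_size]
        for node in batch:
            if not deploy_fn(node, version):
                failed.append(node)
        if len(failed) > max_failures:
            # Stop the rollout early rather than pushing a bad build fleet-wide.
            return {"status": "aborted", "failed": failed}
    return {"status": "complete", "failed": failed}
```

The early-abort check is the important part: with millions of remote nodes, you want a failing deployment to stop itself, not wait for a human to notice.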
#4, reduce versioning as much as possible.
Deployments at the edge are hard, so reduce the quantity of deployments you need to make. Deploy ***less*** often.
Reduce the number of deployments. We keep hearing today about the value of increasing the number of deployments, so this advice of reducing the number of deployments seems to run counter to standard best-practice principles and CI/CD strategies. But it's not actually in conflict.
CI/CD says automated deployments are critical, and automated upgrades are critical. It’s all about **automation**, and that is even more *important* for edge computing. It’s just that the scale of nodes demands that we manage expectations for deployments differently than for the cloud.
You should not assume you can deploy to the edge as fast or as often as you can to the cloud.
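Because edge deployments are slower and less frequent, version skew across the fleet (the question raised earlier: are all nodes running the same version?) becomes something you have to actively measure. A tiny sketch of that audit, assuming each node reports its version string:

```python
# Sketch of a fleet version-skew audit (input shape is an assumption:
# a dict mapping node id -> reported version string).
from collections import Counter

def version_skew(node_versions):
    """Summarize which software versions are running across the fleet.

    Returns a Counter of version -> node count, so you can see at a
    glance whether the fleet has converged on a single version.
    """
    return Counter(node_versions.values())
```

If the result has more than one entry for longer than a rollout window, that is a signal your deployment process is stalling somewhere in the fleet.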
#5, reduce the provisioning and configuration options available for each node as much as possible.
Given the sheer number of nodes involved in a large edge deployment, it is hard to manage the software for these edge devices unless they are all running the same hardware and hardware version, the same configuration and installed options, and the same software configuration.
If every remote temperature probe runs on the same hardware, it is much easier to build and manage the software. Of course this isn't always possible…the best example is mobile apps, which have to run on a large number of varied hardware/software configurations. That is a challenge for managing this software, and it actually proves my point: reducing the number of variables makes managing the software much easier.
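In practice, "reduce the variables" can be enforced by checking each node against a small set of approved baselines. A minimal sketch, where the baseline set and the hardware/firmware labels are invented for illustration:

```python
# Sketch: allow only a small, explicit set of hardware/firmware
# combinations in the fleet (the entries below are hypothetical).
APPROVED_BASELINES = {
    ("probe-v2", "fw-3.1"),
    ("probe-v2", "fw-3.2"),
}

def is_supported(hardware, firmware):
    """True if this node matches an approved baseline configuration."""
    return (hardware, firmware) in APPROVED_BASELINES
```

Nodes that fail the check get flagged for upgrade or replacement rather than becoming one more configuration your deployment tooling has to handle.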
#6, understand that scaling is still an issue for the edge as it is for the cloud.
Backend (cloud) scaling is about how much each node can handle.
Edge scaling is about how many nodes you can handle.
As such, node management is much harder for the edge.
In the edge, all scaling is horizontal scaling. Vertical scaling (increasing the size of individual nodes to scale) is typically less important.
More nodes…not bigger nodes.
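The "more nodes, not bigger nodes" arithmetic is simple: capacity planning at the edge is dividing the workload by a fixed per-node capacity, since you generally can't make an individual device bigger. A trivial sketch (the sensor numbers are made up):

```python
# Sketch of horizontal capacity planning at the edge: you scale by
# adding nodes of fixed size, not by enlarging a node.
import math

def nodes_needed(total_sensors, sensors_per_node):
    """How many fixed-capacity edge nodes a given workload requires."""
    return math.ceil(total_sensors / sensors_per_node)
```

Doubling the workload roughly doubles the node count, which is exactly why the fleet-management problems above dominate edge operations.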
#7 Nail monitoring and analytics.
More nodes, and more widely distributed nodes, mean that understanding how each node is performing at any given time is important, but hard to track without good analytics.
System management needs a continuous view into the health of every node in a highly scaled system.
But high-level reports containing analytics of edge node health also **tend to be viewed** at higher levels within your organization. How an individual server in the cloud or your data center is performing is typically not of interest to senior management, but understanding how many automated drones are behaving well versus poorly is visible at a much higher level.
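The system-level versus single-node distinction mentioned earlier can be made concrete with a simple aggregation rule: one unhealthy node is a node problem; when the unhealthy fraction crosses a threshold, treat it as a fleet-wide incident. A sketch, with the input shape and threshold chosen for illustration:

```python
# Sketch of fleet-level health aggregation (input shape is an
# assumption: a dict mapping node id -> bool healthy flag).

def fleet_health(node_statuses, systemic_threshold=0.1):
    """Aggregate per-node health into a fleet-level signal.

    A few unhealthy nodes are a node-level problem; if more than
    `systemic_threshold` of the fleet is unhealthy at once, flag it
    as a system-level incident.
    """
    total = len(node_statuses)
    unhealthy = [n for n, ok in node_statuses.items() if not ok]
    ratio = len(unhealthy) / total if total else 0.0
    return {
        "unhealthy": unhealthy,
        "ratio": ratio,
        "systemic": ratio > systemic_threshold,
    }
```

The right threshold depends on your fleet; the point is that the alerting logic looks at the aggregate, not at each node in isolation.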
And finally, #8, the edge is not magic.
It’s not new, it’s not “special”. We’ve been doing edge computing for years, we’ve just called it something else. We might have called it a “browser application”, or a “mobile application”, or a “Point of Sale” device. But it’s all just edge computing.
The edge is not a new form of computing. The edge is, however, a new way to categorize and label an existing class of computation.
THIS new categorization and labeling is **good** and **encouraging**.
It means in the future there will be **better** edge-focused tooling.
There will be services that will be **tailored** for the edge.
But **existing** tooling today – non-edge-specific tooling – is still appropriate and useful.
These are the eight keys to being successful in building edge computing into your application. Together, they are a simple but very valuable strategy for success in the edge.
But remember, edge computing is the same as today’s mobile and browser computing.
It’s all about management of modern applications, and their components, whether they are cloud or edge components.