Applications running in a typical data center are static entities. But applications aren't static in the cloud: dynamic scaling and resource allocation are the norm on AWS. Technologies such as Amazon EC2, AWS Lambda, and Auto Scaling provide flexibility in building dynamic applications, and with this flexibility comes an opportunity to learn what it takes for an enterprise application to function optimally.
New Relic helps manage these applications without sacrificing simplicity.
In this presentation, we discuss what changes when you monitor dynamic cloud resources. We'll share best practices we've learned working with New Relic customers on managing applications in this environment to understand and optimize their performance.
This was presented at AWS re:Invent 2017 by Lee Atchison, Director of Evangelism at New Relic, featuring a customer from Scripps Networks.
DEV209 A Field Guide to Monitoring in the Cloud: From Lift and Shift to AWS L... - New Relic
Presented by Kevin Downs, Senior AWS Technical Partner Manager of New Relic, at Amazon Web Services re:Invent 2017.
Static applications living on long-running servers are becoming history in the cloud, along with the monitoring assumptions we used to make. Now we routinely deploy a range of services, from autoscaling compute to decoupled message queues to serverless. These dynamic services, along with microservice architectures, both break traditional monitoring and instrumentation approaches and need to coexist with them.
Whether you are building new apps, were told to migrate yesterday, are currently migrating, or are already scaling your apps on AWS, this session dives into the how, where, and when of monitoring your applications and infrastructure, no matter where your apps run. You'll hear best practices we've learned from our customers and from running our own service (1.5 billion+ metrics per minute).
Join us for a little bit of history and a whole lot of now as we show you how and what you need to scale and prove your success in the cloud.
SRV210 Improving Microservice and Serverless Observability with Monitoring Data - New Relic
Hundreds of microservices, millions of AWS Lambda invocations, and dozens of global regions—the way we design, build, and operate cloud infrastructure and applications is increasingly distributed and composed of ephemeral components. From experience, we know that a key to success with these systems is the ability to understand them using data. While there is considerable knowledge around how to use metrics and logs to analyze and troubleshoot traditional applications and infrastructure, emerging technologies like serverless functions and orchestrated containers require a new observability approach. This is especially true when trying to understand the relationship between new services, like an IoT or mobile backend, and legacy systems.
Presented at AWS re:Invent 2017 by Clay Smith, Developer Advocate at New Relic.
Cloud Migration with Confidence: 7 Keys to Success - New Relic
In the span of a decade, cloud services have matured into a multi-billion-dollar market. The benefits of migrating to the cloud are also well documented: the agility to change faster, faster time to market, rapid scalability, and better customer experiences. But how do you know if and when you've achieved a successful migration? Whether you are starting your cloud journey, are in the process of migrating, or are looking to scale your applications on AWS, instrumentation, measurement, and insights are key to migrating with confidence.
This presentation walks through the important steps you can take to make sure your teams, partners, and management feel confident in a successful cloud migration.
This was presented on TechRepublic on Nov. 8, 2017 by CBS Interactive Distinguished Lecturer David Gewirtz, author of The Flexible Enterprise; Abner Germanow, Sr. Director, Strategic and Partner Marketing, New Relic; and AWS expert Kalpan Raval, Global Segment Leader, Migration Tooling, Amazon Web Services.
MSC202_Learn How Salesforce Used ADCs for App Load Balancing for an Internati... - Amazon Web Services
Organizations use application delivery controllers (ADCs) to ensure that their most important applications receive the best performance across their network. In this session, you learn how and why Salesforce used the F5 BIG-IP platform, an ADC solution from AWS Marketplace, during a migration to AWS. To preserve an existing skillset within their business, Salesforce chose AWS Marketplace to first evaluate the solution on the AWS platform before ultimately selecting it as part of their international rollout. You see how BIG-IP performs application routing and security, and how it works with existing AWS networking solutions to provide a consistent experience for domestic and international rollouts. You also learn how Salesforce successfully used the AWS Marketplace Private Offers program to procure an enterprise license and consolidate the expenditure onto their AWS bill.
Maximizing Your Move to AWS: Five Key Lessons Learned from Vanguard and Cloud... - Amazon Web Services
CTP’s Robert Christiansen and Mike Kavis describe how to maximize the value of your AWS initiative. From building a Minimum Viable Cloud to establishing a robust cloud security and compliance posture, we walk through key client success stories and lessons learned. We also explore how CTP has helped Vanguard, the leading provider of investor communications and technology, take advantage of AWS to delight customers, drive new revenue streams, and transform their business.
Session Sponsored by: CTP
Explore and build all the components of a complete connected device workflow. We start with constructing a physical drink dispenser from provided parts and connecting it to AWS IoT. Then we use Amazon Cognito, Amazon DynamoDB, AWS Lambda, Amazon API Gateway, and Amazon S3 to build a serverless application for secure device management and control of your dispenser. Learn how AWS IoT provides flexible communication with physical connected devices and integrates with other AWS services. Also learn how to incorporate a serverless application built with other AWS services to intuitively manage and control devices from a responsive web application. This workshop involves connections to the physical drink dispenser, so bring a laptop with administrative privileges and a working USB port, and have the AWS CLI loaded and configured for your AWS account (with administrative permissions). We provide the physical hardware, USB cable, and network connectivity.
DevOps is everywhere, but too often, people think they can buy “DevOps in a box” and just sprinkle some tools and automation over your broken or slow (or even super-fast AWS) stack. But we all know that software delivery is still hard. So what is this crazy DevOps thing, and why and how does it make things better? In this session, Jez and Nicole talk about what they’ve found working with dozens of organizations and conducting the largest DevOps research studies to date, covering over 23,000 data points across 2,000 organizations around the world. We start with the outcomes that companies care about: organizational performance, software delivery performance, and software quality. We then define what DevOps is, how you measure it, and how the best, most innovative teams and organizations are using it to drive improvements in performance and quality.
Automate Best Practices and Operational Health for AWS Resources with AWS Tru... - Amazon Web Services
Notice: This Workshop requires a laptop computer and an active AWS account with Administrator privileges.
It can be challenging to optimize AWS resources across cost, performance, security, and fault tolerance, much less do it automatically. AWS Trusted Advisor, an online resource, provides real-time guidance to help you provision your resources following AWS best practices. AWS Health provides ongoing visibility into the state of your AWS resources and remediation guidance for resource performance or availability issues that may affect your applications. Learn how to safely automate these best practices using Amazon CloudWatch Events and AWS Lambda, with samples for you to use. We also introduce you to AWS Health tools, a community-based source of tools to automate remediation actions and customize health alerts. See how to automate AWS best practices from Trusted Advisor and implement remediation from the AWS Health API on your AWS resources. Attendees should bring their own laptops.
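The automation pattern this workshop describes, a CloudWatch Events rule on Trusted Advisor status changes triggering a Lambda function, can be sketched roughly as below. This is a minimal illustration, not AWS's sample code: the event fields follow the documented Trusted Advisor CloudWatch Events shape (`source`, `detail.check-name`, `detail.status`), but the set of actionable checks and the `remediate` outcome are hypothetical placeholders for whatever remediation you wire in.

```python
# Hypothetical selection of checks whose WARN/ERROR status we act on.
ACTIONABLE_CHECKS = {
    "Low Utilization Amazon EC2 Instances",
    "Security Groups - Specific Ports Unrestricted",
}

def handler(event, context=None):
    """Entry point for a Lambda behind a CloudWatch Events rule on
    source "aws.trustedadvisor". Returns the remediation decision."""
    if event.get("source") != "aws.trustedadvisor":
        return {"action": "ignored", "reason": "not a Trusted Advisor event"}
    detail = event.get("detail", {})
    check = detail.get("check-name", "")
    status = detail.get("status", "OK")
    if status in ("WARN", "ERROR") and check in ACTIONABLE_CHECKS:
        # In a real deployment this is where you would call AWS APIs,
        # e.g. stop an idle instance or revoke a security group rule.
        return {"action": "remediate", "check": check, "status": status}
    return {"action": "ignored", "check": check, "status": status}
```

In practice the rule's event pattern would already filter on `source` and `detail-type`, so the in-code source check is just defensive.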
DVC201-Build AWS Skills Through Community-Led User Groups - Amazon Web Services
"Did you know that there are over 300 AWS User Groups worldwide? Join this panel discussion featuring AWS community leaders from around the world, and learn the value of attending community-led AWS Meetups in your region. Community leaders share their experiences, talk through how local communities help developers solve problems and achieve their goals, and discuss the benefits of participating in peer-to-peer AWS knowledge sharing and networking activities.
This session is part of the re:Invent Developer Community Day, six community-led sessions where AWS enthusiasts share technical insights on trending topics based on first-hand experiences and knowledge shared within local AWS communities."
EUT303_Modernizing the Energy and Utilities Industry with IoT Moving SCADA to... - Amazon Web Services
Supervisory Control and Data Acquisition (SCADA) systems are critical real-time software applications used to manage nearly any form of upstream, midstream, and downstream processes in the energy industry. Traditionally, these technologies have been deployed on premises and managed separately from core IT, to ensure security, availability and consistent performance.
As energy and utility companies expand geographically, and the number and types of sensors in each location grow, disparate and growing data streams are becoming increasingly complex and challenging to manage. It is estimated that up to 95% of device and sensor information is left stranded in the field, information that could prove valuable for machine learning, predictive analytics, and process optimization.
In this session, energy and utility customers learn how to implement IIoT on AWS so they can extract value from additional devices and sensors and innovate faster. We dive into a reference architecture for ingesting current mission-critical SCADA data, as well as previously stranded data, into AWS using Kinesis and DynamoDB, ultimately enabling customers to reduce downtime, increase efficiency, improve reliability, and gain more business insight from connected data.
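To make the Kinesis ingestion path concrete, the sketch below shapes one SCADA tag reading into the record structure a Kinesis stream accepts (a partition key plus a data blob). The reading's field names (`site_id`, `tag`, and so on) are invented for illustration; the `PartitionKey`/`Data` pair is the shape `boto3`'s `put_record` takes, but the actual API call is omitted so the example stays self-contained and credential-free.

```python
import json
import time

def to_kinesis_record(site_id, tag, value, unit, ts=None):
    """Package one SCADA point reading as a Kinesis record dict.
    Partitioning by site keeps each site's readings ordered on one shard."""
    payload = {
        "site_id": site_id,   # e.g. a pump station or substation ID
        "tag": tag,           # SCADA point name
        "value": value,
        "unit": unit,
        "ts": ts if ts is not None else time.time(),
    }
    return {
        "PartitionKey": site_id,
        "Data": json.dumps(payload).encode("utf-8"),
    }
```

In a real pipeline you would pass this dict to `kinesis_client.put_record(StreamName=..., **record)` and fan the stream out to DynamoDB tables and analytics consumers.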
The threat model for IoT devices is very different from the threat model for cloud applications. Customers must understand what these threats are, prioritize them effectively, and navigate the growing ecosystem of partners that give customers tools to build secure IoT solutions. We showcase how to leverage partner solutions to mitigate threats, explain how to avoid common pitfalls, and make it clear that all IoT solutions must incorporate end-to-end security from the start. We begin with the steps to take in the manufacturing process, how to provision and authenticate devices in the field, and we cover solutions that can help customers comply with IT requirements in the maintenance phase of the product lifecycle.
Mapmaking and Location-Based Systems in the Cloud - ENT339 - re:Invent 2017 - Amazon Web Services
From 2014 through 2017, TomTom successfully migrated its major business systems to the AWS Cloud. This migration helped TomTom's vision of real-time mapmaking become a reality, and it put TomTom significantly ahead of the competition in the domain of location technology. In this session, we explore the practical aspects of migrating to the cloud, including technological challenges as well as the necessary shifts in mindset to successfully get us through the migration journey. We discuss how the migration was done gradually because of the huge risk exposure, all while maintaining 24/7 system uptime. We explore how the expected benefits of the cloud came to life, and we elaborate on the unexpected benefits, such as reduced cost of ownership and increased awareness in the teams, as well as upper-stack benefits and the flexibility to fit services and hardware to demand while scaling the system—benefits that we could not have realized with on-premises infrastructure.
You want to go to the cloud, but you are blocked by legacy technical debt. In this session, we guide you into the cloud using trusted application platforms like OpenShift and Cloud Foundry. Come learn how to unblock your migration and unwind an otherwise complicated transformation.
AWS Head of Technology Partners Stanley Chan presents the global market opportunity for Australian software and technology providers at AWS TechShift Sydney.
NEW LAUNCH! Amazon EC2 Bare Metal Instances - CMP330 - re:Invent 2017 - Amazon Web Services
When Amazon EC2 launched in 2006, there was a single instance size: m1.small. Over the past eleven years, EC2 has evolved to provide an extensive selection of compute resources to customers, including specialized resources such as NVMe SSDs, GPUs, and FPGAs. Under the hood, the servers used to host EC2 instances have transformed from off-the-shelf designs running virtualization software on the host CPUs to purpose-built servers with AWS network and storage components implemented in hardware. Now we are happy to announce a new category of EC2 instances: Amazon EC2 Bare Metal Instances. These instances provide customers access to the physical compute resources of the host processors along with the security, scale, and services of EC2. This session provides an overview of Bare Metal instances, how VMware used EC2 Bare Metal instances to build VMware Cloud on AWS, and other customer use cases for this new EC2 capability.
DVC303-Technological Accelerants for Organizational Transformation - Amazon Web Services
"Developers and management can seem at cross purposes when one group looks at technologies and the other looks at organizational issues. Both groups are looking for ways to deliver value faster, leaner, and at less cost. There are technological avenues for accomplishing these goals, including DevOps and serverless architectures. However, these approaches also have organizational implications, as they change the nature and content of communication between teams. In this session, we cover the technology benefits and organizational transformations involved in DevOps and serverless architectures.
This session is part of the re:Invent Developer Community Day, six community-led sessions where AWS enthusiasts share technical insights on trending topics based on first-hand experiences and knowledge shared within local AWS communities."
GPSBUS214-Key Considerations for Cloud Procurement in the Public Sector - Amazon Web Services
Interested in understanding best practices for cloud procurement in the public sector? We cover how AWS partners can guide and educate public sector organizations to effectively access the full benefits of the cloud. Topics include best practices for pricing, governance, security, terms and conditions, and buying frameworks.
MSC203_How Citrix Uses AWS Marketplace Solutions To Accelerate Analytic Workl... - Amazon Web Services
Find out how Citrix built a solution using Matillion ETL for Amazon Redshift from AWS Marketplace to load all data into an Amazon Redshift cluster, allowing them to do their analytics on the entire environment at a single time. We’ll discuss the transition made to consolidate multiple disparate databases in order to run analytic workloads, get a holistic view of all their data sources, and prevent inconsistent data from being captured.
GPSTEC317-From Leaves to Lawns AWS Greengrass at the Edge and Beyond - Amazon Web Services
AWS Greengrass provides a wide range of opportunities from IoT gateway applications to building systems like those with microservice architectures. In this session, we first evaluate how AWS Greengrass fits into OEM, ODM, and IT service delivery models. We then wade into a gentle overview of AWS Greengrass and how it interoperates with AWS IoT and other AWS services. We walk through several key AWS Greengrass distributed architectures. Next, to help you accelerate your solution using AWS Greengrass, we discuss how AWS Greengrass fits into the AWS Cloud development and delivery model. The talk wraps up with a demonstration of AWS Greengrass facilitating communication between a closed machine to machine (M2M) network and AWS IoT.
"Containers allow you to easily package an application's code, configurations, and dependencies into easy to use building blocks that deliver environmental consistency, operational efficiency, developer productivity, and version control. But how can developers leverage containers to drive innovation for their applications, their team, and organization?
In this session, Asif Khan Technical Business Manager for AWS will discuss how containers are becoming a new cloud native compute primitive, and how your organization can use containers as a building block to accelerate innovation.
WeWork's Christopher Tava, Joshua Davis, and OpsLine's Radek Wierzbicki will show how they adopted containers as discipline in code development, and how they refactored their production architecture into containers running on Amazon ECS in under 8 months."
By packaging software into standardized units, Docker gives code everything it needs to run, ensuring consistency from your laptop all the way into production. But once you have your code ready to ship, how do you run and scale it in the cloud? In this session, you become comfortable running containerized services in production using Amazon EC2 Container Service. We cover container deployment, cluster management, service auto-scaling, service discovery, secrets management, logging, monitoring, security, and other core concepts. We also cover integrated AWS services and supplementary services that you can take advantage of to run and scale container-based services in the cloud.
Many industries are going through a digital transformation as their existing business models are being disrupted and new competitors emerge. The key driver is a need for faster time-to-value, as a direct relationship with customers provides analytics that drive personalization and rapid product development. There’s a cultural aspect to the change, as well as new organizational patterns that go along with a migration to cloud native services. Application architectures are evolving from monoliths to microservices and serverless deployments, and they are becoming more distributed, highly available, and resilient. The highly automated practices that have built up around DevOps are moving to the mainstream, and some new techniques are emerging around security red teams and chaos engineering.
BAP202_Amazon Connect Delivers Personalized Customer Experiences for Your Clo... - Amazon Web Services
Join us for an overview and demonstration of Amazon Connect, a self-service, cloud-based contact center based on the same technology used by Amazon customer service associates worldwide to power millions of conversations. The self-service graphical interface in Amazon Connect makes it easy to design contact flows for self and assisted call-handling experiences, manage agents, and track performance metrics – no specialized skills required. In this session, you will hear from Capital One and T-Mobile on how they are using Amazon Connect to provide their customers with dynamic, natural, and personalized experiences. See how quickly you can get started with Amazon Connect and build your contact center.
RET304_Rapidly Respond to Demanding Retail Customers with the Same Serverless... - Amazon Web Services
Today’s retail customers want to set the rules on how and when they buy, receive, and return their product. But many retailers are struggling to unify their sales channels using existing legacy e-commerce software stacks. To consistently serve customers across retail channels, retailers must adopt a modern architecture that is elastic, cost effective, and based on loosely coupled application services. In this session, we dive deep into how retailers can leverage serverless architectures using Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. Learn how Amazon Fresh quickly responded to customer feedback on the Totes Pickup feature, developing a cost-effective and scalable self-service serverless application to deliver a 1-click experience for the customer, while providing faster insights back to the business.
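A self-service pickup endpoint of the kind this abstract describes (API Gateway in front of Lambda, writing to DynamoDB) might look roughly like the sketch below. This is not Amazon Fresh's actual code: every name here (the `totes-pickups` table, the request fields) is invented for illustration, and the DynamoDB write is left as a comment so the sketch runs without AWS credentials.

```python
import json
import uuid

def create_pickup(event, context=None):
    """Lambda handler behind an API Gateway POST route (Lambda proxy
    integration). Validates the request body and builds the item that
    would be written to a hypothetical "totes-pickups" DynamoDB table."""
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}
    customer = body.get("customer_id")
    slot = body.get("pickup_slot")
    if not customer or not slot:
        return {"statusCode": 400,
                "body": json.dumps({"error": "customer_id and pickup_slot required"})}
    item = {
        "pickup_id": str(uuid.uuid4()),
        "customer_id": customer,
        "pickup_slot": slot,
        "status": "SCHEDULED",
    }
    # Real code would persist the item, e.g.:
    # boto3.resource("dynamodb").Table("totes-pickups").put_item(Item=item)
    return {"statusCode": 201, "body": json.dumps(item)}
```

Because the handler is a plain function over the proxy-integration event shape, it can be unit-tested locally before being deployed behind API Gateway.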
GPSBUS220-Refactor and Replatform .NET Apps to Use the Latest Microsoft SQL S... - Amazon Web Services
Developers and architects migrating Microsoft enterprise applications to AWS can leverage new tools and services to implement DevOps best practices identified and developed by AWS solution architects and service teams. Learn about architectural best practices and AWS services such as AWS CodeBuild and AWS CodeDeploy, focusing on the .NET environment. Get examples of using the latest SQL Server release on Amazon EC2 or Amazon RDS, or on other database offerings native to AWS, like Amazon Aurora or serverless environments. Hear how an APN Partner took a global retail customer’s ecommerce engine and SQL Server–based data platform from on premises to the AWS Cloud in just weeks.
ARC207_Monitoring Performance of Enterprise Applications on AWS - Amazon Web Services
"Applications running in a typical data center are static entities. But applications aren't static in the cloud. Dynamic scaling and resource allocation is the norm on AWS. Technologies such as Amazon EC2, AWS Lambda, and Auto Scaling provide flexibility in building dynamic applications and with this flexibility comes an opportunity to learn how an enterprise application functions optimally.
New Relic helps manage these applications without sacrificing simplicity.
In this session, we discuss changes in monitoring dynamic cloud resources. We'll share best practices we’ve learned working with New Relic customers on managing applications running in this environment to understand and optimize how they are performing.
Session sponsored by New Relic"
DVC201-Build AWS Skills Through Community-Led User Groups.pdfAmazon Web Services
"Did you know that there are over 300 AWS User Groups worldwide? Join this panel discussion featuring AWS community leaders from around the world, and learn the value of attending community-led AWS Meetups in your region. Community leaders share their experiences, talk through how local communities help developers solve problems and achieve their goals, and discuss the benefits of participating in peer-to-peer AWS knowledge sharing and networking activities.
This session is part of the re:Invent Developer Community Day, six community-led sessions where AWS enthusiasts share technical insights on trending topics based on first-hand experiences and knowledge shared within local AWS communities."
EUT303_Modernizing the Energy and Utilities Industry with IoT Moving SCADA to...Amazon Web Services
Supervisory Control and Data Acquisition (SCADA) systems are critical real-time software applications used to manage nearly any form of upstream, midstream, and downstream processes in the energy industry. Traditionally, these technologies have been deployed on premises and managed separately from core IT, to ensure security, availability and consistent performance.
As energy and utility companies expand geographically, and the number and types of sensors in each location grow, disparate and growing data streams are becoming increasingly complex and challenging to manage. It is estimated that up to 95% of valuable device and sensor information is left stranded in the field, information that could prove valuable to machine learning, predictive analytics, and process optimization.
In this session, energy and utility customers will learn how easy it is to implement IIoT on AWS, so they can extract value from additional devices and sensors and innovate faster. We will dive into a reference architecture for ingesting current mission-critical SCADA data, as well as previously stranded data, into AWS using Kinesis and DynamoDB, ultimately enabling customers to reduce downtime, increase efficiency, improve reliability, and gain more business insight through connected data.
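The ingestion half of that reference architecture can be sketched in a few lines: build a telemetry record and ship it to an Amazon Kinesis stream, keyed by site so each site's readings stay ordered. The stream name, point names, and payload fields below are assumptions for illustration, not part of the session.

```python
import json
import time

# Hypothetical stream name; the reference architecture uses Kinesis
# for ingestion and DynamoDB for storage.
STREAM_NAME = "scada-telemetry"

def make_sensor_record(site_id, tag, value, unit):
    """Build one SCADA telemetry record, partitioned by site."""
    payload = {
        "site_id": site_id,          # e.g. a wellhead or substation
        "tag": tag,                  # SCADA point name
        "value": value,
        "unit": unit,
        "ts": int(time.time() * 1000),
    }
    return {
        "Data": json.dumps(payload).encode("utf-8"),
        "PartitionKey": site_id,     # keeps one site's readings in order
    }

# With AWS credentials configured, the record would be shipped like this:
# import boto3
# kinesis = boto3.client("kinesis")
# kinesis.put_record(StreamName=STREAM_NAME,
#                    **make_sensor_record("site-42", "pump1.pressure", 86.2, "psi"))
```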
The threat model for IoT devices is very different from the threat model for cloud applications. Customers must understand what these threats are, prioritize them effectively, and navigate the growing ecosystem of partners that give customers tools to build secure IoT solutions. We showcase how to leverage partner solutions to mitigate threats, explain how to avoid common pitfalls, and make it clear that all IoT solutions must incorporate end-to-end security from the start. We begin with the steps to take in the manufacturing process and how to provision and authenticate devices in the field, and we cover solutions that can help customers comply with IT requirements in the maintenance phase of the product lifecycle.
Mapmaking and Location-Based Systems in the Cloud - ENT339 - re:Invent 2017 - Amazon Web Services
From 2014 through 2017, TomTom successfully migrated its major business systems to the AWS Cloud. This migration helped TomTom's vision of real-time mapmaking become a reality, and it put TomTom significantly ahead of the competition in the domain of location technology. In this session, we explore the practical aspects of migrating to the cloud, including technological challenges as well as the necessary shifts in mindset to successfully get us through the migration journey. We discuss how the migration was done gradually, due to huge risk exposure, all while maintaining 24/7 system uptime. We explore how the expected benefits of the cloud came to life, and we elaborate on the unexpected benefits, such as cost of ownership and increased awareness in the teams, as well as upper-stack benefits and the possibility to fit services and hardware while scaling the system—benefits that we could not have realized with on-premises infrastructure.
You want to go to the cloud, but you are blocked by legacy technical debt. In this session, we guide you into the cloud using trusted application platforms like OpenShift and CloudFoundry. Come learn how to unblock your migration and unwind an otherwise complicated transformation.
AWS Head of Technology Partners, Stanley Chan presents the global market opportunity for Australian software and technology providers, at AWS TechShift Sydney.
NEW LAUNCH! Amazon EC2 Bare Metal Instances - CMP330 - re:Invent 2017 - Amazon Web Services
When Amazon EC2 launched in 2006, there was a single instance size: m1.small. Over the past eleven years, EC2 has evolved to provide an extensive selection of compute resources to customers, including specialized resources such as NVMe SSDs, GPUs, and FPGAs. Under the hood, the servers used to host EC2 instances have transformed from off-the-shelf designs running virtualization software on the host CPUs to purpose-built servers with AWS network and storage components implemented in hardware. Now we are happy to announce a new category of EC2 instances: Amazon EC2 Bare Metal Instances. These instances provide customers access to the physical compute resources of the host processors along with the security, scale, and services of EC2. This session will provide an overview of Bare Metal instances, how VMware used EC2 Bare Metal instances to build VMware Cloud on AWS, and other customer use cases for this new EC2 capability.
Explore and build all the components of a complete connected device workflow. We start with constructing a physical drink dispenser from provided parts and connecting it to AWS IoT. Then we use Amazon Cognito, Amazon DynamoDB, AWS Lambda, Amazon API Gateway, and Amazon S3 to build a serverless application for secure device management and control of your dispenser. Learn how AWS IoT provides flexible communication with physical connected devices and integrates with other AWS services. Also learn how to incorporate a serverless application built with other AWS services to intuitively manage and control devices from a responsive web application. This workshop involves connections to the physical drink dispenser, so bring a laptop with administrative privileges and a working USB port, and have the AWS CLI loaded and configured for your AWS account (with administrative permissions). We provide the physical hardware, USB cable, and network connectivity.
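The device-control path described above, an API Gateway request turned into an AWS IoT publish, might look roughly like this. The topic layout, payload fields, and default of 8 ounces are invented for illustration; the workshop's actual topics and payloads may differ.

```python
import json

def build_dispense_command(event):
    """Turn an API Gateway proxy event into arguments for an AWS IoT publish."""
    body = json.loads(event.get("body") or "{}")
    device_id = body["deviceId"]
    return {
        "topic": f"dispensers/{device_id}/commands",  # hypothetical topic scheme
        "qos": 1,
        "payload": json.dumps({"action": "dispense",
                               "ounces": body.get("ounces", 8)}),
    }

# With the AWS SDK configured, the command would go out via the IoT
# data plane, e.g.:
# import boto3
# iot = boto3.client("iot-data")
# iot.publish(**build_dispense_command(event))
```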
DVC303 - Technological Accelerants for Organizational Transformation - Amazon Web Services
"Developers and management can seem at cross purposes when one group looks at technologies and the other looks at organizational issues. Both groups are looking for ways to deliver value faster, leaner, and at less cost. There are technological avenues for accomplishing these goals, including DevOps and serverless architectures. However, these approaches also have organizational implications, as they change the nature and content of communication between teams. In this session, we cover the technology benefits and organizational transformations involved in DevOps and serverless architectures.
This session is part of the re:Invent Developer Community Day, six community-led sessions where AWS enthusiasts share technical insights on trending topics based on first-hand experiences and knowledge shared within local AWS communities."
GPSBUS214 - Key Considerations for Cloud Procurement in the Public Sector - Amazon Web Services
Interested in understanding best practices for cloud procurement in the public sector? We cover how AWS partners can guide and educate public sector organizations to effectively access the full benefits of the cloud. Topics include best practices for pricing, governance, security, terms and conditions, and buying frameworks.
MSC203 - How Citrix Uses AWS Marketplace Solutions To Accelerate Analytic Workl... - Amazon Web Services
Find out how Citrix built a solution using Matillion ETL for Amazon Redshift from AWS Marketplace to load all data into an Amazon Redshift cluster, allowing them to do their analytics on the entire environment at a single time. We’ll discuss the transition made to consolidate multiple disparate databases in order to run analytic workloads, get a holistic view of all their data sources, and prevent inconsistent data from being captured.
GPSTEC317 - From Leaves to Lawns: AWS Greengrass at the Edge and Beyond - Amazon Web Services
AWS Greengrass provides a wide range of opportunities from IoT gateway applications to building systems like those with microservice architectures. In this session, we first evaluate how AWS Greengrass fits into OEM, ODM, and IT service delivery models. We then wade into a gentle overview of AWS Greengrass and how it interoperates with AWS IoT and other AWS services. We walk through several key AWS Greengrass distributed architectures. Next, to help you accelerate your solution using AWS Greengrass, we discuss how AWS Greengrass fits into the AWS Cloud development and delivery model. The talk wraps up with a demonstration of AWS Greengrass facilitating communication between a closed machine to machine (M2M) network and AWS IoT.
"Containers allow you to easily package an application's code, configurations, and dependencies into easy to use building blocks that deliver environmental consistency, operational efficiency, developer productivity, and version control. But how can developers leverage containers to drive innovation for their applications, their team, and organization?
In this session, Asif Khan Technical Business Manager for AWS will discuss how containers are becoming a new cloud native compute primitive, and how your organization can use containers as a building block to accelerate innovation.
WeWork's Christopher Tava, Joshua Davis, and OpsLine's Radek Wierzbicki will show how they adopted containers as discipline in code development, and how they refactored their production architecture into containers running on Amazon ECS in under 8 months."
By packaging software into standardized units, Docker gives code everything it needs to run, ensuring consistency from your laptop all the way into production. But once you have your code ready to ship, how do you run and scale it in the cloud? In this session, you become comfortable running containerized services in production using Amazon EC2 Container Service. We cover container deployment, cluster management, service auto-scaling, service discovery, secrets management, logging, monitoring, security, and other core concepts. We also cover integrated AWS services and supplementary services that you can take advantage of to run and scale container-based services in the cloud.
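As a rough illustration of the container-deployment step, the sketch below assembles a minimal Fargate-style ECS task definition; the family name, image, and sizing are placeholder values, not recommendations from the session.

```python
def make_task_definition(family, image, cpu=256, memory=512):
    """Build a minimal ECS task definition for an awsvpc/Fargate service.

    The structure returned here is the same shape that
    register_task_definition accepts as keyword arguments.
    """
    return {
        "family": family,
        "networkMode": "awsvpc",
        "requiresCompatibilities": ["FARGATE"],
        "cpu": str(cpu),        # ECS expects these as strings
        "memory": str(memory),
        "containerDefinitions": [{
            "name": family,
            "image": image,
            "essential": True,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }],
    }

# With credentials configured, the definition would be registered via:
# import boto3
# ecs = boto3.client("ecs")
# ecs.register_task_definition(**make_task_definition("web", "nginx:alpine"))
```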
Many industries are going through a digital transformation as their existing business models are being disrupted and new competitors emerge. The key driver is a need for faster time-to-value as a direct relationship with customers provides analytics that drive personalization and rapid product development. There’s a cultural aspect to the change, as well as new organizational patterns that go along with a migration to cloud native services. Application architectures are evolving from monoliths to microservices and serverless deployments, and they are becoming more distributed, highly available, and resilient. The highly automated practices that have built up around DevOps are moving to the mainstream, and some new techniques are emerging around security red teams and chaos engineering.
BAP202 - Amazon Connect Delivers Personalized Customer Experiences for Your Clo... - Amazon Web Services
Join us for an overview and demonstration of Amazon Connect, a self-service, cloud-based contact center based on the same technology used by Amazon customer service associates worldwide to power millions of conversations. The self-service graphical interface in Amazon Connect makes it easy to design contact flows for self and assisted call-handling experiences, manage agents, and track performance metrics – no specialized skills required. In this session, you will hear from Capital One and T-Mobile on how they are using Amazon Connect to provide their customers with dynamic, natural, and personalized experiences. See how quickly you can get started with Amazon Connect and build your contact center.
RET304 - Rapidly Respond to Demanding Retail Customers with the Same Serverless... - Amazon Web Services
Today’s retail customers want to set the rules on how and when they buy, receive, and return their product. But many retailers are struggling to unify their sales channels using existing legacy e-commerce software stacks. To consistently serve customers across retail channels, retailers must adopt a modern architecture that is elastic, cost effective, and based on loosely coupled application services. In this session, we dive deep into how retailers can leverage serverless architectures using Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. Learn how Amazon Fresh quickly responded to customer feedback on the Totes Pickup feature, developing a cost-effective and scalable self-service serverless application to deliver a 1-click experience for the customer, while providing faster insights back to the business.
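A 1-click pickup flow like the one described above might map to a Lambda handler along these lines. The field names, statuses, and table are hypothetical for illustration, not Amazon Fresh's actual implementation.

```python
import json

def handler(event, context=None):
    """Handle a 1-click pickup request routed through API Gateway."""
    body = json.loads(event.get("body") or "{}")
    order = {
        "orderId": body["orderId"],
        "channel": body.get("channel", "app"),
        "status": "SCHEDULED",          # invented status value
    }
    # In a real deployment the order would be persisted to DynamoDB:
    # import boto3
    # boto3.resource("dynamodb").Table("pickup-orders").put_item(Item=order)
    return {"statusCode": 200, "body": json.dumps(order)}
```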
GPSBUS220 - Refactor and Replatform .NET Apps to Use the Latest Microsoft SQL S... - Amazon Web Services
Developers and architects migrating Microsoft enterprise applications to AWS can leverage new tools and services to implement DevOps best practices identified and developed by AWS solution architects and service teams. Learn about architectural best practices and AWS services such as AWS CodeBuild and AWS CodeDeploy, focusing on the .NET environment. Get examples of using the latest SQL Server release on Amazon EC2 or Amazon RDS, or on other database offerings native to AWS, like Amazon Aurora or serverless environments. Hear how an APN Partner took a global retail customer’s ecommerce engine and SQL Server–based data platform from on premises to the AWS Cloud in just weeks.
ARC207 - Monitoring Performance of Enterprise Applications on AWS - Amazon Web Services
"Applications running in a typical data center are static entities. But applications aren't static in the cloud. Dynamic scaling and resource allocation is the norm on AWS. Technologies such as Amazon EC2, AWS Lambda, and Auto Scaling provide flexibility in building dynamic applications and with this flexibility comes an opportunity to learn how an enterprise application functions optimally.
New Relic helps manage these applications without sacrificing simplicity.
In this session, we discuss changes in monitoring dynamic cloud resources. We'll share best practices we’ve learned working with New Relic customers on managing applications running in this environment to understand and optimize how they are performing.
Session sponsored by New Relic"
Journey Towards Scaling Your API to 10 Million Users - Adrian Hornsby
The slides from my talk at the NordicAPI summit 2017:
https://nordicapis.com/sessions/journey-towards-scaling-application-10-million-users/
A collection of thoughts and ideas that I experienced during my 10 years working with AWS Cloud.
Learn how to build serverless applications using the AWS Serverless Platform-... - Amazon Web Services
What if you could build a web application that could support true web-scale traffic without having to ever provision or manage a single server?
In this session, you will learn how to build a serverless website that scales automatically using services like AWS Lambda, Amazon API Gateway, and Amazon S3. We will review several frameworks that can help you build serverless applications, such as the AWS Serverless Application Model (AWS SAM), Chalice, and ClaudiaJS.
We will cover:
- Learn the basics of AWS Lambda and Amazon API Gateway
- Understand how to build a web application using these AWS services
- Learn to architect a serverless application
- Gain an overview of frameworks for building serverless applications
This webinar is a Level 100 session and is suited for:
- Developers
- Solution architects and engineers
- Technical managers
Speakers:
Stephen Liedig, Public Sector Solution Architect, Amazon Web Services
Q&A:
Ed Lima, Solutions Architect, Amazon Web Services
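To make the Lambda-plus-API-Gateway basics above concrete, here is a minimal handler of the kind an AWS SAM template would wire to a GET route; the greeting logic and field names are illustrative, not taken from the webinar.

```python
import json

def handler(event, context=None):
    """Respond to an API Gateway GET request with a JSON greeting."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")   # illustrative query parameter
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

In a SAM template, this function would be declared as an `AWS::Serverless::Function` resource with an `Api` event mapping a path and method to the handler, so API Gateway and permissions are generated for you.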
Scale Your Website and Mobile Applications on AWS to 10 Million Users - Amazon Web Services
With cloud computing, you gain a number of advantages, such as the ability to scale a web application or website on demand. If you are building a web application and want to start using cloud computing, you may be wondering, “Where do I start?” Join us in this session to learn best practices for scaling your infrastructure from zero to millions of users. We will show you the best ways to grow your business with AWS services, help you make smarter choices, and explain how to scale your infrastructure in the cloud.
Increasingly, organizations are turning to microservices to help them empower autonomous teams, letting them innovate and ship software faster than ever before. But implementing a microservices architecture comes with a number of new challenges that need to be dealt with. Chief among these is finding an appropriate platform to help manage a growing number of independently deployable services.
In this session, Sam Newman, author of Building Microservices and a renowned expert in microservices strategy, will discuss strategies for building scalable and robust microservices architectures, how to choose the right platform for building microservices, and common challenges and mistakes organizations make when they move to microservices architectures.
Many customers want a disaster recovery environment, and they want to use this environment daily and know that it's in sync with and can support a production workload. This leads them to an active-active architecture. In other cases, users like Netflix and Lyft are distributed over large geographies. In these cases, multi-region active-active deployments are not optional. Designing these architectures is more complicated than it appears, as data being generated at one end needs to be synced with data at the other end. There are also consistency issues to consider. One needs to make trade-off decisions on cost, performance, and consistency. Further complicating matters, the variety of data stores used in the architecture results in a variety of replication methods. In this session, we explore how to design an active-active multi-region architecture using AWS services, including Amazon Route 53, Amazon RDS multi-region replication, AWS DMS, and Amazon DynamoDB Streams. We discuss the challenges, trade-offs, and solutions.
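The consistency trade-off mentioned above can be illustrated with a toy conflict-resolution policy. When the same item is updated concurrently in two regions, replication must pick a winner; last-writer-wins on a timestamp is one common, deliberately lossy choice. The item shapes and timestamps below are invented.

```python
def merge_last_writer_wins(local, remote):
    """Resolve a replicated-item conflict by keeping the later update."""
    return local if local["updated_at"] >= remote["updated_at"] else remote

# The same cart updated concurrently in two regions:
us_east = {"cart": ["book"], "updated_at": 1700000100}
eu_west = {"cart": ["book", "lamp"], "updated_at": 1700000200}

winner = merge_last_writer_wins(us_east, eu_west)
# The later EU write wins; the concurrent US change is silently discarded.
# That data loss is exactly the kind of trade-off the session weighs
# against cost and performance.
```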
Increasing Productivity with End-User Computing Solutions on AWS - Amazon Web Services
IT organizations today need to support a modern, flexible, global workforce and ensure that their users can be productive anywhere. Moving desktops and applications to the AWS Cloud offers improved security, scale, and performance with cloud economics. In this session, we provide an overview of Amazon WorkSpaces and Amazon AppStream 2.0, and we discuss the use cases for each. Then, we dive deep into best practices for implementing Amazon WorkSpaces and AppStream 2.0.
Driving Innovation with Containers - CON203 - re:Invent 2017 - Amazon Web Services
Containers allow you to easily package an application's code, configurations, and dependencies into easy-to-use building blocks that deliver environmental consistency, operational efficiency, developer productivity, and version control. But how can developers leverage containers to drive innovation for their applications, their team, and organization?
In this session, Asif Khan, Technical Business Manager for AWS, will discuss how containers are becoming a new cloud-native compute primitive, and how your organization can use containers as a building block to accelerate innovation.
WeWork's Christopher Tava, Joshua Davis, and OpsLine's Radek Wierzbicki will show how they adopted containers as a discipline in code development, and how they refactored their production architecture into containers running on Amazon ECS in under 8 months.
Use Amazon Rekognition to Build a Facial Recognition System - Amazon Web Services
by Kashif Imran
Amazon Rekognition makes it easy to extract meaningful metadata from visual content. In this workshop, you will work in teams to build a simple system to help track missing persons. You'll develop a solution that leverages Amazon Rekognition and other AWS services to analyze images from various sources (e.g., social media) and provide authorities with timely reports and alerts on new leads for missing individuals. The solution will entail a repeatable and automated process that follows best practices for architecting in the cloud, such as designing for high availability and scalability.
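One piece of such a pipeline might filter the matches that Rekognition's `search_faces_by_image` returns down to high-confidence leads before alerting anyone. The response shape below mirrors the API's `FaceMatches`/`Similarity` fields; the threshold and case IDs are assumptions.

```python
def strong_leads(response, min_similarity=95.0):
    """Keep face matches at or above a similarity threshold.

    `response` has the shape of a search_faces_by_image result;
    ExternalImageId is whatever ID the faces were indexed under
    (here, hypothetical case numbers).
    """
    return [
        m["Face"]["ExternalImageId"]
        for m in response.get("FaceMatches", [])
        if m["Similarity"] >= min_similarity
    ]

# A fabricated response for illustration:
sample = {
    "FaceMatches": [
        {"Similarity": 99.1, "Face": {"ExternalImageId": "case-1041"}},
        {"Similarity": 71.3, "Face": {"ExternalImageId": "case-2203"}},
    ]
}
# strong_leads(sample) keeps only "case-1041" at the default threshold.
```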
ENT212 - An Overview of Best Practices for Large-Scale Migrations - Amazon Web Services
We've partnered with hundreds of customers on their large-scale migrations to AWS. This session outlines some of the common challenges that our customers face and how they've overcome these challenges. The session also describes the patterns we've observed that make legacy migrations successful, and the mechanisms we've created to help customers migrate faster.
Slides from my talk at the first AWS Community Day in Bangalore
https://www.meetup.com/awsugblr/events/243819403/
Speaker notes: https://medium.com/@adhorn/10-lessons-from-10-years-of-aws-part-1-258b56703fcf
and https://medium.com/@adhorn/10-lessons-from-10-years-of-aws-part-2-5dd92b533870
The list is not in any particular order :)
Case Study: Ola Cabs Uses Amazon EBS and Elastic Volumes to Maximize MySQL De... - Amazon Web Services
Ola Cabs is India’s leading taxi aggregator, providing point-to-point transportation for a million people daily in more than 110 cities. Ola Cabs chose AWS from the start because it offered the flexibility, scalability, and agility the company needed to establish its competitive edge. In this session, you hear about Ola Cabs’ journey to the cloud and learn how they take advantage of the flexibility and elasticity of Amazon EBS storage to optimize performance, maximize availability, and save money compared with instance storage. We describe best practices and share tips for success throughout.
If you want to deliver videos to all consumers on all devices, building such workloads is complex, time consuming, and expensive. Now, it is fast and easy to implement video-on-demand workflows on AWS and distribute video content to a global audience. Companies, small or large and in various industries, can deliver streaming video without complex professional video tools. In this session, learn how to build complex video workflows entirely in code using AWS services.
More and more enterprise companies are migrating to the AWS Cloud and there are a number of reasons why. While every organization is going to have their own unique motivations, common drivers include exiting data centers, increasing business agility, improving workforce productivity, gaining transparency in operational costs and reducing risk.
The AWS Migration Acceleration Program (MAP) is designed to help enterprises that are committed to a migration journey achieve a range of these business benefits by migrating existing workloads to Amazon Web Services. In this session, you will learn about proven migration patterns, methods and tools that AWS has delivered successfully to hundreds of enterprise customers globally that will help you accelerate migrations, reduce risk and quickly realize value.
Slides from my talk at the IP Expo Nordic 2017:
https://www.ipexponordic.com/Speakers-2017/Adrian-Hornsby
Speed and agility are essential for today’s businesses. The quicker you can get from an idea to first results, the more you can experiment and innovate with your data, perform ad hoc analysis, and drive answers to new business questions. During this talk, Adrian will cover key features of the AWS IoT platform, the latest developments, and live demos.
Cox Automotive’s Data Center Migration to the AWS Cloud - ENT330 - re:Invent ... - Amazon Web Services
Cox Automotive provides digital solutions that transform how the world buys, sells, and owns cars. Cox is currently engaged in a multiyear effort to migrate the bulk of its applications from physical data centers to AWS, including client-facing SaaS applications and large, consumer-facing websites. In this session, they discuss their learnings on how to effectively migrate large, service-based architectures to AWS while minimizing the impact to customers. They also share lessons learned for conducting organizational change at scale and creating a culture of self-service.
Join us to learn what's new in serverless computing and AWS Lambda. Dr. Tim Wagner, General Manager of AWS Lambda and Amazon API Gateway, will share the latest developments in serverless computing and how companies are benefiting from serverless applications. You'll learn about the latest feature releases from AWS Lambda, Amazon API Gateway, and more. You will also hear from FICO about how it is using serverless computing for its predictive analytics and data science platform.
Similar to ARC207 - Monitoring Performance of Enterprise Applications on AWS: Understanding the Dynamic Nature of Cloud Computing
7 Tips & Tricks to Having Happy Customers at Scale - New Relic
Customer expectations are at an all-time high, making it more and more difficult for companies to please them. Companies who understand their customers well are the ones who rise to the top over their competitors. New Relic, provider of real-time insights for software-driven businesses, has this formula figured out. Roger Scott, New Relic's EVP and Chief Customer Officer, shares his 7 tips and tricks for keeping your customers happy, and how to do so at a large scale.
FutureStack Tokyo 19 - [New Relic Technical Session] Monitoring and Visualization Will Save Your Digital Transformation! - ... - New Relic
One of New Relic's goals is to help drive DevOps forward and make digital transformation succeed. After examining why monitoring and visualization matter for DevOps, and what data needs to be managed, we introduce what you can achieve with New Relic, with demos, from a broad range of technical and business perspectives.
New Relic K.K.
Solutions Consultant
Chie Sasaki
FutureStack Tokyo 19 - Making Insights and Data an Organizational Strength - Akihiro Ikeda, Dwango Co., Ltd. - New Relic
To keep a service or product going "forever," you need to turn insights and data into organizational strength.
dwango.jp, which we develop and operate, will soon celebrate its 20th anniversary. Tracing the history of a system that was by no means smooth sailing, and how we introduced New Relic, we share one of the reasons we have been able to keep going: how to convert insights and data into organizational strength.
Three Monitoring Mistakes and How to Avoid Them - New Relic
The days of parsing log files and building out homebrewed monitoring tools are (thankfully) coming to an end. Yet as those outdated techniques begin to fade, a whole new set of challenges has arisen around employing and running modern monitoring solutions.
Discover how New Relic can help turn monitoring blunders into intelligent problem solving, including how to avoid making common mistakes like:
- Not monitoring the whole system
- Monitoring arbitrary things in your system
- Making your monitoring part of the problem
Intro to Multidimensional Kubernetes Monitoring - New Relic
As a Kubernetes environment grows and becomes more complex, it gets harder to answer some very basic—but very important—questions. Questions like: What is the health of my cluster? What is the hierarchy and the health of the elements (nodes, pods, containers, and applications) within my cluster? In order to effectively manage the health and performance of your Kubernetes environments—at any scale and any level of complexity—it’s essential you have immediate, useful answers to these questions.
Our Kubernetes cluster explorer was designed to give you a multi-dimensional representation of your clusters—giving you the ability to drill down into Kubernetes data and metadata in a high-fidelity, curated UI.
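The kind of hierarchy rollup the cluster explorer curates, pods grouped under nodes with their health, can be sketched in a few lines. The pod data and statuses below are invented for illustration (statuses as `kubectl get pods` would display them), not pulled from any New Relic API.

```python
from collections import Counter

def pod_health_by_node(pods):
    """Summarize pod statuses per node, one Counter per node."""
    summary = {}
    for pod in pods:
        summary.setdefault(pod["node"], Counter())[pod["status"]] += 1
    return summary

# Fabricated cluster state:
pods = [
    {"node": "node-a", "status": "Running"},
    {"node": "node-a", "status": "CrashLoopBackOff"},
    {"node": "node-b", "status": "Running"},
]
# pod_health_by_node(pods) answers the "what is the health of my
# cluster, node by node?" question from the abstract.
```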
Understanding Microservice Latency for DevOps Teams: An Introduction to New R... - New Relic
Distributed tracing is designed to give DevOps teams an easy way to capture, visualize, and analyze traces through complex architectures—including architectures that use both monoliths and microservices. And, by leveraging New Relic Applied Intelligence capabilities, you can easily highlight anomalies within a trace for faster resolution.
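Two of the analyses distributed tracing enables can be sketched with generic span fields (these are illustrative field names, not a specific New Relic schema): finding the slowest span in a trace and totaling time per service.

```python
def slowest_span(spans):
    """Return the span with the longest duration in a trace."""
    return max(spans, key=lambda s: s["duration_ms"])

def time_by_service(spans):
    """Total duration per service across all spans of a trace."""
    totals = {}
    for s in spans:
        totals[s["service"]] = totals.get(s["service"], 0) + s["duration_ms"]
    return totals

# A fabricated trace through three microservices:
trace = [
    {"service": "checkout", "name": "POST /cart", "duration_ms": 310},
    {"service": "inventory", "name": "reserve", "duration_ms": 120},
    {"service": "payments", "name": "authorize", "duration_ms": 650},
]
# slowest_span(trace) points at the payments authorize call, the kind
# of anomaly a tracing UI would surface for faster resolution.
```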
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply doing machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
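The link-prediction setting can be made concrete with a TransE-style score, one standard embedding model for knowledge graphs: a triple (head, relation, tail) is scored as plausible when the head embedding plus the relation embedding lands near the tail embedding. The two-dimensional vectors below are toy values, not learned embeddings.

```python
import math

def transe_score(h, r, t):
    """Negative Euclidean distance ||h + r - t||; higher means more plausible."""
    return -math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

# Toy embeddings for a triple like (Paris, capital_of, France):
h = [0.1, 0.2]        # "Paris"
r = [0.4, 0.1]        # "capital_of"
good_t = [0.5, 0.3]   # "France": h + r lands right on it
bad_t = [2.0, -1.0]   # an implausible tail

# transe_score(h, r, good_t) is (near) zero, while the implausible
# tail scores much lower, so the model would predict the first link.
```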
Kubernetes & AI - Beauty and the Beast!?! @ KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”: how does this fancy AI technology get managed from an infrastructure operations point of view? Is it possible to apply our beloved cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud or on-premises strategy we may need to apply AI to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
It’s Sunday.
The day of the big game.
You’ve invited 20 of your closest friends over to watch the game on your new 300” ultra max TV.
Everyone has come, your house is full of snacks and beer. Everyone is laughing. The game is about to start.
And…
…the lights go out……the TV goes dark……the game, for you and your friends, is over.
Obviously disappointed, what happened?
You decide to pick up the phone and call the local power company.
The representative, unsympathetically, says: “We’re sorry, but we only guarantee 95% availability of our power grid.”
They could not understand why you were complaining, after all you had power “most of the time”.
Why is availability important?
* Because your customers expect your service to work…all the time.
* Anything less than 100% availability can be catastrophic to your business.
We were wondering how changing a setting on our MySQL database might impact our performance…
… but we were worried that the change may cause our production database to fail…
… Since we didn’t want to bring down production, we decided to make the change to our backup (replica) database instead…
… After all, it wasn’t being used for anything at the moment.
Until…of course...the backup was needed...
Does this story sound familiar? It’s a true story, and unfortunately not an uncommon one.
Imagine we are an e-commerce website. We’ve got a mobile app that can purchase items in our shop. {C} Bob uses his phone, buys something, and it takes 300ms. That’s great! {C} Sally logs in, buys something, but the database is slow. It takes much longer. She is not a happy customer.
Availability is not just whether a page responds, but how long it takes to respond.
The customer doesn’t care why a problem occurred, they don’t care why your app is slow. If it doesn’t meet their expectations at a time they expect, nothing else matters…
But keeping your application available can be tough. It may be fuzzy. Performance may be good for some users, and bad for others. But, can you even detect this, or do you just show that, on average, your site is doing fine?
The real answer to how your application is doing is not a hope and a wish. It’s in the details. It’s in the data.
Modern application monitoring can’t be done by simply looking from the outside in. It can’t be done with averaged or sampled data. You must collect data from all areas of your application, and from all transactions. You must collect tons and tons of data.
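To make the point about averaged data concrete, here is a minimal sketch on made-up numbers. The response times are hypothetical, not from any real dataset; the point is that a healthy-looking average can hide the customers who matter.

```python
import statistics

# Hypothetical per-transaction response times in ms. Most requests are
# fast, but a handful of users see multi-second responses.
response_times = [300] * 95 + [4000] * 5

mean = statistics.mean(response_times)            # looks healthy
p99 = sorted(response_times)[int(len(response_times) * 0.99) - 1]

print(f"mean: {mean:.0f} ms")   # 485 ms -- "on average, the site is fine"
print(f"p99:  {p99} ms")        # 4000 ms -- 1 in 100 users waits 4 seconds
```

This is why sampled or averaged monitoring data is not enough: you only see Sally's 4-second experience if you collected her transaction.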
---
In fact, you typically need to collect more monitoring data than data that is within your application. And it grows continuously, every day, every second. Everything that anyone does on your application, generates performance data.
If anybody is using your application, you must collect data about exactly how they are using it and how the infrastructure behind it works together. All of it is important.
All parts of your application, from your servers thru your apps, to the business outcomes they represent {C} All generate data that you must analyze together.
To know what happened, we need data. We need data from every level of our application. Here is a typical, simple web application. It consists of an application and some services. Those run on servers running an operating system, which in turn run on virtual hardware. Parts of the application may also run in our customers’ browsers, or in their mobile applications.
Often people think that all they need is low level virtual hardware monitoring. They monitor their instances using tools like CloudWatch. But CloudWatch provides a very limited view of the world. You get virtual hardware level information, but that’s about it. You don’t even get information about the operating system, memory, processes, or system configuration. And you absolutely get no information about your application.
To know how your application is really performing. You need an application performance monitoring tool. You also need to know how the rest of your infrastructure is running (the operating system for instance). You also need to know how your remote application, such as those running on mobile devices or your customer’s browsers are running.
To monitor the application, you need full stack performance monitoring.
Because, avoiding this is critical to every business.
Point 2: there are technologies that can help you keep your application running…technologies such as the dynamic cloud. What do I mean? Let’s take a look.
How can the cloud help? Well, it turns out that there are two fundamental ways people make use of the cloud. The first is to use the cloud as a “Better Data Center”. The second is to use the “Dynamic Nature” of the cloud to build better apps faster. I’m going to talk about each of these methods.
Let’s first look at using the cloud as a “Better Data Center”.
What do I mean by using the cloud as a “Better Data Center”? I mean:
* Resources are allocated to specific uses, just like in a regular data center <click>
* The provisioning process for new resources, though, is significantly faster <click>
* The lifetime of the resources you create is relatively long…usually measured in days, weeks, months, or years. <click>
* However, even with a faster provisioning process, traditional “capacity planning” is still important and still applies.
Why would we want to use the cloud simply as a “better data center”? What are the benefits to us building applications? Since we can add new capacity faster, we can build and scale our applications easier in the cloud. In addition to adding servers easier and quicker, we can add entire new data centers easier, which can improve our application availability and redundancy. Additionally, this ability to add additional data centers can improve our compliance, especially when it comes to things like EU Safe Harbor laws.
So, now, let’s switch to talking about using the cloud in a dynamic environment.
What do I mean by using the cloud as a “dynamic tool for dynamic applications”? I mean:
Use only the resources you need <click>
* Allocate and deallocate resources on the fly <click>
* Resource allocation becomes an integral part of your application architecture.
In a dynamic application, resources are allocated, consumed, and deallocated on the fly. And the application is aware of and is controlling this management of resources. The application is essentially performing traditional OPs resource management tasks.
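As a minimal sketch of what "the application performing traditional Ops resource management" means, here is a toy scaling loop. All names, thresholds, and capacities are illustrative assumptions, not any real API: the point is only that the allocate/deallocate decision lives inside the application logic.

```python
# Minimal sketch of application-driven resource management: the app itself
# decides how many workers to run based on its current queue depth.
# TASKS_PER_WORKER and the bounds are made-up values for illustration.

TASKS_PER_WORKER = 10   # assumed capacity of one worker
MIN_WORKERS, MAX_WORKERS = 1, 20

def desired_workers(queue_depth: int) -> int:
    """Scale worker count with load, clamped to sane bounds."""
    needed = -(-queue_depth // TASKS_PER_WORKER)  # ceiling division
    return max(MIN_WORKERS, min(MAX_WORKERS, needed))

def reconcile(current: int, queue_depth: int) -> int:
    """Return how many workers to add (+) or remove (-)."""
    return desired_workers(queue_depth) - current

print(reconcile(current=3, queue_depth=85))   # -> 6: allocate six more
print(reconcile(current=9, queue_depth=5))    # -> -8: deallocate eight
```

In a real system the `reconcile` result would drive calls to a provisioning API (launching instances, starting containers, and so on), but the decision logic belongs to the application.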
New Relic recently did an analysis of how our customers are making use of Docker. The question we wanted to answer was: how long do Docker containers live? This diagram shows the answer. The horizontal axis is the number of hours a Docker container lived, and the vertical axis is the number of containers in that time bucket. As you can see, there is a long tail, with some Docker containers running for well over a year. However, a huge number of Docker containers run for less than one hour. In fact, if we zoom in on just that one-hour period…
we can see that most Docker containers we run actually run for less than one minute! Over 11% of all Docker containers we run will run for less than 60 seconds.
This is some customer’s application or service, some business logic, that starts up, runs, and shuts down all within 60 seconds. This is very rapid. These are containers that are launched only for a specific business purpose and are terminated when that purpose is completed. This is what we mean by dynamic infrastructure.
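The kind of lifetime analysis described above can be sketched in a few lines. The lifetimes below are invented sample values, not New Relic's data; real input would come from container start/stop events.

```python
# Sketch of the container-lifetime question on made-up data:
# what fraction of containers lived under 60 seconds?
lifetimes_sec = [12, 45, 58, 75, 300, 3600, 40, 86400, 59, 7200]

under_a_minute = sum(1 for t in lifetimes_sec if t < 60)
fraction = under_a_minute / len(lifetimes_sec)

print(f"{under_a_minute} of {len(lifetimes_sec)} containers "
      f"({fraction:.0%}) lived under 60 seconds")
```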
And there are lots of different cloud technologies that can be used in this dynamic manner…from queues to routing to auto scaled EC2 instances. Many resources in the cloud can be used in this dynamic fashion.
The dynamic cloud allows you to build better applications, faster. The way you’ve done things in the past won’t work in the future.
Change happens faster in the cloud. This is because of dynamic servers, dynamic infrastructure, and, more recently, {c} the cloud is even more dynamic due to technologies such as AWS Lambda.
Building dynamic infrastructures in the cloud allows you to {c} scale your applications better. {c} It also allows you to make changes to your application faster and easier. {c} Both of these ultimately result in higher availability…
But only if you know what your application is actually doing…
Here is an example of a dynamic application. It looks much like the static application. It might have more services and microservices that compose the application, this is typical of a more modern application.
We still have AWS CloudWatch monitoring the low level cloud infrastructure.
And we still have traditional application performance monitoring that monitors the static nature of the application components.
Overall, this provides **almost** top to bottom monitoring of the entire application.
But what about this piece? How do you monitor the provisioning process itself? Given that resources are coming and going regularly, how do you monitor that?
How do you monitor components that are there one moment, but less than 60 seconds later, they are gone?
<click>
Remember the docker information…
It turns out that monitoring a dynamic application in a dynamic cloud is very different than monitoring traditional data center components.
You must of course still monitor each of the cloud components themselves…each of the services and resources and components that make up your application.
{c}
But you also must monitor the lifecycle of the cloud components. This is because it matters not only **that** a resource was used, it matters **when** that resource was used. Because just looking at the resources running right now is inadequate when trying to diagnose a problem from even a few minutes ago. The resources that were in use when the problem occurred are **not** the same resources in use now.
So, in the old world, your operations team was comfortable. They knew the resources they controlled, they created them, they managed them. All was simple and manageable.
But in this new world, resources are created and destroyed dynamically. The world of the operations team can no longer be as simple as tracking resources on a spreadsheet. The resources they are responsible for are dynamic and transient. Their world has gotten a lot more complicated.
The third point, is getting to the cloud. Migrating to the cloud is easy, right?
How do we move to the cloud? Often, we start our migration with lofty expectations, but we find that moving to the cloud isn’t necessarily as easy as we would like it to be. Problems occur. The cloud doesn’t meet the expectations that were promised to us. How can we meet our promises to our stakeholders if we can’t get the cloud to do what we want? Most companies moving to the cloud struggle with this. Some struggle more than others. Some fail to overcome the struggle.
But moving to the cloud does not have to be scary or dangerous. It can be done safely, but you must be willing to learn as you go. Learn and adapt the cloud to meet your company’s needs, and learn and adapt your expectations to the reality of what the cloud can offer.
Let’s take a look at how most enterprises figure out how to migrate to the cloud. There are six *typical* steps that most companies take to move to the cloud.
They don’t all use all the steps. Some stop part way up the path.
Some skip steps.
But this is typical…
Let’s look at each of these in turn.
Let’s start with “Experiment”.
This is the first, tentative step into the cloud. It involves using safe technologies. Technologies that we can use in simple and subtle ways in parts of our applications that may be less critical.
There are no cloud policies created. We just build one off implementations to see how the cloud can fit into our needs.
Most companies have at least started on this step.
After you’ve done some basic “feet wetting” in the cloud, security typically becomes a concern.
Once policies are in place and the cloud can be trusted…you start using other features the cloud has to offer.
Now the cloud is important to you, so you start to see what else the cloud can do for us.
Now, we start looking at cloud native services…services only available in the cloud.
So now we are committed to the cloud…now comes the last step. Mandated use.
But ultimately, these are the steps involved.
Different companies go thru these steps at different speeds.
Different companies find the right “stopping point” that matches their needs
While these are the steps our *company* may go thru.
As we build new and migrate existing applications, our applications go thru a similar learning process…
How can a given application take advantage of the cloud?
This adoption may happen faster or slower for different types of applications.
Let’s take a look at these as two different axes on a chart.
Corporate adoption process on the left, application adoption process on the bottom
Another way to look at this: based on application types and requirements...
So we can see we are more likely to use the “newer” technologies, such as Lambda, in new applications. But we are much less willing to use these technologies in our more business critical applications.
There exists a sweet spot…
>Corporate adoption is strong, but not “mandated”
>Application adoption is strong, but not “committed”
*This is the destination for a lot of companies and applications
Very near some of the common, core AWS services
How can I make sure a cloud migration is successful?
Understand where your culture is
Risk tolerance, Cloud commitment, Expertise
Understand your needs
Redundancy? Cost? New Opportunity?
Consciously plan your acceptance
What level are you?
What level do you need to be?
Drive your culture to where you feel you need to be
Monitor your adoption
Before migration
Baseline application
Servers
Databases
Caches
Applications
Microservices
Determine your steady state
Important before you migrate!
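One way to make "determine your steady state" actionable is to capture a baseline before migrating and flag anything that strays from it afterward. This is a hedged sketch on invented numbers; the 3-sigma threshold and the baseline values are illustrative assumptions, not a prescribed method.

```python
import statistics

# Pre-migration response-time baseline in ms (illustrative sample values).
baseline = [210, 195, 205, 200, 190, 208, 202, 198]
mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline)

def is_deviation(sample_ms: float, n_sigma: float = 3.0) -> bool:
    """Flag a post-migration sample that falls outside the steady state."""
    return abs(sample_ms - mean) > n_sigma * stdev

print(is_deviation(204))   # within normal variation
print(is_deviation(450))   # track down and explain before moving on
```

The same idea applies per service, per database, per cache: capture the steady state first, then any deviation during migration is something to explain, not ignore.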
During migration
Incorporate Cloud’s internal monitoring
…provides cloud specific infrastructure monitoring
…AWS CloudWatch
Continue application monitoring
*Here, looking for performance deviations from steady state
Track down & explain all deviations before moving on
Understand all deviations from norm
Solve problematic deviations/problems
Continue monitoring post migration
Should understand: The infrastructure is now out of your control…you need to keep an eye on it
Cloud infrastructure changes can impact your application…you need to keep an eye on it
There are some cloud specific concerns:
Amazon EC2 instance failures
A greater part of your availability plans
Often impacts other AWS systems as well
Instance degradation (more common than you’d think)
Ongoing application & infrastructure monitoring is essential
Before, During, and After Your migration.
Monitoring plays an important role in your entire migration process.
So, that’s the third point in keeping your application running at scale…successful cloud migration.
{c}Together, these three points can keep your application highly available and running at scale.
{c}And underlying all three is monitoring your application and your infrastructure.
It used to be, long ago, that all it took to make sure an application was running was to look at the server. Did CPU or memory utilization change recently? If it did, there might be a problem. Everything was static, everything was smooth, everything was constant. A change indicated a problem.
But in this new world, resources are created and destroyed dynamically. The world of the operations team can no longer be as simple as tracking resources on a spreadsheet. The resources they are responsible for are dynamic and transient. Their world has gotten a lot more complicated.
In order to monitor your dynamic applications in the dynamic cloud, you must monitor all aspects of your application, top to bottom, using a full stack monitoring solution, a solution such as New Relic.
Because this wasn’t acceptable.
The dynamic cloud allows you to build better applications, faster. The way you’ve done things in the past won’t work in the future.