The document discusses Microsoft's Event Grid service. A three-sentence summary:
Event Grid is a fully managed event routing service that can handle billions of events per week to trigger workflows and functions. It uses a pub/sub model to allow event publishers to emit events to topics, which then causes matching subscriptions to receive the events. Event Grid is designed to be cloud native, serverless friendly, and handle large-scale event processing reliably and securely across Microsoft Azure and other cloud services and applications.
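The pub/sub model described above can be sketched in-process: publishers emit events to a topic, and every subscription whose filter matches receives a copy. This is a minimal illustration of the routing model only, not the Azure SDK; all names here are invented for the sketch.

```python
# Minimal in-process sketch of topic/subscription routing, in the spirit of
# the Event Grid pub/sub model. Not the Azure SDK; names are illustrative.

class Topic:
    def __init__(self, name):
        self.name = name
        self.subscriptions = []  # (event_type_filter, handler) pairs

    def subscribe(self, event_type, handler):
        # A subscription receives only events whose type matches its filter.
        self.subscriptions.append((event_type, handler))

    def publish(self, event_type, data):
        # Fan the event out to every matching subscription.
        delivered = 0
        for filter_type, handler in self.subscriptions:
            if filter_type == event_type or filter_type == "*":
                handler({"eventType": event_type, "data": data})
                delivered += 1
        return delivered

received = []
topic = Topic("storage-events")
topic.subscribe("BlobCreated", received.append)
topic.subscribe("*", lambda e: None)  # a catch-all subscription

count = topic.publish("BlobCreated", {"url": "https://example.com/file"})
print(count)  # both subscriptions match -> 2
```

The real service adds durable delivery, retries, and dead-lettering on top of this basic matching step.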
Intellias CQRS Framework is a cutting-edge cloud-native framework for massive-scale event-driven microservice solutions.
The CQRS Framework was designed by a team of top CoE architects and engineers as part of the IntelliGrowth cloud platform for managing mission-critical business processes.
AWS re:Invent 2016: Blockchain on AWS: Disrupting the Norm (GPST301) - Amazon Web Services
Blockchain technology is poised for widespread adoption. AWS is working with financial institutions and blockchain providers to further innovation. AWS provides services like CloudTrail, CloudFormation, S3 and VPC that can be used to build robust blockchain solutions globally at scale, whether for public or private blockchains. PwC has experience delivering blockchain proofs-of-concept, pilots and production systems for insurance claims management and asset distribution using these AWS services. Future blockchain use cases may include identity management, utilities, healthcare and energy.
Our customers often need a simple routine that runs on a regular basis, but the solution doesn't need to be elaborate or worth the trouble of setting up servers and other infrastructure. Serverless computing is the abstraction of servers, infrastructure, and operating systems, and it makes delivering solutions for your customers' needs much quicker and cheaper. During this session we will look at how Azure Functions enables you to run code on demand without having to explicitly provision or manage infrastructure.
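The kind of simple scheduled routine described above can be sketched with Python's standard-library `sched` module. This is a local stand-in for illustration only; in Azure Functions the same routine body would live inside a timer-triggered function driven by a CRON schedule instead of a local scheduler.

```python
import sched
import time

# A simple routine we want to run on a schedule. In Azure Functions this body
# would sit inside a timer-triggered function; here a local scheduler drives it.
runs = []

def routine():
    runs.append(time.monotonic())

scheduler = sched.scheduler(time.monotonic, time.sleep)

# Schedule three runs, 0.01 s apart. In a timer-triggered Azure Function, a
# CRON expression (e.g. every five minutes) would play this role instead.
for i in range(3):
    scheduler.enter(0.01 * i, priority=1, action=routine)

scheduler.run()  # blocks until all scheduled events have fired
print(len(runs))  # -> 3
```

The point of the serverless version is that the scheduling, scaling, and host machine all disappear from your code; only `routine()` remains.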
Netflix Open Source Meetup Season 4 Episode 3 - aspyker
In this episode, we will focus on security in the cloud at scale. We’ll have Netflix speakers discussing existing and upcoming security-related OSS releases, and we’ll also have external speakers from organizations that are using and contributing to Netflix security OSS.
First, Patrick Kelley from Netflix’s Security Operations team will speak about RepoMan, an upcoming OSS release designed to right-size AWS permissions. Then, Wes Miaw from Netflix’s Security Engineering team will discuss MSL (Message Security Layer).
We have two external speakers for this event - Chris Dorros from OpenDNS/Cisco will talk about his use of and contributions to Lemur, and Ryan Lane from Lyft will talk about their use of BLESS.
After the talks, we’ll have OSS authors at demo stations to answer questions and provide demos of Netflix security OSS, including Lemur, MSL, and Security Monkey.
Elastically scalable architectures with microservices. The end of the monolith? - Javier Arias Losada
In recent years the microservices architecture style has been gaining traction with companies such as Netflix, Yelp, Gilt, and PayPal. Many of those companies abandoned their previous monolithic architectures and moved to a microservices approach.
Does that mean that monolithic architectures are a thing of the past?
In this talk we will review some key microservices concepts (and misconceptions), search for the essence of microservices architectures, and discuss different approaches from industry to implementing them.
Andrew Spyker presented on Netflix's cloud platform and open source projects. Some key points included:
- Netflix has migrated from monolithic architectures to microservices and continuous delivery enabled by their open source libraries and services.
- Their platform focuses on elasticity, high availability through automation, and operational visibility.
- Netflix uses technologies like Eureka, Ribbon, Hystrix, and Servo to enable scalability, resilience, and monitoring across their distributed systems.
- They contribute over 50 open source projects to help others adopt their cloud-native approaches and are working on data and UI related projects.
You’ve decided to develop in Azure and need to make a decision on the messaging technology. Storage Queues, Service Bus, Event Grid, Event Hubs, etc. Which technology should you use? How do you pick the right one if they all deal with messages? This session will help you answer these questions.
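The decision the session addresses can be summarized as a rough rule-of-thumb mapping. This is a sketch of commonly cited guidance for these services, not an official decision matrix from the session or from Microsoft.

```python
# Rough rule-of-thumb mapping from scenario to Azure messaging service.
# This reflects commonly cited guidance, not an official decision matrix.
GUIDANCE = {
    "simple queueing, large volume, low cost": "Storage Queues",
    "enterprise messaging: ordering, transactions, dead-lettering": "Service Bus",
    "reactive, discrete events (something happened)": "Event Grid",
    "high-throughput telemetry / event streams": "Event Hubs",
}

def suggest(scenario: str) -> str:
    """Return a starting-point suggestion for a messaging scenario."""
    return GUIDANCE.get(scenario, "unclear: compare delivery guarantees and cost")

print(suggest("reactive, discrete events (something happened)"))  # -> Event Grid
```

The real decision also weighs message size limits, ordering guarantees, retention, and pricing, which a table this small cannot capture.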
Kafka Summit SF 2017 - Providing Reliability Guarantees in Kafka at One Trill... - confluent
In this presentation, I will talk about my firsthand experience dealing with the unique challenges of running Kafka at massive scale. If you ever thought that running Kafka is difficult, this talk may change your mind and provide valuable insights into how to configure a Kafka cluster efficiently, how to manage Kafka for enterprise customers, and how to measure, monitor, and maintain the quality of the Kafka service. Our production Kafka cluster runs on more than 1,500 VMs and serves over 10 GB/s of data spread across hundreds of topics for multiple teams across Microsoft. We built a self-serve Kafka management service to make the process manageable and scalable across many teams. In this talk, I will also share insights about running Kafka in private vs. multi-tenant mode, supporting failover and disaster recovery requirements, and how to make Kafka compliant with regulatory certifications such as ISO, SOC, and FedRAMP.
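Reliability guarantees of the kind the talk discusses are largely a matter of producer and topic configuration. Below is a hedged sketch of settings commonly used for at-least-once delivery; the parameter names follow the standard Kafka producer/topic configs, but the specific values are illustrative, not recommendations from this talk.

```python
# Producer settings commonly associated with stronger delivery guarantees.
# Parameter names are standard Kafka producer configs; values are illustrative.
reliable_producer_config = {
    "acks": "all",                       # wait for all in-sync replicas
    "enable.idempotence": True,          # avoid duplicates on broker-side retry
    "retries": 2147483647,               # retry transient failures indefinitely
    "delivery.timeout.ms": 120000,       # overall per-record delivery budget
}

# Broker/topic side: replication and minimum in-sync replicas matter as well.
reliable_topic_config = {
    "replication.factor": 3,
    "min.insync.replicas": 2,            # with acks=all, tolerate one replica down
}

print(reliable_producer_config["acks"])  # -> all
```

The interaction matters: `acks=all` only buys durability when `min.insync.replicas` is greater than one, since otherwise a single surviving replica can acknowledge alone.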
Presented by Nitin Kumar, Microsoft
Systems Track
All Things Open 2014 - Day 1
Wednesday, October 22nd, 2014
Mark Hinkle
Senior Director & Citrix Open Source Business Office for Citrix
Cloud
Crash Course in Cloud Computing
Find more of Mark's talks here: http://www.slideshare.net/socializedsoftware
Kubernetes is a system for orchestrating containerized workloads and services across many nodes that provides tools for managing replication, scaling, and state. KEDA allows Kubernetes to automatically scale function apps in response to events from sources like message queues or serverless triggers by integrating with functions running as pods and scaling them based on metrics and triggers. KEDA is useful for running serverless functions on Kubernetes in environments like on-premises, at the edge, or alongside other Kubernetes workloads where full control over scaling is needed.
This document discusses Kubernetes event-driven autoscaling (KEDA) which allows deployments to scale based on external events rather than resource metrics. KEDA monitors event sources like queues and scales the workload by modifying the horizontal pod autoscaler. It supports scaling deployments from zero replicas and scaling batch jobs. Real-world examples of using KEDA include scaling game workload for events and processing messages from queues in batches.
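The event-driven scaling described above is configured with a ScaledObject resource. The sketch below follows KEDA's documented resource schema, but the deployment name, queue name, and counts are made-up placeholders for illustration.

```yaml
# Sketch of a KEDA ScaledObject that scales a Deployment on Azure queue depth.
# "myapp" and "orders" are placeholder names for this illustration.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: myapp-scaler
spec:
  scaleTargetRef:
    name: myapp                # the Deployment to scale
  minReplicaCount: 0           # KEDA can scale to zero between events
  maxReplicaCount: 20
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders
        queueLength: "5"       # target messages per replica
```

KEDA translates this into a horizontal pod autoscaler behind the scenes and handles the zero-to-one activation itself, which the HPA alone cannot do.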
An overview of the Netflix Security Monkey Open Source tool. The presentation provides some background information, architectural overview, and screenshots showing the tool in action.
This document provides an agenda and best practices for CI/CD pipelines for .NET workloads using Azure DevOps. It recommends that CI pipelines should block pull requests, run tests faster than 5 minutes, and collect metrics. It suggests repository structure, including separating code, tests, and infrastructure, and using filters to trigger specific pipelines on changes. The document also discusses gated pre-merge pipelines, build pipelines, Roslyn analyzers, SonarCloud for quality analysis, post-merge build pipelines, and release pipelines for continuous deployment.
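The path-filter technique mentioned above looks roughly like this in an azure-pipelines.yml. This is a sketch; the branch and path names are assumptions about repository layout, not taken from the document.

```yaml
# Sketch: trigger this pipeline only when the service's code or tests change.
# Paths are illustrative; adjust to the actual repository layout.
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - src/OrderService/*
      - tests/OrderService.Tests/*
    exclude:
      - docs/*

pr:
  branches:
    include:
      - main
```

Splitting code, tests, and infrastructure into distinct top-level folders, as the document suggests, is what makes filters like these precise.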
Herding Kats - Netflix’s Journey to Kubernetes Public - aspyker
An update from Netflix Compute's container management platform, Titus, covering the work to move from Mesos to Kubernetes. Lessons learned, next steps, and challenges.
Monitoring Kubernetes across data center and cloud - Datadog
This document summarizes a presentation about monitoring Kubernetes clusters across data centers and cloud platforms using Datadog. It discusses how Kubernetes provides container-centric infrastructure and flexibility for hybrid cloud deployments. It also describes how monitoring works in Google Container Engine using cAdvisor, Heapster, and Stackdriver. Finally, it discusses how Datadog and Tectonic can be used to extend Kubernetes monitoring capabilities for enterprises.
This document discusses serverless computing and the OpenWhisk platform. It describes how OpenWhisk allows developers to build event-driven applications without managing servers. OpenWhisk provides a programming model based on actions that are triggered by events to execute code without worrying about scaling. It also offers an open source implementation that can run locally or on IBM Bluemix and supports various use cases like serverless apps, IoT, and chatbots.
Distributed architecture in a cloud native microservices ecosystem - Zhenzhong Xu
This document summarizes key aspects of distributed architecture in a cloud native microservices ecosystem. It discusses Netflix's transition to microservices running in the cloud, key characteristics of microservices and cloud computing like scalability and availability, challenges of operating in the cloud like unpredictable failures and latency, Netflix's open source tools for discovery, circuit breaking, resilience, continuous delivery, and more. It also provides an overview of how to develop, integrate, operate, and optimize microservices in terms of embracing failures, caching, operations, and using a data-driven approach.
This document discusses how Netflix implements microservices. It outlines key principles such as modeling services around business domains, decentralizing all things, designing for failure, and making systems highly observable. Services are autonomous and communicate through dumb pipes and smart endpoints. Netflix uses service discovery, dynamic configuration, circuit breakers, and chaos testing to make services resilient and prevent failures from cascading. The document emphasizes that each service needs a fallback strategy and that a reliable routing layer is essential for microservices architectures to function properly at Netflix's scale.
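The circuit-breaker idea mentioned above can be sketched in a few lines. This is a minimal illustration of the concept behind libraries like Hystrix, not Netflix's implementation; the threshold and fallback are arbitrary choices for the sketch.

```python
# Minimal circuit breaker: after N consecutive failures, stop calling the
# dependency and fail fast with a fallback. The idea, not Hystrix itself.
class CircuitBreaker:
    def __init__(self, threshold=3, fallback=lambda: "fallback"):
        self.threshold = threshold
        self.failures = 0
        self.fallback = fallback

    def call(self, fn):
        if self.failures >= self.threshold:   # circuit open: fail fast
            return self.fallback()
        try:
            result = fn()
        except Exception:
            self.failures += 1                # count the failure
            return self.fallback()
        self.failures = 0                     # success closes the circuit
        return result

def flaky():
    raise RuntimeError("dependency down")

breaker = CircuitBreaker(threshold=2)
results = [breaker.call(flaky) for _ in range(4)]
print(results)  # four fallbacks; after 2 failures, flaky() is no longer invoked
```

The fail-fast path is what prevents a slow or dead dependency from tying up threads and cascading the failure upstream, which is the behavior the document describes. A production breaker would also reopen the circuit after a cool-down to probe for recovery.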
Running a Massively Parallel Self-serve Distributed Data System At Scale - Zhenzhong Xu
Nearly any Internet-connected screen is capable of streaming Netflix content. Sitting on top of a cloud-native microservice architecture, the entire ecosystem generates over 1 trillion events every day to feed critical Netflix systems to monitor service health, to detect fraudulent behaviors, and to improve customer experience.
Keystone is the critical piece of Netflix backend infrastructure to ensure massive amount of events are processed in near real time, reliably, at scale, and in face of failures in a cloud-native microservices environment.
Turns out, such an embarrassingly parallel stream processing system is not embarrassingly easy to develop and operate, especially given the challenges of unpredictable failures in a cloud-native environment, self-serve multi-tenancy support, and assumptions of maintaining extremely high development/operation agility.
This talk will shed light on how we built an elastic, resilient, reactive, and self-healing distributed system in the cloud. Zhenzhong will present:
- The high-level cloud-native, microservice-based Keystone architecture.
- A deep dive on how we built the system based on ideas such as declarative reconciliation, container-based immutable deployment, logical workload isolation, and chaos exercises.
- Insights into our operational best practices, such as capacity provisioning, delivery semantics, deployment tradeoffs, backpressure management, etc.
This document discusses Azure cloud computing services and how they can be used for deep learning. It provides an overview of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). It also describes the Azure Data Science Virtual Machine, which contains tools for deep learning like TensorFlow. It recommends using scripting to manage resources on Azure and shutting down VMs when not in use to save costs.
Keystone event processing pipeline on a dockerized microservices architecture - Zhenzhong Xu
The document provides an overview of Keystone, Netflix's event processing pipeline. Some key points:
- Keystone is a collection of microservices and components that form a single, self-contained logical service for processing over 500 billion events generated daily at Netflix.
- It acts as a self-scaling, multi-tenant event processing pipeline that embraces continuous integration/continuous delivery to be self-healing and cloud failure tolerant.
- The routing infrastructure uses Zookeeper for instance assignment and checkpoints to clusters stored in S3 for at-least-once delivery semantics under failure conditions.
- The control plane handles container resource allocation, scheduling, and cluster orchestration and deployments.
- Current
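The at-least-once semantics described in the bullets above rest on checkpointing: advance the stored offset only after processing succeeds, accepting possible duplicates on restart. Below is a minimal sketch of that commit ordering with an in-memory dict standing in for a durable store such as S3; all names are invented for the illustration.

```python
# At-least-once processing via checkpoints: the offset is committed only AFTER
# the event is processed, so a crash replays (duplicates) rather than drops
# events. The dict is an in-memory stand-in for a durable checkpoint store.
checkpoint_store = {"offset": 0}
processed = []

def process_from_checkpoint(events, crash_after=None):
    start = checkpoint_store["offset"]
    for offset in range(start, len(events)):
        processed.append(events[offset])          # 1. process the event
        if offset == crash_after:
            raise RuntimeError("simulated crash before commit")
        checkpoint_store["offset"] = offset + 1   # 2. then commit the offset

events = ["a", "b", "c", "d"]
try:
    # Crash after processing "b" but before committing its offset.
    process_from_checkpoint(events, crash_after=1)
except RuntimeError:
    pass
process_from_checkpoint(events)  # restart resumes from the last committed offset
print(processed)  # -> ['a', 'b', 'b', 'c', 'd']  ("b" is replayed, none are lost)
```

Reversing the two steps (commit, then process) would give at-most-once semantics instead, losing "b" on the same crash; the ordering is the whole guarantee.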
This document discusses Azure and deep learning. It provides an overview of cloud computing models including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). It also describes the Azure Data Science Virtual Machine which provides tools for deep learning like TensorFlow. It recommends scripting resource management and shutting down VMs when not in use to save costs.
Citrix VP of Product Marketing, Peder Ulander offers a history lesson on CloudStack during his opening remarks at the CloudStack Collaboration Conference.
The re:Invent 2016 conference hosted by Amazon Web Services included keynotes and over 400 sessions across 4 locations over 5 days. New services and updates were announced across compute, analytics, databases, developer tools, artificial intelligence, monitoring, migration, mobile, containers, and Lambda. Significant announcements included new instance types, Elastic GPUs, IPv6 support for EC2, Athena for querying S3 data with SQL, Glue for data integration and transformations, expanded capabilities for many existing services like Lambda and CloudFront, and Snowmobile for large data transfers.
Serverless apps can be developed using OpenWhisk, an open source serverless platform. OpenWhisk allows code to execute in response to events, using triggers, actions, and rules. It provides polyglot support and scales dynamically. The document demonstrates how to create a timer triggered action and a Slack bot using OpenWhisk. It also provides an overview of OpenWhisk's architecture and implementation.
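The trigger/action/rule model described above can be sketched in-process. This is a toy illustration of the three concepts and how they relate, not the OpenWhisk API; all names are invented for the sketch.

```python
# Toy sketch of OpenWhisk's model: rules bind triggers to actions, and firing
# a trigger invokes every action bound to it. Not the OpenWhisk API itself.
actions = {}   # action name -> callable taking a params dict
rules = {}     # trigger name -> list of bound action names

def create_action(name, fn):
    actions[name] = fn

def create_rule(trigger, action):
    rules.setdefault(trigger, []).append(action)

def fire_trigger(trigger, params):
    # Invoke every action bound to this trigger by a rule.
    return [actions[name](params) for name in rules.get(trigger, [])]

create_action("greet", lambda p: f"hello, {p.get('name', 'world')}")
create_rule("every-minute", "greet")   # analogous to a timer-fed trigger

print(fire_trigger("every-minute", {"name": "whisk"}))  # -> ['hello, whisk']
```

In the real platform each action runs in its own short-lived container and the platform, not your code, owns the dispatch and the scaling.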
Leonard Austin (Ravelin) - DevOps in a Machine Learning World - Outlyer
As machine learning moves from niche to mainstream tech stacks how do DevOps engineers prepare for a very different set of problems. A brief look at the new issues that arise from machine learning, an overview of cutting-edge "old school" solutions and how to drag data science (kicking and screaming) into a world of automation.
Video: https://www.youtube.com/watch?v=KHxZCRajRiA
Join DevOps Exchange London here: http://meetup.com/DevOps-Exchange-London/
Follow DOXLON on twitter http://www.twitter.com/doxlon
This document discusses serverless computing and compares it to traditional server-based computing. It defines serverless computing and provides examples of serverless technologies like AWS Lambda. It also outlines common use cases for serverless computing like handling dynamic workloads and scheduled tasks. Finally, it compares different services between server-based and serverless models like compute, files, databases, data pipelines, machine learning, and IoT.
When IoT meets Serverless - from design to production and monitoring - Alex Pshul
IoT is not the future anymore. It is happening right here and right now. There are more and more applications for deploying tiny electronic devices, and companies are starting to see the value in this approach. To meet the high demand for IoT solutions, Microsoft invested $5 billion in its IoT services last year.
Developing and deploying IoT code using Azure services is easy. The hard part is supporting the large amount of data that comes with it. With the classic approach to developing backend services, scalability, maintenance, deployment, and framework choices become the biggest nightmares any architect will face.
Serverless computing comes to solve these issues and allows us to focus on what matters most – the logic. In this session we will discuss the differences between the classic backend approach and the new serverless approach. We will go over the services that Azure provides us for IoT development and how we can connect them to other services on Azure to create a completely serverless system, which will save us development and maintenance time.
Infrastructure at Scale: Apache Kafka, Twitter Storm & Elastic Search (ARC303... - Amazon Web Services
"This is a technical architect's case study of how Loggly has employed the latest social-media-scale technologies as the backbone ingestion processing for our multi-tenant, geo-distributed, and real-time log management system. This presentation describes design details of how we built a second-generation system fully leveraging AWS services including Amazon Route 53 DNS with heartbeat and latency-based routing, multi-region VPCs, Elastic Load Balancing, Amazon Relational Database Service, and a number of pro-active and re-active approaches to scaling computational and indexing capacity.
The talk includes lessons learned in our first generation release, validated by thousands of customers; speed bumps and the mistakes we made along the way; various data models and architectures previously considered; and success at scale: speeds, feeds, and an unmeltable log processing engine."
Alabama CyberNow 2018: Cloud Hardening and Digital Forensics Readiness - Toni de la Fuente
This document provides an overview of digital forensics and security in the cloud. It discusses common attacks such as access key compromise and misconfigured services. It also outlines an incident response workflow and tools that can be used to acquire evidence from AWS resources like EC2 instances, S3 buckets, and RDS databases. Finally, it discusses hardening strategies like using immutable infrastructure and auditing tools like Prowler to assess security configurations.
All Things Open 2014 - Day 1
Wednesday, October 22nd, 2014
Mark Hinkle
Senior Director & Citrix Open Source Business Office for Citrix
Cloud
Crash Course in Cloud Computing
Find more of Mark's talks here: http://www.slideshare.net/socializedsoftware
Kubernetes is a system for orchestrating containerized workloads and services across many nodes that provides tools for managing replication, scaling, and state. KEDA allows Kubernetes to automatically scale function apps in response to events from sources like message queues or serverless triggers by integrating with functions running as pods and scaling them based on metrics and triggers. KEDA is useful for running serverless functions on Kubernetes in environments like on-premises, at the edge, or alongside other Kubernetes workloads where full control over scaling is needed.
This document discusses Kubernetes event-driven autoscaling (KEDA) which allows deployments to scale based on external events rather than resource metrics. KEDA monitors event sources like queues and scales the workload by modifying the horizontal pod autoscaler. It supports scaling deployments from zero replicas and scaling batch jobs. Real-world examples of using KEDA include scaling game workload for events and processing messages from queues in batches.
An overview of the Netflix Security Monkey Open Source tool. The presentation provides some background information, architectural overview, and screenshots showing the tool in action.
This document provides an agenda and best practices for CI/CD pipelines for .NET workloads using Azure DevOps. It recommends that CI pipelines should block pull requests, run tests faster than 5 minutes, and collect metrics. It suggests repository structure, including separating code, tests, and infrastructure, and using filters to trigger specific pipelines on changes. The document also discusses gated pre-merge pipelines, build pipelines, Roslyn analyzers, SonarCloud for quality analysis, post-merge build pipelines, and release pipelines for continuous deployment.
Herding Kats - Netflix’s Journey to Kubernetes Publicaspyker
An update from Netflix Compute's container management platform, Titus, covering the work to move from Mesos to Kubernetes. Lessons learned, next steps, and challenges.
Monitoring kubernetes across data center and cloudDatadog
This document summarizes a presentation about monitoring Kubernetes clusters across data centers and cloud platforms using Datadog. It discusses how Kubernetes provides container-centric infrastructure and flexibility for hybrid cloud deployments. It also describes how monitoring works in Google Container Engine using cAdvisor, Heapster, and Stackdriver. Finally, it discusses how Datadog and Tectonic can be used to extend Kubernetes monitoring capabilities for enterprises.
This document discusses serverless computing and the OpenWhisk platform. It describes how OpenWhisk allows developers to build event-driven applications without managing servers. OpenWhisk provides a programming model based on actions that are triggered by events to execute code without worrying about scaling. It also offers an open source implementation that can run locally or on IBM Bluemix and supports various use cases like serverless apps, IoT, and chatbots.
Distributed architecture in a cloud native microservices ecosystemZhenzhong Xu
This document summarizes key aspects of distributed architecture in a cloud native microservices ecosystem. It discusses Netflix's transition to microservices running in the cloud, key characteristics of microservices and cloud computing like scalability and availability, challenges of operating in the cloud like unpredictable failures and latency, Netflix's open source tools for discovery, circuit breaking, resilience, continuous delivery, and more. It also provides an overview of how to develop, integrate, operate, and optimize microservices in terms of embracing failures, caching, operations, and using a data-driven approach.
This document discusses how Netflix implements microservices. It outlines key principles such as modeling services around business domains, decentralizing all things, designing for failure, and making systems highly observable. Services are autonomous and communicate through dumb pipes and smart endpoints. Netflix uses service discovery, dynamic configuration, circuit breakers, and chaos testing to make services resilient and prevent failures from cascading. The document emphasizes that each service needs a fallback strategy and that a reliable routing layer is essential for microservices architectures to function properly at Netflix's scale.
Running a Massively Parallel Self-serve Distributed Data System At ScaleZhenzhong Xu
Nearly any Internet-connected screen is capable of streaming Netflix content. Sitting on top of a cloud-native microservice architecture, the entire ecosystem generates over 1 trillion events every day to feed critical Netflix systems to monitor service health, to detect fraudulent behaviors, and to improve customer experience.
Keystone is the critical piece of Netflix backend infrastructure to ensure massive amount of events are processed in near real time, reliably, at scale, and in face of failures in a cloud-native microservices environment.
Turns out, such an embarrassingly parallel stream processing system is not embarrassingly easy to develop and operate, especially given the challenges of unpredictable failures in a cloud-native environment, self-serve multi-tenancy support, and assumptions of maintaining extremely high development/operation agility.
This talk will shed light on how we built an elastic, resilient, reactive, and self-healing distributed system in the cloud. Zhenzhong will present * High-level cloud-native microservice based Keystone architecture. * A deep dive on how we built the system based on ideas such as declarative reconciliation, container based immutable deployment, logical workload isolation, and chaos exercise. * Insights into our operation best practices, such as capacity provisioning, delivery semantics, deployment tradeoffs, backpressure management, etc.
This document discusses Azure cloud computing services and how they can be used for deep learning. It provides an overview of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). It also describes the Azure Data Science Virtual Machine, which contains tools for deep learning like TensorFlow. It recommends using scripting to manage resources on Azure and shutting down VMs when not in use to save costs.
Keystone event processing pipeline on a dockerized microservices architectureZhenzhong Xu
The document provides an overview of Keystone, Netflix's event processing pipeline. Some key points:
- Keystone is a collection of microservices and components that form a single, self-contained logical service for processing over 500 billion events generated daily at Netflix.
- It acts as a self-scaling, multi-tenant event processing pipeline that embraces continuous integration/continuous delivery to be self-healing and cloud failure tolerant.
- The routing infrastructure uses Zookeeper for instance assignment and checkpoints to clusters stored in S3 for at-least-once delivery semantics under failure conditions.
- The control plane handles container resource allocation, scheduling, and cluster orchestration and deployments.
- Current
This document discusses Azure and deep learning. It provides an overview of cloud computing models including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). It also describes the Azure Data Science Virtual Machine which provides tools for deep learning like TensorFlow. It recommends scripting resource management and shutting down VMs when not in use to save costs.
Citrix VP of Product Marketing, Peder Ulander offers a history lesson on CloudStack during his opening remarks at the CloudStak=c Collaboration Conference.
The Reinvent 2016 conference hosted by Amazon Web Services included keynotes, over 400 sessions across 4 locations over 5 days. New services and updates were announced across compute, analytics, database, developer tools, artificial intelligence, monitoring, migration, mobile, containers, and lambda. Significant announcements included new instance types, elastic GPUs, IPv6 support for EC2, Athena for querying S3 data with SQL, Glue for data integration and transformations, and expanded capabilities for many existing services like Lambda, CloudFront, and Snowmobile for large data transfers.
Serverless apps can be developed using OpenWhisk, an open source serverless platform. OpenWhisk allows code to execute in response to events, using triggers, actions, and rules. It provides polyglot support and scales dynamically. The document demonstrates how to create a timer triggered action and a Slack bot using OpenWhisk. It also provides an overview of OpenWhisk's architecture and implementation.
Leonard Austin (Ravelin) - DevOps in a Machine Learning World - Outlyer
As machine learning moves from niche to mainstream tech stacks, how do DevOps engineers prepare for a very different set of problems? A brief look at the new issues that arise from machine learning, an overview of cutting-edge "old school" solutions, and how to drag data science (kicking and screaming) into a world of automation.
Video: https://www.youtube.com/watch?v=KHxZCRajRiA
Join DevOps Exchange London here: http://meetup.com/DevOps-Exchange-London/
Follow DOXLON on twitter http://www.twitter.com/doxlon
This document discusses serverless computing and compares it to traditional server-based computing. It defines serverless computing and provides examples of serverless technologies like AWS Lambda. It also outlines common use cases for serverless computing like handling dynamic workloads and scheduled tasks. Finally, it compares different services between server-based and serverless models like compute, files, databases, data pipelines, machine learning, and IoT.
When IoT meets Serverless - from design to production and monitoring - Alex Pshul
IoT is not the future anymore. It is happening right here and right now. There are more and more applications for deploying tiny electronic devices and companies are starting to see the value in this approach. To meet the high demand for IoT solutions, Microsoft invested 5 BILLION dollars in their IoT services last year.
Developing and deploying IoT code using Azure services is easy. The hard part is supporting the large amount of data that comes with it. With the classic approach to developing backend services, scalability, maintenance, deployment, and framework choices become the biggest nightmares any architect will face.
Serverless computing comes to solve these issues and allows us to focus on what matters most – the logic. In this session we will discuss the differences between the classic backend approach and the new serverless approach. We will go over the services that Azure provides us for IoT development and how we can connect them to other services on Azure to create a completely serverless system, which will save us development and maintenance time.
Infrastructure at Scale: Apache Kafka, Twitter Storm & Elastic Search (ARC303...) - Amazon Web Services
"This is a technical architect's case study of how Loggly has employed the latest social-media-scale technologies as the backbone ingestion processing for our multi-tenant, geo-distributed, and real-time log management system. This presentation describes design details of how we built a second-generation system fully leveraging AWS services including Amazon Route 53 DNS with heartbeat and latency-based routing, multi-region VPCs, Elastic Load Balancing, Amazon Relational Database Service, and a number of pro-active and re-active approaches to scaling computational and indexing capacity.
The talk includes lessons learned in our first generation release, validated by thousands of customers; speed bumps and the mistakes we made along the way; various data models and architectures previously considered; and success at scale: speeds, feeds, and an unmeltable log processing engine."
Alabama CyberNow 2018: Cloud Hardening and Digital Forensics Readiness - Toni de la Fuente
This document provides an overview of digital forensics and security in the cloud. It discusses common attacks such as access key compromise and misconfigured services. It also outlines an incident response workflow and tools that can be used to acquire evidence from AWS resources like EC2 instances, S3 buckets, and RDS databases. Finally, it discusses hardening strategies like using immutable infrastructure and auditing tools like Prowler to assess security configurations.
AWS re:Invent presentation: Unmeltable Infrastructure at Scale by Loggly - SolarWinds Loggly
This document summarizes Loggly's transition from their first generation log management infrastructure to their second generation infrastructure built on Apache Kafka, Twitter Storm, and ElasticSearch on AWS. The first generation faced challenges around tightly coupling event ingestion and indexing. The new system uses Kafka as a persistent queue, Storm for real-time event processing, and ElasticSearch for search and storage. This architecture leverages AWS services like auto-scaling and provisioned IOPS for high availability and scale. The new system provides improved elasticity, multi-tenancy, and a pre-production staging environment.
A guide through the Azure Messaging services - Update Conference - Eldert Grootenboer
https://www.updateconference.net/en/2019/session/a-guide-through-the-azure-messaging-services
by Gavin Adams, Sr. IoT Specialist SA AWS
Join us for AWS IoT day at the AWS San Francisco Loft. AWS IoT enables you to easily connect and manage millions of devices securely. You can gather data from, run sophisticated analytics on, and take actions in real-time on your diverse fleet of IoT devices from edge to the cloud. You will build IoT applications with AWS IoT experts. AWS IoT provides edge-based software and cloud-based services so you can easily build IoT applications. Edge-based software, including AWS Greengrass, enables you to securely connect devices, gather data and take intelligent actions locally even when Internet connectivity is down. Cloud-based services, including AWS IoT Core, allow you to quickly onboard large and diverse fleets, maintain fleet health, and keep fleets secure.
This document provides an overview and summary of DevOps, microservices, and serverless architecture. It discusses key concepts: DevOps and how it relates to software delivery; microservices and their rise in popularity for building loosely coupled services; and serverless architecture and how it abstracts away infrastructure management. It also summarizes different AWS services that can be used to build microservices and serverless applications, like ECS, Lambda, and API Gateway, and provides examples of architectures using these services.
An Azure of Things, a developer’s perspective - BizTalk360
The world of integration is changing very quickly and we have the opportunity to use a lot of different technologies. There are many ways to solve the same problem and new technologies being introduced all of the time. Azure is now full of very interesting features and the real challenge is understanding how to use and combine all of these together in an effective way to create a good solution. In this session Nino will talk about his experiences and thoughts from the last year around areas such as BizTalk, Hybrid Integration, Microservices, Event Hubs, Stream Analytics and more.
Azure Day Rome Reloaded 2019 - Reactive Systems with Event Grid - azuredayit
Event Grid can be used in an extremely pervasive and versatile way to build reactive serverless architectures, for example in the IoT world of Smart Things, at almost zero cost. With Event Grid it is possible to create potentially gigantic systems (impossible to recreate on premises) that govern, expand, and change themselves based on field logic.
Using Azure Sentinel to catch the bad guys covers how to use Azure Sentinel and other Microsoft security tools to detect threats. The document discusses the growing ransomware threat landscape, example attack methods like credential dumping and lateral movement, and important log sources in Azure like Azure Active Directory logs, Azure Network logs, and Windows event logs. It also covers setting up Azure Sentinel with data connectors, creating analytics rules and queries, and automating response with Logic Apps playbooks. Examples of hunting queries and using external threat intelligence are provided.
Creating Event Driven Applications with Azure Event Grid - Callon Campbell
Azure Event Grid is an event service built for modern applications. Learn about what is Azure Event Grid and how you can use it for an event driven architecture in the cloud.
This document discusses several Azure serverless services for building event-driven applications at massive scale including Event Hubs for high-volume data streams, Service Bus for critical workflows, Event Grid for business logic triggered by events, and IoT Hub. It highlights key capabilities like near real-time processing, high reliability, and massive throughput of these services.
Serverless architectures let you build and deploy applications and services with infrastructure resources that require zero administration. In the past, you had to provision and scale servers to run your application code, install and operate distributed databases, and build and run custom software to handle API requests. Now, AWS provides a stack of scalable, fully-managed services that eliminates these operational complexities.
In this session, you will learn about the benefits of serverless architectures and the basics of the serverless stack AWS provides. We will also walk through how you can use serverless architectures for everything from data processing to mobile and web backends.
AWS DevDay San Francisco, June 21, 2016.
Presenter: Jeremy Edberg, Co-Founder, CloudNative, & AWS Community Hero
This document provides an overview of Docker and containers on AWS. It discusses the benefits of containers including portability and efficiency. It also describes how microservices architectures are a natural fit for containers. The document then discusses using Amazon ECS for container scheduling and orchestration, including task definitions, services, task placement strategies, and consuming real-time events. Finally, it introduces Blox, an open source project that provides an alternative scheduler and cluster management experience on ECS.
This document provides an overview of Docker and container orchestration on AWS using Amazon ECS. It discusses the benefits of containers, microservices architecture, and how ECS handles scheduling and placement of containers across a cluster. It also introduces Blox, an open source project that provides an alternative scheduler and cluster state service for ECS. Key points include:
- Containers provide portability, flexibility, and efficiency for applications compared to virtual machines.
- ECS handles scheduling and orchestrating containers across a cluster of EC2 instances, providing high availability and scalability.
- Blox is an open source project that provides an alternative to ECS for scheduling and managing cluster state, giving more control and flexibility
Scalability strategies for cloud based system architecture - SangJin Kang
- Scalability & Availability for the Global Markets
- Global scaled Scalability, Availability and Security
- Architecture for 100, 1K, 100K, 500K, 1M and 10M global users
- Auto-Scaling
- Understand Cloud Services
- Cloud Demo(AWS, GCP, Azure and Cloudflare)
- Wrap-Up
This document provides an overview of monitoring Azure and AWS cloud environments. It discusses why monitoring is important for threat detection, hunting and response. It outlines what aspects should be monitored, including operating systems, applications, network traffic, and cloud service logs. Specific AWS and Azure monitoring options are described, such as CloudTrail, VPC Flow Logs, and Azure Audit Logs. Integrating cloud logs with SIEMs and threat intelligence feeds is also covered. Endpoint monitoring tools are suggested to record process, file, registry and network activity on virtual machines.
TechEvent 2019: Oracle Databases as Managed Service at AWS, Yes it works!; Al... - Trivadis
This document summarizes an Oracle Databases as a Managed Service on AWS presentation by Daniel Hillinger and Alexander Hofstetter. It discusses using RDS for Oracle databases on AWS, including security features, migration options, and some caveats. RDS provides automated backups, monitoring, and high availability capabilities for Oracle databases in AWS without needing to manage the underlying infrastructure.
- Just Eat is a leading digital marketplace for takeaway food delivery founded in 2001 operating in 13 markets globally. It has processed up to 2,500 orders per minute at peak times.
- Just Eat migrated to AWS 5 years ago and runs hundreds of EC2 instances at peak dinner times using scheduled scaling, CloudFormation, and other AWS services.
- AWS Lambda was introduced in 2014 and Just Eat started using it for micro tasks like resetting delivery times, publishing SNS messages, and provisioning instance access to reduce infrastructure costs and management compared to running EC2 fleets.
This document provides an overview of Hashicorp Vault and how it can be used for securing secrets and sensitive data in modern, dynamic cloud environments. It discusses the challenges of digital transformation and how Vault addresses them through secret management workflows. The basic Vault workflow is described along with examples for Kubernetes and legacy applications. Finally, Vault Enterprise features for replication, access control, multi-factor authentication and compliance are covered.
Re:invent 2016 Container Scheduling, Execution and AWS Integration - aspyker
This document summarizes a presentation about Netflix's use of containers and the Titus container management platform. It discusses:
1. Why Netflix uses containers to increase innovation velocity for tasks like media encoding and software development. Containers allow for faster iteration and simpler deployment.
2. How Titus was developed to manage containers at Netflix's scale of over 100,000 VMs and 500+ microservices, since existing solutions were not suitable. Titus integrates with AWS for resources like VPC networking and EC2 instances.
3. How Titus supports both batch jobs and long-running services, with challenges like networking, autoscaling, and upgrades that services introduce beyond batch. Collaboration with Amazon on ECS
11. Why Microsoft built Event Grid?
• Simplicity
• Fan-out with high throughput
• Pay-per-event with Push model
• Built-in and Custom events
AWS Sample Events: https://docs.aws.amazon.com/lambda/latest/dg/eventsources.html
12. Event Grid Concepts
1. Events: what happened
2. Event Publishers: where it took place
3. Topics: where publishers send events
4. Event Subscriptions: how you receive events
5. Event Handlers: the app or service reacting to the event
6. Security: how publishing and subscribing are secured
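The concepts above map directly onto the publish call: a publisher POSTs events (what happened, and where) to a topic endpoint, and Event Grid pushes them to matching subscriptions. Below is a minimal sketch in Python using the Event Grid event schema; the endpoint URL and access key are placeholders, and the field values are made-up examples.

```python
import json
import urllib.request

# Hypothetical topic endpoint and key -- substitute your own topic's values.
TOPIC_ENDPOINT = "https://mytopic.westus2-1.eventgrid.azure.net/api/events"
TOPIC_KEY = "<topic-access-key>"

def make_event(subject, event_type, data):
    """Build one event in the Event Grid schema."""
    return {
        "id": "event-001",                    # unique per event
        "subject": subject,                   # where it took place
        "eventType": event_type,              # what happened
        "eventTime": "2018-06-01T12:00:00Z",  # ISO 8601 timestamp
        "data": data,                         # publisher-defined payload
        "dataVersion": "1.0",
    }

def publish(events):
    """POST a batch of events to the topic, authenticated by SAS key header."""
    req = urllib.request.Request(
        TOPIC_ENDPOINT,
        data=json.dumps(events).encode("utf-8"),
        headers={"Content-Type": "application/json", "aeg-sas-key": TOPIC_KEY},
        method="POST",
    )
    return urllib.request.urlopen(req)

event = make_event("/orders/42", "myapp.order.created", {"orderId": 42})
# publish([event])  # uncomment once the endpoint and key are real
```

Publishing always takes a JSON array, so a batch of events can go out in one request.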
20. Why is Event Grid different?
• Cloud Native by design
• Which makes it serverless friendly
• Engineered for reliability and scale
• Supports CNCF CloudEvents v0.1
21. Cloud native by design
• Always available (99.99% SLA)
• Near real-time event delivery (<1s e2e)
• At least once delivery
• Dynamic scale
• 10,000,000 events per second per region
• 100,000,000 subscriptions per region
• Platform agnostic (WebHook)
• Language agnostic (HTTP protocol)
23. Engineered for scale and reliability
• Cascading log architecture keeps hot path clear
[Diagram: a replicated log with cascading retry logs at 10 sec, 1 min, and 1 hr intervals inside an Event Grid cluster]
24. Engineered for scale and reliability
• Cascading log architecture keeps hot path clear
• Each cluster has all subscriptions
• Nodes can be added to or removed from a cluster
[Diagram: a single Event Grid cluster]
25. Engineered for scale and reliability
• Cascading log architecture keeps hot path clear
• Each cluster has all subscriptions
• Nodes can be added to or removed from a cluster
• Clusters can be added
[Diagram: Event Grid Cluster 1 and Event Grid Cluster 2, each with its own 1 min retry log]
26. Engineered for scale and reliability
• Cascading log architecture keeps hot path clear
• Each cluster has all subscriptions
• Nodes can be added to or removed from a cluster
• Clusters can be added
• Traffic spans regions
[Diagram: Event Grid clusters R1C1 and R1C2 in one region and R2C1 in another, spanning US West and US East]
27. Engineered for scale and reliability
• Retry intervals
• 10 seconds
• 30 seconds
• 1 minute
• 5 minutes
• 10 minutes
• 30 minutes
• 1 hour
• Once an hour up to 24 hours
• Defaults: 30 delivery attempts / 24 hours
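The retry intervals above can be sketched as a schedule: the fixed steps from 10 seconds up to 1 hour, then one attempt per hour until the 24-hour window closes. The sketch below is illustrative only (the service adds its own jitter and bookkeeping), but the step values come straight from the slide, and counting the delays lines up with the stated default of 30 delivery attempts.

```python
# Retry steps from the slide: 10s, 30s, 1m, 5m, 10m, 30m, 1h, then hourly.
RETRY_STEPS_SECONDS = [10, 30, 60, 5 * 60, 10 * 60, 30 * 60, 60 * 60]

def retry_schedule(max_hours=24):
    """Yield the delay before each redelivery attempt, in seconds."""
    elapsed = 0
    for step in RETRY_STEPS_SECONDS:
        elapsed += step
        yield step
    # Once an hour until the 24-hour window is used up.
    while elapsed < max_hours * 3600:
        elapsed += 3600
        yield 3600

schedule = list(retry_schedule())
```

Summing the delays shows why the two defaults coincide: 30 attempts is roughly what fits in 24 hours under this backoff.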
28. Engineered for scale and reliability
• Dead-lettering
• Requires Storage account + container
• Dead-lettered events stored as blobs
31. What is CNCF CloudEvents?
• Event Protocol Suite developed in CNCF Serverless WG
• Common metadata attributes for events
• Flexibility to innovate on event semantics
• Simple abstract type system mappable to different encodings
• Transport options
• HTTP(S) 1.1 Webhooks, also HTTP/2 (v0.1)
• MQTT 3.1.1 and 5.0 (draft)
• AMQP 1.0 (draft)
• Encoding options
• JSON (v0.1, required for all implementations)
• Extensible for binary encodings: Avro, MessagePack, AMQP, etc.
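In the JSON encoding, a CloudEvents v0.1 envelope is just a flat object of the common metadata attributes wrapped around a publisher-defined data payload. A minimal sketch follows; the field names are from the v0.1 draft (later spec versions renamed several, e.g. eventID became id), and the source/eventType values are invented examples.

```python
import json

# A minimal CloudEvents v0.1 envelope in the JSON encoding.
cloud_event = {
    "cloudEventsVersion": "0.1",
    "eventType": "com.example.order.created",  # reverse-DNS event type
    "eventTypeVersion": "1.0",
    "source": "/orders",                       # context where it happened
    "eventID": "A234-1234-1234",               # unique per source
    "eventTime": "2018-06-01T12:00:00Z",
    "contentType": "application/json",
    "data": {"orderId": 42},                   # publisher-defined payload
}

# Serialized form, ready to POST over an HTTP(S) 1.1 webhook.
wire_form = json.dumps(cloud_event)
```

The abstract type system means the same envelope can be re-encoded in Avro, MessagePack, or AMQP without changing its meaning.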
33. Ubiquitous
• Today there are 10+ publishers
• By year end most Azure services will be publishers
• Then most Microsoft services
• Expect industry to embrace CloudEvents
• Grid will be coming to IoT Edge
• And beyond…
34. How Event Grid is sold
• Publish is an operation
• Delivery attempt for each destination is an operation
• Advanced matching (filtering) is an operation
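Since each publish, each delivery attempt per destination, and each advanced match is billed as an operation, a rough cost model is simple multiplication. The function below is a back-of-the-envelope sketch under that assumption; the exact metering details and per-operation price are not stated on the slide, so check current Azure pricing for real numbers.

```python
def billable_operations(publishes, subscriptions, retries_per_delivery=0,
                        advanced_filters=False):
    """Rough count of billable Event Grid operations (sketch, not official)."""
    # One delivery attempt per subscription, plus any retries.
    deliveries = publishes * subscriptions * (1 + retries_per_delivery)
    # Assumption: one advanced-match evaluation per event per subscription.
    matches = publishes * subscriptions if advanced_filters else 0
    return publishes + deliveries + matches

# 1M events fanned out to 3 subscriptions with advanced filtering:
ops = billable_operations(1_000_000, 3, advanced_filters=True)
```

Fan-out multiplies the delivery term, which is why high-fan-out topics dominate the bill.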
36. How Event Grid composes with Queues and Streams
• Other messaging services can be publishers or subscribers to Event Grid
• Sometimes you want a WebHook, sometimes a Queue, other times a Stream
• Why: at high scale a queue or log can work better
• Grid will give you all of them
[Diagram: topic "mytopic" fanning out to subscriptions sub1 (Storage queue) and sub2 (Event Hubs)]
38. Security and Authentication
• Validation Handshake (WebHook event delivery)
• An event of type Microsoft.EventGrid.SubscriptionValidationEvent is delivered with validation data
• Prove ownership of the endpoint by either:
• Echoing back {validationCode: "value"}
• Sending a GET to the validationUrl
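A webhook endpoint can implement the handshake by inspecting each incoming batch for the validation event and echoing the code back. The sketch below is framework-agnostic (plug it into any HTTP handler); note that in the GA API the echoed property is named validationResponse, and the manual alternative of GET-ing the validationUrl is shown only as a comment.

```python
import json

SUBSCRIPTION_VALIDATION = "Microsoft.EventGrid.SubscriptionValidationEvent"

def handle_request(body: str):
    """Process one Event Grid webhook delivery (a JSON array of events).

    Returns the handshake echo response if the batch contains the validation
    event, otherwise None (meaning: process the events normally).
    """
    events = json.loads(body)
    for event in events:
        if event.get("eventType") == SUBSCRIPTION_VALIDATION:
            code = event["data"]["validationCode"]
            # Echo the code back to prove we own this endpoint. A manual
            # handshake is also possible via GET on event["data"]["validationUrl"].
            return {"validationResponse": code}
    return None
```

Until the handshake succeeds, Event Grid will not deliver regular events to the endpoint, so this check belongs at the top of the handler.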
42. Segmentation of the cloud messaging market

Simple Queuing - Storage Queues
• What you care about: communication within an app; simple and easy to use; individual messages; queue semantics / polling buffer; pay as you go
• What you sacrifice to get it: ordering of messages; instantaneous consistency

Events & PubSub - Event Grid
• What you care about: communication between apps / orgs; individual messages; push semantics; filtering and routing; pay as you go; fan-out
• What you sacrifice to get it: ordering of messages; instantaneous consistency

Big Data Streaming - Event Hubs
• What you care about: many messages in a stream (think in MBs); ease of use and operation; low cost; fan-in; strict ordering; works with other tools (maybe Kafka?)
• What you sacrifice to get it: server-side cursor; only-once delivery; rich features achieved with compute

Enterprise Messaging - Service Bus
• What you care about: instantaneous consistency; strict ordering; interoperability (AMQP?); security & non-repudiation; geo-replication & availability; rich features achieved with compute (de-dupe, scheduling, etc.)
• What you sacrifice to get it: cost; simplicity

(The slide groups these segments under Serverless, Big Data, and Enterprise.)
This session is about tools.
Azure messaging tools.
I’ll be covering Azure messaging services to help you make an educated decision about which Azure messaging service to use.
Event Grid is one of the latest additions to the messaging services and has recently become generally available (GA).
It’s an eventing backplane that enables event-driven, reactive programming based on a publish-subscribe model.
Traditionally, with queues or subscriptions, a message is sent and it needs to be RECEIVED. An application is responsible for polling for messages.
This requires a continuous execution of a process that checks for new messages.
But when applications only need to react to occasionally sent messages, we no longer need a 24/7 running process checking for new messages.
The polling model is not viable anymore. And with the rise of serverless options, it has become even more apparent that some applications need a push model to react to changes.
And it all boils down to events. Let’s define what an event is.
Some examples would be detecting specific objects in an image, generating thumbnails, etc.
Simplicity - Point and click to aim events from your Azure resource to any event handler or endpoint.
Fan-out - Subscribe multiple endpoints to the same event to send copies of the event to as many places as needed.
High throughput - Build high-volume workloads on Event Grid with support for millions of events per second.
Pay-per-event - Pay only for the amount you use Event Grid.
Built-in Events - Get up and running quickly with resource-defined built-in events.
Custom Events - use Event Grid to route, filter, and reliably deliver custom events in your app.
Reliability - Utilize 24-hour retry with exponential backoff to ensure events are delivered.
Advanced filtering - Filter on event type or event publish path to ensure event handlers only receive relevant events.
An event is the smallest amount of information that fully describes something that happened in the system. Every event has common information: the source of the event, the time the event took place, and a unique identifier. Every event also has specific information that is only relevant to its particular event type. For example, an event about a new file being created in Azure Storage has details about the file, such as the lastTimeModified value. Or, an Event Hubs event has the URL of the Capture file. Each event is limited to 64 KB of data.
A publisher is the user or organization that decides to send events to Event Grid. Microsoft publishes events for several Azure services. You can publish events from your own application. Organizations that host services outside of Azure can publish events through Event Grid.
An event publisher (aka source) is where the event happens. Each event source is related to one or more event types. For example, Azure Storage is the event source for blob created events. IoT Hub is the event source for device created events. Your application is the event source for custom events that you define. Event sources are responsible for sending events to Event Grid.
The event grid topic provides an endpoint where the source sends events. The publisher creates the event grid topic, and decides whether an event source needs one topic or more than one topic. A topic is used for a collection of related events. To respond to certain types of events, subscribers decide which topics to subscribe to.
System topics are built-in topics provided by Azure services. You don't see system topics in your Azure subscription because the publisher owns the topics, but you can subscribe to them. To subscribe, you provide information about the resource you want to receive events from. As long as you have access to the resource, you can subscribe to its events.
Custom topics are application and third-party topics. When you create or are assigned access to a custom topic, you see that custom topic in your subscription.
When designing your application, you have flexibility when deciding how many topics to create. For large solutions, create a custom topic for each category of related events. For example, consider an application that sends events related to modifying user accounts and processing orders. It's unlikely any event handler wants both categories of events. Create two custom topics and let event handlers subscribe to the one that interests them. For small solutions, you might prefer to send all events to a single topic. Event subscribers can filter for the event types they want.
A subscription tells Event Grid which events on a topic you are interested in receiving. When creating the subscription, you provide an endpoint for handling the event. You can filter the events that are sent to the endpoint. You can filter by event type, or subject pattern.
An event handler is the place where the event is sent. The handler takes further action to process the event. Event Grid supports multiple handler types. You can use a supported Azure service or your own webhook as the handler. Depending on the type of handler, Event Grid follows different mechanisms to guarantee delivery of the event. For HTTP webhook handlers, the event is retried until the handler returns a status code of 200 OK. For Azure Storage queues, the event is retried until the Queue service successfully accepts the message pushed into the queue.
Event Grid provides security for subscribing to topics, and publishing topics. When subscribing, you must have adequate permissions on the resource or event grid topic. When publishing, you must have a SAS token or key authentication for the topic.
And this is what’s possible today.
Here’s an example of an event generated by Storage Blob when a new blob is created.
Notice that there’s a topic, subject, and eventType that every event will have.
topic - full resource path to the event source. This field is not writeable. Event Grid provides this value. (here: Storage Account)
subject - Publisher-defined path to the event subject. (here: Blob Container)
eventType - One of the registered event types for this event source. (here: blob created event)
Each event carries a data payload whose schema is defined by the event publisher.
Note that events can be batched, so subscribers may receive multiple events in a single delivery.
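As an illustration, here is a hedged reconstruction of such a BlobCreated event in the Event Grid schema. The subscription ID, account name, and blob path are placeholders, and the data fields are a subset of what Storage actually emits:

```python
import json

# Reconstruction of a Microsoft.Storage.BlobCreated event (Event Grid schema).
# All IDs, names, and paths below are illustrative placeholders.
blob_created = {
    "topic": "/subscriptions/{subscription-id}/resourceGroups/demo-rg"
             "/providers/Microsoft.Storage/storageAccounts/demoaccount",
    "subject": "/blobServices/default/containers/images/blobs/photo.jpg",
    "eventType": "Microsoft.Storage.BlobCreated",
    "eventTime": "2018-01-25T22:12:19.4556811Z",
    "id": "831e1650-001e-001b-66ab-eeb76e069631",
    "data": {
        # Publisher-specific payload; only the envelope above is common.
        "api": "PutBlob",
        "contentType": "image/jpeg",
        "contentLength": 524288,
        "blobType": "BlockBlob",
        "url": "https://demoaccount.blob.core.windows.net/images/photo.jpg",
    },
    "dataVersion": "1",
    "metadataVersion": "1",
}
print(json.dumps(blob_created, indent=2))
```

Note how topic points at the storage account resource, subject at the blob container path, and eventType identifies the registered event type, exactly as described above.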
The way Event Grid works at the ARM level is composition.
A call is made to a specific resource provider (Storage in this case) with an extension that is the Event Grid resource provider.
The request body contains the subscriber’s endpoint and filters, which Event Grid stores so it can filter events and publish them to the subscriber’s endpoint.
destination - The object that defines the endpoint.
filter - An optional field for filtering the types of events.
endpointType - The type of endpoint for the subscription (webhook/HTTP, Event Hub, or queue).
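A sketch of what such a request body might look like for a webhook subscription on a storage account — the webhook URL, event type, and subject prefix are illustrative assumptions, not values from the deck:

```python
import json

# Hypothetical body for a PUT to
#   .../storageAccounts/{account}/providers/Microsoft.EventGrid
#   /eventSubscriptions/{name}
# destination defines the endpoint; filter narrows which events are delivered.
subscription_body = {
    "properties": {
        "destination": {
            "endpointType": "WebHook",  # could also be an Event Hub or queue
            "properties": {
                "endpointUrl": "https://example.com/api/events",  # placeholder
            },
        },
        "filter": {
            # Optional: restrict by event type and/or subject prefix.
            "includedEventTypes": ["Microsoft.Storage.BlobCreated"],
            "subjectBeginsWith": "/blobServices/default/containers/images/",
        },
    },
}
print(json.dumps(subscription_body, indent=2))
```

With this stored, Event Grid only pushes blob-created events under the given container path to the subscriber's endpoint.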
How is Event Grid different from Azure Service Bus? It’s cloud native by design.
It was built to address serverless needs through a reactive model of communication.
That includes built-in reliability and the massive, cross-datacenter scale it can handle.
It was designed for cloud scenarios: highly available, with near-real-time delivery end-to-end.
It has at-least-once delivery semantics.
The scale is incomparable to a service such as ASB.
It was designed in a way that can support various platforms. Yes, not just Azure.
And thanks to the HTTP protocol use it can be used from any platform.
Event Grid adds a small randomization to all retry intervals. After one hour, event delivery is retried once an hour.
By default, Event Grid expires all events that aren't delivered within 24 hours. You can customize the retry policy when creating an event subscription. You provide the maximum number of delivery attempts (default is 30) and the event time-to-live (default is 1440 minutes). [30=7 in the first hour + 23 with one-per-hour]
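The retry settings above map to a retryPolicy block in the event-subscription payload. A minimal sketch showing the documented defaults:

```python
import json

# retryPolicy fragment of an event-subscription body, with the defaults
# described above: 30 delivery attempts and a 24-hour event TTL.
retry_policy = {
    "retryPolicy": {
        "maxDeliveryAttempts": 30,
        "eventTimeToLiveInMinutes": 1440,  # 24 hours
    }
}
print(json.dumps(retry_policy))
```

Whichever limit is reached first — attempts or TTL — ends delivery for that event.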
Cloud Native Computing Foundation (CNCF) - an open source software foundation dedicated to making cloud native computing universal and sustainable.
CloudEvents is a CNCF project, led by major cloud providers and big names in the software industry.
The project has 3 major goals:
Consistency - The lack of a common way of describing events means developers must constantly re-learn how to receive events.
Accessibility - It also limits the potential for libraries, tooling, and infrastructure to aid the delivery of event data across environments, like SDKs, event routers, or tracing systems.
Portability - Overall, the portability and productivity we can achieve from event data is hindered.
CloudEvents simplifies interoperability by providing a common event schema for publishing and consuming cloud-based events.
This schema allows for uniform tooling, standard ways of routing & handling events, and universal ways of deserializing the outer event schema.
With a common schema, you can more easily integrate work across platforms.
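A minimal example of that common envelope. The field names below follow the finalized CloudEvents 1.0 spec (the v0.1 draft referenced earlier in the deck used slightly different names such as eventType and cloudEventsVersion), and the type, source, and data values are made up:

```python
import json

# Minimal CloudEvents 1.0 envelope; values are illustrative placeholders.
cloud_event = {
    "specversion": "1.0",
    "type": "com.example.order.created",   # publisher-defined event type
    "source": "/orders/service-a",          # context where the event occurred
    "id": "A234-1234-1234",                 # unique per source
    "time": "2018-04-05T17:31:00Z",
    "datacontenttype": "application/json",
    "data": {"orderId": 42},                # publisher-specific payload
}
print(json.dumps(cloud_event, indent=2))
```

Because the outer envelope is uniform, routers and SDKs can deserialize and dispatch any event without knowing the publisher-specific shape of data.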
Azure Logic Apps,
Azure Automation,
Azure Functions with the Event Grid trigger.
Only HTTPS is supported!
Topics use either Shared Access Signature (SAS) or key authentication. We recommend SAS, but key authentication provides simple programming, and is compatible with many existing webhook publishers.
You include the authentication value in the HTTP header. For SAS, use aeg-sas-token for the header value. For key authentication, use aeg-sas-key for the header value.
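A sketch of publishing a custom event with the aeg-sas-key header, using only the Python standard library — the topic endpoint, key, and event fields are placeholders:

```python
import json
import urllib.request

# Placeholders: substitute your topic endpoint and access key.
TOPIC_ENDPOINT = "https://mytopic.westus2-1.eventgrid.azure.net/api/events"
TOPIC_KEY = "<topic-access-key>"

# Event Grid expects an array of events in the request body.
events = [{
    "id": "1001",
    "eventType": "MyApp.Orders.OrderPlaced",
    "subject": "orders/1001",
    "eventTime": "2018-04-05T17:31:00Z",
    "data": {"orderId": 1001},
    "dataVersion": "1.0",
}]

request = urllib.request.Request(
    TOPIC_ENDPOINT,
    data=json.dumps(events).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "aeg-sas-key": TOPIC_KEY,  # use aeg-sas-token instead for SAS auth
    },
    method="POST",
)
# urllib.request.urlopen(request) would send it; skipped here because the
# endpoint above is a placeholder.
print(request.get_method(), request.full_url)
```

Only the header name changes between the two authentication schemes; the request shape is otherwise identical.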