My presentation at QConNY 2017 about the Internet of Things and Edge Compute architecture / strategy at Chick-fil-A. I discuss using a cloud-native approach to computing at the Edge, and discuss the services that are part of our architecture to enable data collection and control of "things" in our restaurants.
(NET409) How Twilio Migrated Its Services from EC2-Classic to EC2-VPC - Amazon Web Services
Amazon Virtual Private Cloud (Amazon VPC) has many obvious benefits. For example, you can use Amazon VPC to define a virtual network in your own logically isolated area within the AWS cloud, and launch your EC2 instances into a VPC. But how can you take advantage of the EC2-VPC platform if your services and infrastructure are already deployed in the EC2-Classic platform? In this deep-dive session, learn how to safely and reliably migrate from EC2-Classic to EC2-VPC with zero downtime. We show you how Twilio approached the problem of a VPC migration, or what we internally called the “Moving Datacenters Project.” We discuss the technologies and tools (both internal and external) we used to complete the migration, the infrastructure we built along the way, and the lessons we learned.
Session sponsored by Twilio.
What’s new in OpenText Extended ECM Platform CE 20.4 and OpenText Content Sui... - OpenText
The document provides an overview of new features and enhancements in OpenText Content Suite Platform Cloud Edition 20.4 and OpenText Extended ECM Platform Cloud Edition 20.4. Key updates include improvements to the user experience with embedded document viewing, role-specific dashboards, and migration of folders to business workspaces. The release also features enhancements to digital workplace capabilities such as automatically sharing folders, electronic signing with Core Signature Service, and solution accelerators for business workspaces. Additional changes cover advances in intelligent automation, ecosystem integrations, cloud deployment using Helm charts, and support for multiple Kubernetes platforms.
The benefits of running databases in the cloud are compelling, but how do you get the data there? In this session we explore how to use the AWS Database Migration Service and the AWS Schema Conversion Tool to migrate, or continuously replicate, your on-premises databases to AWS.
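As a concrete sketch of the kind of configuration a DMS replication task takes, the snippet below builds a minimal "table mappings" document that selects every table in one schema. The schema name and rule names are placeholders, not values from the session.

```python
import json

# Sketch of a DMS table-mappings document: include every table in the
# (hypothetical) "sales" schema. Rule ids and names are arbitrary labels.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-sales",
            "object-locator": {"schema-name": "sales", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

# DMS expects the document as a JSON string (e.g. the TableMappings
# parameter when creating a replication task).
table_mappings_json = json.dumps(table_mappings)
print(table_mappings_json)
```

The same document can carry transformation rules (renaming schemas or tables on the target), which is where the Schema Conversion Tool's output typically feeds in.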
Speaker: Jarrod Spiga, Solutions Architect, Amazon Web Services
Getting Started With Continuous Delivery on AWS - AWS April 2016 Webinar Series - Amazon Web Services
Today’s cutting-edge companies have software release cycles measured in days instead of months. This agility is enabled by the DevOps practice of continuous delivery, which automates building, testing, and deploying code changes. This automation helps you catch bugs sooner and increases developer productivity.
In this webinar, we’ll share the processes that Amazon engineers use to practice DevOps and discuss how you can bring these processes to your company by using a new set of AWS tools (AWS CodePipeline and AWS CodeDeploy). These services were inspired by Amazon's own internal developer tools and DevOps culture.
Learning Objectives:
• Learn what continuous delivery is, its benefits, and how to implement it
• Learn how to increase the frequency and reliability of your application updates
• Learn to create an automated software release workflow on AWS
• Understand the basics of AWS CodePipeline and AWS CodeDeploy
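To make the CodePipeline/CodeDeploy pairing above concrete, here is a sketch of the declaration structure a pipeline is created from: a Source stage producing an artifact and a Deploy stage consuming it. All bucket, application, and role names are invented placeholders.

```python
# Minimal sketch of a CodePipeline declaration: an S3 source feeding a
# CodeDeploy deployment. Every resource name below is a placeholder.
pipeline = {
    "name": "demo-pipeline",
    "roleArn": "arn:aws:iam::123456789012:role/CodePipelineServiceRole",
    "artifactStore": {"type": "S3", "location": "my-artifact-bucket"},
    "stages": [
        {
            "name": "Source",
            "actions": [{
                "name": "FetchSource",
                "actionTypeId": {"category": "Source", "owner": "AWS",
                                 "provider": "S3", "version": "1"},
                "configuration": {"S3Bucket": "my-source-bucket",
                                  "S3ObjectKey": "app.zip"},
                "outputArtifacts": [{"name": "SourceOutput"}],
            }],
        },
        {
            "name": "Deploy",
            "actions": [{
                "name": "DeployToFleet",
                "actionTypeId": {"category": "Deploy", "owner": "AWS",
                                 "provider": "CodeDeploy", "version": "1"},
                "configuration": {"ApplicationName": "demo-app",
                                  "DeploymentGroupName": "demo-fleet"},
                "inputArtifacts": [{"name": "SourceOutput"}],
            }],
        },
    ],
}

stage_names = [s["name"] for s in pipeline["stages"]]
print(stage_names)
```

A real pipeline would usually add a Build stage between the two; the point here is only the stage/action/artifact shape.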
(BAC404) Deploying High Availability and Disaster Recovery Architectures with... - Amazon Web Services
The document discusses disaster recovery strategies on AWS, including backup-and-restore, pilot-light, and warm-standby approaches. It provides example architectures for each, such as replicating databases across Availability Zones and Regions for high availability and disaster recovery, and shows CloudFormation templates that automate the deployment of load-balanced, auto-scaled web servers across Availability Zones.
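The core of the pilot-light approach is a failover decision: after enough consecutive failed health checks on the primary, scale up the standby and flip traffic. The toy sketch below captures only that decision logic; the threshold and the promotion step are stand-ins, not anything prescribed by the talk.

```python
# Toy sketch of the pilot-light decision loop. In practice "promote-standby"
# would mean scaling up the standby fleet and updating DNS; here it is just
# a returned label.

def failover_decision(health_checks, threshold=3):
    """Return 'promote-standby' after `threshold` consecutive failed checks."""
    consecutive = 0
    for ok in health_checks:
        consecutive = 0 if ok else consecutive + 1
        if consecutive >= threshold:
            return "promote-standby"
    return "stay-on-primary"

print(failover_decision([True, True, False, False, False]))  # promote-standby
print(failover_decision([True, False, True, False, True]))   # stay-on-primary
```

Requiring consecutive failures (rather than any single failure) is what keeps a transient blip from triggering a full regional failover.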
The Event Mesh: real-time, event-driven, responsive APIs and beyond - Solace
Phil Scanlon, Head of Technology in Asia Pacific & Japan for Solace, describes "The Event Mesh" at API Days Melbourne in September 2018. Scanlon explains the complexities of the Event Mesh using the evolution to event-driven, the anatomy of an event, and real world examples.
Introducing AWS DataSync - Simplify, automate, and accelerate online data tra... - Amazon Web Services
SFTP is used for the exchange of data across many industries, including financial services, healthcare, and retail. In this session, we will introduce you to AWS Transfer for SFTP, a service that helps you easily migrate file transfer workflows to AWS, without needing to modify applications or manage SFTP servers. We will demonstrate the product and talk about how to migrate your users so they continue to use their existing SFTP clients and credentials, while the data they access is stored in S3. You will also learn how FINRA is using this new service in conjunction with their Data Lake on AWS.
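The "existing clients, S3-backed storage" point boils down to mapping each SFTP user onto a home directory inside a bucket. Below is a sketch of the parameter shape for creating such a user; every identifier (server id, bucket, role, key) is a placeholder, not a real resource.

```python
# Sketch of the parameters AWS Transfer for SFTP takes when mapping an
# existing SFTP user onto an S3 bucket. All identifiers are placeholders.
create_user_params = {
    "ServerId": "s-1234567890abcdef0",
    "UserName": "alice",
    "Role": "arn:aws:iam::123456789012:role/SftpS3AccessRole",
    "HomeDirectory": "/my-transfer-bucket/home/alice",
    "SshPublicKeyBody": "ssh-rsa AAAA... alice@example.com",
}

# The user keeps her SFTP client and key pair; her files land under
# s3://my-transfer-bucket/home/alice.
bucket = create_user_params["HomeDirectory"].split("/")[1]
print(bucket)
```

Because authentication is the user's existing SSH key and authorization is the attached IAM role, migrating a user is a matter of registering this mapping rather than changing the client side.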
Best Practices for Data Center Migration Planning - August 2016 Monthly Webin... - Amazon Web Services
Migrating large-scale data centers to the cloud can be challenging, and there are many ways to execute these projects successfully. Using the right AWS services and tools can help you lower migration risk and expense. This webinar recommends a project management and decision-making approach that helps you make the right AWS migration decisions while minimizing unnecessary expenses and maximizing ROI.
Learning Objectives:
• Understand how to apply the AWS Cloud Adoption Framework to migrations
• Understand financial considerations (ROI, CapEx versus OpEx, budgeting for overlapping expenses)
• Learn a method for prioritization of workloads (both technical and financial)
• Understand how different project management approaches (Traditional, Kanban/Lean) can be used most effectively
• Learn how to lower project risk and difficulty using key AWS services (Snowball, Direct Connect, RDS, DMS)
• Learn how to define project completion criteria - when is a migration really done?
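The "budgeting for overlapping expenses" objective is at heart simple arithmetic: during migration you pay for both environments, so savings only start accruing after cutover. The sketch below works that out for invented figures; none of the numbers come from the webinar.

```python
import math

# Back-of-envelope model of overlapping migration expenses. During the
# migration window you pay both the data-center and AWS run rates; after
# cutover you save the difference each month. All figures are illustrative.

def months_to_break_even(dc_monthly, aws_monthly, migration_months, one_time_cost):
    """Months after project start until cumulative savings cover the overlap."""
    overlap_cost = aws_monthly * migration_months + one_time_cost
    monthly_saving = dc_monthly - aws_monthly
    if monthly_saving <= 0:
        return None  # no payback at these run rates
    return migration_months + math.ceil(overlap_cost / monthly_saving)

# 6-month migration, $100k/mo data center, $60k/mo AWS, $80k one-time cost:
print(months_to_break_even(dc_monthly=100_000, aws_monthly=60_000,
                           migration_months=6, one_time_cost=80_000))  # 17
```

The same structure extends to CapEx-versus-OpEx comparisons by amortizing hardware refresh costs into the data-center run rate.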
Monitor every app, in every stage, with free and open Elastic APM - Elasticsearch
Elastic APM helps you deliver better digital experiences by providing complete visibility into the health of your apps — no matter how they are built, where they run, or which dev stage they’re in. Learn about the evolution of Elastic APM, the problems it solves, and where it’s headed. See how it connects traces, logs, and metrics to help you quickly get to the root cause. Plus, hear from customers using Elastic APM to improve their applications.
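The "connects traces, logs, and metrics" claim rests on a simple mechanism: every log line is stamped with the active trace id so the two signals can be joined later. Here is a stdlib-only sketch of that correlation; the `trace.id` field name mirrors APM conventions, but the id is generated locally rather than by any agent.

```python
# Sketch of trace/log correlation: stamp each log record with a trace id so
# logs can later be joined to the trace that produced them.
import io
import logging
import uuid

trace_id = uuid.uuid4().hex  # stand-in for an id assigned by a tracer

buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(message)s trace.id=%(trace_id)s"))
log = logging.getLogger("checkout")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.propagate = False  # keep the demo output out of the root logger

log.info("payment authorized", extra={"trace_id": trace_id})
print(buf.getvalue().strip())
```

With every service logging the same field, a single trace id pulls up the full cross-service story of one request.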
Learn what a serverless architecture is, why serverless is growing in popularity, and who the key players are in a serverless API built on the AWS platform. Then get started building your own serverless API!
Microservices Integration Patterns with Kafka - Kasun Indrasiri
Microservice composition, or integration, is probably the hardest part of a microservices architecture. Unlike conventional centralized ESB-based integration, we need to follow the smart-endpoints-and-dumb-pipes principle when integrating microservices.
There are two main microservices integration patterns: service orchestration (active integration) and service choreography (reactive integration). In this talk, we explore microservice orchestration, microservice choreography, event sourcing, and CQRS, and how Kafka can be leveraged to implement microservice composition.
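The two composition styles can be sketched without any broker at all. Below, a dict of topic subscribers stands in for Kafka topics: choreography has each service react to a published event independently, while orchestration has one coordinator invoke steps explicitly. Service names and events are invented.

```python
# In-memory sketch of choreography vs. orchestration. A real system would
# publish to Kafka topics; here a dict of topic -> handlers stands in.
from collections import defaultdict

subscribers = defaultdict(list)
audit = []

def publish(topic, event):
    for handler in subscribers[topic]:
        handler(event)

# Choreography: services subscribe and react on their own; nobody coordinates.
subscribers["order.placed"].append(lambda e: audit.append(f"payment for {e}"))
subscribers["order.placed"].append(lambda e: audit.append(f"shipping for {e}"))
publish("order.placed", "order-42")

# Orchestration: one coordinator drives the steps in an explicit order.
def orchestrate(order):
    audit.append(f"orchestrator: charge {order}")
    audit.append(f"orchestrator: ship {order}")

orchestrate("order-43")
print(audit)
```

The trade-off is visible even in the toy: choreography decouples services (adding a subscriber touches nothing else) at the cost of an implicit overall flow, while orchestration makes the flow explicit but centralizes it.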
This document discusses concepts related to observability including Prometheus, ELK stack, OpenTracing, and Victoria Metrics. It provides examples of setting up Prometheus and Grafana to monitor metrics from applications instrumented with exporters. It also demonstrates setting up Filebeat, Logstash and Elasticsearch (ELK stack) to monitor logs and send them to Elasticsearch. Additionally, it shows how to implement OpenTracing in a Java application and visualize traces using Jaeger. Finally, it outlines an exercise to build a microservices ecommerce application incorporating logging, metrics and tracing using the discussed tools.
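The Prometheus-and-exporters setup described above hinges on one contract: exporters serve metrics in Prometheus's plain-text exposition format, which the server scrapes. The sketch below renders one counter in that format; the metric name and labels are illustrative.

```python
# Sketch of the Prometheus text exposition format an exporter serves on
# /metrics. The metric and labels below are invented for illustration.

def render_counter(name, help_text, value, labels=None):
    label_str = ""
    if labels:
        inner = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        label_str = "{" + inner + "}"
    return (f"# HELP {name} {help_text}\n"
            f"# TYPE {name} counter\n"
            f"{name}{label_str} {value}")

sample = render_counter("http_requests_total", "Total HTTP requests.",
                        1027, {"method": "get", "code": "200"})
print(sample)
```

In practice a client library maintains these values and renders the page for you; seeing the raw format makes it clear why any process that can serve text over HTTP can act as an exporter.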
This document discusses building serverless web applications using AWS services like API Gateway, Lambda, DynamoDB, S3 and Amplify. It provides an overview of each service and how they can work together to create a scalable, secure and cost-effective serverless application stack without having to manage servers or infrastructure. Key services covered include API Gateway for hosting APIs, Lambda for backend logic, DynamoDB for database needs, S3 for static content, and Amplify for frontend hosting and continuous deployment.
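The API Gateway-to-Lambda link described above uses the proxy integration convention: the incoming HTTP request arrives as an event dict, and the handler's return dict becomes the HTTP response. Here is a minimal handler in that shape; the route and payload are invented.

```python
# Sketch of a Lambda handler behind API Gateway's Lambda proxy integration.
# The event carries the HTTP request; the returned dict becomes the response.
import json

def handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Invoke locally with a request-shaped event, as a test harness would:
resp = handler({"queryStringParameters": {"name": "dynamo"}}, None)
print(resp["statusCode"], resp["body"])
```

Because the handler is a plain function of an event dict, the same code is trivially unit-testable without deploying anything, which is much of the appeal of the pattern.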
This document provides information on serverless computing platforms from Azure, AWS, and Google. It outlines the supported programming languages and runtimes for each platform's functions as well as common event sources that can trigger function execution, such as S3 buckets, queues, and HTTP requests. It also lists serverless database options and notes that serverless computing allows for automatic scaling of resources and reduced management overhead compared to traditional reserved servers.
Monitoring End User Experience with Endpoint Agent - ThousandEyes
Endpoint Agent monitors end user experience from employee laptops and desktops to understand web performance and Internet connectivity of any browser-based service. Learn to troubleshoot end user issues and use the newest Endpoint Agent features.
Pinterest is rolling out a phased platform migration from EC2-Classic to EC2-VPC. We used ClassicLink to link our EC2-Classic instances to VPCs, and we applied AWS best practices to configure VPC subnets and security groups. In this session, we share the lessons we learned along the way, and we also show you how to create a migration strategy and track migration costs.
How encryption works in AWS: What assurances do you have that unauthorized us... - Amazon Web Services
Customers who want their data encrypted on AWS increasingly take advantage of AWS services that allow them to encrypt data and manage access to the encryption keys. This session discusses how your data is encrypted in transit and at rest in AWS services like Amazon EC2, Amazon S3, and Elastic Load Balancing. Learn about the AWS key management options available, such as AWS KMS, CloudHSM, and ACM. The session also covers some of the security controls that AWS uses to minimize risk of compromise by unauthorized users as it works to keep your data safe.
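The KMS services mentioned above rely on envelope encryption: a fresh data key encrypts the payload, and only a wrapped copy of that data key (encrypted under a master key that never leaves the service) is stored beside the ciphertext. The sketch below shows only the pattern; XOR stands in for a real cipher such as AES-GCM and must never be used for actual encryption.

```python
# Toy illustration of the envelope-encryption pattern. XOR is a stand-in for
# a real cipher; this is for understanding the key hierarchy only.
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

master_key = secrets.token_bytes(32)      # held by the key service, never exported
data_key = secrets.token_bytes(32)        # per-object key (GenerateDataKey analogue)
wrapped_key = xor(data_key, master_key)   # only this wrapped form is stored

ciphertext = xor(b"customer record", data_key)

# Decrypt path: unwrap the data key under the master key, then the payload.
plaintext = xor(ciphertext, xor(wrapped_key, master_key))
print(plaintext)
```

The design point is that revoking access to the master key makes every wrapped data key, and hence every object, unreadable without re-encrypting any bulk data.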
1) The document discusses building a minimum viable product (MVP) using Amazon Web Services (AWS).
2) It gives the example of an omni-channel messenger platform, built starting in 2017, that connects ecommerce stores to customers via web chat, Facebook Messenger, WhatsApp, and other channels.
3) The founder describes starting with an MVP in 2017 serving 200 ecommerce stores in Hong Kong and Taiwan, and scaling on AWS to over 5,000 clients across Southeast Asia.
Amazon Web Services provides several mobile services to help developers build faster mobile apps. These services handle common tasks like user authentication, data synchronization, push notifications, and analytics so developers can focus on their core app functionality. Some key services include Amazon Cognito for user identity and data syncing across devices, Amazon SNS for push notifications, and Amazon Mobile Analytics for analyzing user behavior and app usage. These services simplify development by taking care of undifferentiated heavy lifting and infrastructure management so mobile apps can scale easily.
Modern development teams are delivering features at a rapid pace using modern technologies such as containers, microservices, and serverless functions. Operations and infrastructure teams are supporting these rapid delivery cycles using Infrastructure as Code, Test Driven Infrastructure (TDI), and cloud automation. Yet, most security teams are still using traditional security approaches and can't keep up with the rate of accelerated change.
Security must be reinvented in a DevOps world to take advantage of the opportunities provided by continuous integration and delivery pipelines. In this talk, attendees will take a journey through the DevSecOps Toolchain broken down into the key phases: pre-commit, commit, acceptance, production, and operations. We will explore the pre-commit and commit phases in-depth, identifying security controls, open source tools, and how to integrate these tools into a pipeline. Attendees will walk away with a practical approach for weaponizing the toolchain and building a successful DevSecOps program.
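One concrete pre-commit control from the toolchain above is a secret scan that blocks credentials before they reach the repository. The sketch below uses two illustrative regex rules; a real ruleset (as in tools like git-secrets or gitleaks) is far larger.

```python
# Sketch of a pre-commit secret scan. The two patterns are illustrative,
# nowhere near a complete secret-detection ruleset.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key id shape
    re.compile(r"(?i)(password|secret)\s*=\s*['\"][^'\"]+['\"]"),  # hardcoded creds
]

def scan(text):
    """Return the patterns that matched; an empty list means the text passes."""
    return [p.pattern for p in SECRET_PATTERNS if p.search(text)]

clean = "timeout = 30"
dirty = 'password = "hunter2"'
print(scan(clean), scan(dirty))
```

Wired into a pre-commit hook, a non-empty result fails the commit, which is exactly the "shift security left" move the pre-commit phase is about.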
IBM API Connect is a comprehensive API solution: an integrated creation, runtime, management, and security foundation for enterprise-grade APIs and microservices that power modern digital applications.
In this webinar:
• API Management concepts
• IBM API Connect overview and features
• Kellton Tech’s API strategy with IBM API Connect
Technology: IBM API Connect 5.0
API-first design - Basis for a consistent API management approach - Sven Bernhardt
Intuitive API design is a critical success factor for APIs. API-first propagates a collaborative approach in which API development starts with the design and brings various stakeholders together, dramatically increasing efficiency and consistency when defining APIs. Questions that come up in this area concern the quality requirements APIs must meet today in order to deliver the desired business value. In this session we present an approach for defining and implementing APIs consistently using tools like Apiary and Apimatic, and for incorporating the design artifacts into existing CI/CD pipelines using tools like Dredd, since APIs are first-class citizens that need to be maintained appropriately.
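The API-first workflow can be reduced to its essence: the contract exists before the code, and responses are checked against it. The sketch below uses a trimmed, OpenAPI-flavored fragment (the path, fields, and `required_fields` key are invented simplifications) and a check roughly like what a contract-testing tool such as Dredd automates.

```python
# Sketch of design-first contract checking. The spec fragment is a heavily
# simplified, OpenAPI-flavored dict; "required_fields" is our own shorthand,
# not real OpenAPI schema syntax.
spec = {
    "paths": {
        "/orders/{id}": {
            "get": {
                "responses": {
                    "200": {"required_fields": ["id", "status", "total"]}
                }
            }
        }
    }
}

def conforms(path, method, status, body):
    """True if the response body carries every field the contract requires."""
    contract = spec["paths"][path][method]["responses"][str(status)]
    return all(f in body for f in contract["required_fields"])

good = {"id": 7, "status": "shipped", "total": 19.99}
bad = {"id": 7}
print(conforms("/orders/{id}", "get", 200, good),
      conforms("/orders/{id}", "get", 200, bad))
```

Running such checks in CI is what keeps the implementation from quietly drifting away from the published design.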
Swift 7.2 & Customer Security: Providing choice, flexibility and control - Nancy Hernandez
Meeting Swift 7.2 & Customer Security Deadlines: Practical strategies for success.
Presented by Patricia Hines, Senior Celent Analyst, and B. Venkat, Head of Swift Services at PayCommerce.
The document describes Accenture's API Maturity Model, which provides a framework to help organizations develop and manage their APIs from an initial "ad hoc" stage to a fully "industrialized" stage. The model outlines five stages of maturity - ad hoc, organize, tactical, mission critical, and industrial. For each stage, it describes key characteristics and capabilities an organization should develop in areas like strategy, architecture, development process, community management, and optimization. The goal of the model is to help organizations assess their current API maturity and identify steps to progress along the maturity curve to better enable and manage their APIs and digital ecosystems.
Getting Started with AWS Enterprise Applications: WorkSpaces, WorkMail, WorkDocs - Amazon Web Services
AWS Enterprise Applications deliver managed, secure desktop and productivity capabilities run in the AWS cloud. Amazon WorkSpaces allows customers to easily provision cloud-based desktops that allow end-users to securely access the documents, applications, and resources they need with the device of their choice. Amazon WorkMail is a secure and managed business email and calendaring service that gives users the ability to seamlessly access their email, contacts, and calendars while allowing IT to maintain control over encryption and location of data. The speakers also dive into Amazon WorkDocs, a fully managed and secure enterprise storage and sharing service with strong administrative controls and feedback capabilities. In this session, we explore each of these services, explain how your organization can benefit from them, and also provide a brief demo to show how they work together.
Toss Securities is preparing for a journey of blitzscaling.
We share the choices we made for efficient service delivery and stable operations, along with the securities team's vision of responding nimbly to rapid change. Drawing on the AWS cloud experience we chose in pursuit of high productivity, we focus on a build case study: a real-time stock quote service that uses AWS multicast.
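A real-time quote feed like the one described above typically multicasts compact, fixed-width datagrams. The sketch below packs and unpacks one plausible quote layout with the stdlib `struct` module; the wire format (8-byte symbol, price in minor units, sequence number) is invented for illustration, not Toss Securities' actual protocol.

```python
# Sketch of a fixed-width quote datagram of the kind a multicast feed might
# carry. Layout is invented: 8-byte padded symbol, int64 price in minor
# units, uint32 sequence number, all big-endian.
import struct

QUOTE_FMT = ">8sqI"  # symbol, price, sequence

def pack_quote(symbol, price, seq):
    return struct.pack(QUOTE_FMT, symbol.encode().ljust(8), price, seq)

def unpack_quote(datagram):
    symbol, price, seq = struct.unpack(QUOTE_FMT, datagram)
    return symbol.decode().rstrip(), price, seq

wire = pack_quote("TSLA", 262_430, 1001)
print(len(wire), unpack_quote(wire))
```

The sequence number is what lets every multicast subscriber detect gaps and request retransmission, which is the usual reliability mechanism layered on top of UDP multicast.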
This document discusses advanced event brokers, what they are, and when they should be used. It begins by describing the goals of early event brokers, which focused on easy connections without libraries, using REST. It then discusses the additional capabilities of advanced event brokers, including resilience, monitoring, security, management, service discovery, and support for multiple protocols. The document outlines how advanced event brokers can support hybrid cloud and IoT/digital transformation scenarios at massive scale. Key characteristics of advanced event brokers are described as smart routing/filtering, multi-protocol support, geographic awareness, security awareness, IoT optimization, and reliability. Example use cases are provided, such as connecting 10 million connected cars in real time. In general, the document recommends using an advanced event broker.
Advanced Event Brokers: what are they, and when should you use one? Solace Systems Architect Tom Fairbairn explains.
Looking to learn more? Watch the webinar now: http://bit.ly/2Ml0LF0
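The smart routing/filtering characteristic above comes down to matching hierarchical topics against subscriptions with wildcards. The sketch below implements a simplified matcher in the style Solace uses, where "*" matches one level and ">" matches everything from that point on; real broker matching has additional rules this toy omits.

```python
# Simplified hierarchical topic matching: "*" matches exactly one level,
# ">" (as the final element) matches all remaining levels.

def matches(subscription, topic):
    sub, top = subscription.split("/"), topic.split("/")
    for i, part in enumerate(sub):
        if part == ">":
            return True
        if i >= len(top) or (part != "*" and part != top[i]):
            return False
    return len(sub) == len(top)

print(matches("cars/*/telemetry", "cars/vin123/telemetry"))  # True
print(matches("cars/>", "cars/vin123/doors/locked"))         # True
print(matches("cars/*/telemetry", "trucks/vin9/telemetry"))  # False
```

Filtering on topic structure inside the broker is what lets a fleet of millions of cars publish to one mesh while each consumer receives only the slice it subscribed to.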
Transforming Consumer Banking with a 100% Cloud-Based Bank (FSV204) - AWS re:...Amazon Web Services
Customer demands for higher levels of service and value, constantly evolving technology capabilities, and stringent regulatory requirements are all powerful forces reshaping retail banking. Built exclusively on AWS, Starling Bank’s 100% cloud-based, mobile-only banking solution satisfies regulators in terms of its resilience, security, and reliability. It also satisfies consumers by giving them greater control over their data, streamlining the account opening process, accelerating payments, and providing access to innovative new services developed from scratch with open APIs, a developer platform, integration with Apple Pay, Google Pay, and Fitbit Pay and a custom backend ledger and payments integrations. Starling Bank is leading the open banking revolution. In this session, learn how Starling Bank delivers value to their customers and innovates at a very fast pace in a sector that can be slow to evolve.
Slide share device to iot solution – a blueprintGuy Vinograd ☁
Creating an IoT Cloud service for a connected product presents a huge challenge. Why? Because the tasks of serving millions, responding to events in near real-time, securing the solution from ambitious IoT hackers, AND generating a monthly bill that doesn't collapse the business model, resemble attempts to solve Rubik's Cube, but are far more difficult. Commercial IoT platforms are irrelevant because of the vendor-lock, so we must use basic building blocks to accomplish all this. This session will illustrate the architecture of an IoT service on top of the AWS Cloud.
Why HTTP Won't Work For The Internet of Things (Dreamforce 2014)kellogh
The document discusses why HTTP is not well-suited for Internet of Things (IoT) applications compared to MQTT. HTTP requires that the client and server be continuously available for request/response, while MQTT uses a broker to decouple publishers and subscribers, allowing asynchronous communication even when devices are intermittent. MQTT also supports features like publish/subscribe messaging with topics, quality of service guarantees, and retained messages that make it more robust for constrained IoT devices and unreliable networks.
Fastly is an edge cloud platform provider that aims to upgrade the internet experience by making applications and digital experiences fast, engaging, and secure. It has a global network of 100+ points of presence across 30+ countries serving over 1 trillion daily requests. The presentation discusses how internet requests are handled traditionally versus more modern approaches using an edge cloud platform like Fastly. It emphasizes that the edge must be programmable, deliver general purpose compute anywhere, and provide high reliability, security, and data privacy by default.
More and more enterprises are restructuring their development teams to replicate the agility and innovation of startups.
In the last few years, microservices have gained popularity for their ability to provide modularity, scalability, high availability, as well as make it easier for smaller development teams to develop in an agile way.
But how do they deal with security? what about security contexts?
This talk will give insights about the most interesting issues found in the last years while testing the security of multilayered microservices solutions and how they were fixed.
The History and Status of Web Crypto API (2012)Channy Yun
The document discusses the history and status of the Web Cryptography API. It outlines the legacy approaches to cryptography in web browsers like crypto.signText and CAPICOM. It then summarizes the development of the Web Cryptography API, from early proposals in the HTML5 working group to the current W3C Web Cryptography working group developing the standard. The API aims to provide common cryptographic functions like encryption and signatures to web applications through a standardized JavaScript API.
This document proposes a new model called OAuthing for federated identity, access control, and data sharing in IoT. It describes the growth of IoT devices and privacy/security issues. The model includes a Device Identity Provider (DIdP) that provides anonymous identities and tokens, a Personal Cloud Middleware (PCM) that runs on behalf of each user to filter data, and an Intelligent Gateway (IG) that routes requests based on identities. It presents the implementation including a device bootloader, and prototype results showing it can support 400 brokers handling 10 messages/second each with low latency. Comparisons are made to related work which don't provide the same anonymous identities, registration processes, or personal middleware capabilities.
Developing Interoperable Components for an Open IoT Foundation Eurotech
In this presentation Eurotech and Red Hat present Kapua, a modular cloud platform that provides management for Internet of Things (IoT) gateways and smart edge devices. It represents a key milestone towards the development of a truly open, end-to-end foundation for IoT and its ecosystem of partners and solutions. Kapua provides a core integration framework with services for device registry, data and device management, message routing, and applications.
The document discusses Server-Sent Events (SSE) and compares it to alternative technologies like WebSockets. SSE allows for a server to push automatic updates to clients via an HTTP connection. It has advantages over polling in being more memory efficient and not requiring the client to accumulate all messages. SSE is simpler to implement than WebSockets as it uses HTTP, but is limited to UTF-8 and mono-directional communication. The document provides an example of implementing SSE from scratch using Symfony Mercure.
UtrechtJUG_Exploring statefulmicroservices in a cloud-native world.pptxGrace Jansen
This document discusses stateful microservices in a cloud native world. It begins by covering some key aspects of cloud native applications including the Twelve-Factor App methodology and its emphasis on stateless processes. It then explores the differences between stateless and stateful computing models. While cloud native is often viewed as requiring stateless microservices, the document explains that real-world applications often need stateful capabilities. It discusses some traditional stateful approaches and their limitations in cloud native environments. The document then covers some techniques for building stateful microservices in cloud native environments, including caching, databases, cookies/tokens, and approaches using cloud native infrastructure like Kubernetes. Programming patterns like SAGA and long-running actions are also discussed. Finally
This document discusses stateful microservices in a cloud native world. It begins by explaining that while cloud native applications are usually designed to be stateless, many real-world applications require stateful capabilities. It then explores techniques for building stateful microservices, such as using caches, databases, cookies, and tokens to preserve state. Finally, it discusses how tools like Kubernetes statefulsets, persistent volumes, and MicroProfile Long Running Actions can help enable stateful applications in a cloud native environment.
Connecting devices to the internet of thingsBernard Kufluk
Connecting devices to IBM's Internet of Things Foundation. The foundation is a PaaS service allowing you to get devices connected quicker than ever before.
The document provides an overview of Agile, DevOps and Cloud Management from a security, risk management and audit compliance perspective. It discusses how the IT industry paradigm is shifting towards microservices, containers, continuous delivery and cloud platforms. DevOps is described as development and operations engineers participating together in the entire service lifecycle. Key differences in DevOps include changes to configuration management, release and change management, and event monitoring. Factors for DevOps success include culture, collaboration, eliminating waste, unified processes, tooling and automation.
In this session, Sam will give an overview of the new Hybrid Connections feature. With this feature, customers can easily connect their cloud services with their existing on premises resources. Sam will demonstrate the various capabilities of this new service and will discuss the advanced features, such as load balancing, Always On connectivity, connection cardinality, automation and performance.
Iot 1906 - approaches for building applications with the IBM IoT cloudPeterNiblett
The IBM Internet of Things cloud allows customers to quickly register, connect, and send data from devices. This sensor data is collected and stored in a data historian and also made available as a real-time event stream. This session discusses how to build applications that consume and exploit this data to show business insight and value. This session covers how to access and use the streaming application programming interfaces to build new applications and/or connect to existing systems and applications. It includes integration with IBM BlueMix and NodeRed.
IoT and Maker Crossover (IMCO) Conference 2015Jollen Chen
This document discusses open Internet of Things (IoT) cloud architectures and protocols. It introduces Mokoversity and its open IoT cloud platform Openmbed, which uses web technologies like HTTP and open standards to simplify and liberate IoT development. Openmbed aims to make developing for the Web of Things easier and more open than existing solutions by providing free and open-source tools and projects.
This document summarizes a session on developing Internet of Things (IoT) applications with AWS IoT, AWS Lambda, and AWS Cognito. The session will include deep dives on AWS IoT, patterns for building IoT applications, creating applications using the listed AWS services, and a customer story from EROAD. There will also be demonstrations and audience participation.
Similar to Internet of Things and Edge Compute at Chick-fil-A (20)
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
5. Principles: Security
TODO – some sort of intro to IoT design principles / considerations slide. Maybe just some pictures over a few slides that tell the story
Secure
Credit: https://www.glassdoor.com/Photos/AMG-National-Trust-Bank-Office-Photos-IMG491177.htm
Secure
Credit: Brook Ward / https://creativecommons.org/licenses/by-nc/2.0/
Secure
10. Let’s create a new product…
Requirements
• Should be amazing!
• Produced with a new machine we’ll develop
• Should be able to collect data from our machine
• Should be able to command our machine to cook what we want on demand
15. Registration & AuthN/AuthZ
• Dynamic Client Registration for OAuth Clients
• Authorization – Human authorization
• Auth Code Flow / Device Code Flow
• Stateless Tokens – JWT
• No degradation when WAN offline
• Software Development Kit (SDK) to make it easy
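The slides don’t show the token format itself, but the “no degradation when WAN offline” bullet is the point of stateless JWTs: the edge can verify a token locally without calling the auth server. Here is a minimal HS256 sign/verify sketch using only the standard library — the claim names, shared secret, and `johnny-5` subject are hypothetical, and a production deployment would typically use asymmetric signing keys rather than a shared secret:

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_jwt(claims: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    segments = [b64url(json.dumps(header).encode()),
                b64url(json.dumps(claims).encode())]
    signing_input = ".".join(segments).encode()
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return ".".join(segments + [b64url(sig)])

def verify_jwt(token: str, secret: bytes) -> dict:
    # Stateless check: no call back to the auth server, so verification
    # still works when the WAN link to the cloud is down.
    head, payload, sig = token.split(".")
    signing_input = f"{head}.{payload}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig)):
        raise ValueError("bad signature")
    claims = json.loads(b64url_decode(payload))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims

secret = b"shared-demo-secret"  # illustration only
token = sign_jwt({"sub": "johnny-5", "scope": "cook",
                  "exp": time.time() + 3600}, secret)
print(verify_jwt(token, secret)["sub"])  # -> johnny-5
```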
16. Security: Demo
What happens with a new device?
1. Connect (Wi-Fi in our case)
2. Discover endpoints via the /.well-known discovery path
3. Register with Auth Server
4. Request authorization as Johnny 5
5. Approve the request (SSO / MFA)
6. Return a JWT
7. Switch Wi-Fi Networks
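The steps above can be sketched end-to-end. The following simulation uses an in-memory stand-in for the cloud auth server — every endpoint name, method, and return shape here is an assumption for illustration, not Chick-fil-A’s actual API:

```python
import secrets

class FakeAuthServer:
    """In-memory stand-in for the cloud auth server (illustration only)."""
    def __init__(self):
        self.clients, self.pending, self.approved = {}, {}, set()

    def well_known(self):
        # Step 2: endpoint discovery document (shape is hypothetical).
        return {"registration_endpoint": "/register", "token_endpoint": "/token"}

    def register(self, device_name):
        # Step 3: dynamic client registration -> a fresh client id.
        client_id = secrets.token_hex(8)
        self.clients[client_id] = device_name
        return client_id

    def request_authorization(self, client_id):
        # Step 4: device asks to be authorized; a human must approve the code.
        user_code = secrets.token_hex(4)
        self.pending[user_code] = client_id
        return user_code

    def approve(self, user_code):
        # Step 5: a person, behind SSO/MFA, approves the pending request.
        self.approved.add(self.pending.pop(user_code))

    def token(self, client_id):
        # Step 6: only human-approved clients get a (placeholder) JWT back.
        if client_id not in self.approved:
            raise PermissionError("not approved by a human yet")
        return f"jwt-for-{client_id}"

auth = FakeAuthServer()
endpoints = auth.well_known()                 # 2. discover endpoints
client_id = auth.register("johnny-5")         # 3. register with auth server
code = auth.request_authorization(client_id)  # 4. request authorization
auth.approve(code)                            # 5. human approves (SSO/MFA)
jwt = auth.token(client_id)                   # 6. JWT returned
print(jwt.startswith("jwt-for-"))             # -> True
```

Note that step 6 fails unless a human approved the request first, which is the point of the human-authorization recommendation later in the deck.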
18. Security Recommendations
1. Don’t hardcode permanent, powerful credentials at manufacture time, and then never change them
2. Require human authorization for devices whenever possible
3. Monitor device traffic profiles to ensure they are behaving normally
4. Don’t allow inbound connectivity if possible
23. What if we lose connectivity?
What if the network is too slow?
24. Edge Architecture
Why Edge Compute?
• Support critical businesses when network is down
• Reduce latency for “thing” interactions
• Data aggregation before shipping to cloud
33. How do I build an application to control my device?
34. Edge Applications
• Run in Docker containers
• On-board as a software “thing”
• Interact with local and cloud services
• Short-lived vs Long-lived
• Service Limits
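A minimal sketch of one of these containerized apps — here a short-lived one that on-boards as a software “thing,” does one unit of work against a stub of the edge platform’s local services, and exits. All service names and message shapes below are hypothetical stand-ins, not the real local APIs:

```python
class LocalServiceStub:
    """Stand-in for the edge platform's local APIs (names are hypothetical)."""
    def onboard(self, thing_name):
        # On-board this app as a software "thing" and get a local identity.
        return {"thing": thing_name, "token": "local-jwt"}

    def get_messages(self):
        # Pretend the local messaging service has one state message queued.
        return [{"topic": "cook/state", "payload": {"state": "DONE"}}]

def run_short_lived_app(service):
    """A short-lived edge app: on-board, handle pending messages, exit.
    A long-lived app would wrap the handling step in a loop instead."""
    identity = service.onboard("sandwich-reporter")
    handled = []
    for msg in service.get_messages():
        handled.append((msg["topic"], msg["payload"]["state"]))
    return identity["thing"], handled

print(run_short_lived_app(LocalServiceStub()))
# -> ('sandwich-reporter', [('cook/state', 'DONE')])
```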
35. CI /CD for IOT
Pipeline: Commit → Build → Deploy to Virtual Edge → Integration Tests → Validate → Release Candidate
36. Edge Applications: Putting it together
[Diagram: the Johnny 5 controller app at the Edge and a controller app in the Cloud exchange cook state over MQTT — each subscribes to the other’s topics, gets data, and publishes state.]
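The brokered interaction in the diagram can be modeled with a tiny in-memory pub/sub. In the real architecture this role is played by an MQTT broker, and the topic names and payloads below are made up for illustration — the point is that apps and devices never talk peer-to-peer, only through broker topics:

```python
from collections import defaultdict

class Broker:
    """Tiny in-memory pub/sub standing in for the MQTT broker."""
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, payload):
        # Fan the message out to every subscriber of this topic.
        for cb in self.subs[topic]:
            cb(topic, payload)

broker = Broker()
seen = []

# The cloud controller app subscribes to the machine's state topic...
broker.subscribe("johnny5/state", lambda t, p: seen.append(("cloud", p)))
# ...and so does a local edge controller app.
broker.subscribe("johnny5/state", lambda t, p: seen.append(("edge", p)))

# The device publishes its cook state; both subscribers receive it.
broker.publish("johnny5/state", {"cook": "IN_PROGRESS"})
print(seen)
# -> [('cloud', {'cook': 'IN_PROGRESS'}), ('edge', {'cook': 'IN_PROGRESS'})]
```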
41. Key Takeaways
Connecting things creates the opportunity to orchestrate interactions between devices and people
• Think ecosystem: secure, open, scalable
• Cloud First, but if you need Edge, design it like a micro-cloud
• Ensure that you have a strong security story
42. What’s Next for Chick-fil-A?
• Analytics and Machine Learning on IoT Data
• Machine Learning at the Edge
• Considering providing local queueing for Edge apps
• Re-evaluating persistence
• Support for short-lived apps
43. Where to find me
www.linkedin.com/in/brian-chambers
@brianchambers21
http://brianchambers.blog
Editor's Notes
Intro to ME / Connect personally
Maybe make it personal
Going to be friends and co-workers here for the next 50 mins
Tell a few things I do outside of work so people know something about me
What I do at CFA as EA
Intro to CFA
make sure to talk about the scale and the scope of what we’re doing, and make sure people understand what CFA is (distributed nature and scale)
2000 restaurants across the US and Canada
Fast growth
Culture of innovation – what are the stories that tell about who we are as a brand?
(No longer applicable) Key Takeaway: How you can take MQTT, OAuth, and Docker Swarm to build a scalable IOT solution
Intro to CFA / Me
Agenda overview
Dive into each of the key topics and do a quick demo
Go into the CFA architecture overview
Time for QA at the end
What are “things” in the IOT world?
Things can mean a lot of different…. Things…
Mobile devices and wearables
Consumer electronics and assets that are connected (long-lived like ovens and refrigerators)
Could be big assets like cars… or industrial machines… engines, etc. (look at GE).
In the CFA world here’s the way we think of things.
They can be hardware things…
Kitchen equipment like we mentioned
Software things
Often many software things on a single hardware thing
IOT is simply taking things and connecting them
Why?
To collect data
To create interactions between people and things that are meaningful / create amazing experiences
So why is a company like CFA working on IOT?
Do we even have technology? We’re just fast food right? Fast food companies don’t do interesting engineering…
Not so much…
Why???
Capacity
Quality of products
Equipment usage and health
Food safety
Customer experiences
Automation
We see technology as critical to the future of our customer experiences, our ability to scale, and our ability to keep our foothold as a leader in the industry with quality food and great customer service experiences for everyone that comes.
Principles
Secure
Open
Scalable
Why was secure important to us? Tell a story about security.. Maybe the nanny cam one from a ways back
Security
Lot of headlines about security issues with IOT devices
Firmware often not updated
Ability to access other services is not well thought out
Design goal to ensure that we have a very secure solution
Important to have layers – connectivity, service authentication, granular permissions
Idea of where inertia takes you
Lack of standards in IOT today
Credit: Brook Ward / https://creativecommons.org/licenses/by-nc/2.0/
Open
Minimal hurdles to engage
Easy rules of engagement to follow
SDK for vendors that want to participate
Platform / Ecosystem
Mention the digital ecosystem we are building at CFA
Credit - https://www.inc.com/14-tips-for-jumping-entrepreneurships-hurdles.html
Scalable
Where inertia takes you is not good
A lot of IOT vendors see themselves as the center of the universe
Everyone provides a gateway
Everyone provides connectivity
Everyone provides a portal with analytics
There is no interoperability
No ability to build bigger things that are composites of different solutions
Costs get out of hand
Complexity gets out of hand
Tons of single points of failure
Difficult to support and manage
No interoperability
Credit - http://www.content4demand.com/blog/better-approach-building-modular-content/
Quick look at the end state architecture we use. We’re going to tear it down and build back up as we go.
Overview of the structure
We have services in the cloud that support the overall solution.
We have the Edge
We have a layer for connectivity
And we have things.
So the first question to answer
So today we are going to learn about Chick-fil-As approach to IOT but we are also going to do something that I think is a first at this conference…
We are going to create a new product together and figure out how to make the architecture work to support it.
Sound good?
Let’s just say that, hypothetically, I wanted to be able to automate the creation of my new product idea, the IOT sandwich…
Have an opportunity together today to create a brand new product. The IOT sandwich. You might shake your head… But trust me… it’s going to be amazing.
We’ll talk through the thought process and the architecture to be able to do all the things we need to do to be able to bring the device online, get connectivity, collect data from it, automate the preparation, and more.
Now that we have our IOT sandwich machine ready to go… Super high tech.
Let’s get it to a point where it can securely send out data about what it’s doing, because that’s useful data for us to collect and use for future decisions.
We’ll need to make sure that all of this is done VERY SECURELY.
Let’s take a look at what things we should think about when it comes to security for things…
Alright, we have a product that we’ve developed that looks amazing
And we have a state of the art machine ready to be able to make it
So if Johnny is going to be an IOT device, how do we get him connected? Let’s take a look at that. This is where security really comes into play…
Let’s talk for a second about what we need to think about when it comes to onboarding devices
There are a lot of different approaches…
We could put crypto material on at manufacture time, but unfortunately that doesn’t work for our case… and it creates some management challenges as well
So let’s talk about the approach we used to keep it simple for device manufacturers… and what our new IOT Sandwich Bot will use.
Network credentials – don’t want to have real credentials or certs for networks hardcoded at manufacture time. Sometimes that’s not possible. Talk about our approach. We have certain credential profiles at manufacture time, but that really doesn’t let you do anything other than register yourself. You still have to be authorized on install by a person with credentials that have access to authorize that particular type of device. And that profile varies… so it’s pretty abstract and difficult to figure out
TLS – really goes without saying that everything needs to be TLS these days… both internal and external just in case. We use our own certificate authority for SSL at the edge.
Device Registration – how does a device show up in a restaurant and get connected and get permissions – more on this in a second
Authentication and Authorization – more on that as well
Brokered Communications – devices don’t get to talk directly to each other. They send messages via the MQTT broker. Single point of authentication. In fact, we actually don’t allow peer to peer communications at the network level, so if you’re a device, all you have to talk to is our edge, and perhaps internet services as well.
Use Industry standards instead of inventing our own approach
No degradation when network offline
Demo time – 5-7 mins
So let’s do it. Let’s get our Johnny 5 IOT Sandwich maker connected.
My laptop will be Johnny and we will interact with the Auth server in the cloud to get Johnny on-boarded and ready to start cooking our sandwich.
So far, this is what we have seen from the architecture perspective
We can onboard a “thing” or device into the CFA ecosystem
We have connectivity as part of that process
We can get a token from the auth server
Why aren’t we talking about power management and constrained devices?
Most of the use cases we have solved so far don’t have power as constraints.
We do support Bluetooth but have not seen a lot of use cases so far.
The way we’ve solved that just requires a separate piece that we call an “advocate”. It has special permissions. If you’re interested come find me and I can tell you more afterwards.
Much better to use refreshable tokens. With network credentials, distribute via some authenticated service. Supports being able to change the credentials in the future if needed.
Human authorization – ensures a person makes a decision, and adds more layers that have to be compromised. In our case we use SSO and MFA so you really need to be the intended person to onboard a device.
Great, so we’re connected and we have a token…
Our device can be manually controlled, or be “app controlled / autonomous”
For now, let’s focus on a manually controlled device.
So What can we do with it?
First, we might want to be able to collect data from the machine so we understand how it’s being used, how often, if it’s being cleaned correctly, if it’s throwing error codes, or if it has a strange pattern of behavior for some reason. So let’s do that.
IN SHORT…
We basically need a way to send and receive messages. A messaging service.
There are a number of options, but one of the market leaders is called MQTT.
We provide a messaging service for our IOT ecosystem to solve this.
Why is it a good protocol to use for us?
We run it both at the edge and in the cloud.
We use it at the edge to broker interactions – relatively high volumes of messages.
We use it in the cloud to bridge back down and send messages to the edge, relatively low volume.
https://github.com/mqtt/mqtt.github.io/wiki/Design-Principles
Explain the kinds of messages we might want to send out from a device
When it starts cooking
When it finishes
When someone presses a button
When it cleans
The fact that it came online
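Each of the events above boils down to a topic plus a small payload. A minimal sketch of what that could look like is below; the topic scheme (`devices/<id>/events/<type>`) and field names are illustrative assumptions, not the actual CFA conventions.

```python
import json
import time

def device_event(device_id, event_type, detail=None):
    """Build an (illustrative) MQTT topic and JSON payload for a device event."""
    topic = "devices/%s/events/%s" % (device_id, event_type)
    payload = json.dumps({
        "deviceId": device_id,
        "event": event_type,          # e.g. cook-start, cook-done, button-press
        "detail": detail or {},
        "ts": int(time.time() * 1000),  # epoch millis
    })
    return topic, payload
```

In practice the pair would be handed to an MQTT client's publish call (e.g. the Eclipse Paho client), with the local broker at the edge forwarding messages onward.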
Putting it together so far,
here’s where we are.
All good, but what if we lose connectivity?
We won't be able to collect any data for a while, which might be okay
But it might not be good enough for us to make our sandwich. We probably want to be able to do that whether the network is up or down… so how are we going to solve that problem?
Our goal is to be able to make the IoT sandwich anytime we want
This is where Edge compute comes into play.
What is edge compute? Why do we have it? What does it actually look like for us (3-5 devices, commodity hardware, 8GB RAM, SSD storage)
How do we think about Edge?
Edge Design is like cloud design
More than just hardware, it's really a software ecosystem that we've developed to enable our business.
In a sense, Edge computing is really just cloud thinking at a micro scale.
We have sometimes referred to it as a micro-private-cloud or micro-datacenter.
Quick narrative
I believe CLOUD THINKING is relevant and directly applicable to EDGE THINKING
Has been a really interesting dynamic to live in. Cloud on one side with unlimited resources.
Edge on the other, highly constrained.
We were able to take cloud concepts and bring them into play at the Edge, but still have to manage our limited resource capacity. This causes us to implement some interesting patterns when it comes to Applications that run at the edge.
We have tried to apply what we’ve learned from cloud to the edge from a design principle perspective.
Build reusable, scalable platforms that have reasonable, well-documented service levels and limitations
Talk about what services we actually provide at the edge (next slide)
Deeper dive into the Edge and what we do there
How it talks out to the cloud for services
What kinds of apps we run there
What we will do at the edge in the future
Devices vs Edge apps… what they need
Devices need auth and messaging
Apps need HTTP server and persistence store
Need event collection
What is it?
Why do we use it?
Same reason you would want to use containers in the cloud basically. The design principles hold.
Isolation of apps
Self-healing architecture. When one of our edge devices dies or a service dies, the Swarm will ensure another instance is started back up
Edge tools are used to interact with the swarm remotely to handle edge cases (heh) where we have swarm failures and need to rebuild remotely, or to issue other kinds of commands to the edge. We're actually refining our toolset now.
Explain a little more how this works when services are offline
Devices almost always need only the edge to do their jobs
They rarely depend on cloud
Edge apps have some dependency on cloud and have graceful degradation when WAN is down
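One common pattern for graceful degradation when the WAN is down is store-and-forward: buffer outbound messages locally and flush them once connectivity returns. A minimal sketch, assuming `send` is whatever actually publishes to the cloud (hypothetical here):

```python
from collections import deque

class StoreAndForward:
    """Sketch: buffer messages locally while the WAN is down; flush on reconnect.
    `send` is a hypothetical stand-in for the real cloud publish call."""

    def __init__(self, send, max_buffer=10000):
        self._send = send
        self._buffer = deque(maxlen=max_buffer)  # oldest messages drop when full

    def publish(self, msg, online):
        if online:
            self.flush()          # drain anything queued during the outage first
            self._send(msg)
        else:
            self._buffer.append(msg)

    def flush(self):
        while self._buffer:
            self._send(self._buffer.popleft())
```

The bounded buffer is the key design choice at the edge: with limited disk and RAM, you decide up front what gets dropped during a long outage rather than letting the queue grow unbounded.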
Putting it together so far, here’s where we are.
So I want to build an application here that can interact with the IoT Sandwich maker…
First, I’ll need to be able to authenticate so I need to onboard like any other device to get a token, like we saw before.
Then I’ll need to be able to interact with the device so we’ll need to solve that.
I might need to know what demand there is for my sandwich at any given time so I know what to make…
And I might need to persist some state in case the server where my app is running dies. In fact, I’d really like to be HA so that I have little to no outages.
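Persisting state so a restarted instance can pick up where the dead one left off can be as simple as an atomic checkpoint file on local storage. A sketch under that assumption (the file layout and helper names are illustrative, not our actual persistence service):

```python
import json
import os
import tempfile

def save_state(path, state):
    """Atomically checkpoint app state to disk so a rescheduled instance can
    resume. Write to a temp file, then rename, so readers never see a partial file."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic rename on POSIX

def load_state(path, default=None):
    """Load the last checkpoint, falling back to a default on first run."""
    try:
        with open(path) as f:
            return json.load(f)
    except (FileNotFoundError, ValueError):
        return default if default is not None else {}
```

For true HA you would put this behind a shared store rather than a single node's disk, which is exactly the direction the later note about moving persistence to a shared Cassandra points.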
What kinds of inputs could an app take?
MQTT
Could take images or video if we wanted
This speaks to the power of the edge.
If you have enough resources (but not too many) you can do some cool stuff very quickly at the Edge.
We aren’t quite there yet, but perhaps in the future.
We are certainly building with that in mind in a world where images and video are increasingly common ways to solve problems
When the cloud isn't there, we want graceful degradation of services.
Putting it together so far, here’s where we are.
Now we’ve got a full Edge architecture including apps that we want to run
All this is great, but we need to consider Operations and Management as well…
Really want to think about scale…
ML and analytics to diagnose failures
Auto-shipping replacement hardware for devices diagnosed as failed (it costs as much to troubleshoot remotely as to replace)
Edge Tools – issue whitelisted tasks to the Swarm to fix it, perform other kinds of updates, etc.
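The "whitelisted tasks" idea means the edge only executes commands it explicitly knows, so the remote side can't run arbitrary code. A minimal sketch of that dispatch pattern; the task names and handlers below are illustrative, not the actual Edge Tools command set:

```python
# Illustrative handlers for remotely-issued edge tasks.
def restart_service(name):
    return "restarting %s" % name

def prune_images():
    return "pruning unused images"

# Only tasks registered here can be triggered from the cloud.
WHITELIST = {
    "restart-service": restart_service,
    "prune-images": prune_images,
}

def run_task(task, *args):
    """Dispatch a remotely-requested task, rejecting anything not whitelisted."""
    handler = WHITELIST.get(task)
    if handler is None:
        raise ValueError("task not whitelisted: %s" % task)
    return handler(*args)
```

Keeping the whitelist small is a security posture: a compromised sender gains only a fixed menu of safe operations, not a shell at the edge.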
CI/CD critical as well
Congrats, we've successfully created the IoT sandwich.
Give yourselves a hand!
And one more look at our final architecture to recap.
So now we’ve added Operations and Management… and we have our final view of the architecture
To recap…
What can we do?
Support hundreds of thousands of events per day across thousands of restaurants
Decentralized onboarding
Enabled ecosystem
Why did we do it ourselves instead of using a cloud platform?
Wanted to use open standards as much as possible, especially with security / OAuth
Wanted to be able to run our stuff and vendor stuff at the edge so we needed an open platform like Docker
So what should you take away from this?
Areas we are focusing on now…
Analytics / Machine Learning on the data collected
ML at the Edge
MQTT broker: QoS 2 is not implemented in ours; we may need it for some use cases
Persistence change to shared Cassandra
Waking up apps when there are requests to conserve resources
I'll put up a post with some helpful links related to the talk and more resources for those who want to go deeper.
Quick thanks to Wes, Jean, and the CFA team for all the hard work put into this, and to the audience for listening. I'd love to connect further.