This document discusses building an IoT-enabled smoker device using AWS services. The architecture includes an IoT device with sensors that collects cooking data and sends it to AWS. In the cloud, the data flows through several serverless services - a data service stores the data, a detection service checks for cooking thresholds, and a notification service alerts the user. The architecture was improved over time to use AWS Greengrass on the device and event-driven lambdas in the cloud. The final system reliably captures cooking data and notifies the user to ensure great BBQ.
by Gavin Adams, Sr. IoT Specialist SA AWS
Join us for AWS IoT day at the AWS San Francisco Loft. AWS IoT enables you to easily connect and manage millions of devices securely. You can gather data from, run sophisticated analytics on, and take actions in real-time on your diverse fleet of IoT devices from edge to the cloud. You will build IoT applications with AWS IoT experts. AWS IoT provides edge-based software and cloud-based services so you can easily build IoT applications. Edge-based software, including AWS Greengrass, enables you to securely connect devices, gather data and take intelligent actions locally even when Internet connectivity is down. Cloud-based services, including AWS IoT Core, allow you to quickly onboard large and diverse fleets, maintain fleet health, and keep fleets secure.
The document discusses building an IoT-enabled smoker device using AWS services. It provides an overview of the architecture, which includes an IoT device with sensors that collects cooking data and sends it to AWS IoT Greengrass. The data is then processed by serverless AWS services including a data service to store the data in DynamoDB, a detection service using AWS Lambda to monitor for cooking thresholds, and a notification service to alert the user. The system was improved over time to use a more event-driven architecture and decouple the different services.
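The detection service described above boils down to a per-reading threshold check before any notification is sent. A minimal sketch of that core logic (the function name, the reading shape, and the 225°F target are illustrative assumptions, not details from the talk):

```python
# Hypothetical core of the detection service: compare a sensor reading
# against a cooking threshold and emit an alert payload when it drifts.
TEMP_THRESHOLD_F = 225  # assumed target smoker temperature

def check_reading(reading):
    """Return an alert dict when the smoker drops below the threshold,
    or None when the cook is within range."""
    temp = reading["temperature_f"]
    if temp < TEMP_THRESHOLD_F:
        return {
            "device_id": reading["device_id"],
            "message": f"Smoker at {temp}F, below target {TEMP_THRESHOLD_F}F",
        }
    return None  # no notification needed
```

In the event-driven version of the architecture, a function like this would run in Lambda and hand its non-None result to the notification service.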
In this session, we review how the combined use of Amazon Web Services native tools, advanced modeling, and machine learning techniques can simplify many of the hardest security problems that are within the customer’s responsibility. Join us as we explore how services like Amazon Virtual Private Cloud flow logs, AWS CloudTrail, and Amazon Inspector combine to enable highly automated, scalable, and comprehensive security for your AWS applications. Learn how to effectively harness the data provided by AWS for security, and understand how Cisco Stealthwatch Cloud and AWS create an integrated, effective security solution.
Introduction to Serverless Computing and AWS Lambda - Israel Clouds Meetup, by Boaz Ziniman
Serverless computing allows you to build and run applications without the need for provisioning or managing servers. With serverless computing, you can build web, mobile, and IoT backends; run stream processing or big data workloads; run chatbots, and more.
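The unit of code in this model is a function invoked on demand. As a minimal sketch, a Python AWS Lambda handler uses the standard `(event, context)` signature; the body here is only illustrative:

```python
import json

def handler(event, context):
    """Minimal Lambda-style handler: read a field from the incoming
    event and return an HTTP-shaped response (e.g. for API Gateway)."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

No server is provisioned or managed; the platform invokes the handler per event and scales the number of concurrent invocations automatically.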
How to Easily and Securely Connect Devices to AWS IoT - AWS Online Tech Talks, by Amazon Web Services
Learning Objectives:
- Understand the features of AWS IoT and how to use them
- Articulate architectures for IoT applications across commercial, consumer, and industrial use cases
- Hints and tips for keeping devices secure
The document discusses the financial impacts of cloud computing. It defines various cloud service models like SaaS, PaaS, IaaS and provides examples. Moving workloads to the cloud can significantly reduce IT costs by eliminating upfront hardware/software costs and allowing companies to pay based on usage and scale resources up or down as needed. This flexible "opex model" of the cloud can save companies 30-40% of annual IT costs on average compared to maintaining infrastructure on-premises. The cloud also enables faster innovation by making it easier to deploy applications and experiments without large capital investments.
AWS NYC Meetup - May 2017 - "AWS IoT and Greengrass", by Chris Munns
Solstice and Amazon Web Services (AWS) will present the benefits and use cases of edge computing, including an overview of AWS IoT and the newly launched AWS Greengrass.
AWS IoT closes the gap between the physical and the digital with things, internet, and connectivity. AWS Greengrass enables connected devices running AWS technology to process data locally, reducing latency, allowing offline functionality, improving security, and more. We'll share best practices for building with edge computing and Greengrass, and how you can apply it to your current and future IoT solutions. Solstice will also walk through a real-life implementation of AWS IoT and AWS Greengrass that was showcased at AWS re:Invent 2016.
Speakers:
• Chris Munns, Senior Developer Advocate, AWS
• Andrew Whiting, VP of Business Development, Solstice
• Pat Smolen, Sr. Technical Consultant, Solstice.
Getting Started with Windows Workloads on Amazon EC2 - Toronto, by Amazon Web Services
Thinking through how you want to run Microsoft Windows Server and application workloads on AWS is straightforward when you have a game plan. Understanding which services to leverage, like Amazon EC2, Amazon RDS, and Directory Services to name a few, will accelerate the process further. There are also a number of new enhancements to help make things even easier. In this session we will walk through how to think about mapping to the various AWS services available so you can get your deployment or migration project off to the right start. Think of this session as the decoder ring between your on-premises deployment and what you can expect from the AWS cloud for your Microsoft Windows Server and applications.
This document discusses how broadcasters can start shifting their operations to the cloud. It outlines typical broadcast workflows and how aspects like storage, processing, and delivery can be moved using AWS and partner solutions. The challenges of bridging physical requirements to the virtual cloud are addressed through services like AWS Storage Gateway and AWS Direct Connect. Key trends highlighted include the need to manage increasing storage demands, scale media processing, and deliver content globally to meet rising consumer expectations across multiple devices. Case studies demonstrate how media companies are leveraging AWS to save costs while improving reliability and the ability to innovate.
Device to IoT Solution – A Blueprint, by Guy Vinograd ☁
Creating an IoT cloud service for a connected product presents a huge challenge. Why? Because the tasks of serving millions of devices, responding to events in near real time, securing the solution from ambitious IoT hackers, AND generating a monthly bill that doesn't collapse the business model resemble attempts to solve a Rubik's Cube, but are far more difficult. Commercial IoT platforms are often ruled out because of vendor lock-in, so we must use basic building blocks to accomplish all this. This session will illustrate the architecture of an IoT service on top of the AWS Cloud.
General discussions
Why cloud?
The terminology: relating virtualization and cloud
Types of Virtualization and Cloud deployment model
Decisive factors in migration
Hands-on cloud deployment
Cloud for banks
This document provides an overview of AWS networking services including Virtual Private Cloud, Amazon Route 53, AWS Direct Connect, VPN, and Elastic Load Balancing. It describes each service's purpose such as Virtual Private Cloud allowing users to launch AWS resources in a virtual private network and Amazon Route 53 providing scalable and available cloud DNS. The document also defines networking terminology like scalability, fault tolerance, elasticity, durability, and availability.
AWS re:Invent 2016: Understanding IoT Data: How to Leverage Amazon Kinesis in..., by Amazon Web Services
The growing popularity and breadth of use cases for IoT are challenging the traditional thinking of how data is acquired, processed, and analyzed to quickly gain insights and act promptly. Today, the potential of this data remains largely untapped. In this session, we explore architecture patterns for building comprehensive IoT analytics solutions using AWS big data services. We walk through two production-ready implementations. First, we present an end-to-end solution using AWS IoT, Amazon Kinesis, and AWS Lambda. Next, Hello discusses their consumer IoT solution built on top of Amazon Kinesis, Amazon DynamoDB, and Amazon Redshift.
Big Data Journey to the Cloud (5.30.18), Asher Bartch, by Cloudera, Inc.
We hope this session was valuable in teaching you more about Cloudera Enterprise on AWS, and how fast and easy it is to deploy a modern data management platform—in your cloud and on your terms.
Full Stack Application Monitoring for AWS Powered by AI, by Dynatrace
Title: Full stack application monitoring for AWS powered by AI
Speaker: Wayne Segar
Abstract: Dynatrace artificial intelligence autonomously detects performance and availability issues and pinpoints their root causes.
Join the “AWS Services Overview” webinar to take a fast-paced 45-minute tour through our broad range of services. During the webinar, you will have the opportunity to propose questions for the live Q&A session following the presentation.
Learning Objectives:
Overview of AWS Services
Advice for Getting Started
Microservices and Serverless for MegaStartups - DLD TLV 2017, by Boaz Ziniman
Boaz Ziniman, a technical evangelist at AWS, presented on microservices and serverless architectures for mega-startups. He discussed how monolithic architectures can limit agility and how microservices address this by decomposing applications into independently deployable services. He then explained how serverless computing removes the need to manage servers by letting developers run code without provisioning or managing infrastructure. Examples of serverless offerings from AWS, like AWS Lambda, were provided, and common use cases for microservices and serverless architectures, such as web applications, backends, and data processing, were outlined.
Weathering the Data Storm – How SnapLogic and AWS Deliver Analytics in the Cl..., by SnapLogic
In this webinar, learn how SnapLogic and Amazon Web Services helped Earth Networks create a responsive, self-service cloud for data integration, preparation and analytics.
We also discuss how Earth Networks gained faster data insights using SnapLogic’s Amazon Redshift data integration and other connectors to quickly integrate, transfer and analyze data from multiple applications.
To learn more, visit: www.snaplogic.com/redshift
Trust No-One Architecture For Services And Data, by Aidan Finn
This document discusses implementing a "trust no-one" architecture for services and data in cloud environments. It recommends micro-segmenting networks into secure zones, limiting public IP addresses, controlling network edges with firewalls and routing, implementing security measures like NSGs at multiple depths, and logging and monitoring traffic with Azure Security Center and Sentinel. The goal is to break from common practices of open internal networks and implement layered security everywhere using features like private endpoints, firewalls, and logging.
During this session we will describe common methods used to create a Hybrid Cloud with Amazon Web Services. We will step through successful operational models, how to get started, and tools to simplify operations. We will explore topics such as networking, directories, DNS, and security. Importantly, we will cover ongoing operational and management practices.
Mark Statham, Senior Cloud Architect - Professional Services, Amazon Web Services, ASEAN
AWS Finland User Group Meetup 2017-05-23, by Rolf Koski
This document discusses how adopting AWS can help customers with security and compliance. It notes that AWS manages over 1,800 security controls to secure the cloud infrastructure, allowing customers to focus on security within their applications. The document outlines key AWS security services like IAM, encryption, firewalls and more that provide automated protections. It also discusses the shared security responsibility model between AWS and customers.
Building a Serverless AI-Powered Translation Service, by Jimmy Dahlqvist
We'll craft a serverless, event-driven Slack bot that not only translates your text with accuracy but also breathes life into it with voice generation. Leveraging the power of the AWS cloud, we'll combine services like AWS Step Functions, EventBridge, and Lambda with the advanced AI capabilities of Amazon Translate and Amazon Polly. This session is not just a talk; it's a live, interactive experience where we'll build the solution right before your eyes.
The document discusses serverless cloud architecture patterns, including event-driven and messaging patterns like the saga orchestration pattern, resiliency patterns like storage first and circuit breaker, and queue patterns like priority queues. It provides examples of implementing these patterns in AWS using services like API Gateway, SQS, Step Functions and Kinesis. The talk introduces the patterns, covers their benefits and considerations, and shows how they can solve problems like handling unpredictable spikes in load and building resilient, scalable systems.
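Of the resiliency patterns mentioned, the circuit breaker is easy to sketch in a few lines. This in-process toy is illustrative only; in a serverless system the breaker state would typically live in a shared store such as DynamoDB rather than in memory:

```python
class CircuitBreaker:
    """Toy circuit breaker: after enough consecutive failures, trip
    open and fail fast instead of hammering the failing dependency."""

    def __init__(self, failure_threshold=3):
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.state = "closed"

    def call(self, fn, *args):
        if self.state == "open":
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.state = "open"  # stop calling the dependency
            raise
        self.failures = 0  # a success resets the failure count
        return result
```

A production version would also add a half-open state that probes the dependency after a cool-down period.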
Jimmy Dahlqvist gave a presentation on building a serverless AI-powered translation bot using AWS services. He discussed generative AI and how it can create new content using large foundation models trained on massive datasets. The presentation covered Amazon Translate for text translation, Amazon Polly for text-to-speech, and Amazon Comprehend for natural language processing. Dahlqvist also discussed services like Amazon API Gateway, Amazon EventBridge, AWS Step Functions and AWS Lambda that could be used to build a serverless architecture for an AI translation bot application on AWS. He concluded with an overview of the architecture for such a translation service using various AWS AI and serverless services.
The document discusses different EventBridge patterns for routing events in AWS applications. It describes single and multi-bus patterns that can be used within a single AWS account or across multiple accounts. The centralized single bus pattern provides easy integration but has single points of failure, while the distributed multi bus pattern avoids single points of failure but is more complex to design. The document also shares a client use case that takes a hybrid approach, using multiple buses within a single account for ingress, egress and internal events.
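The routing step at the heart of these bus patterns can be sketched as rule patterns matched against events. Real EventBridge event patterns are much richer (prefix, numeric, anything-but matching, nested detail fields); this simplified sketch only checks top-level values against allowed-value lists, for illustration:

```python
def matches(pattern, event):
    """An event matches when, for every key in the pattern, the event's
    value is one of the allowed values (EventBridge-style AND/OR)."""
    return all(event.get(key) in allowed for key, allowed in pattern.items())

def route(rules, event):
    """Return the names of all rules whose pattern matches the event."""
    return [name for name, pattern in rules.items() if matches(pattern, event)]
```

On a single centralized bus, every rule sees every event, which is what makes integration easy and the bus itself a single point of failure.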
The document summarizes discussions from re:Invent 2021 about sustainability, serverless computing, community initiatives, and low-code/no-code tools. On sustainability, AWS is adding it as a new pillar of the Well-Architected Framework and releasing tools like the carbon footprint calculator. Serverless options were expanded for services like S3, MSK, Redshift, and EMR. The community is using new forums like re:Post and improvements to the CDK. Low-code tools like Amplify Studio and SageMaker Canvas make app and model building more accessible. The takeaways note sustainability as a hot topic and continued investment in serverless.
CI/CD (continuous integration/continuous delivery) aims to automate software delivery through repeatable processes. It acts as both the first and last line of defense by integrating automated testing into development workflows and deploying to production only when tests pass. The document outlines how CI/CD enables pull request testing, merge testing, nightly testing, blue/green deployments, canary releases, and chaos engineering to catch issues early and validate changes before production.
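A canary release, one of the deployment strategies listed above, shifts traffic to the new version in steps and rolls back when health checks fail. An illustrative sketch of that schedule (the step percentages and function names are assumptions):

```python
def canary_rollout(steps, healthy):
    """steps: traffic percentages for the new version, e.g. [5, 25, 50, 100].
    healthy: callable checked after each shift; False triggers rollback.
    Returns the final percentage routed to the new version."""
    weight = 0
    for step in steps:
        weight = step
        if not healthy(weight):
            return 0  # roll back: send all traffic to the old version
    return weight
```

The same shape applies whether the weights live in a load balancer, a Lambda alias, or a service mesh.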
This document provides an overview of cloud concepts including cloud native applications, infrastructure as code, automation, microservices, serverless computing, deployment methods, chaos engineering, and observability. Specifically, it discusses how cloud native applications are loosely coupled and scale independently, the benefits of modeling infrastructure as code and storing it in version control, and techniques for automating infrastructure provisioning, testing, and deployments. It also covers asynchronous communication, event-driven architectures, blue/green and canary deployments, and using chaos engineering experiments to test system reliability in production environments.
The document discusses the history and principles of chaos engineering. It began in 2004 at Amazon and was further developed and popularized at Netflix in 2010-2012 when they created tools like Chaos Monkey and open sourced their Simian Army. Key aspects of chaos engineering discussed include defining the steady state of a system, monitoring key metrics, starting with small and reversible experiments, automating experiments to run often, and shifting mindsets to proactively address failures. The overall goal is to build confidence in a system's ability to withstand failures through experimentation.
Road to an Asynchronous Device Registration API, by Jimmy Dahlqvist
The document describes the process of developing an asynchronous device registration API to address performance issues in the initial version. The first version resulted in multiple API calls and was impacted by cold starts and spikes from lambda throttling. Several improvements were then made, including making registration asynchronous without the client needing to know when it was completed, using DynamoDB for faster lookups instead of S3, and Elasticsearch for better searching instead of CloudSearch. This led to faster and more even load handling as well as a more robust backup system. However, a lesson was learned that combining low lambda concurrency with a low max receive count in the redrive policy can be problematic.
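The final lesson can be made concrete with a toy model: SQS counts every receive against `maxReceiveCount`, so when Lambda concurrency is low and receives keep ending in throttling, a message can reach the dead-letter queue without the function ever processing it. This simulation is a simplified illustration of that interaction, not real SQS behavior in full:

```python
def simulate_redrive(max_receive_count, throttled_attempts):
    """Return where a message ends up after repeated throttled receives."""
    receives = 0
    for _ in range(throttled_attempts):
        receives += 1  # the receive counts even though processing never ran
        if receives >= max_receive_count:
            return "dlq"  # redrive policy moves the message out
    return "queue"
```

The practical fix is to keep `maxReceiveCount` comfortably above the number of retries that throttling alone can cause.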
This document discusses GitOps and how it was implemented using an Alexa skill called jBot. Some key points:
- GitOps uses Git as the single source of truth for infrastructure changes and ensures an audit trail for all changes. Pull requests are at the heart of GitOps for reviewing and approving changes.
- With jBot, voice commands can be used to create pull requests, merge them, and deploy to environments like production. It uses AWS services like Step Functions, CodeBuild, and CloudFormation to automate the CI/CD pipelines.
- When a pull request is opened or closed, EventBridge triggers Step Functions workflows that comment on the PR, build the code, create temporary environments for testing
UI5con 2024 - Bring Your Own Design System, by Peter Muessig
How do you combine the OpenUI5/SAPUI5 programming model with a design system that makes its controls available as Web Components? Since OpenUI5/SAPUI5 1.120, the framework supports the integration of any Web Components. This makes it possible, for example, to natively embed your own design system's Web Components, created with Stencil. The integration embeds the Web Components so that they can be used naturally in XMLViews, like standard UI5 controls, and can be bound with data binding. Learn how you can use the Web Components base class in OpenUI5/SAPUI5 to integrate your own Web Components, and get inspired by the solution to generate a custom UI5 library that provides control wrappers for the native Web Components.
Measures in SQL (SIGMOD 2024, Santiago, Chile), by Julian Hyde
SQL has attained widespread adoption, but Business Intelligence tools still use their own higher level languages based upon a multidimensional paradigm. Composable calculations are what is missing from SQL, and we propose a new kind of column, called a measure, that attaches a calculation to a table. Like regular tables, tables with measures are composable and closed when used in queries.
SQL-with-measures has the power, conciseness and reusability of multidimensional languages but retains SQL semantics. Measure invocations can be expanded in place to simple, clear SQL.
To define the evaluation semantics for measures, we introduce context-sensitive expressions (a way to evaluate multidimensional expressions that is consistent with existing SQL semantics), a concept called evaluation context, and several operations for setting and modifying the evaluation context.
A talk at SIGMOD, June 9–15, 2024, Santiago, Chile
Authors: Julian Hyde (Google) and John Fremlin (Google)
https://doi.org/10.1145/3626246.3653374
Malibou Pitch Deck For Its €3M Seed Roundsjcobrien
French start-up Malibou raised a €3 million Seed Round to develop its payroll and human resources
management platform for VSEs and SMEs. The financing round was led by investors Breega, Y Combinator, and FCVC.
Mobile App Development Company In Noida | Drona InfotechDrona Infotech
Drona Infotech is a premier mobile app development company in Noida, providing cutting-edge solutions for businesses.
Visit Us For : https://www.dronainfotech.com/mobile-application-development/
A neural network is a machine learning program, or model, that makes decisions in a manner similar to the human brain, by using processes that mimic the way biological neurons work together to identify phenomena, weigh options and arrive at conclusions.
Using Query Store in Azure PostgreSQL to Understand Query PerformanceGrant Fritchey
Microsoft has added an excellent new extension in PostgreSQL on their Azure Platform. This session, presented at Posette 2024, covers what Query Store is and the types of information you can get out of it.
Most important New features of Oracle 23c for DBAs and Developers. You can get more idea from my youtube channel video from https://youtu.be/XvL5WtaC20A
Baha Majid WCA4Z IBM Z Customer Council Boston June 2024.pdfBaha Majid
IBM watsonx Code Assistant for Z, our latest Generative AI-assisted mainframe application modernization solution. Mainframe (IBM Z) application modernization is a topic that every mainframe client is addressing to various degrees today, driven largely from digital transformation. With generative AI comes the opportunity to reimagine the mainframe application modernization experience. Infusing generative AI will enable speed and trust, help de-risk, and lower total costs associated with heavy-lifting application modernization initiatives. This document provides an overview of the IBM watsonx Code Assistant for Z which uses the power of generative AI to make it easier for developers to selectively modernize COBOL business services while maintaining mainframe qualities of service.
Odoo releases a new update every year. The latest version, Odoo 17, came out in October 2023. It brought many improvements to the user interface and user experience, along with new features in modules like accounting, marketing, manufacturing, websites, and more.
The Odoo 17 update has been a hot topic among startups, mid-sized businesses, large enterprises, and Odoo developers aiming to grow their businesses. Since it is now already the first quarter of 2024, you must have a clear idea of what Odoo 17 entails and what it can offer your business if you are still not aware of it.
This blog covers the features and functionalities. Explore the entire blog and get in touch with expert Odoo ERP consultants to leverage Odoo 17 and its features for your business too.
An Overview of Odoo ERP
Odoo ERP was first released as OpenERP software in February 2005. It is a suite of business applications used for ERP, CRM, eCommerce, websites, and project management. Ten years ago, the Odoo Enterprise edition was launched to help fund the Odoo Community version.
When you compare Odoo Community and Enterprise, the Enterprise edition offers exclusive features like mobile app access, Odoo Studio customisation, Odoo hosting, and unlimited functional support.
Today, Odoo is a well-known name used by companies of all sizes across various industries, including manufacturing, retail, accounting, marketing, healthcare, IT consulting, and R&D.
The latest version, Odoo 17, has been available since October 2023. Key highlights of this update include:
Enhanced user experience with improvements to the command bar, faster backend page loading, and multiple dashboard views.
Instant report generation, credit limit alerts for sales and invoices, separate OCR settings for invoice creation, and an auto-complete feature for forms in the accounting module.
Improved image handling and global attribute changes for mailing lists in email marketing.
A default auto-signature option and a refuse-to-sign option in HR modules.
Options to divide and merge manufacturing orders, track the status of manufacturing orders, and more in the MRP module.
Dark mode in Odoo 17.
Now that the Odoo 17 announcement is official, let’s look at what’s new in Odoo 17!
What is Odoo ERP 17?
Odoo 17 is the latest version of one of the world’s leading open-source enterprise ERPs. This version has come up with significant improvements explained here in this blog. Also, this new version aims to introduce features that enhance time-saving, efficiency, and productivity for users across various organisations.
Odoo 17, released at the Odoo Experience 2023, brought notable improvements to the user interface and added new functionalities with enhancements in performance, accessibility, data analysis, and management, further expanding its reach in the market.
What to do when you have a perfect model for your software but you are constrained by an imperfect business model?
This talk explores the challenges of bringing modelling rigour to the business and strategy levels, and talking to your non-technical counterparts in the process.
Why Apache Kafka Clusters Are Like Galaxies (And Other Cosmic Kafka Quandarie...Paul Brebner
Closing talk for the Performance Engineering track at Community Over Code EU (Bratislava, Slovakia, June 5 2024) https://eu.communityovercode.org/sessions/2024/why-apache-kafka-clusters-are-like-galaxies-and-other-cosmic-kafka-quandaries-explored/ Instaclustr (now part of NetApp) manages 100s of Apache Kafka clusters of many different sizes, for a variety of use cases and customers. For the last 7 years I’ve been focused outwardly on exploring Kafka application development challenges, but recently I decided to look inward and see what I could discover about the performance, scalability and resource characteristics of the Kafka clusters themselves. Using a suite of Performance Engineering techniques, I will reveal some surprising discoveries about cosmic Kafka mysteries in our data centres, related to: cluster sizes and distribution (using Zipf’s Law), horizontal vs. vertical scalability, and predicting Kafka performance using metrics, modelling and regression techniques. These insights are relevant to Kafka developers and operators.
Project Management: The Role of Project Dashboards.pdfKarya Keeper
Project management is a crucial aspect of any organization, ensuring that projects are completed efficiently and effectively. One of the key tools used in project management is the project dashboard, which provides a comprehensive view of project progress and performance. In this article, we will explore the role of project dashboards in project management, highlighting their key features and benefits.
UI5con 2024 - Boost Your Development Experience with UI5 Tooling ExtensionsPeter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can be easily extended by your needs. This session will showcase various tooling extensions which can boost your development experience by far so that you can really work offline, transpile your code in your project to use even newer versions of EcmaScript (than 2022 which is supported right now by the UI5 tooling), consume any npm package of your choice in your project, using different kind of proxies, and even stitching UI5 projects during development together to mimic your target environment.
E-commerce Development Services- Hornet DynamicsHornet Dynamics
For any business hoping to succeed in the digital age, having a strong online presence is crucial. We offer Ecommerce Development Services that are customized according to your business requirements and client preferences, enabling you to create a dynamic, safe, and user-friendly online store.
6. @jimmydahlqvist
Concept
• A network of physical devices
• Collection of data from connected devices
• Wide range of use cases
• Rapidly growing, more and more devices being connected
• IIoT – Industry 4.0
7. @jimmydahlqvist
Scenario
• Thousands to millions of devices
• Unpredictable traffic and varying load
• High speed real time data
• Asynchronous data processing
9. @jimmydahlqvist
AWS IoT Core
• Managed service to easily connect, register, and manage devices
• Robust security, authentication, encryption, and access control
• Scales to millions of devices
• Supports standard protocols: MQTT and HTTP
• Event-based architecture
10. @jimmydahlqvist
IoT Core and X.509 Certificates
• Used to authenticate and secure connections
• Certificate-based mutual authentication
• Unique certificate per device
• Policy based authorization, based on identities
• Built-in Certificate Authority (CA)
• Supports custom CAs
11. @jimmydahlqvist
MQTT Broker and Topics
• Integrates with several AWS services
• Powerful rules engine
• Topics are used to route messages
• Hierarchical
• sensors/{sensor_id}/temperature (first / second / third level)
16. @jimmydahlqvist
IoT Policy
• Define permissions and access
• Attached to Things, Thing Types, or AWS IoT resources
• Supports variables from the Thing and Certificate
38. @jimmydahlqvist
• Reliably capture data
• Managed services
• Powerful
Storage first (diagram: Service A, Service B, Service C, Data Transformation Service, Analytics Service)
56. @jimmydahlqvist
I would say so!
• Several production environments
• Thousands and thousands of devices
• Hundreds of messages per second
• Vast amount of data analysed
Smart homes, manufacturing, smart buildings, door bells, smart cities?
....
Lightbulbs, locks, temperature and humidity sensors, security cameras, connected doors, water usage sensors, gas usage sensors
The list continues
Hundreds of devices, thousands, tens of thousands, millions
The scale is unlimited
IoT systems come in all shapes and forms, and our architecture must be able to meet the demand when it comes to scalability and cost.
The number of IoT systems grows every day, and the IoT space is one of the fastest-growing areas at the moment.
Today we will take a look at IoT and one of several cloud architectures I have built over the years, the challenges we faced, and the changes we made.
Today we will look into four areas.
Starting off with an introduction to IoT and AWS IoT services.
We'll take a look at a sample architecture and the challenges we faced.
What changes were made when we decided to go event-driven and fully serverless.
And finally I will leave you with some takeaways and thoughts.
Before we start, who am I?
My name is Jimmy Dahlqvist!!
Father of two daughters, afraid of nothing!!
IoT, serverless, and event-driven enthusiast
Serverless since 2016, back in the early Lambda days
AWS Ambassador and Community Builder
Day job: Head of Sigma, with colleagues in the audience
IoT…. Internet of things.
IIOT…. Industrial internet of things, also called industry 4.0
What is it? What is a thing? How do we connect them? What is the purpose of them all?
How do we build an efficient cloud architecture for it?
Hopefully I can answer some of those questions today.
A network of physical devices, vehicles, home appliances, and other items embedded with electronics, software, sensors, and connectivity, which enables these objects to connect and exchange data.
IoT enables businesses and organizations to collect data from a vast network of connected devices, analyse that data, and use the insights gained to optimize operations, reduce costs, and improve customer experiences.
IoT technology is rapidly growing, with more and more devices being connected to the internet every day.
IoT has a wide range of use cases, from simple home automation to complex industrial automation systems.
IIOT…. Industrial internet of things, industry 4.0
IIoT enables machines, sensors, and other devices to collect and exchange data, and can be used to improve operational efficiency, reduce downtime, and enable predictive maintenance. IIoT can also help businesses to optimize their supply chain management, reduce waste, and improve product quality.
Throughout this talk, this is the scenario we'll keep in mind.
This is what we'll try to build our architecture and IoT system for.
With that in mind, let's now move over and talk about the different components of AWS IoT.
There are so many different services in this collection, so our focus will be on AWS IoT Core and its components.
Besides Core, there are services like Greengrass, which you run on your device, Device Defender for security and protection of devices, and FleetWise to manage and onboard vast amounts of devices.
AWS IoT Core is a managed cloud service that makes it easy to connect, register, and manage IoT devices at scale. Organizations can easily connect a wide range of devices, including sensors, gateways, and appliances, to the cloud and securely transmit data to and from them.
One of the key features of AWS IoT Core is its robust security capabilities. It provides authentication, encryption, and access control to help ensure that only authorized devices and users can access the system and the data transmitted over it.
AWS IoT Core is highly scalable and can support millions of devices, making it ideal for large-scale IoT deployments.
AWS IoT Core supports standard protocols such as MQTT (Message Queuing Telemetry Transport) and HTTP.
The event-based architecture of AWS IoT Core enables organizations to build complex IoT applications that respond to real-time data and events. This architecture supports the use of rules and actions to automate actions based on incoming data, which can help organizations to respond quickly to changing conditions and improve operational efficiency.
AWS IoT Core uses X.509 certificates to authenticate and secure connections between IoT devices and the cloud.
X.509 certificates are digital certificates that use a standard format to encode public key information, identity information, and digital signatures. X.509 certificates are widely used for authentication and encryption in many security protocols, such as SSL/TLS, IPSec, and S/MIME.
In AWS IoT Core, X.509 certificates are used for mutual authentication, which means that both the device and the cloud need to authenticate each other using a unique certificate.
Each device is assigned a unique X.509 certificate, which is used to authenticate and authorize its connections to the cloud. The device's certificate is signed by a built-in Certificate Authority (CA) in IoT Core, which ensures the authenticity and integrity of the certificate.
IoT Core also supports custom CAs, which enable customers to use their own certificate infrastructure to issue and manage device certificates.
In addition to authentication, IoT Core uses X.509 certificates for authorization, which means that access to resources and services is based on the policies associated with the device's identity.
Policies in IoT Core define the permissions and restrictions for each device, based on its identity and attributes. Policies can be based on the device's certificate, its Thing Type, its attributes, or other contextual information.
X.509 certificates and policies enable a secure and scalable IoT ecosystem, where devices and cloud services can communicate with each other in a trusted and controlled way.
In summary, AWS IoT Core uses X.509 certificates for mutual authentication and authorization of IoT devices. Each device has a unique certificate, which is signed by a built-in or custom CA. Policies define the permissions and restrictions for each device, based on its identity and attributes. X.509 certificates and policies enable a secure and scalable IoT ecosystem.
MQTT (Message Queuing Telemetry Transport) is a lightweight messaging protocol that is widely used in IoT applications
MQTT broker is a server that acts as a central hub for message exchange between devices. The broker receives messages published by devices and routes them to the appropriate subscribers.
One of the key features of MQTT is its use of topics to route messages between devices.
MQTT topics are hierarchical.
For example, a topic like "sensors/{sensor_id}/temperature".
AWS IoT integrates with several AWS services, including Amazon S3, Amazon Kinesis, and Amazon DynamoDB
AWS IoT also includes a powerful rules engine that allows organizations to define rules and actions based on incoming data from IoT devices.
In summary, MQTT is a lightweight messaging protocol that enables efficient and reliable communication between devices and the cloud. The use of topics and a hierarchical structure enables devices to subscribe to specific messages based on their interests. AWS IoT integrates with several AWS services and includes a powerful rules engine that enables organizations to automate workflows and respond quickly to changing conditions.
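The hierarchical topic routing described above can be sketched as a small matching function, including MQTT's standard `+` and `#` wildcards (this is illustrative logic, not the broker's actual implementation):

```python
def topic_matches(filter_str: str, topic: str) -> bool:
    """Check whether an MQTT topic matches a subscription filter.

    '+' matches exactly one level, '#' matches all remaining levels.
    """
    f_parts = filter_str.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True  # multi-level wildcard swallows the rest
        if i >= len(t_parts):
            return False
        if f != "+" and f != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)

# A device publishing a temperature reading would use a topic like
# "sensors/sensor-42/temperature", which a service can subscribe to
# with the filter "sensors/+/temperature".
```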
What is an IoT Thing?
IoT Things represent physical or virtual devices that can communicate with other devices and cloud services over the internet. Things can be assigned a Thing Type, which defines their attributes and behaviors. Things can publish and subscribe to MQTT topics to send and receive data. IoT Things can also interact with other AWS services to enable storage, analytics, and processing of IoT data.
Before we can use a Thing and receive data from it, it must be registered in IoT Core.
The unique device certificates are provisioned and flashed onto the device during manufacturing. However we don’t want it to be registered directly. Instead we like to provision our devices as they connect, just-in-time.
What do we need for JIT?
It requires the use of our own custom CA; we can't use the AWS built-in CA, as those certificates are created and fetched from IoT Core.
To this CA we attach what is known as a provisioning template that defines attributes and actions that should happen during the provisioning process.
As an extra security measure, it's good practice to use pre-provisioning hooks to validate the certificate data and the Thing.
So a step by step illustration for the provisioning process.
The Thing connects to IoT Core. The CA is recognized and the template is used to start the process.
We invoke our Lambda hook, which validates our Thing and returns OK/NOK back.
We create our policy, register the certificate (but don't enable it yet), and create the IoT Thing.
Next, the policy is attached to the certificate and the certificate is attached to the Thing.
Finally we enable the certificate.
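The validation step relies on the pre-provisioning hook; a minimal sketch of such a Lambda handler might look like this, where the serial-number prefix check is a hypothetical piece of validation logic:

```python
ALLOWED_PREFIX = "SN-"  # hypothetical: accept only serials from our factory range

def lambda_handler(event, context):
    """Pre-provisioning hook invoked by IoT Core during JIT provisioning.

    Returns allowProvisioning=True (OK) or False (NOK).
    """
    params = event.get("parameters", {})
    serial = params.get("SerialNumber", "")
    allow = serial.startswith(ALLOWED_PREFIX)
    return {
        "allowProvisioning": allow,
        # parameterOverrides lets the hook inject values into the template
        "parameterOverrides": {"DeviceLocation": "eu-north-1"} if allow else {},
    }
```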
An IoT Core policy is a set of rules and permissions that defines how IoT devices and other entities can interact with AWS IoT Core resources and services. Policies in IoT Core are based on identities, which are represented by X.509 certificates
Policies can grant or deny access to MQTT topics
Policies can be attached to things, thing types, principals, or groups.
Policy language support variables from Thing and Certificate
Here is an example policy that would allow a device to publish to a topic that ends with its ThingName.
By using similar conditions we can restrict the topics that a device can subscribe to; this way we can enforce that devices can only subscribe or publish to topics that belong to that device.
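A minimal sketch of such a policy, using the `iot:Connection.Thing.ThingName` policy variable (region, account ID, and topic prefix are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:Connect",
      "Resource": "arn:aws:iot:eu-west-1:123456789012:client/${iot:Connection.Thing.ThingName}"
    },
    {
      "Effect": "Allow",
      "Action": "iot:Publish",
      "Resource": "arn:aws:iot:eu-west-1:123456789012:topic/devices/${iot:Connection.Thing.ThingName}/*"
    }
  ]
}
```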
Sometimes devices need to interact with AWS services directly; that could be a video camera sending video to Kinesis Video Streams or a device uploading data to S3.
The device can then use the IAM credentials endpoint and exchange the X.509 certificate for temporary IAM credentials. This is similar to IAM Roles Anywhere, but this functionality has been around for a longer time.
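A sketch of that exchange, assuming a hypothetical role alias and credentials endpoint; the mutual-TLS call presents the device certificate, and the `x-amzn-iot-thingname` header carries the Thing name:

```python
import json
import ssl
import urllib.request

def build_credentials_request(endpoint: str, role_alias: str, thing_name: str):
    """Build the HTTPS request used to swap an X.509 cert for temporary credentials."""
    url = f"https://{endpoint}/role-aliases/{role_alias}/credentials"
    headers = {"x-amzn-iot-thingname": thing_name}
    return url, headers

def fetch_credentials(endpoint, role_alias, thing_name, cert_path, key_path):
    """Perform the mutual-TLS call (requires network and the device certificate)."""
    url, headers = build_credentials_request(endpoint, role_alias, thing_name)
    ctx = ssl.create_default_context()
    ctx.load_cert_chain(cert_path, key_path)  # the device certificate authenticates us
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)["credentials"]
```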
When devices send data we use topic based routing.
IoT rules are created and associated with a topic; a rule can contain logic and conditions, and if these are met, the rule's associated targets are invoked.
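As a sketch, such a rule is defined with a SQL-like statement; the topic and field names here are examples:

```sql
-- Select readings above a threshold; topic(2) extracts the sensor id
-- from the second topic level, timestamp() records arrival time.
SELECT temperature, timestamp() AS received_at, topic(2) AS sensor_id
FROM 'sensors/+/temperature'
WHERE temperature > 25
```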
Receiving data works about the same, but there are no rules that are invoked.
A device creates a subscription on a topic and when messages arrive on that topic the message is sent to the device
With this send/subscribe functionality we can build an MQTT-based API where devices send API actions on one topic and then expect an answer to be published on a specific topic.
This is a very powerful way for devices to interact and perform actions on resources in the cloud.
For example, a device could send an API action saying it wants to upload data to S3. Instead of using the IAM credentials endpoint and implementing logic on the device, an MQTT API could respond with a pre-signed URL that the device can upload to.
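A sketch of the cloud side of such an MQTT API; the action name, topic layout, and injected `presign` function are all assumptions for illustration (in real life `presign` would wrap an S3 `generate_presigned_url` call):

```python
import json

def handle_upload_request(message: dict, presign) -> tuple:
    """Handle a hypothetical 'request-upload' action sent over MQTT.

    Returns (response_topic, payload) to publish back to the device.
    """
    device_id = message["deviceId"]
    key = f"uploads/{device_id}/{message['fileName']}"
    # The device subscribes to its own response topic and waits there.
    response_topic = f"devices/{device_id}/upload-response"
    payload = json.dumps({"action": "upload-url", "url": presign(key)})
    return response_topic, payload
```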
So that was the basics…..
Now let's jump in and start building our cloud architecture.
We will start out with a fairly basic design, from a real-world use case, and we'll iterate over that design to improve it over time, until we finally arrive at our final version.
First version of cloud looked like this…
Device data -> IoT Core
IoT Core -> Rules -> Storage + Athena + Dynamo
IoT Core -> Rules -> Several SF business logic (thresholds, trends…)
API GW RESTful…
IoT rules as router –> on mqtt –> not on payload –> messages discarded in business logic
Each event –> one OBJECT in s3 –> Glue/Athena not optimized for that
Data written directly to storage (Storage First is good but…) –> Format dictated by the device –> need transform
Hard to extend –> Several services did same thing –> notification –> or needed to implement API
The first improvement to this is to move to a more event-driven system. Even if IoT Core can be used to build an event-driven system, I often found it not to be optimal.
The theory I had that was going to improve the architecture was:
Remove IoT Rules as event router
Improve extensibility
Introduce Amazon EventBridge
Rules Engine pricing
Rules initiated: $0.15 (per million rules triggered / per million actions applied)
Actions applied: $0.15 (per million rules triggered / per million actions applied)
But before we do any changes, let's look at what an event can be defined as.
In the theory, the plan was to use EventBridge.
So why use EventBridge then and not IoT Core? Well, it comes with several good things and some differences.
The main difference is that with EventBridge, services can subscribe and publish, not only devices, and it is easy to extend.
So with that improvement we removed the topic routing and instead send all messages to an SQS queue.
The queue invokes a Lambda function that sends the event/message to EventBridge.
All services can then listen for the event and react to it.
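A sketch of what that Lambda function might look like; the bus name, source, and detail-type are assumed values, and the actual `put_events` call is left commented out so the reshaping logic stands alone:

```python
import json

BUS_NAME = "iot-events"  # assumed custom bus name

def records_to_entries(sqs_event: dict) -> list:
    """Turn a batch of SQS records into EventBridge PutEvents entries."""
    entries = []
    for record in sqs_event.get("Records", []):
        payload = json.loads(record["body"])
        entries.append({
            "EventBusName": BUS_NAME,
            "Source": "iot.ingest",            # assumed source name
            "DetailType": "device.telemetry",  # assumed detail-type
            "Detail": json.dumps(payload),
        })
    return entries

def lambda_handler(event, context):
    entries = records_to_entries(event)
    # import boto3
    # boto3.client("events").put_events(Entries=entries)
    return {"forwarded": len(entries)}
```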
BUT…..
We still hadn't solved the S3 part; we needed a way for the raw, untouched data to be stored for later use.
So instead of SQS, Kinesis was used as the service between IoT and EventBridge.
That way Firehose can be set as a consumer on Kinesis as well and send data in batches to S3.
We can then expand on that simplified architecture to include more services for different purposes.
We have expanded on the analytics service; not only does it store data in S3, it also sends data to an OpenSearch cluster that can be used for easy visualization.
QuickSight could be used with Athena to create advanced business dashboards
I have also introduced one new key service….. CLICK!
The data transformation service.
Why? Because this makes it easy to use different data formats from the devices while all internal services use a common format.
We can also augment the data and add more information in the event that goes to EventBridge.
That means this service picks up all data events from IoT, transforms them to an internal format, and reposts them onto EventBridge, and all internal services use this.
Of course the service is using Step Functions, with as many SDK actions as possible in it.
Why write code if you don't need to, right?
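The transformation step itself can be sketched as a pure function that maps a device-specific payload to an assumed internal format (all field names here are invented for illustration):

```python
def to_internal_format(device_payload: dict) -> dict:
    """Normalize one vendor's payload into the internal event shape.

    Handles two hypothetical vendor spellings for the same reading.
    """
    temperature = device_payload.get("temp", device_payload.get("temperature_c"))
    return {
        "schemaVersion": 1,               # internal format is versioned
        "deviceId": device_payload["id"],
        "reading": {"temperatureC": temperature},
    }
```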
In this small example we….. And if you have seen my BBQ talk before you might recognize this
After a couple of iterations, this is the architecture I'm using…. It has a couple of advantages.
Reasons you build a serverless and event-driven architecture:
Loosely coupled services
Scale and fail independently
Cost effective – pay for what you use
Extensibility – easy and fast to extend
HA – built in
Let’s take a closer look at architecture pattern that I tend to use all the time.
I actually even used it in my very first architecture in 2016. At that time we integrated API Gateway directly with SQS.
And that of course is:
CLICK! -> Animate
Creates a reliable way to capture data and prevent data loss
Use managed services
Very powerful when incoming data doesn’t require instant transformation
That has been a lot of information…
Now I'd like to leave you with some takeaways and thoughts.
MQTT and HTTP messaging pricing
Up to 1 billion messages: $1.00 (per million messages)
Rules Engine pricing
Rules initiated: $0.15 (per million rules triggered / per million actions applied)
Actions applied: $0.15 (per million rules triggered / per million actions applied)
EventBridge Choreographs
Four bounded contexts represented by each service (orchestrated)
Step Functions
The service has unique business logic that needs to be implemented and happen in a certain order when an event is received.
Don’t use lambda to transport data, it should be used to transform data.
Before, there was no other option, unfortunately. But now we have EventBridge Pipes.
So the next change in this will be to remove the Lambda function and move over to EB Pipes
Be kind to services around you.
In an event-driven and serverless system you can scale out massively. If you call downstream services, even third-party ones, try not to overwhelm them. If they fail, don't retry over and over again.
Use exponential backoff and patterns like circuit breaker.
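A minimal sketch of exponential backoff with full jitter (base delay and cap are arbitrary example values):

```python
import random
import time

def backoff_delays(max_retries: int, base: float = 0.2, cap: float = 30.0):
    """Yield the sleep time before each retry: base * 2^n, capped, with jitter."""
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))
        yield random.uniform(0, delay)  # "full jitter" spreads out retry storms

def call_with_retries(func, max_retries=5):
    """Call func, sleeping with exponential backoff between failed attempts."""
    last_error = None
    for delay in backoff_delays(max_retries):
        try:
            return func()
        except Exception as exc:  # in real code, catch only retryable errors
            last_error = exc
            time.sleep(delay)
    raise last_error
```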
Some thoughts on Step Functions.
I use SDK integrations as much as possible….
As I said, why write code.
But choose carefully
Be aware of the different workflow types, how they behave, and how you pay for them
Selecting the wrong one can become expensive
And also…. Remember that!
Step Functions Express Workflows have an invocation model of "at least once", which means that it's possible that your workflow gets invoked twice.
In dev and test it’s unlikely that it will happen, even in my small project I never saw it. But if you run it in a large enough scale it will happen.
So make sure your workload is idempotent and can handle it.
We learned this the hard way….. With data being written twice.
In the normal small case you will not see this, but when you run millions of events it will happen
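An idempotency guard for "at least once" delivery can be sketched like this; in a real system the seen-keys store would be something durable, such as a DynamoDB conditional write, rather than an in-memory set:

```python
_seen_event_ids = set()  # stand-in for a durable store (e.g. DynamoDB)

def process_once(event_id: str, handler, payload):
    """Run handler(payload) at most once per event_id.

    Returns (processed, result); processed is False on a duplicate delivery.
    """
    if event_id in _seen_event_ids:
        return False, None  # duplicate: skip the side effect
    _seen_event_ids.add(event_id)
    return True, handler(payload)
```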
When building on EventBridge I would recommend that you create subscriptions.
With that I mean that you create one event rule for one target. Even if EventBridge supports up to 5 targets per rule, I still say this should be a one-to-one mapping.
Why?
If you create one rule with multiple targets you will create a coupling on the event filter. And if you hit the 5-target limit, what are you supposed to do then? Create a second rule with the same filter and start adding new targets?
What if you need to update the filter? You will impact several targets, which might not be what you wanted to do in the first place, leaving you to start breaking things apart again.
So instead we create subscriptions, where we create the coupling on the event itself, and we set one target for one rule.
It's easy to add more targets: just create a second rule and add the target to it. And the filter in every rule can change without affecting any other target.
However, this of course can lead to several rules having the same filter, and that could create problems of its own.
But in my opinion it's still better to create subscriptions and deal with a rule explosion.
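Expressed as infrastructure code, the subscription style might look like this: two rules with the same pattern, each with a single target (resource names and the bus name are assumptions):

```yaml
# Two "subscriptions" to the same event, one target per rule
NotifyRule:
  Type: AWS::Events::Rule
  Properties:
    EventBusName: iot-events
    EventPattern:
      source: ["iot.ingest"]
      detail-type: ["device.telemetry"]
    Targets:
      - Id: notification-service
        Arn: !GetAtt NotificationQueue.Arn

AnalyticsRule:
  Type: AWS::Events::Rule
  Properties:
    EventBusName: iot-events
    EventPattern:
      source: ["iot.ingest"]
      detail-type: ["device.telemetry"]
    Targets:
      - Id: analytics-service
        Arn: !GetAtt AnalyticsQueue.Arn
```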
There is already a default EventBridge bus in the region that AWS services post events to.
Should you use that as the bus in your application? My recommendation is NO. Leave that bus to be used by AWS services and create your own custom buses for your application.
The reason is that it will become easier to extend and add buses later. I have seen the default bus being used, and the mess it was when then moving to a custom bus.
So can’t repeat it enough… Leave the default bus alone! Create custom buses! It’s like using the default VPC…. We don’t do that either!
Talk about 2 types of patterns, centralized and de-centralized (Single / multi Bus)
Single and Multi Account
All the patterns allow us to decouple the publisher from the subscriber. The service that publishes doesn't really need to know who is listening at the other end.
Look at centralized advantages and disadvantages
Advantages:
Allows you to manage all routing, security, and policies in one place (single deployment)
All routing is centralized, concentrating all communication to a single event bus
Enables central management of resources
Allows you to easily integrate applications with few changes.
Disadvantages
As the number of integrations grows, so does the complexity and resource utilization. It can become a single point of failure.
All routing is centralized….
Prevents autonomy
Single point of failure
In a decentralized approach routing is spread across multiple event buses and the publisher often becomes the logical owner of that bus.
The service owns the mechanism to distribute the events.
Even if more buses mean more work from an operational perspective, they enable autonomy and don't become a single point of failure.
On the other hand, designing distributed systems, managing all resources, can become a huge challenge if not done properly from the start. Applying this as an afterthought is almost impossible.
So the time to get started might be longer, integration of new services and applications require more change and take more time.
Characteristics……
Not to forget!!
It powers the most amazing system in the whole world!!