This document discusses building an IoT-enabled smoker device using AWS services. An IoT device with sensors collects cooking data and sends it to the cloud via AWS IoT Greengrass. There, the data flows through several serverless services: a data service stores readings in DynamoDB, a Lambda-based detection service monitors for cooking thresholds, and a notification service alerts the user. The architecture was improved over time toward a more event-driven design with decoupled services; the final system reliably captures cooking data and notifies the user to ensure great BBQ.
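The detection step described above can be sketched as a small Lambda-style handler. Everything here is illustrative: the field names, thresholds, and event shape are assumptions for the sake of the example, not details from the original document.

```python
# Hypothetical detection service: alert when the cooking temperature
# leaves a target band. Thresholds and payload shape are invented.
HIGH_TEMP_C = 120.0
LOW_TEMP_C = 80.0

def detect_threshold(reading):
    """Return an alert dict if a single reading is out of band, else None."""
    temp = reading["temperature_c"]
    if temp > HIGH_TEMP_C:
        return {"alert": "too_hot", "temperature_c": temp}
    if temp < LOW_TEMP_C:
        return {"alert": "too_cold", "temperature_c": temp}
    return None

def handler(event, context):
    # In the real system this would be invoked by an IoT rule or a
    # DynamoDB stream; here we just inspect the payload directly.
    alerts = [a for r in event["readings"] if (a := detect_threshold(r))]
    return {"alerts": alerts}
```

A notification service would then fan these alerts out to the user (e.g. via SNS); the handler itself stays a pure threshold check, which keeps it easy to test.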
This document provides an overview of cloud concepts including cloud native applications, infrastructure as code, automation, microservices, serverless computing, deployment methods, chaos engineering, and observability. Specifically, it discusses how cloud native applications are loosely coupled and scale independently, the benefits of modeling infrastructure as code and storing it in version control, and techniques for automating infrastructure provisioning, testing, and deployments. It also covers asynchronous communication, event-driven architectures, blue/green and canary deployments, and using chaos engineering experiments to test system reliability in production environments.
1) Cloud native applications are built to take advantage of cloud computing resources like dynamically provisioned micro-services and distributed ephemeral components.
2) Netflix has transitioned to being a cloud native application built on an open source platform using AWS for scalable infrastructure, but also uses other providers for services not fully supported by AWS like content delivery and DNS.
3) What has changed is developers are freed from being the bottleneck through decentralization and automation of operations, allowing for greater agility, innovation, and business competitiveness in the cloud native model.
Weathering the Data Storm – How SnapLogic and AWS Deliver Analytics in the Cl... – SnapLogic
In this webinar, learn how SnapLogic and Amazon Web Services helped Earth Networks create a responsive, self-service cloud for data integration, preparation and analytics.
We also discuss how Earth Networks gained faster data insights using SnapLogic’s Amazon Redshift data integration and other connectors to quickly integrate, transfer and analyze data from multiple applications.
To learn more, visit: www.snaplogic.com/redshift
Understand the Cloud - ebook by EBC Group – Adam Flynn
This document provides an introduction to cloud computing. It discusses how cloud computing services allow businesses to outsource their IT infrastructure to a specialist provider, reducing costs and hassles. The document then discusses the benefits of cloud computing such as scalability, cost savings, accessibility, and disaster recovery capabilities. It provides statistics on cloud computing adoption and cost savings. The document then discusses different types of cloud services including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). It uses the analogy of making pizza to explain the differences between on-premise, IaaS, PaaS and SaaS models. The document emphasizes that not all cloud providers are…
The document discusses serverless cloud architecture patterns, including event-driven and messaging patterns like the saga orchestration pattern, resiliency patterns like storage first and circuit breaker, and queue patterns like priority queues. It provides examples of implementing these patterns in AWS using services like API Gateway, SQS, Step Functions and Kinesis. The talk introduces the patterns, covers their benefits and considerations, and shows how they can solve problems like handling unpredictable spikes in load and building resilient, scalable systems.
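The circuit breaker resiliency pattern mentioned above can be sketched in a few lines. This is a minimal illustration of the idea, not any specific AWS implementation: after a run of consecutive failures the breaker "opens" and fails fast, then allows a trial call after a cooldown.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker sketch: opens after `max_failures`
    consecutive failures and half-opens after `reset_after` seconds.
    The injectable `clock` makes the behavior testable."""

    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open")  # fail fast, spare the downstream
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()  # trip the breaker
            raise
        self.failures = 0  # any success resets the count
        return result
```

Wrapping calls to a flaky dependency in `call()` keeps unpredictable spikes of failures from cascading, which is exactly the problem the pattern targets.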
Business at the Speed of Life: Video Conferencing Made Easy – Lifesize
Business needs to run at the speed of life. Unfortunately, far too often our corporate communications are encumbered by overly complex technology or distance which slows down our decision making and general business processes. In this webinar, Lifesize Video Evangelist, Simon Dudley, will reveal how you can transform your business and save time through the implementation of extremely simple and cost-effective cloud-based video conferencing. Join Simon to learn how you can connect yourself, your office and your team to the world in this fun, yet informative, webinar.
You will learn:
-How Lifesize has revolutionized how users utilize video conferencing technology
-The benefits of having your video environment hosted in the cloud, like freeing up your time and resources
-How cloud video conferencing can fit your organization’s specific needs, mobilize your workforce, and make your job easier (without breaking the bank!)
CI/CD (continuous integration/continuous delivery) aims to automate software delivery through repeatable processes. It acts as both the first and last line of defense by integrating automated testing into development workflows and deploying to production only when tests pass. The document outlines how CI/CD enables pull request testing, merge testing, nightly testing, blue/green deployments, canary releases, and chaos engineering to catch issues early and validate changes before production.
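The canary-release step described above boils down to a promotion decision: compare the canary's error rate against the baseline and roll back if it is meaningfully worse. The thresholds and the assumption of equal traffic to both fleets are illustrative, not from the original document.

```python
def canary_decision(baseline_errors, canary_errors, requests,
                    max_ratio=2.0, min_requests=100):
    """Decide whether to promote a canary deployment.

    Assumes both fleets saw roughly `requests` requests each;
    `max_ratio` and `min_requests` are invented example thresholds."""
    if requests < min_requests:
        return "wait"  # not enough traffic to judge yet
    baseline_rate = baseline_errors / requests
    canary_rate = canary_errors / requests
    if baseline_rate == 0:
        # baseline is perfect: any canary error is a regression
        return "promote" if canary_rate == 0 else "rollback"
    return "promote" if canary_rate <= max_ratio * baseline_rate else "rollback"
```

A pipeline would call this after each bake period, shifting more traffic on "promote" and reverting on "rollback"; blue/green is the degenerate case where the split is all-or-nothing.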
Transforming Manufacturing with Easy-to-Use Video Conferencing – Lifesize
Implementing video conferencing is no longer an effort of epic proportions. In the past, many professionals shied away from the technology due to perceived complexity and cost. Recent developments have made it MUCH simpler to implement, manage and understand, and completely accessible for companies of all sizes. Join Simon Dudley, Lifesize Video Evangelist, as he reveals how new cloud-based video conferencing solutions can transform the way you do business and how they enable you to connect your office and employees to the world in this fun, yet informative, webinar.
You will learn:
•How Lifesize has revolutionized how users utilize video conferencing technology
•The benefits of having your video environment hosted in the cloud, like freeing up your resources and time
•How cloud video conferencing can fit your organization’s specific needs, mobilize your workforce, and make your job easier (without breaking the bank!)
Get YOUR Time Back with Video Conferencing – Lifesize
In the consulting world, time is the most precious commodity. And although video conferencing can help you save time, most consulting professionals have shied away from the technology due to perceived complexity and cost. Recent developments have made it MUCH simpler to implement, manage and understand, and completely accessible for companies of all sizes. Join Simon Dudley, Lifesize Video Evangelist, as he reveals how new cloud-based video conferencing solutions can help get YOUR time back as well as improve your customer service with improved communications. Learn how you can connect yourself, your office and your team to the world in this fun, yet informative, webinar.
You will learn:
-How Lifesize has revolutionized how users utilize video conferencing technology
-The benefits of having your video environment hosted in the cloud, like freeing up your time and resources
-How cloud video conferencing can fit your organization’s specific needs, mobilize your workforce, and make your job easier (without breaking the bank!)
Moving applications to the cloud
Microsoft cloud services
Google cloud application
Amazon cloud services
Cloud application
Cloud based solution
Cloud Software Management
Google App engine
PLNOG 22 - Sebastian Grabski - Is your network ready for application from the... – PROIDEA
This document discusses a company's journey to adopting cloud applications and transforming their network architecture. They initially implemented Office 365 and local internet breakouts at branches. This improved the Office 365 experience but users at headquarters still experienced issues. They deployed Zscaler to eliminate firewall appliances at headquarters locations and consolidated their 80+ network vendors. They then implemented Zscaler apps on mobile devices and split tunneling for internet traffic. This improved the remote user experience. Their goal is now migrating applications to the cloud, simplifying access, and establishing a zero-trust security model for the future.
Computing at the Edge with AWS Greengrass and Amazon FreeRTOS, ft. General El... – Amazon Web Services
Edge computing is all about moving compute power to the source of the data instead of having to bring it to the cloud. The edge is a fundamental part of IoT, and it is not only about connecting things to the internet. In this session, we discuss how AWS Greengrass, an IoT edge software, can power devices small and large, from a sensor all the way to a wind turbine. With AWS Greengrass, these IoT devices can securely gather data, keep device data in sync, and communicate with each other while still using the cloud for management, analytics, and durable storage. Join us to learn more about the edge of IoT.
Three Key Steps for Moving Your Branches to the Cloud – Zscaler
Is backhauling traffic the most efficient way to route traffic when your workloads move to the cloud? The migration of applications from the data center to the cloud calls for a new approach to networking and security. But, keeping up with application demands and user expectations can be a struggle. Explore the challenges and benefits of establishing secure local breakouts from someone who has done it.
Advantages and disadvantages of cloud computing ppt.pptx – Network Kings
Are you searching for the advantages and disadvantages of cloud computing? Look no further! This presentation covers the key advantages and disadvantages of cloud computing, along with career prospects, job opportunities, roles, and much more.
Monitoring Half a Million ML Models, IoT Streaming Data, and Automated Qualit... – Databricks
Quby, an Amsterdam-based technology company, offers solutions that empower homeowners to stay in control of their electricity, gas, and water usage. Using Europe’s largest energy dataset, consisting of petabytes of IoT data, the company has developed AI-powered products that are used by hundreds of thousands of users on a daily basis. Delta Lake ensures the quality of incoming records through schema enforcement and evolution, but it is the data engineer's role to check that the expected data is ingested into the Delta Lake at the right time, with the expected metrics, so that downstream processes can perform their duties. Re-training and serving models on the fly can go wrong unless the right monitoring infrastructure is in place.
Cloud computing refers to delivering computing services like servers, storage, databases, and networking over the internet. Key advantages of cloud computing include cost savings, flexibility, scalability, automatic updates, security, and backup/recovery. The top cloud providers are Amazon Web Services, Microsoft Azure, and Google Cloud Platform, each offering a broad range of services tailored to different needs. Specific Google Cloud Platform services like Cloud CDN, Cloud Storage, and Cloud SQL have uses like accelerating content, storing data, and managing databases.
The document discusses Google Cloud Platform and its capabilities for building, storing, and analyzing IT infrastructure in the cloud. It highlights key services including Compute Engine, App Engine, Cloud Storage, Cloud Datastore, Cloud SQL, BigQuery, and Cloud Endpoints. The platform offers scalable, reliable and secure computing resources with options for infrastructure, platform and software services as a utility.
The document discusses using Cloudera DataFlow to address challenges with collecting, processing, and analyzing log data across many systems and devices. It provides an example use case of logging modernization to reduce costs and enable security solutions by filtering noise from logs. The presentation shows how DataFlow can extract relevant events from large volumes of raw log data and normalize the data to make security threats and anomalies easier to detect across many machines.
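The noise-filtering and normalization step described above can be illustrated with a small pure-Python sketch. In Cloudera DataFlow this would be done with flow processors; the log format, severity levels, and record shape below are invented for the example.

```python
import re

# Hypothetical log line format: "<host> <severity> <message>".
LINE_RE = re.compile(
    r"^(?P<host>\S+)\s+(?P<severity>DEBUG|INFO|WARN|ERROR)\s+(?P<message>.*)$"
)
NOISE = {"DEBUG", "INFO"}  # severities we treat as noise

def extract_events(lines):
    """Drop noisy or unparseable entries and normalize the rest into
    uniform records, so downstream detection sees one schema."""
    events = []
    for line in lines:
        m = LINE_RE.match(line)
        if m and m.group("severity") not in NOISE:
            events.append({
                "host": m.group("host"),
                "severity": m.group("severity"),
                "message": m.group("message").strip(),
            })
    return events
```

The point is the shape of the pipeline, filter early and normalize once, so that anomaly detection across many machines works against a single schema instead of dozens of raw formats.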
Cloud computing involves accessing computing resources such as software, storage, and servers over the internet. Key characteristics include storing data on remote servers accessed via the web, and accessing applications from any device. There are three main types of cloud services - Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Major advantages include lower costs, improved performance, and unlimited storage capacity.
Building a serverless AI powered translation service – Jimmy Dahlqvist
We'll craft a serverless, event-driven Slack bot that not only translates your text with accuracy but also breathes life into it with voice generation. Leveraging the power of AWS's cloud, we'll use services like AWS Step Functions, EventBridge, and Lambda with the advanced AI capabilities of Amazon Translate and Amazon Polly. This session is not just a talk; it's a live, interactive experience where we'll build the solution right before your eyes.
Jimmy Dahlqvist gave a presentation on building a serverless AI-powered translation bot using AWS services. He discussed generative AI and how it can create new content using large foundation models trained on massive datasets. The presentation covered Amazon Translate for text translation, Amazon Polly for text-to-speech, and Amazon Comprehend for natural language processing. Dahlqvist also discussed services like Amazon API Gateway, Amazon EventBridge, AWS Step Functions and AWS Lambda that could be used to build a serverless architecture for an AI translation bot application on AWS. He concluded with an overview of the architecture for such a translation service using various AWS AI and serverless services.
The document discusses different EventBridge patterns for routing events in AWS applications. It describes single and multi-bus patterns that can be used within a single AWS account or across multiple accounts. The centralized single bus pattern provides easy integration but has single points of failure, while the distributed multi bus pattern avoids single points of failure but is more complex to design. The document also shares a client use case that takes a hybrid approach, using multiple buses within a single account for ingress, egress and internal events.
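The single-bus pattern described above can be modeled in miniature: rules are predicate/target pairs, and an event fans out to every matching target. This is a toy illustration of the routing idea, not the EventBridge API; all names are invented.

```python
class EventBus:
    """Toy single-bus router. One central bus is easy to integrate with
    (every producer calls put_event) but is also a single point of
    failure, which is the trade-off the summary describes."""

    def __init__(self):
        self.rules = []

    def add_rule(self, predicate, target):
        """Register a rule: deliver to `target` when `predicate(event)` holds."""
        self.rules.append((predicate, target))

    def put_event(self, event):
        """Fan the event out to every matching target; return the targets hit."""
        delivered = []
        for predicate, target in self.rules:
            if predicate(event):
                target(event)
                delivered.append(target)
        return delivered
```

A multi-bus design would simply instantiate several of these (e.g. ingress, egress, internal) so that no single bus outage stops all routing, at the cost of deciding up front which events go where.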
The document summarizes discussions from re:Invent 2021 about sustainability, serverless computing, community initiatives, and low-code/no-code tools. On sustainability, AWS is adding it as a new pillar and tools like the carbon footprint calculator. Serverless options were expanded for services like S3, MSK, Redshift, and EMR. The community is using new forums like re:Post and improvements to CDK. Low-code tools like Amplify Studio and SageMaker Canvas make app and model building more accessible. The takeaways note sustainability as a hot topic and continued investment in serverless.
The document discusses the history and principles of chaos engineering. It began in 2004 at Amazon and was further developed and popularized at Netflix in 2010-2012 when they created tools like Chaos Monkey and open sourced their Simian Army. Key aspects of chaos engineering discussed include defining the steady state of a system, monitoring key metrics, starting with small and reversible experiments, automating experiments to run often, and shifting mindsets to proactively address failures. The overall goal is to build confidence in a system's ability to withstand failures through experimentation.
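The experiment discipline described above (define steady state, inject a small reversible fault, observe, roll back) can be captured in a minimal harness. This is a sketch of the workflow, not any particular tool's API; the callbacks are supplied by the operator.

```python
def run_experiment(steady_state, inject_fault, rollback, trials=3):
    """Minimal chaos-experiment loop: confirm steady state first, then
    repeatedly inject a small fault, re-check, and always roll back."""
    if not steady_state():
        return "aborted: not healthy before experiment"  # never start on a sick system
    violations = 0
    for _ in range(trials):
        inject_fault()
        try:
            if not steady_state():
                violations += 1  # the hypothesis "system tolerates this fault" failed
        finally:
            rollback()  # experiments must be reversible
    return f"{violations}/{trials} trials violated steady state"
```

Automating this to run often, first in staging and eventually in production, is the "shift mindset" step: failures are rehearsed continuously instead of discovered during incidents.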
Road to an asynchronous device registration API – Jimmy Dahlqvist
The document describes the process of developing an asynchronous device registration API to address performance issues in the initial version. The first version resulted in multiple API calls and was impacted by cold starts and spikes from lambda throttling. Several improvements were then made, including making registration asynchronous without the client needing to know when it was completed, using DynamoDB for faster lookups instead of S3, and Elasticsearch for better searching instead of CloudSearch. This led to faster and more even load handling as well as a more robust backup system. However, a lesson was learned that combining low lambda concurrency with a low max receive count in the redrive policy can be problematic.
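The redrive-policy lesson at the end of that summary can be made concrete with a toy queue model: every delivery attempt bumps a message's receive count, even when all Lambda slots are busy, so low concurrency plus a low maxReceiveCount dead-letters perfectly healthy messages. The simulation below is illustrative; it is not the SQS delivery algorithm.

```python
def simulate_redrive(messages, concurrency, max_receive_count):
    """Toy model of SQS + throttled Lambda with a redrive policy.
    Returns (processed, dead_lettered)."""
    receive_counts = {m: 0 for m in messages}
    processed, dead_lettered = [], []
    queue = list(messages)
    while queue:
        batch, queue = queue, []          # one polling round delivers everything
        for slot, msg in enumerate(batch):
            receive_counts[msg] += 1      # every delivery attempt counts
            if slot < concurrency:
                processed.append(msg)     # a Lambda invocation was available
            elif receive_counts[msg] >= max_receive_count:
                dead_lettered.append(msg) # throttled too often -> DLQ
            else:
                queue.append(msg)         # throttled, returned to the queue
    return processed, dead_lettered
```

With four messages, a concurrency of 1, and maxReceiveCount 2, half the backlog lands in the DLQ without a single real failure; raising maxReceiveCount lets the same backlog drain cleanly.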
This document discusses GitOps and how it was implemented using an Alexa skill called jBot. Some key points:
- GitOps uses Git as the single source of truth for infrastructure changes and ensures an audit trail for all changes. Pull requests are at the heart of GitOps for reviewing and approving changes.
- With jBot, voice commands can be used to create pull requests, merge them, and deploy to environments like production. It uses AWS services like Step Functions, CodeBuild, and CloudFormation to automate the CI/CD pipelines.
- When a pull request is opened or closed, EventBridge triggers Step Functions workflows that comment on the PR, build the code, and create temporary environments for testing.
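The event routing in that last point can be sketched as a small dispatcher that maps a pull-request webhook event to pipeline steps. The action names and event shape are invented for illustration; in the real system each step would be a Step Functions task.

```python
def route_pr_event(event):
    """Map a PR webhook event to a list of pipeline steps (names invented)."""
    action = event.get("action")
    merged = event.get("pull_request", {}).get("merged", False)
    if action == "opened":
        # new PR: announce it, build it, stand up a preview environment
        return ["comment_on_pr", "build", "create_preview_env"]
    if action == "closed" and merged:
        # merged to main: Git is the source of truth, so deploy it
        return ["deploy_to_production", "tear_down_preview_env"]
    if action == "closed":
        # closed without merging: just clean up
        return ["tear_down_preview_env"]
    return []  # ignore other event types
```

Keeping the routing pure like this makes the GitOps property easy to see: every production change traces back to exactly one merged pull request.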
5th LF Energy Power Grid Model Meet-up Slides – DanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Trusted Execution Environment for Decentralized Process MiningLucaBarbaro3
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Hello, I'm
JIMMY DAHLQVIST
Father of two girls
Serverless enthusiast
AWS Ambassador
AWS Community Builder
Head of AWS Technologies at Sigma Technology Cloud
@jimmydahlqvist
Cloud architecture
• Reliably capture data
• Managed services
• Powerful
Data service
Detection service
Notification service
Data augmentation service
Storage first
Hi!! So excited to be back here at Öredev as a speaker.
It’s really great to see so many BBQ enthusiasts in the same room.
I’m the guy between you and lunch….
In the next 40 minutes we will talk about how to get great BBQ by using tech. We will touch on topics like BBQ, IoT, BBQ, Serverless and event-driven architecture, and did I say BBQ?
This is IoT Enabled Smoker for great BBQ, my name is Jimmy, and I will be your pitmaster today!
So the agenda for the day is:
Background to the project
Overview of the architecture and how it has evolved over time
Cloud deep dive – talk about the changes I made and their benefits
Summary – and final thoughts
Before we dive into everything, who am I?
As I said, my name is Jimmy Dahlqvist!!
Father of two daughters, afraid of nothing!!
IoT, Serverless, BBQ enthusiast!
Serverless since 2016 – lambda Old
Day Job – Head of AWS Technologies at Sigma Technology Cloud – (cheer) colleagues in the audience
AWS Ambassador + Community Builder (cheer ??)
Let’s jump into the background so we all have the same context.
What is ?
Low & Slow – vs hot and fast. I normally target ~120-125°C – the reason for doing that is… tenderizing, connective tissue breakdown
Styles – US (varies by state: Texas, Tennessee, North Carolina, New York…), Jamaica, Australia, UK – in Sweden mostly US styles
Not cooking – Art! But doesn’t mean we can’t use Tech…..
Audience Poll!!
Kamado ?
UDS?
Electric ?
Offset ?
My offset smoker – isn’t she a beauty!
How does an offset work + IoT Device – ANIMATION
IoT Device – Watch fire, help from technology, What does it do!!
This is an overview of the IoT Device and the food probes I use, standard 2.5mm thermistor probes.
What is a Thermistor?
Background and context done
Let’s look at the components, and architecture
Two parts – IoT device + cloud
Example services for both
CLICK!!!
Let us take a closer look at the IoT Device
Two parts – HW + SW
HW:
Rasp-Pi 4
2.5mm food probes – thermistor!
MCP3008 (10-bit, 8-channel) – AD converter – reads voltage
SW:
AWS IoT Greengrass 2.0 – Core
Custom component – math voltage to temp
Initially a simple Python app updated over SSH
Problems – Logs and Hard to update
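The voltage-to-temperature math in the custom component can be sketched like this. This is a minimal sketch with assumed wiring (3.3 V supply, 10 kΩ divider) and generic 10 kΩ NTC Steinhart-Hart coefficients, not the actual probe calibration:

```python
import math

V_SUPPLY = 3.3        # volts - assumed supply voltage
R_DIVIDER = 10_000.0  # ohms - assumed series resistor in the divider
# Generic Steinhart-Hart coefficients for a 10 kOhm NTC thermistor (assumed)
SH_A, SH_B, SH_C = 1.009249522e-3, 2.378405444e-4, 2.019202697e-7

def raw_to_celsius(raw: int) -> float:
    """Convert a 10-bit MCP3008 reading (0-1023) to degrees Celsius."""
    voltage = raw / 1023.0 * V_SUPPLY
    # Solve the voltage divider for the thermistor's resistance
    resistance = R_DIVIDER * voltage / (V_SUPPLY - voltage)
    # Steinhart-Hart equation: 1/T = A + B*ln(R) + C*ln(R)^3
    ln_r = math.log(resistance)
    kelvin = 1.0 / (SH_A + SH_B * ln_r + SH_C * ln_r ** 3)
    return kelvin - 273.15
```

With these coefficients a mid-scale reading (raw ≈ 512, i.e. roughly 10 kΩ) lands near room temperature, which is the usual sanity check for a 10 kΩ NTC.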
Done – Looked at IoT device
CLICK
Let’s talk about Cloud – where we will spend most of our time
First version of cloud looked like this…
Happy little man to the left I guess is me…
Device data -> IoT Core
IoT Core -> Rules -> Storage + Athena + Dynamo
IoT Core -> Rules -> Several SF business logic (thresholds, trends…)
API GW RESTful…
IoT rules as router –> on mqtt –> not on payload –> messages discarded in business logic
Each event –> one OBJECT in s3 –> Glue/Athena not optimized for that
Data written directly to storage (Storage First is good but…) –> Format dictated by the device –> need transform
Hard to extend –> Several services did same thing –> notification –> or needed to implement API
So I had a couple of areas that I wanted to improve.
I wanted to add the possibility to do a proper ETL and data transformation, so it would be easy to change how data is stored and presented in the cloud without having to change the device.
Introduce an event-driven architecture with EventBridge as the event router instead of relying too heavily on rules in IoT Core. The rules are great, but at this point they didn’t really fulfill what I wanted to accomplish.
Lastly decouple the services. Break the mini-monolith...
All changes were made to create a more flexible system that was easy to extend and manage.
Let’s start by looking at the changes made for the IoT device
With the Initial problems I decided to test out Greengrass.
Interact with AWS Services – s3 config etc
AWS provided components – Log Manager
Build SW as components - Easy to push and publish new versions
AWS Lambda support
Now we move over to the cloud part what was done there
Second iteration brought several improvements.
IoT Core no longer primary message router -> EventBridge introduced -> Event-driven architecture -> Rules / targets / subscriptions
Business Logic -> Microservice pattern – with clear responsibility -> Communicating over EB and API
Transformation service -> EB Transform / augment
EB – PayLoad filtering
Let’s take a closer look at some parts of the architecture…..
And start with the ingress part……
CLICK! -> Animate
Favorite pattern – Storage First
Create reliable way to capture data – prevent data loss
Use managed services
Very powerful when incoming data doesn’t require instant transformation
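The storage-first idea in miniature. This is an in-memory stand-in for the real managed pieces (S3/SQS), with made-up names: persist the raw message before touching it, so a processing bug can never lose data.

```python
import json

raw_store: dict[str, str] = {}  # stand-in for the raw S3 bucket

def ingest(message_id: str, body: str) -> dict:
    """Store the raw payload first, parse only afterwards."""
    raw_store[f"raw/{message_id}.json"] = body  # capture before processing
    return json.loads(body)                     # may fail - data already safe
```

If the parse step blows up on a malformed message, the raw copy is already in the store and can be replayed once the bug is fixed.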
The next part we should look at is the data augmentation
CLICK
This part became very important in the new design…
CLICK
….. It allowed me to decouple cloud development from device development
Data transform pattern
Data augmentation -> Additional information fetched from DynamoDB.
Data is transformed to an internal format -> Decouple from the IoT device
Almost no code. StepFunctions integration to other services
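Conceptually, the transform step does something like the sketch below. The field names and the internal event shape are assumptions; the real service does this with Step Functions SDK integrations and a DynamoDB lookup rather than custom code.

```python
def augment(raw: dict, cook_metadata: dict) -> dict:
    """Map a device reading to an internal event shape and enrich it."""
    return {
        "detail-type": "sensor.reading",       # internal event name - assumed
        "detail": {
            "cookId": cook_metadata.get("cookId"),
            "meat": cook_metadata.get("meat"),  # e.g. fetched from DynamoDB
            "probeTempC": raw["temp"],          # device field names - assumed
            "timestamp": raw["ts"],
        },
    }
```

Because downstream services only ever see the internal shape, the device payload format can change without touching anything in the cloud except this one step.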
One of the most important services is the detection service
CLICK
This is where all the BBQ magic happens…..
CLICK
Threshold breach
Trends
Stall – happens around 70°C (160°F). What is it?
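A stall check can be sketched as a simple rule over recent readings: the temperature plateaus in the typical stall zone while moisture evaporates. The window size and thresholds here are illustrative guesses, not the actual service logic.

```python
def detect_stall(history, window: int = 5, tolerance: float = 0.5) -> bool:
    """Flag a stall: readings flat within `tolerance` degrees C over the
    last `window` samples, inside the typical 65-75 C stall zone."""
    recent = history[-window:]
    if len(recent) < window:
        return False                               # not enough data yet
    in_zone = all(65.0 <= t <= 75.0 for t in recent)
    flat = max(recent) - min(recent) <= tolerance  # temperature plateau
    return in_zone and flat
```

Threshold breaches and trend detection follow the same pattern: pure functions over the recent history, invoked per incoming event.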
Next part to look at is actually the entire system.
CLICK!!
Everything is built on a serverless and event-driven approach.
Reasons you build a serverless and event-driven architecture:
Loosely coupled services
Scale and fail independently
Cost effective – pay for what you use
Extensibility – easy and fast to extend
HA – built in
So has technology helped me become a better pitmaster and get some great BBQ?
Some may say it’s cheating but why not use tech to help?
…. I let the result speak for itself!
EventBridge Choreographs
Four bounded contexts represented by each service (orchestrated)
Step Functions
Each service has unique business logic that needs to be implemented and happen in a certain order when an event is received.
Step Functions Express Workflows have an invocation model of “at least once”, which means it’s possible that your workflow gets invoked twice.
In dev and test it’s unlikely to happen; even in my small project I never saw it. But if you run at a large enough scale, it will happen.
So make sure your workload is idempotent and can handle it.
This makes them ideal for orchestrating idempotent actions such as transforming input data and storing via PUT in Amazon DynamoDB.
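Why at-least-once delivery is harmless here: a PUT keyed by the event id is idempotent, so replaying the same event leaves the table unchanged. An in-memory stand-in for the DynamoDB table makes the point:

```python
def put_reading(table: dict, event: dict) -> None:
    """Idempotent write: same key + same item means replays are no-ops."""
    table[event["id"]] = event["payload"]  # overwrite-with-same changes nothing

# Duplicate invocation - the final state is identical either way
table: dict = {}
event = {"id": "evt-1", "payload": {"probeTempC": 71.2}}  # hypothetical event
put_reading(table, event)
put_reading(table, event)
```

A counter increment or an append, by contrast, would not be safe under at-least-once delivery without extra deduplication.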
When building on EventBridge I would recommend that you create subscriptions.
By that I mean that you create one event rule per target. Even though EventBridge supports up to 5 targets per rule, I still say this should be a 1-to-1 mapping.
Why?
If you create one rule with multiple targets, you create a coupling on the event filter. And if you hit the 5-target limit, what are you supposed to do then? Create a second rule with the same filter and start adding new targets?
What if you need to update the filter? You will impact several targets, which might not be what you wanted to do in the first place, leaving you to start breaking things apart again.
So instead we create subscriptions, where we create the coupling on the event itself, and we set one target per rule.
It’s easy to add more targets: just create a second rule and add the target to it. And the filter in every rule can change without affecting any other target.
However… this can of course lead to several rules having the same filter, and that could create problems of its own.
But in my opinion it’s still better to create subscriptions and deal with a rule explosion.
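The subscription idea in miniature: one rule (filter) per target, so any filter can evolve without touching another target. This is a tiny in-memory stand-in for EventBridge rules; the service names and patterns are made up.

```python
def matches(pattern: dict, event: dict) -> bool:
    """EventBridge-style matching: every pattern key lists allowed values."""
    return all(event.get(key) in allowed for key, allowed in pattern.items())

# One (pattern, target) pair per rule - a 1-to-1 subscription each
subscriptions = [
    ({"detail-type": ["sensor.reading"]}, "detection-service"),
    ({"detail-type": ["sensor.reading"]}, "data-service"),
    ({"detail-type": ["threshold.breached"]}, "notification-service"),
]

def targets_for(event: dict) -> list:
    """Fan out: every rule whose filter matches delivers to its one target."""
    return [target for pattern, target in subscriptions if matches(pattern, event)]
```

Adding a new consumer is one new `(pattern, target)` entry; narrowing one consumer's filter changes exactly one entry and nothing else.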
There is a default EventBridge bus already existing in the region that AWS services post events to.
Should you use that as the bus in your application? My recommendation is NO. Leave that bus to AWS services and create your own custom buses for your application.
The reason is that it will then be easier to extend and add buses later. I have seen the default bus being used, and the mess it was when later moving to a custom bus.
So I can’t repeat it enough… leave the default bus alone! Create custom buses! It’s like using the default VPC… we don’t do that either!
So instead of using SQS + Lambda to transport:
It would be possible to have IoT Core invoke an Express Step Functions workflow with only one step, an SDK integration that just posts the event to EventBridge.
No code, only 100% managed AWS services doing all of the work for me behind the scenes. That is a really nice way to use the SDK integrations in Step Functions.
This could however break my storage-first approach, but since there is no code involved, just AWS calling the SDK on my behalf, I could totally live with that.
It truly shows the power and flexibility of a serverless approach and a service like AWS Step Functions.
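A sketch of what that single-state Express workflow definition could look like. The bus name, source, and detail-type are assumptions; the `events:putEvents` resource is the Step Functions optimized integration for EventBridge.

```python
import json

# Amazon States Language definition - one Task state posting to EventBridge
definition = {
    "StartAt": "PublishToEventBridge",
    "States": {
        "PublishToEventBridge": {
            "Type": "Task",
            "Resource": "arn:aws:states:::events:putEvents",
            "Parameters": {
                "Entries": [{
                    "EventBusName": "smoker-bus",         # hypothetical bus
                    "Source": "iot.smoker",               # assumed source name
                    "DetailType": "sensor.reading",       # assumed event type
                    "Detail.$": "States.JsonToString($)"  # whole input as detail
                }]
            },
            "End": True
        }
    }
}
print(json.dumps(definition, indent=2))
```

The IoT Core rule would invoke this workflow directly, and every downstream consumer subscribes on the bus as usual.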
So what is next in this project then?
So to summarize the last 40 minutes.
Building an IoT system
Serverless and event-driven
Get great BBQ with help of technology
And with that I say!
CLICK!!