Automation is what takes IoT projects further than visualisation dashboards and offline analysis into real-world actions that drive results. Rule engines are automation frameworks that enable companies to accelerate application development and support the complexity and scale that IoT automation requires.
We will take a practical look at how you can evaluate any rules engine by matching your unique business-logic requirements against the necessary rules engine capabilities.
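To make the matching concrete, here is a minimal sketch of what a rule engine does at its core: each rule pairs a condition with an action, and the engine fires every rule whose condition matches an incoming event. All class and field names here are illustrative, not the API of any particular product.

```python
# Minimal rule-engine sketch: rules pair a condition (event -> bool)
# with an action (event -> None); the engine fires all matching rules.

class Rule:
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition  # callable deciding if the rule applies
        self.action = action        # callable executed when it does

class RuleEngine:
    def __init__(self):
        self.rules = []

    def add_rule(self, rule):
        self.rules.append(rule)

    def process(self, event):
        """Evaluate every rule against the event; return names of fired rules."""
        fired = []
        for rule in self.rules:
            if rule.condition(event):
                rule.action(event)
                fired.append(rule.name)
        return fired

# Example: alert when a machine's temperature reading exceeds a threshold.
alerts = []
engine = RuleEngine()
engine.add_rule(Rule(
    "overheat",
    condition=lambda e: e.get("temperature", 0) > 80,
    action=lambda e: alerts.append(f"overheat on {e['device']}"),
))

fired = engine.process({"device": "pump-1", "temperature": 95})
```

Real IoT rule engines add the capabilities discussed later in this listing on top of this loop: streaming inputs, state sharing between rules, and scalable execution.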
Solving the weak spots of serverless with a directed acyclic graph model - Veselin Pizurica
So far, Finite State Machines (AWS Step Functions) and flow engines have been used for function orchestration. Both have difficulty with modelling complex logic, stream merging, async processing, task coordination, state sharing, data dependencies, etc. In this talk I will present a novel approach to serverless orchestration based on a Directed Acyclic Graph model.
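The core idea of DAG-based orchestration can be sketched in a few lines: tasks declare their data dependencies, and the orchestrator runs them in topological order, handing each task the outputs of its upstream tasks. This is only an illustration of the general model, not the talk's actual implementation; the task names are made up.

```python
# Sketch of DAG-based task orchestration using the standard library's
# topological sorter: tasks run in dependency order, and each task
# receives the results of its upstream tasks.

from graphlib import TopologicalSorter

def run_dag(tasks, deps):
    """tasks: name -> callable(upstream results dict).
    deps:  name -> set of upstream task names."""
    results = {}
    for name in TopologicalSorter(deps).static_order():
        upstream = {d: results[d] for d in deps.get(name, ())}
        results[name] = tasks[name](upstream)
    return results

# Example: two independent fetches merged by a third task, the kind of
# stream merging that is awkward to express as a state machine.
tasks = {
    "fetch_a": lambda up: 2,
    "fetch_b": lambda up: 3,
    "merge":   lambda up: up["fetch_a"] + up["fetch_b"],
}
deps = {"merge": {"fetch_a", "fetch_b"}}
results = run_dag(tasks, deps)
```

Because dependencies are explicit, independent branches could also be dispatched concurrently, which is where the model's advantage over linear function chaining shows up.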
"How Pirelli uses Domino and Plotly for Smart Manufacturing" by Alberto Arrigoni - Data Science Milan
"How Pirelli uses Domino and Plotly for Smart Manufacturing" by Alberto Arrigoni, Senior Data Scientist, Pirelli (pirelli.com)
Abstract:
Pirelli, a global performance tire manufacturer, uses data science in its 20 factories to improve quality and efficiency, and reduce energy consumption. For this “Smart Manufacturing” initiative, Pirelli’s data science team has developed predictive models and analytics tools to monitor processes, machines and materials on the factory floors. In this talk we will show some of the solutions we deploy, demonstrate how we used Domino’s data science platform and Plot.ly to build these solutions, and discuss the next steps in this journey towards predictive maintenance.
Bio:
Alberto Arrigoni is a data scientist at Pirelli, where he works to process sensor and telemetry data for IoT, Smart Factories and connected-vehicle applications.
He works closely with all major business units, such as R&D, industrial engineering and BI, to develop tailored machine learning algorithms and production systems.
He holds a PhD in biostatistics from the University of Milan Bicocca. Prior to joining Pirelli, he was a staff data scientist at the National Institute of Molecular Genetics (Milan), as well as a Fulbright student at Santa Clara University and a visiting PhD student at Pacific Biosciences (Menlo Park, CA).
Tech talk by Serena Signorelli (https://www.linkedin.com/in/serenasignorelli/) at the event "Tensorflow and Sparklyr: Scaling Deep Learning and R to the Big Data ecosystem", May 15, 2017 at ICTeam Grassobbio (BG). The event was part of the Data Science Milan Meetup (https://www.meetup.com/it-IT/Data-Science-Milan/).
Google Cloud infrastructure in Conrad Connect by Google & waylay - Veselin Pizurica
Conrad Connect lets users interconnect smart devices from different ecosystems with online services. It provides customized dashboards to visualise data from different vendors. It also allows users to build advanced automation rules or to control devices and services using voice and smart bots.
The Conrad Connect application is built on top of the waylay platform and is managed and deployed on Google Cloud.
With close to 100K connected devices, 20 million API calls a day and a few billion metrics stored per week, many challenges need to be addressed: How do you constantly scale up the platform as the user base grows exponentially? How do you manage deployments, new releases and upgrades?
In this talk you will learn how waylay leverages some of the latest Google technologies to address these challenges.
International Journal of Computer Science, Engineering and Information Technology (IJCSEIT) - ijcseit
The International Journal of Computer Science, Engineering and Information Technology (IJCSEIT) provides an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Computer Science, Engineering and Information Technology. The Journal looks for significant contributions to all major fields of Computer Science and Information Technology, in both theoretical and practical aspects. Its aim is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
All submissions must describe original research not published or currently under review for another conference or journal.
IoT data: more and faster is not automatically better.
On optimal sampling strategies, how to calculate whether IoT pays off, and why it does not always have to be deep learning and real-time analytics. (Slides in German/English)
Image Caption Generation: Intro to Distributed Tensorflow and Distributed Sco... - ICTeam S.p.A.
Tech talk by Luca Grazioli (https://www.linkedin.com/in/luca-grazioli-a74927bb/) at the event "Tensorflow and Sparklyr: Scaling Deep Learning and R to the Big Data ecosystem", May 15, 2017 at ICTeam Grassobbio (BG). The event was part of the Data Science Milan Meetup (https://www.meetup.com/it-IT/Data-Science-Milan/).
Right now, in institutions around the world, some of the greatest minds in computer science and statistics are coming up with amazing new algorithms and mathematically beautiful solutions. However, it's entirely possible that the solutions they conceive will be impracticable in industry. The reason is simple: "the best answer is useless if it arrives too late to do anything with it". The key principle here is the trade-off between accuracy and latency. In this talk I will describe examples where this holds true, and how I am using real-time machine learning models to solve challenges for eCommerce, financial services and media companies.
http://tumra.com/blog/real-time-machine-learning-at-industrial-scale
DN18 | Applied Machine Learning in Cybersecurity: Detect malicious DGA Domain... - Dataconomy Media
Abstract of the Presentation:
Malware such as the GameOver Zeus and CryptoLocker botnets is a massive threat to organizations. It uses domain generation algorithms (DGAs) to create URLs that host malicious websites or command-and-control servers. Traditional approaches fail to detect and stop these domains early. In this talk you will learn, via a live demo, how you can use machine learning to detect malicious domains in your environment, and how to implement a full end-to-end data science use case leveraging the Splunk Machine Learning Toolkit.
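To give a feel for the approach (this is an illustrative sketch, not the Splunk toolkit): DGA-generated domains tend to look random, so one simple signal is the Shannon entropy of the domain name's characters. A real pipeline would feed features like this into a trained classifier; here a fixed threshold stands in for the model, and the threshold value is an assumption chosen for the example.

```python
# Entropy-based heuristic for spotting DGA-like domains: algorithmically
# generated names use characters nearly uniformly, so their character
# entropy is much higher than that of dictionary-based names.

import math
from collections import Counter

def shannon_entropy(s):
    """Shannon entropy (bits per character) of a string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_dga(domain, threshold=3.5):
    """Crude stand-in for a trained classifier: flag high-entropy names."""
    name = domain.split(".")[0]  # ignore the TLD
    return shannon_entropy(name) > threshold

print(looks_like_dga("google.com"))            # low entropy, human-chosen
print(looks_like_dga("x3f9qz7kp2vw8r1t.com"))  # high entropy, DGA-like
```

In practice entropy is only one of several features (name length, n-gram frequencies, vowel ratio), which is why a learned model outperforms any single threshold.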
About the Author:
Philipp works as a Staff Machine Learning Architect at Splunk. His background is in data science, visualization and analytics, with experience in the automotive, transportation and software industries. He enjoys working with Splunk customers and partners across EMEA.
EclipseCon France 2015 - Science Track - Boris Adryan
Software plays an increasingly big part in scientific research, but in most cases the growth is organic. The lifetime of research software is often as short as the duration of a postdoctoral contract: once the researcher moves on, custom-written niche code is frequently not well documented, components are not reusable, and the overall development effort is likely lost.
This is a case study of the evolution of research software in the field of genomics within my research group at the Department of Genetics at Cambridge University. As our research questions changed over the past decade, we moved from Perl code and regular expressions to R and statistical analysis, and from there to agent-based simulations in Java. I will discuss not only the languages and tools used, but also the processes and how they have evolved over the years, the factors that influence the nature of this growth, such as funding, and how 'open source' as a default has changed our development work. We will also take a look into the future to see how we predict software usage will grow.
Also, in presenting the problems and discussing possible solutions, this talk will look at the role institutions play in helping address these issues. In particular, the Software Sustainability Institute (SSI, http://software.ac.uk/) works in the UK to promote the development, maintenance and (re)use of research software.
The Eclipse Foundation, with the Science Working Group, works to facilitate software sharing and reuse. How can organisations like the SSI and Eclipse align their strategies and activities for maximum effect?
Artificial Intelligence (AI) is nowadays used frequently in many application domains. Although sometimes considered only an afterthought in public discussion compared to domains such as health, transportation and manufacturing, the media domain is also being transformed by AI, which enables new opportunities ranging from content creation (e.g. "robojournalism" and individualised content) to optimisation of content production and distribution. Underlying many of these opportunities is the use of AI, in its current reincarnation as deep learning, to understand audio-visual content by extracting structured information from unstructured data.
This talk therefore discusses the current understanding and trends of AI: what can be done, what is being done, and what challenges remain in the use of AI in the context of media applications and services. The talk focuses not so much on the details and fundamentals of deep learning as on a practical perspective on how recent advances in this field can be utilised in use cases in the media domain, especially with respect to audio-visual content and broadcasting.
How large-scale image analytics (near-real-time analysis of satellite images, machine learning) could help (re-)insurers anticipate natural catastrophes and estimate damages more precisely.
Scaling AI in production using PyTorch - geetachauhan
Slides from my talk at MLOps World '21.
Deploying AI models in production and scaling ML services is still a big challenge. In this talk we will cover how to deploy your AI models, best practices for deployment scenarios, and techniques for performance optimization and scaling ML services. Join us to learn how you can jumpstart the journey of taking your PyTorch models from research to production.
DN18 | The Evolution and Future of Graph Technology: Intelligent Systems | Ax... - Dataconomy Media
Abstract of the Presentation:
The field of graph technology has developed rapidly in recent years and established itself as an independent technology sector that will probably even receive its own query language standard (GQL). As almost any business benefits from graph platforms, it is no wonder that adoption is broad and fast; there must be good reasons for that. In his talk, Axel will give an overview of the evolution of technology and products in the graph space, from the early beginnings up to current developments in machine learning and artificial intelligence. He will also give some examples and explain why graph technology is so well suited to most use cases and to building intelligent systems.
About the Author:
Axel Morgner started Structr in 2010 to create the next-gen CMS. Previously, he worked for Oracle and founded an ECM company. Axel loves Open Source. As CEO, he’s responsible for the company behind Structr and the project itself, with focus on the front end.
Emerging Dynamic TUW-ASE Summer 2015 - Distributed Systems and Challenges for... - Hong-Linh Truong
This is a lecture from the advanced service engineering course at the Vienna University of Technology. See http://dsg.tuwien.ac.at/teaching/courses/ase/
Machine learning and predictive analytics have started entering our daily lives. Businesses and enterprises can use predictive analytics to improve efficiency, improve user experience, and create new business opportunities. This talk will present WSO2 Machine Learner, our experience of predicting Super Bowl winners, and a few real-life use cases. Furthermore, the talk will discuss open challenges and problems people are working on.
You are already the Duke of DevOps: you have mastered CI/CD, some feature teams include ops skills, and your TTM rocks! But you have difficulties scaling it: you have some quality issues, and QoS is at risk. You are quick to adopt practices that increase the flexibility of development and the velocity of deployment. An urgent question follows on the heels of these benefits: how much confidence can we have in the complex systems that we put into production? Let's talk about the next hype of DevOps: SRE, error budgets, continuous quality, observability and chaos engineering.
Optimizing connected system performance (MD&M Anaheim, 02-07-2017) - sandhibhide
Sandhiprakash Bhide presenting at the Smart Manufacturing Innovation Summit/Industry 4.0 event on "Optimizing Connected System Performance and Establishing Tangible Goals for Sensor Use"
Curiosity and Sauce Labs present - When to stop testing: 3 dimensions of test... - Curiosity Software Ireland
This webinar was co-hosted by Curiosity Software and Sauce Labs on the 28th of September, 2021. Watch the webinar on demand today: https://opentestingplatform.curiositysoftware.ie/stop-testing-test-coverage-webinar
A definition of “done” is one of the hardest and most valuable things to come by in testing. Faced with fast-changing, massively complex systems, there’s no time to test everything in short sprints. Even defining “everything” is hard enough, given the vast and often unknown system logic, user devices, and integrated technologies that must be factored into rigorous testing. Too often, a lack of measurability combines with unsystematic test design, forcing testers to guess or hope that testing is “done”. This introduces uncertainty with every rapid release. Tests leave logic exposed to costly bugs and performance issues, while untested devices warp UIs and user experiences.
This webinar will set out how testing can rapidly identify, generate, and run the tests needed to de-risk rapid software releases. It will define functional test coverage in three dimensions, considering the system logic and data that must be tested, the optimal device mix, and the need to test across different system tiers. James Walker, Curiosity’s Director of Technology, and Marcus Merrell, Senior Director of Technology Strategy at Sauce Labs, will then demonstrate how in-sprint testing can target tests based on this multifaceted measure. You will see how:
1. Generating optimised tests, data and scripts from visual flowcharts avoids slow test creation and maintenance, while testing system logic rigorously based on time and risk.
2. Pushing tests to cloud-based device labs minimises environment and device limitations, enabling the right mix for each stage of the testing lifecycle.
3. Updating central flows regenerates tests in-sprint, targeting impacted and risky logic across APIs, UIs and back-end systems.
Webinar - Transforming Manufacturing with IoT - HARMAN Services
The manufacturing industry is realizing the tremendous benefits of the "Internet of Things" (IoT), an inevitable evolution of traditional M2M solutions. Innovations across embedded devices, advanced analytics and enriched user experiences, all powered by the cloud, have enabled new opportunities for both perpetual revenue and perpetual customer value. In this session we will break down the benefits of IoT for manufacturing with real-world examples.
Splunk is a powerful platform for understanding your data. The preview of the Machine Learning Toolkit and Showcase App extends Splunk with a rich suite of advanced analytics and machine learning algorithms. In this session, we'll present an overview of the app architecture and API and show you how to use Splunk to easily perform a variety of tasks, including outlier and anomaly detection, predictive analytics, and event clustering. We’ll use real data to explore these techniques and explain the intuition behind the analytics.
The presentation focuses on how enterprises can turn Internet-of-Things data into action and outlines the 5-A Model for Data Actionability. 5A stands for Action, Assignment, Analysis, Aggregation and Acquisition.
Central questions such as “How do I identify bad quality during or before the process?” or “How do I prevent unplanned downtime?” are addressed in this presentation by Prof. Michael Capone, at the Capgemini Week of Innovation Networks 2016.
Skynet project: Monitor, analyze, scale, and maintain a system in the Cloud - Sylvain Kalache
The goal of Skynet is to stop humans doing repetitive tasks by having a system do them in a better way. System automation should be the default for any system management, so that humans can focus on work that really matters.
See the related blog post for more information: https://engineering.linkedin.com/slideshare/skynet-project-_-monitor-scale-and-auto-heal-system-cloud
On the Application of AI for Failure Management: Problems, Solutions and Algo... - Jorge Cardoso
Artificial Intelligence for IT Operations (AIOps) is a class of software that targets the automation of operational tasks through machine learning technologies. ML algorithms are typically used to support tasks such as anomaly detection, root-cause analysis, failure prevention, failure prediction, and system remediation. AIOps is gaining increasing interest from industry due to the exponential growth of IT operations and the complexity of new technology. Modern applications are assembled from hundreds of dependent microservices distributed across many cloud platforms, leading to extremely complex software systems. Studies show that cloud environments are now too complex to be managed solely by humans. This talk discusses various AIOps problems we have addressed over the years and gives a sketch of the solutions and algorithms we have implemented. Interesting problems include hypervisor anomaly detection, root-cause analysis of software service failures using application logs, multi-modal anomaly detection, root-cause analysis using distributed traces, and verification of virtual private cloud networks.
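As a taste of the simplest of the tasks listed above, anomaly detection on a metric stream can be sketched with a rolling z-score: flag a value that deviates from the recent window by more than a few standard deviations. This is only an illustrative baseline under assumed parameters, not the algorithms the talk describes; production AIOps systems use far richer models.

```python
# Rolling z-score anomaly detector: a value is anomalous if it lies more
# than `threshold` standard deviations from the mean of the recent window.

from collections import deque
import statistics

class ZScoreDetector:
    def __init__(self, window=30, threshold=3.0):
        self.history = deque(maxlen=window)  # sliding window of past values
        self.threshold = threshold

    def observe(self, value):
        """Return True if value is anomalous relative to the recent window."""
        anomalous = False
        if len(self.history) >= 2:
            mean = statistics.mean(self.history)
            stdev = statistics.stdev(self.history)
            if stdev > 0 and abs(value - mean) / stdev > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

# A stable metric around 10-12, then a sudden spike to 50.
detector = ZScoreDetector(window=10, threshold=3.0)
flags = [detector.observe(v)
         for v in [10, 11, 10, 12, 11, 10, 11, 12, 10, 11, 50]]
```

The window size and threshold trade sensitivity against false alarms, which is exactly the tuning problem that motivates learned models in AIOps.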
The large O’Reilly survey on serverless adoption indicated that the majority of enterprises have not yet adopted serverless. They cited the following concerns as the main factors: security, the steep learning curve, vendor lock-in, integration/debugging and observability of serverless applications.
In this talk, I will share my views on these concerns and present how Waylay IO has addressed these challenges. Waylay IO’s mission is to finally unlock all promised benefits of serverless computation, with an intuitive and developer-friendly low-code platform.
How to use probabilistic inference programming for application orchestration..., by Veselin Pizurica
As companies are adopting serverless architectures and moving away from monolithic and microservice-based deployments, they realise that the challenge lies not only in rewriting an old application, but also in the shift towards a new way of thinking. We see many serverless architecture patterns today, such as function chaining, function chaining with rollback (for transactions), async HTTP, fan-out and more. We also have a number of tools on the market that ease serverless application development, of which Apache OpenWhisk (via action chaining or function composites) and Amazon Step Functions are some of the more popular. In this talk, we will present a new, alternative way of building serverless applications based on the orchestration of typed functions, using the probabilistic inference programming paradigm. Inference-based programming brings together the best of the current modelling approaches: the expressiveness and simplicity of decision trees, the debugging capabilities of state machines, the scalability and flexibility of flow-based programming, and logic expressions superior to those of forward-chaining approaches. The talk will include a live demo of how to use probabilistic inference programming for a complex IoT application.
Automation, intelligence and knowledge modelling, by Veselin Pizurica
My talk at http://web11.org/
Numerous talks, news articles and blog posts have been written about the impact of recent advances in technology on our society. To a layman, it is all a mix of "good news/bad news": from improvements in transport, agriculture or health, to jobs disappearing or wealth inequality, just to name a few. But to techies like myself, the real question is somewhat different: how far can we go?
The Internet-of-Things provides us with lots of sensor data. However, the data by themselves do not provide value unless we can turn them into actionable, contextualized information. Big data and data visualization techniques allow us to gain new insights through batch processing and offline analysis. Real-time sensor data analysis and decision-making is often done manually, but to make it scalable it should preferably be automated. Artificial Intelligence provides the framework and tools to go beyond trivial real-time decision and automation use cases for IoT.
My talk on webRTC from June 2013
Demo application using XMPP for signalling
An open-source WebRTC-over-WebSockets implementation is here: https://github.com/pizuricv/webRTC-over-websockets
A Cloud-Based Bayesian Smart Agent Architecture for Internet-of-Things Applic..., by Veselin Pizurica
The First International Conference on Cognitive Internet of Things Technologies
Talk: A Cloud-Based Bayesian Smart Agent Architecture for Internet-of-Things Applications
Authors: Veselin Pizurica, Piet Vandaele
Company: waylay
Website: http://coiot.org/2014/show/program-final
Wireless Communication System, by JeyaPerumal1
Wireless communication involves the transmission of information over a distance without the help of wires, cables or any other forms of electrical conductors.
Wireless communication is a broad term that incorporates all procedures and forms of connecting and communicating between two or more devices using a wireless signal through wireless communication technologies and devices.
Features of Wireless Communication
The evolution of wireless technology has brought many advancements with its effective features.
The transmitted distance can be anywhere between a few meters (for example, a television's remote control) and thousands of kilometers (for example, radio communication).
Wireless communication can be used for cellular telephony, wireless access to the internet, wireless home networking, and so on.
Multi-cluster Kubernetes Networking: Patterns, Projects and Guidelines, by Sanjeev Rampal
Talk presented at Kubernetes Community Day, New York, May 2024.
Technical summary of Multi-Cluster Kubernetes Networking architectures with focus on 4 key topics.
1) Key patterns for Multi-cluster architectures
2) Architectural comparison of several OSS/ CNCF projects to address these patterns
3) Evolution trends for the APIs of these projects
4) Some design recommendations & guidelines for adopting/ deploying these solutions.
# Internet Security: Safeguarding Your Digital World
In the contemporary digital age, the internet is a cornerstone of our daily lives. It connects us to vast amounts of information, provides platforms for communication, enables commerce, and offers endless entertainment. However, with these conveniences come significant security challenges. Internet security is essential to protect our digital identities, sensitive data, and overall online experience. This comprehensive guide explores the multifaceted world of internet security, providing insights into its importance, common threats, and effective strategies to safeguard your digital world.
## Understanding Internet Security
Internet security encompasses the measures and protocols used to protect information, devices, and networks from unauthorized access, attacks, and damage. It involves a wide range of practices designed to safeguard data confidentiality, integrity, and availability. Effective internet security is crucial for individuals, businesses, and governments alike, as cyber threats continue to evolve in complexity and scale.
### Key Components of Internet Security
1. **Confidentiality**: Ensuring that information is accessible only to those authorized to access it.
2. **Integrity**: Protecting information from being altered or tampered with by unauthorized parties.
3. **Availability**: Ensuring that authorized users have reliable access to information and resources when needed.
## Common Internet Security Threats
Cyber threats are numerous and constantly evolving. Understanding these threats is the first step in protecting against them. Some of the most common internet security threats include:
### Malware
Malware, or malicious software, is designed to harm, exploit, or otherwise compromise a device, network, or service. Common types of malware include:
- **Viruses**: Programs that attach themselves to legitimate software and replicate, spreading to other programs and files.
- **Worms**: Standalone malware that replicates itself to spread to other computers.
- **Trojan Horses**: Malicious software disguised as legitimate software.
- **Ransomware**: Malware that encrypts a user's files and demands a ransom for the decryption key.
- **Spyware**: Software that secretly monitors and collects user information.
### Phishing
Phishing is a social engineering attack that aims to steal sensitive information such as usernames, passwords, and credit card details. Attackers often masquerade as trusted entities in email or other communication channels, tricking victims into providing their information.
### Man-in-the-Middle (MitM) Attacks
MitM attacks occur when an attacker intercepts and potentially alters communication between two parties without their knowledge. This can lead to the unauthorized acquisition of sensitive information.
### Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) Attacks
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024 (APNIC)
Ellisha Heppner, Grant Management Lead, presented an update on APNIC Foundation to the PNG DNS Forum held from 6 to 10 May, 2024 in Port Moresby, Papua New Guinea.
Bridging the Digital Gap Brad Spiegel Macon, GA Initiative.pptx, by Brad Spiegel Macon GA
Brad Spiegel Macon GA’s journey exemplifies the profound impact that one individual can have on their community. Through his unwavering dedication to digital inclusion, he’s not only bridging the gap in Macon but also setting an example for others to follow.
A practical look at how to build & run IoT business logic
1. A practical look at how to build &
run IoT business logic
Veselin Pizurica, CTO and co-founder
2.
3. Belgian B2B software company, founded in 2014
Automation and time series analytics for IoT
Deployed at 40 enterprise customers in the USA, Europe and Australia
6. 1. Typical/standard IoT Architecture is not designed with
automation in mind
2. Difficulties (of complexity, scale etc.) in implementing
automation scenarios using existing Rules Engines
10. Automation requires constant connections to:
● Stream data
● Time series (historical) data
● Anomaly detection/prediction models
● Meta model (digital twins, relations etc.)
● ERP (IT) systems
● Notifications (email, SMS, calls …)
● ML (REST)
● API (external services)
11. 1. Typical/standard IoT Architecture is not designed with
automation in mind
2. Difficulties (in complexity, scale etc.) in implementing
automation scenarios using existing Rules Engines
16. 1. Combining multiple non-binary outcomes of functions (observations) in
the rule, beyond Boolean true/false states.
2. Dealing with majority voting conditions in the rule
3. Handling conditional executions of functions based on the outcomes of
previous observations
Modeling complex logic | Modeling time | Modeling uncertainty | Is the technology powerful enough?
17. 1. Set up a condition where any 2 out of 3 measurements are out of range:
a. Is the room temperature below 18 or above 26?
b. Is the humidity below 60 or above 80?
c. Is the CO2 level above 500?
2. If the condition is met, check the weather outside.
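The 2-out-of-3 condition above can be sketched in a few lines of Python. This is a minimal illustration, not any particular engine's API; `out_of_range` and `room_alert` are hypothetical helper names, with the thresholds taken from the slide:

```python
def out_of_range(value, low=None, high=None):
    """True when the measurement falls outside its allowed band."""
    if low is not None and value < low:
        return True
    if high is not None and value > high:
        return True
    return False

def room_alert(temperature, humidity, co2):
    """Majority vote: fire when any 2 of the 3 measurements are out of range."""
    votes = [
        out_of_range(temperature, low=18, high=26),  # room temperature band
        out_of_range(humidity, low=60, high=80),     # humidity band
        out_of_range(co2, high=500),                 # CO2 has a ceiling only
    ]
    return sum(votes) >= 2

# Only when the majority condition holds would the rule proceed to the
# conditional step of checking the weather outside.
```

The majority vote collapses the grid of per-sensor states into a single boolean, which is exactly the kind of logic that pure if/else trees make verbose.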
18. Combining multiple non-binary outcomes and
dealing with majority voting conditions in the rule (54 possible outcomes)
Handling conditional executions of functions
based on the outcomes of previous
observations
CO2 | Temperature | Humidity
Above | In Range | Below
Above | In Range | Above
Above | Below | In Range
In Range | Below | Below
Below | Below | Below
Above | Below | Below
In Range | Below | Above
Below | Below | Above
Above | Below | Above
Above | Above | In Range
In Range | Above | Below
Below | Above | Below
Above | Above | Below
In Range | Above | Above
Below | Above | Above
Above | Above | Above
19. 1. Dealing with the past
→ handling expired or soon-to-expire information
2. Dealing with the present
→ combining asynchronous and synchronous information
3. Dealing with the future
→ forecasting for prediction and anomaly detection
20. Two door motion sensors would trigger further
processing only if both events happen within 10 seconds
Searching for “on/off/on” events, before taking
further actions
Dealing with time in the rule
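Both time patterns on this slide (a coincidence window and an event sequence) can be sketched as follows. A minimal illustration assuming events carry epoch timestamps; `within_window` and `has_pattern` are hypothetical helper names, not part of any engine API:

```python
def within_window(t_a, t_b, window_s=10):
    """True when two timestamped events occur within window_s seconds of each other."""
    return abs(t_a - t_b) <= window_s

def has_pattern(events, pattern=("on", "off", "on")):
    """Scan an ordered event list for the given subsequence, e.g. on/off/on."""
    i = 0
    for e in events:
        if e == pattern[i]:
            i += 1                      # matched the next pattern element
            if i == len(pattern):
                return True             # whole pattern found in order
    return False
```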
21. A stream-data sensor executes as soon as it receives stream data, while the polling sensor
checks the outside temperature every 5 minutes.
Sync and async events in the rule
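The difference between the two trigger styles can be modelled with two small classes. This is an illustrative sketch only, assuming a broker that calls `push()` on data arrival and a scheduler that calls `tick()` periodically; both class names are invented for the example:

```python
class StreamSensor:
    """Evaluated immediately, every time the broker pushes an observation."""
    def __init__(self, on_data):
        self.on_data = on_data

    def push(self, value):
        self.on_data(value)              # run the rule as soon as data arrives

class PollingSensor:
    """Evaluated on a fixed schedule, e.g. every 5 minutes (300 s)."""
    def __init__(self, fetch, interval_s=300):
        self.fetch = fetch               # e.g. a call to a weather API
        self.interval_s = interval_s
        self.last_poll = float("-inf")   # so the first tick always polls

    def tick(self, now):
        if now - self.last_poll >= self.interval_s:
            self.last_poll = now
            return self.fetch()          # poll the external source
        return None                      # not due yet
```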
23. Using anomalies and predictions inside the
rules, just like any other sensory input
Anomaly detection & prediction
25. 1. Modeling the utility function
→ as we rank and define our preferences among alternative uncertain outcomes, we need rules
where for the same outcome of an observation, different actions can be taken.
2. Support for probabilistic reasoning
→ for even more advanced use cases, the rule engine should support logic building based on
the likelihood of different outcomes for one given sensory output.
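As a concrete illustration of a utility function over uncertain outcomes: given a failure likelihood from a prediction model, a rule can pick the action with the highest expected utility. All probabilities and utility values below are invented for the example, not taken from the talk:

```python
def expected_utility(action, outcome_probs, utility):
    """Sum of P(outcome) * U(action, outcome) over all outcomes."""
    return sum(p * utility[(action, outcome)]
               for outcome, p in outcome_probs.items())

def best_action(actions, outcome_probs, utility):
    """Choose the action that maximises expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utility))

# Illustrative numbers: a prediction model says the asset fails with P = 0.3.
utility = {
    ("dispatch", "fails"): 100,  # technician on site before the failure
    ("dispatch", "ok"): -20,     # unnecessary truck roll
    ("wait", "fails"): -200,     # unplanned downtime
    ("wait", "ok"): 0,
}
probs = {"fails": 0.3, "ok": 0.7}
```

Here the same observation ("maybe failing") leads to dispatching, because waiting has a far lower expected utility; with a much smaller failure probability the same rule would choose to wait.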
26. 1. Fridge is not open before 10AM,
2. Medicines were not taken in the morning,
3. There was no motion detected in the bathroom for the past 8 hours
If two out of three indicators are present, send an SMS to the children (so they can try
to call and reach out); if all three indicators are present, call the ambulance.
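The escalation rule above maps directly onto a small function; `sms_children` and `call_ambulance` are placeholder action labels standing in for the real SMS and call integrations:

```python
def escalation(fridge_not_opened, meds_not_taken, no_bathroom_motion):
    """Count the indicators and escalate: 2 of 3 -> SMS, 3 of 3 -> ambulance."""
    indicators = sum([fridge_not_opened, meds_not_taken, no_bathroom_motion])
    if indicators == 3:
        return "call_ambulance"
    if indicators == 2:
        return "sms_children"
    return "no_action"
```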
28. 1. The intent of the rule should be easily understandable by all users,
developers and business owners alike.
2. The representation of the logic should be compact.
3. Simulation and debugging (exploration) should be available:
a. during design time - verify the intended logic by testing rules against data logs or
simulating logic statements to verify outcomes.
b. at runtime - reconstruct decisions made by the rule engine based on the rules logs and
the states of observations.
Can it easily be deployed in IoT use cases? Explainability | Adaptability | Operability | Scalability
29.
Explainability - simple translation from rules schema to UI
API rule
30. Fridge temperature goes above 15 degrees
a. We need to check the asset location in order to create a ticket
b. In case we find the person responsible for maintenance and the location is
known:
i. Check who is on support call for the week
ii. Send them an email
iii. Send an SMS
31. Find asset in SAP database
Fridge temperature above
Create ticket
Get the day in the week
Find a support person and their contact details
We know the location
and contact person
Notify that person
35. 1. Flexibility
→ changing and updating rules should be easy and performing these changes at runtime
should be possible with no service interruption or downtime.
2. Extensibility
→ in order to account for future growth, the rule engine should be capable of supporting
extensions and integration with external systems, such as third-party API services.
36. A water sample is evaluated for compliance with an established water
standard. Sometimes, the presence of one condition modifies another.
For example, consider a case in which new research shows that the
permissible concentration of benzene (nominally at 0.005 mg/L) should be
reduced in the presence of carbofuran (permissible limit of 0.04 mg/L) by 50
percent (new limit = 0.0025 mg/L).
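The adaptable limit can be sketched as below. One assumption is made explicit: "presence of carbofuran" is read here as exceeding its own permissible limit of 0.04 mg/L, since the slide leaves the exact trigger open:

```python
BENZENE_NOMINAL_LIMIT = 0.005  # mg/L, nominal permissible concentration
CARBOFURAN_LIMIT = 0.04        # mg/L, permissible limit for carbofuran

def effective_benzene_limit(carbofuran_mg_l):
    """Benzene limit, reduced by 50% when carbofuran exceeds its own limit."""
    if carbofuran_mg_l > CARBOFURAN_LIMIT:
        return BENZENE_NOMINAL_LIMIT / 2   # new limit = 0.0025 mg/L
    return BENZENE_NOMINAL_LIMIT

def benzene_compliant(benzene_mg_l, carbofuran_mg_l):
    """Evaluate a water sample against the (possibly reduced) benzene limit."""
    return benzene_mg_l <= effective_benzene_limit(carbofuran_mg_l)
```

The point of the slide is that such a regulatory change should require only a template update, not a redeployment of every running rule.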
37. Benzene level monitoring template
Running rule in production
Create rule with only one threshold
38. 1. Template update:
A. change the threshold for benzene,
B. add an additional condition for carbofuran levels,
C. change the alarm logic
2. Update rules at runtime, with zero downtime
3. New logic applied
Update rule at runtime
39. Customer starts a POC with one ERP system and wants to roll out another one
in production
Customer wants to switch from one CRM provider to another one
40. Use case with one set of IT systems
The same use case with Salesforce
41. ● Templating
.. so that you can apply the same rule to multiple devices, or to similar use cases
● Versioning
.. of both templates and running rules, for snapshotting and rollbacks
● Searchability
.. to easily search rules by name, API in use, type of device and other filters
● Rules analytics
.. to understand which of your rules triggered the most, most common actions
● Bulk upgrades
.. to perform lifecycle management across groups of rules, useful for updates or end-of-life
43. Bulk template upgrade
Version changes
Versioning & upgrades
44. Fuzzy search, based on
template or task names
Search by sensors or
actuators that are in use
Searching rules
45. To enable easy sharding, the rules engine should provide a good initial
framework and abstractions for distributed computing.
46. 1. Sharded inference engine for rules evaluation
2. CEP engine for fast stream in memory processing
3. API calls delegated to sharded sandbox executors - stateless
serverless pattern
4. Sharded broker (protocol bridge with sharded stream forwarders)
5. Time Series database based on Cassandra
6. Metamodels backed by Elastic
7. Anomaly detection and prediction modules are sharded
microservices
52. Use the benchmark to evaluate any automation
tool for IoT
https://www.waylay.io/download-how-to-choose-a-rules-engine.html
Check out the in-depth scores for popular
automation frameworks
https://www.waylay.io/download-the-guide-to-iot-rules-engines.html