A presentation given at the European H-Cloud Conference motivating decentralisation as a means to improve energy efficiency, privacy, and opportunities to monetise your digital footprint.
Vortex II -- The Industrial IoT Connectivity Standard (Angelo Corsaro)
The large majority of commercial IoT platforms target consumer applications and fall short of addressing the requirements of Industrial IoT. Vortex has always focused on the challenges characteristic of Industrial IoT systems, and with the 2.4 release it sets a new standard!
This presentation will (1) introduce the new features of Vortex 2.4, (2) explain how Vortex 2.4 addresses the requirements of Industrial Internet of Things applications better than any other existing platform, and (3) showcase how innovative companies are using Vortex to build leading-edge Industrial Internet of Things applications.
zenoh -- the ZEro Network OverHead protocol (Angelo Corsaro)
This presentation introduces the key ideas behind zenoh -- an Internet-scale data-centric protocol that unifies data sharing between any kind of device, including those constrained with respect to node resources, such as computational resources and power, as well as the network.
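The keyed, data-centric model behind zenoh can be illustrated with a toy in-process sketch: data is published to hierarchical keys, and subscribers express interest through key expressions with wildcards. The `DataSpace` class, the key names and the wildcard scheme below are hypothetical teaching constructs, not the actual zenoh API.

```python
import fnmatch

class DataSpace:
    """Toy in-process broker mimicking a keyed pub/sub model.
    Keys are hierarchical paths; subscribers may use '*' wildcards."""
    def __init__(self):
        self.subs = []  # list of (key_expr, callback) pairs

    def subscribe(self, key_expr, callback):
        self.subs.append((key_expr, callback))

    def put(self, key, value):
        # Deliver the sample to every subscriber whose expression matches.
        for expr, cb in self.subs:
            if fnmatch.fnmatch(key, expr):
                cb(key, value)

ds = DataSpace()
readings = []
ds.subscribe("home/*/temperature", lambda k, v: readings.append((k, v)))
ds.put("home/kitchen/temperature", 21.5)  # matches the expression
ds.put("home/kitchen/humidity", 0.55)     # no matching subscriber
```

The point of the sketch is the decoupling: the publisher never names its consumers, and a single key expression can select data from many devices at once.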
Smart, Secure and Efficient Data Sharing in IoT (Angelo Corsaro)
The value of the Internet of Things is the data and the insights derived from it to optimise and improve potentially every aspect of our modern society. As IoT extends its application from consumer to ever more demanding industrial applications, the ability to smartly, securely and efficiently share data makes the difference between success and failure.
This presentation will (1) introduce the data-sharing challenges posed by a large class of IoT applications often referred to as Industrial IoT (IIoT) applications, (2) highlight how the standards identified by the Industrial Internet of Things Reference Architecture, such as DDS, address the need for smart, secure and efficient data sharing, and (3) showcase how this technology is used today in several IoT systems to ensure smart, secure and efficient data sharing.
Data Sharing in Extremely Resource-Constrained Environments (Angelo Corsaro)
This presentation introduces XRCE, a new protocol for very efficiently distributing data in resource-constrained (power, network, computation and storage) environments. XRCE greatly improves on the wire efficiency of existing protocols and in many cases provides higher-level abstractions.
Making the right data available at the right time, at the right place, securely and efficiently, whilst promoting interoperability, is a key need for virtually any IoT application. After all, IoT is about leveraging access to data that used to be unavailable in order to improve the ability to react to, manage, predict and preserve a cyber-physical system.
The Data Distribution Service (DDS) is a standard for interoperable, secure, and efficient data sharing, used at the foundation of some of the most challenging Consumer and Industrial IoT applications, such as Smart Cities, Autonomous Vehicles, Smart Grids, Smart Farming, Home Automation and Connected Medical Devices.
In this presentation we will (1) introduce the Eclipse Cyclone DDS project, (2) provide a quick intro that will get you started with Cyclone DDS, (3) present a few Cyclone DDS use cases, and (4) share the Cyclone DDS development road-map.
Building IoT Applications with Vortex and the Intel Edison Starter Kit (Angelo Corsaro)
Whilst there isn’t universal agreement on what exactly IoT is, nor on the line that separates Consumer from Industrial IoT, everyone unanimously agrees that unconstrained access to data is the game-changing dimension of IoT.
Vortex positions itself as the best data-sharing platform for IoT, enabling data to flow unconstrained across devices and at any scale.
This presentation will demonstrate how quickly and effectively you can build real-world IoT applications that scale using Vortex and the Intel Edison Starter Kit. Specifically, you will learn how to leverage Vortex to virtualise devices, integrate different protocols, flexibly execute analytics where it makes the most sense, and leverage Cloud as well as Fog computing architectures.
Throughout the webcast we will use Intel’s Edison Starter Kit, available at https://software.intel.com/en-us/iot/hardware/edison; you will be able to download our code examples before the webcast to participate in the live demo!
Microservices Architecture with Vortex -- Part I (Angelo Corsaro)
Microservice Architectures, which are the norm in some domains, have recently received a lot of attention in general computing and are becoming the mainstream architectural style for developing distributed systems. As suggested by the name, the main idea behind microservices is to decompose complex applications into small, autonomous and loosely coupled processes communicating through a language- and platform-independent API. This architectural style facilitates a modular approach to system-building.
This webcast will (1) introduce the main principles of the Microservice Architecture, (2) showcase how the Global Data Space abstraction provided by Vortex ideally supports the microservices architectural pattern, and (3) walk you through the design and implementation of a microservice application for a real-world use case.
Introduced in 2004, the Data Distribution Service (DDS) has been steadily growing in popularity and adoption. Today, DDS is at the heart of a large number of mission- and business-critical systems, such as Air Traffic Control and Management, Train Control Systems, Energy Production Systems, Medical Devices, Autonomous Vehicles, Smart Cities and NASA’s Kennedy Space Centre Launch System.
Considering the technological trends toward data-centricity and the rate of adoption, tomorrow DDS will be at the heart of an incredible number of Industrial IoT systems.
To help you become an expert in DDS and exploit your skills in the growing DDS market, we have designed the DDS in Action webcast series. This series is a learning journey through which you will (1) discover the essence of DDS, (2) understand how to effectively exploit DDS to architect and program distributed applications that perform and scale, (3) learn the key DDS programming idioms and architectural patterns, (4) understand how to characterise DDS performance and configure it for optimal latency/throughput, (5) grow your system to Internet scale, and (6) secure your DDS system.
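A small taste of what makes DDS data-centric: a keyed topic where the middleware caches the latest sample per instance, so a late-joining reader immediately observes the current state, roughly the effect of DDS durability and history QoS. The `Topic` and `Reader` classes below are hypothetical teaching constructs, not a DDS API.

```python
class Topic:
    """Toy keyed topic: caches the latest sample per key (instance),
    so a late-joining reader sees current state immediately."""
    def __init__(self):
        self.instances = {}   # key -> latest sample
        self.readers = []

    def write(self, key, sample):
        self.instances[key] = sample
        for r in self.readers:
            r.on_data(key, sample)

    def add_reader(self, reader):
        self.readers.append(reader)
        # Durability: replay the last value of each instance to the newcomer.
        for key, sample in self.instances.items():
            reader.on_data(key, sample)

class Reader:
    def __init__(self):
        self.seen = {}
    def on_data(self, key, sample):
        self.seen[key] = sample

topic = Topic()
topic.write("vehicle-42", {"speed": 88})
late = Reader()
topic.add_reader(late)   # joins after the write, yet still sees the state
```

This "last value per instance" behaviour is what lets data-centric systems treat a topic as shared state rather than a stream of transient messages.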
The Data Distribution Service: The Communication Middleware Fabric for Scala... (Angelo Corsaro)
This paper introduces DDS, explains its extensible type system, and provides a set of guidelines on how to design extensible and efficient DDS data models. Throughout the paper, the applicability of DDS to Systems of Systems (SoS) is motivated and discussed.
Zenoh is a rapidly growing Eclipse project that unifies data in motion, data at rest and computations. It elegantly blends traditional pub/sub with geo-distributed storage, queries and computations, while retaining a level of time and space efficiency well beyond that of any mainstream stack. This presentation will provide an introduction to Eclipse Zenoh along with a crisp explanation of the challenges that motivated the creation of the project. We will go through a series of real-world use cases that demonstrate the advantages Zenoh brings in enabling and optimising typical edge scenarios and in simplifying the development of distributed applications at any scale.
An increasing number of applications, such as smart cities, mobile health and smart grids, require ubiquitous distribution of, and access to, real-time information from, and across, a vast variety of devices, ranging from embedded sensors to mobile devices. While the problem of ubiquity is solved at the computing and network connectivity level, it is by no means solved with respect to (1) real-time and (2) resource-efficient (e.g. battery life and network) data distribution.
This webcast will unveil PrismTech’s “DDS Everywhere” product strategy and will introduce a series of innovations that have extended the OpenSplice ecosystem to seamlessly share data between embedded devices, traditional IT infrastructures, cloud applications and mobile devices.
Fog computing has emerged as a new paradigm for architecting IoT applications that require greater scalability, performance and security. This talk will motivate the need for Fog Computing and explain what it is and how it differs from other Telco initiatives such as Mobile/Multi-access Edge Computing.
Reactive Data Centric Architectures with Vortex, Spark and ReactiveX (Angelo Corsaro)
An increasing number of Software Architects are realising that data is the most important asset of a system and are starting to embrace the Data-Centric revolution (datacentricmanifesto.org), setting data at the centre of their architecture and modelling applications as “visitors” to the data. At the same time, architects have also realised that reactive architectures (reactivemanifesto.org) facilitate the design of scalable, fault-tolerant and high-performance systems.
Yet, few architects have realised that reactive and data-centric architectures are two sides of the same coin and should always go hand in hand.
This presentation shows how reactive data-centric systems can be designed and built taking advantage of Vortex data sharing capabilities along with its integration with reactive and data-centric processing technologies such as Apache Spark and ReactiveX.
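The reactive half of this pairing can be sketched in a few lines: a push-based observable with `map` and `filter` operators applied to a stream of sensor samples. This is a hypothetical miniature in the spirit of ReactiveX, not the Rx library itself, and the temperature pipeline is an invented example.

```python
class Observable:
    """Minimal push-based stream: operators return new Observables
    wired to the upstream via callbacks."""
    def __init__(self):
        self.observers = []

    def subscribe(self, fn):
        self.observers.append(fn)

    def emit(self, x):
        for fn in self.observers:
            fn(x)

    def map(self, f):
        out = Observable()
        self.subscribe(lambda x: out.emit(f(x)))
        return out

    def filter(self, pred):
        out = Observable()
        self.subscribe(lambda x: pred(x) and out.emit(x))
        return out

temps = Observable()          # stream of Celsius readings
alerts = []
temps.map(lambda c: c * 9 / 5 + 32) \
     .filter(lambda f: f > 100) \
     .subscribe(alerts.append)
temps.emit(36)   # below the threshold, filtered out
temps.emit(40)   # converts to 104.0 F, triggers an alert
```

The pairing with data-centricity comes from feeding such pipelines directly from topic subscriptions, so application logic reacts to data as it arrives rather than polling for it.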
The OMG DDS standard has recently received an incredible level of attention and press coverage due to its relevance for Consumer and Industrial IoT applications and its adoption as part of the Industrial Internet Consortium Reference Architecture. The main reason for the excitement in DDS stems from its data-centricity, efficiency, Internet-wide scalability, high-availability and configurability.
Although DDS provides a very feature rich platform for architecting distributed systems, it focuses on doing one thing well — namely data-sharing. As such it does not provide first-class support for abstractions such as distributed mutual exclusion, distributed barriers, leader election, consensus, atomic multicast, distributed queues, etc.
As a result, many architects tend to devise by themselves – assuming the DDS primitives as a foundation – the (hopefully correct) algorithms for classical problems such as fault-detection, leader election, consensus, distributed mutual exclusion, distributed barriers, atomic multicast, distributed queues, etc.
This webcast explores DDS-based distributed algorithms for many classical, yet fundamental, problems in distributed systems. By attending the webcast you will learn how recurring problems arising in the design of distributed systems can be addressed using algorithms that are correct and perform well.
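To make the flavour of such algorithms concrete, here is a deliberately simplified leader election over a broadcast bus: every node announces itself, and all nodes deterministically pick the lowest announced id, so they agree without a coordinator. The `Bus` and `Node` classes are hypothetical stand-ins for DDS topics and participants, and the sketch ignores failures and message loss that a production algorithm must handle.

```python
class Bus:
    """Toy broadcast medium standing in for a pub/sub topic."""
    def __init__(self):
        self.nodes = []

    def broadcast(self, msg):
        for n in self.nodes:
            n.on_message(msg)

class Node:
    """Announces itself; everyone elects the lowest known id."""
    def __init__(self, node_id, bus):
        self.id = node_id
        self.known = {node_id}
        self.bus = bus
        bus.nodes.append(self)

    def announce(self):
        self.bus.broadcast(("alive", self.id))

    def on_message(self, msg):
        kind, sender = msg
        if kind == "alive":
            self.known.add(sender)

    def leader(self):
        # Deterministic rule: all nodes agree on the minimum id.
        return min(self.known)

bus = Bus()
nodes = [Node(i, bus) for i in (3, 1, 2)]
for n in nodes:
    n.announce()
# Every node independently computes the same leader: node 1.
```

A real DDS-based election would additionally exploit liveliness QoS to detect a failed leader and re-run the election, which is exactly the kind of subtlety such a webcast addresses.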
Fog Computing is a paradigm that complements and extends cloud computing by providing end-to-end virtualisation of computing, storage and communication resources. As such, fog computing allows applications to be transparently provisioned and managed end-to-end. This presentation first motivates the need for fog computing, then introduces fog05, the first and only open-source fog computing platform!
This presentation introduces Vortex by means of a running example. Throughout the presentation we will show how Vortex makes it easy to build a micro-blogging platform à la Twitter.
IoT Protocols Integration with Vortex Gateway (Angelo Corsaro)
Not all Consumer and Industrial Internet of Things (CIoT, IIoT) applications have the luxury of starting with a blank sheet of paper and designing the system from the ground up. Often IoT features have to be built around existing systems designed using proprietary technologies or vertical standards. As a consequence, the ability to easily integrate communication standards, proprietary protocols and data stores is key to accelerating the development of IoT capabilities.
This webcast will showcase how the Vortex Gateway can be used to easily integrate different communication standards and data stores, as well as to quickly develop connectors for proprietary technologies.
Vortex 2.0 -- The Industrial Internet of Things Platform (Angelo Corsaro)
The large majority of commercial IoT platforms target consumer applications and fall short of addressing the requirements of Industrial IoT. Vortex has always focused on the challenges characteristic of Industrial IoT systems, and with the 2.0 release it sets a new standard!
This presentation will (1) introduce the new features of Vortex 2.0, (2) explain how Vortex 2.0 addresses the requirements of Industrial Internet of Things applications better than any other existing platform, and (3) showcase how innovative companies are using Vortex to build leading-edge Industrial Internet of Things applications.
The Object Management Group (OMG) Data Distribution Service (DDS) and the OPC Foundation OLE for Process Control Unified Architecture (OPC-UA) are commonly considered two of the most relevant technologies for data and information management in the Industrial Internet of Things. Although several articles and quotes on the two technologies have appeared in various media in the past six months, there is still incredible confusion about how the two technologies compare and what their applicability is.
This presentation was motivated by the author's frustration with reading and hearing so many misconceptions as well as “apples-to-oranges” comparisons. Thus, to contribute clarity and help with positioning and applicability, this webcast will (1) explain the key concepts behind DDS and OPC-UA and relate them to the reasons why these technologies were created in the first place, (2) clarify the differences between, and the applicability in IoT of, DDS and OPC-UA, and (3) report on the ongoing standardisation activities looking at DDS/OPC-UA inter-working.
High Performance Distributed Computing with DDS and Scala (Angelo Corsaro)
The past few years have witnessed a tremendous increase in the amount of real-time data that applications in domains such as web analytics, social media, automated trading and smart grids have to deal with. The challenges faced by these applications, commonly called Big Data applications, are manifold, as the staggering growth in volumes is complicating the collection, storage, analysis and distribution of data. This paper focuses on the collection and distribution of data and introduces the Data Distribution Service for Real-Time Systems (DDS), an Object Management Group (OMG) standard for high-performance data dissemination used today in several Big Data applications. The paper then shows how the combination of DDS with functional programming languages, or languages incorporating functional features like Scala, makes a very natural and effective combination for dealing with Big Data applications.
Reactive Data Centric Architectures with DDS (Angelo Corsaro)
An increasing number of Software Architects realise that data is the most important asset of a system and start embracing the Data-Centric revolution (datacentricmanifesto.org) - setting data at the centre of their architecture and modelling applications as "visitors" to the data. At the same time, architects have also realised how reactive architectures (reactivemanifesto.org) facilitate the design of scalable, fault-tolerant and high performance systems. Few architects have yet realised how reactive and data-centric architectures are two sides of the same coin and as such should always go together. This presentation explains why reactive data-centric architecture is the future and how the OMG DDS Standard is uniquely positioned to support this paradigm shift. A series of case studies will highlight how the data-centric revolution is being applied in practice and what measurable benefits it is providing.
Edge computing from standard to actual infrastructure deployment and software... (Desmond Yuen)
Edge Computing is widely recognized as a key technology, supporting innovative services for a wide ecosystem of companies, ranging from operators, infrastructure owners, technology leaders, application and content providers, innovators and startups.
The advent of 5G systems has also increased the attention paid to Edge Computing [1][2], as this technology supports many of the most challenging 5G use cases, especially those for which high end-to-end (E2E) performance is required. In fact, due to the presence of processing platforms and application endpoints in close proximity to end-users, Edge Computing offers a low-latency environment and significant gains in terms of network bandwidth, which are key benefits for the deployment of 5G networks.
Fog Computing: A Platform for Internet of Things and Analytics (HarshitParkar6677)
The Internet of Things (IoT) brings more than an explosive proliferation of endpoints. It is disruptive in several ways. In this chapter we examine those disruptions, and propose a hierarchical distributed architecture that extends from the edge of the network to the core, nicknamed Fog Computing. In particular, we pay attention to a new dimension that IoT adds to Big Data and Analytics: a massively distributed number of sources at the edge.
Micro services Architecture with Vortex -- Part IAngelo Corsaro
Microservice Architectures — which are the norm in some domains — have recently received lots of attentions in general computing and are becoming the mainstream architectural style to develop distributed systems. As suggested by the name, the main idea behind micro services is to decompose complex applications in, small, autonomous and loosely coupled processes communicating through a language and platform independent API. This architectural style facilitates a modular approach to system-building.
This webcast will (1) introduce the main principles of the Microservice Architecture, (2) showcase how the Global Data Space abstraction provided by Vortex ideally support thee microservices architectural pattern, and (3) walk you through the design and implementation of a micro service application for a real-world use case.
Introduced in 2004, the Data Distribution Service (DDS) has been steadily growing in popularity and adoption. Today, DDS is at the heart of a large number of mission and business critical systems, such as, Air Traffic Control and Management, Train Control Systems, Energy Production Systems, Medical Devices, Autonomous Vehicles, Smart Cities and NASA’s Kennedy Space Centre Launch System.
Considered the technological trends toward data-centricity and the rate of adoption, tomorrow, DDS will be at the at the heart of an incredible number of Industrial IoT systems.
To help you become an expert in DDS and exploit your skills in the growing DDS market, we have designed the DDS in Action webcast series. This series is a learning journey through which you will (1) discover the essence of DDS, (2) understand how to effectively exploit DDS to architect and program distributed applications that perform and scale, (3) learn the key DDS programming idioms and architectural patterns, (4) understand how to characterise DDS performances and configure for optimal latency/throughput, (5) grow your system to Internet scale, and (6) secure you DDS system.
zenoh -- the ZEro Network OverHead protocolAngelo Corsaro
This presentation introduces the key ideas behind zenoh -- an Internet scale data-centric protocol that unifies data-sharing between any kind of device including those constrained with respect to the node resources, such as computational resources and power, as well as the network.
The Data Distribution Service: The Communication Middleware Fabric for Scala...Angelo Corsaro
This paper introduces DDS, explains its extensible type system, and provides a set of guidelines on how to design extensible and efficient DDS data models. Throughout the paper the applicability of DDS to SoS is motivated and discussed.
Zenoh is rapidly growing Eclipse project that unifies data in motion, data at rest and computations. It elegantly blends traditional pub/sub with geo distributed storage, queries and computations, while retaining a level of time and space efficiency that is well beyond any of the mainstream stacks. This presentation will provide an introduction to Eclipse Zenoh along with a crisp explanation of the challenges that motivated the creation of this project. We will go through a series of real-world use cases that demonstrate the advantages brought by Zenoh in enabling and optimising typical edge scenarios and in simplifying the development of any scale distributed applications.
An increasing number of applications, such as smart cities, mobile-health and smart grids, require to ubiquitously distribute and access real-time information from, and across, a vast variety of devices, ranging from embedded sensors to mobile devices. While the problem of ubiquity is solved at a computing and network connectivity level, it is by no means solved with respect to (1) real-time, and (2) resource efficient (e.g. battery life and network), data distribution.
This webcast will unveil PrismTech’s “DDS Everywhere” product strategy and will introduces a series of Innovations that have extended the OpenSplice ecosystem to seamlessly share data between embedded devices, traditional IT infrastructures, cloud applications and mobile devices.
Fog computing has emerged as a new paradigm for architecting IoT applications that require greater scalability, performance and security. This talk will motivate the need to Fog Computing and explain what it is and how it differs from other initiatives in Telco such as Mobile/Multiple-Access Edge Computing.
Reactive Data Centric Architectures with Vortex, Spark and ReactiveXAngelo Corsaro
An increasing number of Software Architects are realising that data is the most important asset of a system and are staring to embrace the Data-Centric revolution (datacentricmanifesto.org) — setting data at the center of their architecture and modelling applications as “visitors” to the data. At the same time, architect have also realised that reactive architectures (reactivemanifesto.org) facilitates the design of scalable, fault-tolerant and high performance systems.
Yet, few architects have realised that reactive and data-centric architectures are the two sides to the same coin and should always go hand in hand.
This presentation shows how reactive data-centric systems can be designed and built taking advantage of Vortex data sharing capabilities along with its integration with reactive and data-centric processing technologies such as Apache Spark and ReactiveX.
The OMG DDS standard has recently received an incredible level of attention and press coverage due to its relevance for Consumer and Industrial IoT applications and its adoption as part of the Industrial Internet Consortium Reference Architecture. The main reason for the excitement in DDS stems from its data-centricity, efficiency, Internet-wide scalability, high-availability and configurability.
Although DDS provides a very feature rich platform for architecting distributed systems, it focuses on doing one thing well — namely data-sharing. As such it does not provide first-class support for abstractions such as distributed mutual exclusion, distributed barriers, leader election, consensus, atomic multicast, distributed queues, etc.
As a result, many architects tend to devise by themselves – assuming the DDS primitives as a foundation – the (hopefully correct) algorithms for classical problems such as fault-detection, leader election, consensus, distributed mutual exclusion, distributed barriers, atomic multicast, distributed queues, etc.
This Webcast explores DDS-based distributed algorithms for many classical, yet fundamental, problems in distributed systems. By attending the webcast you will learn how recurring problems arising in the design of distributed systems can be addressed using algorithm that are correct and perform well.
Fog Computing is a paradigm that complements and extends cloud computing by providing an end-to-end virtualisation of computing, storage and communication resources. As such, fog computing allow applications to be transparently provisioned and managed end-to-end. This presentation first motivates the need for fog computing, then introduced fog05 the first and only Open Source fog computing platform!
This presentation introduced Vortex by means of a running example. Throughout the presentation we will show how Vortex makes it easy to build a micro-blogging platform a la Twitter.
IoT Protocols Integration with Vortex GatewayAngelo Corsaro
Not all Consumer and Industrial Internet of Things (CIoT, IIoT) applications have the luxury of starting with blank sheet of paper and design the system ground-up. Often IoT features have to be built around existing systems designed using proprietary technologies or vertical standards. As a consequence the ability to easily integrate communication standards, proprietary protocols and data stores is key in accelerating the development of IoT capabilities.
This webcast will showcase how the Vortex Gateway can be used to easily integrate different communication standards, data stores as well as quickly develop connectors for proprietary technologies.
Vortex 2.0 -- The Industrial Internet of Things PlatformAngelo Corsaro
The large majority of commercial IoT platforms target consumer applications and fall short in addressing the requirements characteristic of Industrial IoT. Vortex has always focused on addressing the challenges characteristic of Industrial IoT systems and with 2.0 release sets a the a new standard!
This presentation will (1) introduce the new features introduced in with Vortex 2.0, (2) explain how Vortex 2.0 addresses the requirements of Industrial Internet of Things application better than any other existing platform, and (3)showcase how innovative companies are using Vortex for building leading edge Industrial Internet of Things applications.
The Object Management Group (OMG) Data Distribution Service (DDS) and the OPC Foundation OLE for Process Control Unified Architecture (OPC-UA) are commonly considered as two of the most relevant technologies for data and information management in the Industrial Internet of Things. Although several articles and quotes on the two technologies have appeared on various medias in the past six months, there is still an incredible confusion on how the two technology compare and what’s their applicability.
This presentation, was motivated by the author's frustration with reading and hearing so many mis-conceptions as well as “apple-to-oranges” comparisons. Thus to contribute to clarity and help with positioning and applicability this webcast will (1) explain the key concepts behind DDS and OPC-UA and relate them with the reason why these technologies were created in the first place, (2) clarify the differences and applicability in IoT for DDS and OPC-UA, and (3) report on the ongoing standardisation activities that are looking at DDS/OPC-UA inter-working.
High Performance Distributed Computing with DDS and ScalaAngelo Corsaro
The past few years have witnessed a tremendous increase in the amount of real-time data that applications in domains, such as, web analytics, social media, automated trading, smart grids, etc., have to deal with. The challenges faced by these applications, commonly called Big Data Applications, are manifold as the staggering growth in volumes is com- plicating the collection, storage, analysis and distribution of data. This paper focuses on the collection and distri- bution of data and introduces the Data Distribution Ser- vice for Real-Time Systems (DDS) an Object Management Group (OMG) standard for high performance data dissemi- nation used today in several Big Data applications. The pa- per then shows how the combination of DDS with functional programming languages, or at least incorporating functional features like the Scala programming language, makes a very natural and effective combination for dealing with Big Data applications.
Reactive Data Centric Architectures with DDSAngelo Corsaro
An increasing number of Software Architects realise that data is the most important asset of a system and start embracing the Data-Centric revolution (datacentricmanifesto.org) - setting data at the centre of their architecture and modelling applications as "visitors" to the data. At the same time, architects have also realised how reactive architectures (reactivemanifesto.org) facilitate the design of scalable, fault-tolerant and high performance systems. Few architects have yet realised how reactive and data-centric architectures are two sides of the same coin and as such should always go together. This presentation explains why reactive data-centric architecture is the future and how the OMG DDS Standard is uniquely positioned to support this paradigm shift. A series of case studies will highlight how the data-centric revolution is being applied in practice and what measurable benefits it is providing.
Edge computing from standard to actual infrastructure deployment and software...DESMOND YUEN
Edge Computing is widely recognized as a key technology, supporting innovative services for a wide ecosystem of companies, ranging from operators and infrastructure owners to technology leaders, application and content providers, innovators and startups.
The advent of 5G systems has also increased the attention paid to Edge Computing [1][2], as this technology supports many of the most challenging 5G use cases, especially those for which high end-to-end (E2E) performance is required. In fact, due to the presence of processing platforms and application endpoints in close proximity to end users, Edge Computing offers a low latency environment and significant gains in terms of network bandwidth, which are key benefits for the deployment of 5G networks.
Fog Computing: A Platform for Internet of Things and AnalyticsHarshitParkar6677
Internet of Things (IoT) brings more than an explosive proliferation of endpoints. It is disruptive in several ways. In this chapter we examine those disruptions, and propose a hierarchical distributed architecture that extends from the edge of the network to the core, nicknamed Fog Computing. In particular, we pay attention to a new dimension that IoT adds to Big Data and Analytics: a massively distributed number of sources at the edge.
THE IMPACT OF EXISTING SOUTH AFRICAN ICT POLICIES AND REGULATORY LAWS ON CLOU...csandit
Cloud computing promises good opportunities for economies around the world, as it can help reduce capital expenditure and administration costs, and improve resource utilization. However, there are challenges regarding the adoption of cloud computing, key amongst them security and privacy, reliability and liability, and access and usage restrictions. Some of these challenges create a need for cloud computing policy so that they can be addressed. The purpose of this paper is twofold. First, it discusses the challenges that prompt a need for cloud computing policy. Secondly, it looks at South African ICT policies and regulatory laws in relation to the emergence of cloud computing. Since this is a literature review paper, the data was collected mainly through literature reviews. The findings reveal that cloud computing indeed raises policy challenges that need to be addressed by policy makers. A lack of policy that addresses cloud computing challenges can negatively impact areas such as security and privacy, competition, intellectual property and liability, consumer protection, and cross-border and jurisdictional challenges.
Cloud computing has a sweeping impact on human productivity. Today it is used for computing, storage, predictions and intelligent decision making, among others. Intelligent decision making using Machine Learning has pushed cloud services to be even faster, more robust and more accurate. Security remains one of the major concerns affecting cloud computing growth; however, there also exist various research challenges in cloud computing adoption, such as the lack of well managed service level agreements (SLAs), frequent disconnections, resource scarcity, interoperability, privacy, and reliability. A tremendous amount of work still needs to be done to explore the security challenges arising from the widespread use of container-based cloud deployments. We also discuss the impact of cloud computing and cloud standards. Hence, in this research paper, a detailed survey of cloud computing concepts, architectural principles, key services, and implementation, design and deployment challenges is presented, and important future research directions in the era of Machine Learning and Data Science are identified.
Finding your Way in the Fog: Towards a Comprehensive Definition of Fog ComputingHarshitParkar6677
The cloud is migrating to the edge of the network, where routers themselves may become the virtualisation infrastructure, in an evolution labelled as “the fog”. However, many other complementary technologies are reaching a high level of maturity. Their interplay may dramatically shift the information and communication technology landscape in the following years, bringing separate technologies onto common ground. This paper offers a comprehensive definition of the fog, encompassing technologies as diverse as cloud, sensor networks, peer-to-peer networks, network virtualisation functions and configuration management techniques. We highlight the main challenges faced by this potentially breakthrough technology amalgamation.
SECURITY AND PRIVACY AWARE PROGRAMMING MODEL FOR IOT APPLICATIONS IN CLOUD EN...ijccsa
The introduction of Internet of Things (IoT) applications into daily life has raised serious privacy concerns among consumers, network service providers, device manufacturers, and other parties involved. This paper gives a high-level overview of the three phases of data collection, transmission, and storage in IoT systems, as well as current privacy-preserving technologies. The following elements were investigated during these three phases: (1) physical and data link layer security mechanisms, (2) network remedies, and (3) techniques for distributing and storing data. Real-world systems frequently have multiple phases and incorporate a variety of methods to guarantee privacy. Therefore, a thorough understanding of all phases and their technologies can be beneficial for IoT research, design, development, and operation. This study introduces two independent methodologies, namely generic differential privacy (GenDP) and cluster-based differential privacy (Cluster-based DP) algorithms, for handling metadata as intents and intent scope to maintain the privacy and security of IoT data in cloud environments. With their help, we can virtualise and connect enormous numbers of devices, get a clearer understanding of the IoT architecture, and store data indefinitely. However, due to the dynamic nature of the environment, the diversity of devices, the ad hoc requirements of multiple stakeholders, and hardware or network failures, it is a very challenging task to create security-, privacy-, safety-, and quality-aware Internet of Things apps. It is becoming ever more important to improve data privacy and security through appropriate data acquisition. The proposed approach resulted in lower loss compared to Support Vector Machine (SVM) and Random Forest (RF).
Introduction to Cloud Computing and Cloud InfrastructureSANTHOSHKUMARKL1
Introduction, Cloud Infrastructure: Cloud computing, Cloud computing delivery models and services, Ethical issues, Cloud vulnerabilities, Cloud computing at Amazon, Cloud computing the Google perspective, Microsoft Windows Azure and online services, Open-source software platforms for private clouds.
The rapid emergence of the Internet of Things (IoT) has introduced fog computing as an intermediate layer between end users and cloud datacenters. The fog computing layer is characterised by its closeness to end users for service provisioning compared with the cloud. However, security challenges are still a big concern in both fog and cloud computing paradigms. In fog computing, one of the most destructive attacks is man-in-the-middle (MitM). Moreover, MitM attacks are hard to detect since they are performed passively at the network level. This paper proposes a MitM mitigation scheme for fog computing architecture. The proposal maps the fog layer onto a software-defined network (SDN) architecture. It integrates multipath transmission control protocol (MPTCP), the moving target defense (MTD) technique, and a reinforcement learning (RL) agent in one framework that contributes significantly to improving the fog layer's resource utilization and security. The proposed scheme hampers network reconnaissance and discovery, thus improving network security against MitM attacks. The evaluation framework was tested in a simulation environment on mininet, using the MPTCP kernel and the Ryu SDN controller. The experimental results show that the proposed scheme maintained network resiliency and improved resource utilization without adding significant overhead compared to the traditional transmission control protocol (TCP).
Design of an IT Capstone Subject - Cloud RoboticsITIIIndustries
This paper describes the curriculum of the three year IT undergraduate program at La Trobe University, and the faculty requirements in designing a capstone subject, followed by the ACM’s recommended IT curriculum covering the five pillars of the IT discipline. Cloud robotics, a broad multidisciplinary research area, requiring expertise in all five pillars with mechatronics, is an ideal candidate to offer capstone experiences to IT students. Therefore, in this paper, we propose a long term master project in developing a cloud robotics testbed, with many capstone sub-projects spanning across the five IT pillars, to meet the objectives of capstone experience. This paper also describes the design and implementation of the testbed, and proposes potential capstone projects for students with different interests.
A brief review: security issues in cloud computing and their solutionsTELKOMNIKA JOURNAL
Cloud computing is an emerging Internet-based technology that is becoming prevalent, especially in the fields of computer science and information technology, which require network computing on a large scale. Cloud computing is a shared pool of services which is gaining popularity due to its cost effectiveness, availability and high productivity. Along with its numerous benefits, cloud computing brings more challenging situations regarding data privacy, data protection, authenticated access, intellectual property rights, etc. Due to these issues, the adoption of cloud computing is becoming difficult in today's world. In this review paper, various security issues regarding data privacy and reliability, and the key factors affecting cloud computing, are addressed, and suggestions in particular areas are discussed.
Similar to Data Decentralisation: Efficiency, Privacy and Fair Monetisation (20)
This was the opening presentation of the Zenoh Summit in June 2022. The presentation goes through the motivations that led to the design of the zenoh protocol and provides an introduction to its core concepts. This is the place to start to understand why you should care about zenoh and the ways in which it disrupts existing technologies.
The recording for this presentation is available at https://bit.ly/3QOuC6i
zenoh: zero overhead pub/sub store/query computeAngelo Corsaro
zenoh unifies data in motion, data in use, data at rest and computations.
It carefully blends traditional pub/sub with distributed queries, while retaining a level of time and space efficiency that is well beyond any of the mainstream stacks.
It provides built-in support for geo-distributed storage and distributed computations.
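The pub/sub plus store/query combination described above can be illustrated with a minimal in-process sketch. This is plain Python, not the actual zenoh API; the `Broker` class, its methods and the key-expression matching by prefix are all simplifications introduced here for illustration:

```python
from collections import defaultdict

class Broker:
    """Toy combination of pub/sub (data in motion) and
    store/query (data at rest), loosely in the spirit of zenoh."""

    def __init__(self):
        self.store = {}                # key -> latest value (data at rest)
        self.subs = defaultdict(list)  # key prefix -> callbacks

    def subscribe(self, prefix, callback):
        self.subs[prefix].append(callback)

    def put(self, key, value):
        self.store[key] = value        # retain the sample for later queries
        for prefix, callbacks in self.subs.items():
            if key.startswith(prefix):  # push to matching subscribers
                for cb in callbacks:
                    cb(key, value)

    def get(self, prefix):
        """A distributed query, reduced here to a local selection."""
        return {k: v for k, v in self.store.items() if k.startswith(prefix)}

broker = Broker()
received = []
broker.subscribe("home/kitchen/", lambda k, v: received.append((k, v)))
broker.put("home/kitchen/temp", 21.5)  # delivered to the subscriber
broker.put("home/garage/temp", 12.0)   # no matching subscriber, stored only
print(received)                        # [('home/kitchen/temp', 21.5)]
print(broker.get("home/"))             # both samples, served from the store
```

The point of the sketch is that the same `put` feeds both live subscribers and later queries, which is the unification of data in motion and data at rest that the bullet points claim.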
Fog computing aims at providing horizontal, system-level abstractions to distribute computing, storage, control and networking functions closer to the user along a cloud-to-thing continuum. Whilst fog computing is increasingly recognised as the key paradigm at the foundation of the Consumer and Industrial Internet of Things (IoT), most of the initiatives on fog computing focus on extending cloud infrastructure. As a consequence, these infrastructures fall short in addressing the heterogeneity and resource constraints characteristic of fog computing environments.
fog⌀5 (read as fog O-five or fog OS) is an Eclipse IoT project that is building a fog computing infrastructure from first principles. In other terms, fog⌀5 has been designed to address the challenges induced by fog computing in terms of heterogeneity, decentralisation, resource constraints, geographical scale and security.
This webcast will introduce fog⌀5, motivate its architecture and building blocks as well as provide a demonstration of fog⌀5 provisioning applications that span from the cloud to the things.
The video recording for this presentation is available at https://www.youtube.com/watch?v=Osl3O5DxHF8
RUSTing is not a tutorial on the Rust programming language.
I decided to create the RUSTing series as a way to document and share programming idioms and techniques.
From time to time I’ll draw parallels with Haskell and Scala, having some familiarity with one of them is useful but not indispensable.
Introduced in 2004, the Data Distribution Service (DDS) has been steadily growing in popularity and adoption. Today, DDS is at the heart of a large number of mission and business critical systems, such as, Air Traffic Control and Management, Train Control Systems, Energy Production Systems, Medical Devices, Autonomous Vehicles, Smart Cities and NASA’s Kennedy Space Centre Launch System.
Considering the technological trend toward data-centricity and the rate of adoption, tomorrow DDS will be at the heart of an incredible number of Industrial IoT systems.
To help you become an expert in DDS and exploit your skills in the growing DDS market, we have designed the DDS in Action webcast series. This series is a learning journey through which you will (1) discover the essence of DDS, (2) understand how to effectively exploit DDS to architect and program distributed applications that perform and scale, (3) learn the key DDS programming idioms and architectural patterns, (4) understand how to characterise DDS performance and configure it for optimal latency/throughput, (5) grow your system to Internet scale, and (6) secure your DDS system.
The Cloudy, Foggy and Misty Internet of Things -- Toward Fluid IoT Architect...Angelo Corsaro
Early Internet of Things (IoT) applications have been built around cloud-centric architectures, where information generated at the edge by the “things” is conveyed to and processed in a cloud infrastructure. These architectures centralise processing and decision-making in the data centre, assuming sufficient connectivity, bandwidth and latency.
As applications of the Internet of Things extend to industrial and more demanding consumer applications, the assumptions underlying cloud-centric architectures start to be violated as, for several of these applications, connectivity, bandwidth and latency to the data centre are a challenge.
Fog and Mist computing have emerged as forms of “Cloud Computing” closer to the “Edge” and to the “Things” that should alleviate the connectivity, bandwidth and latency challenges faced by Industrial and extremely demanding Consumer Internet of Things Applications.
This presentation will (1) introduce Cloud, Fog and Mist Computing architectures for the Internet of Things, (2) motivate their need and explain their applicability with real-world use cases, and (3) introduce the concept of fluid IoT architectures and explain how they can be architected and built.
Microservices Architecture with Vortex — Part IIAngelo Corsaro
As we saw in Part I, Microservice Architectures are a modular approach to system building where we decompose complex applications into small, autonomous and loosely coupled processes communicating through a language and platform independent API. In this webcast we will briefly refresh the key ideas behind Microservice Architectures and then look at how they can be easily implemented with Vortex. Specifically we will (1) look into the key idioms and patterns that are used when implementing Vortex Microservices and (2) walk you through the design and implementation of a microservice application for a real-world use case.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I was wondering, as an “infrastructure container Kubernetes guy”, how does this fancy AI technology get managed from an infrastructure operations point of view? Is it possible to apply our beloved cloud native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and provide you with a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working for real.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at every stage.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We also had a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
2. The Cloud or The Internet?
Cloud computing has made it easy to provide a large class of “Internet Services”, yet we should not confuse a specific architectural style with the service itself.
This essential difference is important, as the value is provided by the service, not by the underpinning architecture (i.e. the cloud).
Example: Cloud Storage is just a way of implementing an “Internet Storage Service”.
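The slide's distinction between a service and the architecture that implements it can be sketched as an interface with interchangeable backends. All names here (`InternetStorageService`, `CloudStorage`, `EdgeStorage`) are illustrative, not a real API:

```python
from abc import ABC, abstractmethod

class InternetStorageService(ABC):
    """The service: what users actually value."""
    @abstractmethod
    def put(self, key, data): ...
    @abstractmethod
    def get(self, key): ...

class CloudStorage(InternetStorageService):
    """One architectural style: a centralised data centre."""
    def __init__(self):
        self._dc = {}  # stand-in for a remote data centre
    def put(self, key, data):
        self._dc[key] = data
    def get(self, key):
        return self._dc[key]

class EdgeStorage(InternetStorageService):
    """Another style: data sharded across nearby nodes."""
    def __init__(self, nodes=2):
        self._nodes = [{} for _ in range(nodes)]
    def _node(self, key):
        return self._nodes[hash(key) % len(self._nodes)]
    def put(self, key, data):
        self._node(key)[key] = data
    def get(self, key):
        return self._node(key)[key]

# Client code depends only on the service, not on the architecture.
for backend in (CloudStorage(), EdgeStorage()):
    backend.put("doc", b"hello")
    assert backend.get("doc") == b"hello"
```

The client loop at the end is the point: the value (storing and retrieving a document) is identical whichever architecture sits behind the interface.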
3. Cloud Computing Challenges
The privacy abuses revealed by Snowden have been heavily covered by the press.
What is far less well known is that today the vast majority of the servers powering clouds suffer from MDS vulnerabilities and are, as a consequence, inherently insecure.
[Security and Privacy]
4. Cloud Computing Challenges
As a consequence of centralisation, cloud vendors have the lion's share of data monetisation opportunities.
This leads to an extremely biased market in which a few control too much.
If individuals have full ownership and control over their digital footprint, they have a higher chance of monetising it, if desired.
[Monetisation]
Example: Your browsing history, your electrical consumption, your running data, etc. are (often) owned, controlled and monetised by third parties.
5. Cloud Computing Challenges
Cloud computing poses major energy challenges, not simply because of data centres but, more importantly, because of the traffic it generates.
All data gets in and out of a remote data centre, and that consumes energy.
[Energy Consumption]
6. The Austerity Dilemma
Whilst there is a growing community advocating data austerity, the solution is not simply being conscious of our use of precious resources, such as energy; we need to realise that the current architectures, infrastructures and technologies were not designed to address the data age.
The solution lies in combining innovation, to improve the efficiency of data distribution technologies, with raising Europeans' awareness of the value of parsimony.
[Energy Consumption]
8. Back to Distributed Systems
Cloud architectures have introduced a major element of centralisation and have promoted an ecosystem of technologies that fits and supports this model.
The only way to overcome the limitations posed by this model is to innovate by providing distributed Internet services to power the Digital Society.
We cannot continue relying on and promoting architectural styles and technologies that make data travel around half the globe when the source and the interested party are in close proximity. Nor can we continue promoting technologies that have inherent security and privacy implications and far from optimal energy use.
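The proximity argument can be made concrete with a back-of-envelope latency calculation. The figures are assumptions for illustration: signals in optical fibre travel at roughly 200,000 km/s, and queuing, routing and processing delays are ignored:

```python
# Best-case propagation delay for a request/response round trip,
# under the simplifying assumptions stated above.
FIBRE_SPEED_KM_S = 200_000  # assumed signal speed in fibre

def round_trip_ms(distance_km):
    """Round-trip propagation time in milliseconds."""
    return 2 * distance_km / FIBRE_SPEED_KM_S * 1000

remote_dc = round_trip_ms(8_000)  # a data centre half a continent away
edge_node = round_trip_ms(10)     # a node in close proximity

print(f"remote data centre: {remote_dc:.1f} ms")  # 80.0 ms
print(f"edge node:          {edge_node:.3f} ms")  # 0.100 ms
```

Even before counting the energy spent forwarding every packet across that path, physics alone gives the nearby node a latency advantage of nearly three orders of magnitude.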
9. Technology Stack
We have established the Edge Native Working Group in Eclipse to incubate technologies that address the aforementioned problems.
Two key projects in this context are zenoh, which implements a data plane for the edge, and fogOS, which defines the control and management plane.
[Open Infrastructure]
10. Going Forward, Going Distributed
The way for Europe to innovate and disrupt the market is not to try to compete with the GAFAM, but to make them obsolete.
For the EU's economic interests, digital sovereignty and sustainability, we should Go Distributed.