1. OpenSplice DDS can be used for in-memory big data analytics by providing ultra-high performance in-memory storage, data distribution, and connectivity to other data stores and analytics tools.
2. OpenSplice DDS implements the Data Distribution Service (DDS) standard for real-time systems, which provides a topic-based publish/subscribe model to automatically match data writers and readers.
3. A DDS application uses domain participants, topics, publishers, subscribers, and DataWriters/DataReaders to define the data model and distribute data.
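The topic-based matching described above can be illustrated with a tiny, self-contained simulation (plain Python, not the DDS API; `GlobalDataSpace` and its methods are made-up names for illustration only — in DDS the matching is done by the middleware's dynamic discovery):

```python
from collections import defaultdict

class GlobalDataSpace:
    """Toy in-process stand-in for the DDS global data space:
    writers and readers are matched purely by topic name."""
    def __init__(self):
        self.readers = defaultdict(list)  # topic name -> subscribed callbacks

    def create_reader(self, topic, callback):
        self.readers[topic].append(callback)

    def create_writer(self, topic):
        def write(sample):
            # Every reader on the same topic receives the sample.
            for deliver in self.readers[topic]:
                deliver(sample)
        return write

# Usage: one writer and two readers on the "Circle" topic.
gds = GlobalDataSpace()
received = []
gds.create_reader("Circle", received.append)
gds.create_reader("Circle", lambda s: received.append(("copy", s)))
write_circle = gds.create_writer("Circle")
write_circle({"color": "RED", "x": 131, "y": 107})
```

Both readers receive the sample because they subscribed to the same topic; readers on other topics would see nothing.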
Zenoh is a rapidly growing Eclipse project that unifies data in motion, data at rest and computations. It elegantly blends traditional pub/sub with geo-distributed storage, queries and computations, while retaining a level of time and space efficiency that is well beyond any of the mainstream stacks. This presentation will provide an introduction to Eclipse Zenoh along with a crisp explanation of the challenges that motivated the creation of this project. We will go through a series of real-world use cases that demonstrate the advantages brought by Zenoh in enabling and optimising typical edge scenarios and in simplifying the development of distributed applications at any scale.
A brief "What is Hadoop?" intro for the Georgian Partners CTO Conference. This outlines the origins of Open Source Apache Hadoop and how Hortonworks fits into this picture. There is also a brief introduction to YARN, the new resource negotiation layer.
The Data Distribution Service: The Communication Middleware Fabric for Scala... (Angelo Corsaro)
This paper introduces DDS, explains its extensible type system, and provides a set of guidelines on how to design extensible and efficient DDS data models. Throughout the paper, the applicability of DDS to Systems of Systems (SoS) is motivated and discussed.
It's time to memorize everything about Object Storage from this exciting chart from A to Z. Later, get ready to take the fun quiz. Here is the link: https://itblog.sandisk.com/object-storage-quiz/
Big Data to SMART Data: Process scenario
A scenario implementing a process that transforms raw data into exploitable, representative data, covering stream processing, distributed systems, messaging, storage in a NoSQL environment, and management within a Big Data ecosystem, with graphic visualization of the data, using the following technologies:
Apache Storm, Apache ZooKeeper, Apache Kafka, Apache Cassandra, Apache Spark and Data-Driven Documents (D3).
This presentation introduces Vortex by means of a running example. Throughout the presentation we will show how Vortex makes it easy to build a micro-blogging platform à la Twitter.
Vortex is a platform that provides seamless, ubiquitous, efficient and timely data sharing across mobile, embedded, desktop, cloud and web applications. Today Vortex is the enabling technology at the core of the most innovative Internet of Things and Industrial Internet applications, such as Smart Cities, Smart Grids, and Smart Traffic.
This two-part tutorial (1) introduces the key concepts of Vortex, (2) gets you started with using Vortex to efficiently exchange data across mobile, embedded, desktop, cloud and web applications, and (3) provides a series of best practices, patterns and idioms to get the best out of Vortex.
The only prerequisite to fully exploit this tutorial is a basic understanding of Java, C++ and JavaScript. Some knowledge of Scala and CoffeeScript will be a plus.
Be proactive, optimize your SQL Server. Many companies do not know the actual health of their SQL servers; they're more or less waiting for a disaster to happen. With SQL Server Optimizer, much can be done to prevent an SQL Server instance from failing.
BJF.Delta.Nov Redesigned ELA Curriculum K-3 (Faye Brownlie)
An evening in Delta, supported by the Barbara Jarvis Foundation, with K-3 teachers and led by myself, Lisa Schwartz and Michelle Hikida. Focus on explicit teaching of reading strategies in whole class, small groups and individually, and on building a literacy community of readers who belong, read and think about big ideas in a 2/3 class.
A focus on the importance of encouraging oral language with young children to enhance their thinking and their exploration of self and others. Also a focus on the language we use as teachers and how this influences a child's world. Strategies to use in the class.
An increasing number of applications, such as smart cities, mobile health and smart grids, require ubiquitous distribution of, and access to, real-time information from, and across, a vast variety of devices, ranging from embedded sensors to mobile devices. While the problem of ubiquity is solved at a computing and network connectivity level, it is by no means solved with respect to (1) real-time, and (2) resource-efficient (e.g. battery life and network), data distribution.
This webcast will unveil PrismTech’s “DDS Everywhere” product strategy and will introduce a series of innovations that have extended the OpenSplice ecosystem to seamlessly share data between embedded devices, traditional IT infrastructures, cloud applications and mobile devices.
OpenSplice DDS enables seamless, timely, scalable and dependable data sharing between distributed applications and network-connected devices. Its technical and operational benefits have propelled adoption across multiple industries, such as Defence and Aerospace, SCADA, Gaming, Cloud Computing, Automotive, etc.
If you want to learn about OpenSplice DDS or discover some of its advanced features, this webcast is for you!
In this two-part webcast we will cover all the aspects tied to architecting and developing OpenSplice DDS systems. We will look into Quality of Service, data selectors, and concurrency and scalability concerns.
We will present the brand-new, and recently finalized, C++ and Java APIs for DDS, including examples of how these can be used with C++11 features. We will show how increasingly popular functional languages such as Scala can be used to efficiently and elegantly exploit the massive HW parallelism provided by modern multi-core processors.
Finally we will present some OpenSplice-specific extensions for dealing with very high volumes of data, meaning several million messages per second.
DDS in SCADA, Utilities, Smart Grid and Smart Cities (Angelo Corsaro)
This presentation introduces the challenges faced by next generation SCADA, Utilities, and Smart-* applications and shows how OpenSplice DDS addresses these. The presentation also showcases the use of OpenSplice DDS in some relevant use cases.
The OMG DDS standard has been witnessing a very strong adoption as the distribution middleware of choice for a large class of mission and business critical systems, such as Air Traffic Control, Automated Trading, SCADA, Smart Energy, etc.
The main reason for choosing DDS lies in its efficiency, scalability, high availability and configurability -- through its 20+ QoS policies. Yet, all of these nice properties come at the cost of a relaxed consistency model with no strong guarantees over global invariants.
As a result, many architects have to devise, by themselves – assuming the DDS primitives as a foundation – the correct algorithms for classical problems such as fault-detection, leader election, consensus, distributed mutual exclusion, atomic multicast, distributed queues, etc.
In this presentation we will explore DDS-based distributed algorithms for many classical, yet fundamental, problems in distributed systems. For simplicity, we'll start with algorithms that ignore the presence of failures. Then we will (1) demonstrate how these algorithms can be extended to deal with failures, and (2) introduce Paxos as one of the fundamental algorithms for consensus and atomic broadcast.
Finally, we'll show how these classical algorithms can be used to implement useful extensions of the DDS semantics, such as multi-writer / multi-reader distributed queues.
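To make the failure-free case above concrete, here is a minimal sketch of one such classical algorithm, leader election, over a simulated broadcast topic (plain Python, not the DDS API; `Participant`, `announce` and the list-based bus are hypothetical names for illustration — a real DDS version would publish announcements on a topic and rely on discovery and liveliness QoS):

```python
class Participant:
    """Failure-free leader election over a shared 'election' topic:
    every participant announces its id; all deterministically agree
    on the highest id seen."""
    def __init__(self, pid, bus):
        self.pid = pid
        self.seen = set()
        bus.append(self)          # join the simulated topic

    def announce(self, bus):
        for peer in bus:          # broadcast: every peer sees this id
            peer.seen.add(self.pid)

    def leader(self):
        return max(self.seen)     # same deterministic rule everywhere

bus = []
peers = [Participant(pid, bus) for pid in (3, 17, 8)]
for p in peers:
    p.announce(bus)

# All participants independently elect the same leader.
assert all(p.leader() == 17 for p in peers)
```

The deterministic "highest id wins" rule is what makes agreement trivial once failures are ignored; handling crashed or late-joining participants is exactly where the fault-tolerant extensions and Paxos come in.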
Expand a Data Warehouse with Hadoop and Big Data (jdijcks)
After investing years in the data warehouse, are you now supposed to start over? Nope. This session discusses how to leverage Hadoop and big data technologies to augment the data warehouse with new data, new capabilities and new business models.
Solutions Linux 2013: SpagoBI and Talend jointly support Big Data scenarios (SpagoWorld)
This presentation supported the speech entitled "SpagoBI and Talend jointly support Big Data scenarios" delivered by Monica Franceschini, SpagoBI Architect, during the OW2 track at Solutions Linux 2013 (Paris, 28th-29th May 2013).
Data Pioneers - Roland Haeve (Atos Nederland) - Big data in organisations (Multiscope)
Roland Haeve is cross-competence manager Big Data for Atos Netherlands. Roland has over 18 years of ICT experience in offering complete solutions in areas including Business Intelligence (BI) and Big Data (Analytics). For many companies, Big Data is still a matter of pioneering and finding out what the possibilities are. In his presentation, Roland will discuss successful Big Data cases, zooming in not only on the Netherlands but also including broader European examples.
Webinar - The Agility Challenge - Powering Cloud Apps with Multi-Model & Mixed Workloads (DataStax)
Building and managing cloud applications is not easy. Teams come face to face with these challenges: agility, manageability, performance, scalability, continuous availability and of course, security. Join us for “The Agility Challenge: Powering Cloud Applications with Multi-Model & Mixed Workloads” webinar where we will deep dive into challenges customers face with multiple data models such as graph, mixed workloads and how DataStax Enterprise can help.
Video: https://youtu.be/1tKDxkexzFE
This was the opening presentation of the Zenoh Summit in June 2022. The presentation goes through the motivations that led to the design of the zenoh protocol and provides an introduction to its core concepts. This is the place to start to understand why you should care about zenoh and the way in which it disrupts existing technologies.
The recording for this presentation is available at https://bit.ly/3QOuC6i
Data Decentralisation: Efficiency, Privacy and Fair Monetisation (Angelo Corsaro)
A presentation given at the European H-Cloud Conference to motivate decentralisation as a means to improve energy efficiency, privacy, and opportunities for monetising your digital footprint.
zenoh: zero overhead pub/sub store/query compute (Angelo Corsaro)
Zenoh unifies data in motion, data in use, data at rest and computations. It carefully blends traditional pub/sub with distributed queries, while retaining a level of time and space efficiency that is well beyond any of the mainstream stacks. It provides built-in support for geo-distributed storage and distributed computations.
zenoh -- the ZEro Network OverHead protocol (Angelo Corsaro)
This presentation introduces the key ideas behind zenoh -- an Internet scale data-centric protocol that unifies data-sharing between any kind of device including those constrained with respect to the node resources, such as computational resources and power, as well as the network.
Fog computing aims at providing horizontal, system-level abstractions to distribute computing, storage, control and networking functions closer to the user along a cloud-to-thing continuum. Whilst fog computing is increasingly recognised as the key paradigm at the foundation of the Consumer and Industrial Internet of Things (IoT), most of the initiatives on fog computing focus on extending cloud infrastructure. As a consequence, these infrastructures fall short in addressing the heterogeneity and resource constraints characteristic of fog computing environments.
fog⌀5 (read as fog O-five or fog OS) is an Eclipse IoT Project that is building a fog computing infrastructure from first principles. In other terms, fog⌀5 has been designed to address the challenges induced by fog computing in terms of heterogeneity, decentralisation, resource constraints, geographical scale and security.
This webcast will introduce fog⌀5, motivate its architecture and building blocks as well as provide a demonstration of fog⌀5 provisioning applications that span from the cloud to the things.
The video recording for this presentation is available at https://www.youtube.com/watch?v=Osl3O5DxHF8
Making the right data available at the right time, at the right place, securely, efficiently, whilst promoting interoperability, is a key need for virtually any IoT application. After all, IoT is about leveraging access to data that used to be unavailable in order to improve the ability to react, manage, predict and preserve a cyber-physical system.
The Data Distribution Service (DDS) is a standard for interoperable, secure, and efficient data sharing, used at the foundation of some of the most challenging Consumer and Industrial IoT applications, such as Smart Cities, Autonomous Vehicles, Smart Grids, Smart Farming, Home Automation and Connected Medical Devices.
In this presentation we will (1) introduce the Eclipse Cyclone DDS project, (2) provide a quick intro that will get you started with Cyclone DDS, (3) present a few Cyclone DDS use cases, and (4) share the Cyclone DDS development road-map.
Fog Computing is a paradigm that complements and extends cloud computing by providing an end-to-end virtualisation of computing, storage and communication resources. As such, fog computing allows applications to be transparently provisioned and managed end-to-end. This presentation first motivates the need for fog computing, then introduces fog05, the first and only Open Source fog computing platform!
Data Sharing in Extremely Resource Constrained Environments (Angelo Corsaro)
This presentation introduces XRCE, a new protocol for very efficiently distributing data in resource-constrained (power, network, computation, and storage) environments. XRCE greatly improves the wire efficiency of existing protocols and in many cases provides higher-level abstractions.
RUSTing is not a tutorial on the Rust programming language.
I decided to create the RUSTing series as a way to document and share programming idioms and techniques.
From time to time I’ll draw parallels with Haskell and Scala; having some familiarity with one of them is useful but not indispensable.
Vortex II -- The Industrial IoT Connectivity Standard (Angelo Corsaro)
The large majority of commercial IoT platforms target consumer applications and fall short in addressing the requirements characteristic of Industrial IoT. Vortex has always focused on addressing the challenges characteristic of Industrial IoT systems, and with the 2.4 release it sets a new standard!
This presentation will (1) introduce the new features introduced with Vortex 2.4, (2) explain how Vortex 2.4 addresses the requirements of Industrial Internet of Things applications better than any other existing platform, and (3) showcase how innovative companies are using Vortex for building leading-edge Industrial Internet of Things applications.
3. Big Data 3Vs
Copyright 2011, PrismTech – All Rights Reserved.
Big Data is best understood by looking at the 3Vs Cube, whose axes are volume, velocity and variety.
[Diagram: the 3Vs cube; its corners characterise data scenarios as (pull, k/v, small), (push, k/v, small), (pull, k/v, big), (push, k/v, big), (pull, pk/fk, small), (push, pk/fk, small), (pull, pk/fk, big) and (push, pk/fk, big).]
4. OpenSplice DDS
Big Data Applications
Collect | Store | Organize | Analyze | Share
Copyright
2011,
PrismTech
–
All
Rights
Reserved.
5. Big Data Applications
Collect | Store | Organize | Analyze | Share
☐ Collect the data from the cyber-physical world
☐ Depending on the application this could be:
  ☐ Sensor data
  ☐ Market Data
  ☐ Web page statistics
  ☐ ...
6. Big Data Applications
Collect | Store | Organize | Analyze | Share
☐ Store the data for on-line and off-line processing
7. Big Data Applications
Collect | Store | Organize | Analyze | Share
☐ Validate, Correct and Enrich Data
8. Big Data Applications
Collect | Store | Organize | Analyze | Share
☐ Make sense of the data
☐ Detect short-term / long-term trends
☐ ...
9. Big Data Applications
Collect | Store | Organize | Analyze | Share
☐ Distribute Analytics -- or any other kind of clues about the data -- to applications that are supposed to act, display, publish, store, etc.
11. What is Big Data Analytics?
☐ Big Data Analytics is the process of examining large amounts of data to uncover hidden patterns, unknown correlations and other useful information.
☐ Such information can provide competitive advantages over rival organizations and result in business benefits, such as more effective marketing and increased revenue.
[quoted from: Big Data Analytics]
12. Traditional Big Data Analytics
Collect | Store | ... | Analyze | Share
[Diagram: Messaging / Web Svcs -> Big Data Store (e.g. HBase) -> Map Reduce (e.g. Hadoop) -> Messaging / Web Svcs]
13. In-Memory Analytics
☐ Declining RAM costs and the widespread availability of 64-bit processors have created the opportunity for maintaining large sets of data in main memory. [Today 64-bit machines with up to 1TB of RAM can be easily found]
☐ In-Memory Analytics exploits the availability of RAM to maintain some -- or all -- of the Big Data in memory so as to reduce query response times and allow faster Business Intelligence (BI) and analytics decisions.
☐ In-Memory Analytics can reduce or eliminate the need for data indexing and allows more effective implementation of BI and analytic applications. Analysts anticipate that as BI and analytic applications embrace in-memory analytics, traditional data stores may eventually be used only for data that is not queried frequently.
14. In-Memory Big Data Analytics
Collect | Store | ... | Analyze | Share
[Diagram: Messaging / Web Svcs feed a Data Window into a Big Data In-Memory Store for Real-time In-Memory Analytics; the Full Data Set flows into a Big Data Store (e.g. HBase) processed by Map Reduce (e.g. Hadoop) for Mid/Long-Term Predictions, with results shared via Messaging / Web Svcs.]
15. In-Memory Big Data Analytics
[Diagram: as on the previous slide, with Map Reduce (e.g. Hadoop) over the Big Data Store (e.g. HBase) feeding Model Parameters, Mid/Long-Term Predictions, etc. back to the In-Memory Analytics via Messaging / Web Svcs.]
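The "Data Window" on the real-time path above can be sketched as a bounded in-memory buffer with a running aggregate (a minimal illustration in plain Python; `DataWindow` is a made-up name, not part of any product API):

```python
from collections import deque

class DataWindow:
    """Keeps only the most recent n samples in memory and maintains
    a running aggregate over them -- the real-time analytics path."""
    def __init__(self, size):
        self.size = size
        self.samples = deque()
        self.total = 0.0

    def push(self, value):
        self.samples.append(value)
        self.total += value
        if len(self.samples) > self.size:
            self.total -= self.samples.popleft()  # evict oldest sample

    def mean(self):
        return self.total / len(self.samples)

w = DataWindow(size=3)
for v in (10, 20, 30, 40):   # after the 4th sample, 10 is evicted
    w.push(v)
print(w.mean())  # mean of (20, 30, 40) = 30.0
```

The full data set would meanwhile flow, unbounded, into the batch store for map-reduce processing.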
16. How about the Cube?
[Diagram: the 3Vs cube again (volume, velocity, variety), with corners (pull, k/v, small), (push, k/v, small), (pull, k/v, big), (push, k/v, big), (pull, pk/fk, small), (push, pk/fk, small), (pull, pk/fk, big) and (push, pk/fk, big).]
17. In-Memory Big Data Analytics
[Diagram: the slide-15 architecture annotated with cube coordinates -- the Data Window is (_, _, big), the Real-time Analytics output is (push, _, big), the Big Data In-Memory Store ingests (push, _, big), the Full Data Set flows as (push, _, big), and the Big Data Store (e.g. HBase) / Map Reduce (e.g. Hadoop) path is accessed as (pull, k/v, big).]
20. OpenSplice & Big Data Analytics
☐ OpenSplice DDS can be used to address several Big Data, and more specifically In-Memory Big Data, Analytics applications
☐ The key features OpenSplice can provide are:
  ☐ Ultra-High Performance In-Memory Store
  ☐ Ultra-High Performance Data Distribution
  ☐ Ultra-High Performance Persistent Store
  ☐ High Availability
  ☐ Connectivity (e.g. HBase, CouchDB, REST, WebSockets, etc.)
22. Data Distribution Service for Real-Time Systems
DDS provides a Topic-Based Publish/Subscribe abstraction based on:
☐ Topics: data distribution subjects
☐ DataWriters: data producers
☐ DataReaders: data consumers
[Diagram: DataWriters and DataReaders exchanging data through Topics (TopicA, TopicB, TopicC, TopicD, ...) in the DDS Global Data Space.]
23. Data Distribution Service for Real-Time Systems
☐ DataWriters and DataReaders are automatically and dynamically matched by the DDS Dynamic Discovery
☐ A rich set of QoS policies allows control of the existential, temporal, and spatial properties of data
[Diagram: DataWriters and DataReaders matched over Topics in the DDS Global Data Space.]
25. Domain Participant
☐ A DomainParticipant provides access to a DDS Domain
☐ DDS Domains are identified by natural numbers, such as 0, 1, 2, ...
☐ Each DDS Domain has associated some unique communication and discovery endpoints, automatically deduced from the Domain ID
[Diagram: a Domain (e.g. Domain 123) containing a DomainParticipant and a Topic; a Partition (e.g. “Telemetry”, “Shapes”) with a Publisher/DataWriter and a Subscriber/DataReader exchanging topic instances/samples (T1, T3, Tx, Ty, Ta, Tb, Tc).]
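How endpoints are "deduced from the Domain ID" can be made concrete with the default port-mapping formula of the OMG DDSI-RTPS wire protocol (port = PB + DG × domainId + offset, with default constants PB=7400, DG=250, PG=2 and offsets d0=0, d1=10, d2=1, d3=11). This sketch uses those spec defaults; actual deployments and vendors may reconfigure them:

```python
# Default DDSI-RTPS port-mapping parameters (configurable in practice).
PB, DG, PG = 7400, 250, 2
d0, d1, d2 = 0, 10, 1

def discovery_multicast_port(domain_id):
    """Well-known multicast port where participants of a domain discover each other."""
    return PB + DG * domain_id + d0

def user_multicast_port(domain_id):
    """Multicast port for user (application) data in a domain."""
    return PB + DG * domain_id + d2

def discovery_unicast_port(domain_id, participant_id):
    """Unicast discovery port; also depends on the participant index."""
    return PB + DG * domain_id + d1 + PG * participant_id

print(discovery_multicast_port(0))    # 7400
print(discovery_multicast_port(123))  # Domain 123, as on the slide: 38150
```

This is why two participants created with the same domain ID find each other with no configuration: they compute the same well-known ports.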
27. DDS Topics
☐ A Topic defines a class of streams, also called topic instances (e.g. “Circle”, “Square”, “Triangle”, ...)
☐ A Topic has associated a unique name, a user-defined extensible type, and a set of QoS policies (e.g. DURABILITY, DEADLINE, PRIORITY, ...)
☐ QoS Policies capture the Topic's non-functional invariants
☐ Topics can be discovered or locally defined

struct ShapeType {
  @Key string color;
  long x;
  long y;
  long shapesize;
};
28. Topic Instances
☐ Each unique key value identifies a unique stream of data (for ShapeType, keyed by @Key string color: color = "red", color = "Green", color = "Blue", ...)
☐ DDS not only demultiplexes “streams” but also provides lifecycle information
☐ A DDS DataWriter can write multiple instances
32. Anatomy of a DDS Application [Scala API]

// Create a Domain Participant
val dp = DomainParticipant(domainId)

// Create a Topic
val topic = Topic[ShapeType](dp, “Circle”)

// Create a Publisher / Subscriber
val pub = Publisher(dp)
val sub = Subscriber(dp)

// Create a DataWriter/DataReader for the application-defined Topic type
val writer = DataWriter[ShapeType](pub, topic)
val reader = DataReader[ShapeType](sub, topic)

[Diagram: the Domain contains a DomainParticipant; the Publisher owns the DataWriter and the Subscriber owns the DataReader, both bound to the Topic.]
33. Anatomy of a DDS Application [Scala API]

val dp = DomainParticipant(domainId)

// Create a Topic
val topic = Topic[ShapeType](dp, “Circle”)

// Create a Publisher / Subscriber
val pub = Publisher(dp)
val sub = Subscriber(dp)

// Write data
val data = new ShapeType(“RED”, 131, 107, 75)
writer write data

// But you can also write like this...
writer ! data

// Read new data and print it on the screen
(reader read) foreach (println)
48. Reader/Writer Caches
[Diagram: Applications write through a DataWriter into a DataWriter Cache and read through DataReaders from DataReader Caches, connected over the Network. Relevant QoS Policies include: History, Destination Order, Presentation, Partition, Time Based Filter, Ownership, Latency Budget, Reliability, Durability and Transport Priority.]
49. Reader Cache Properties
☐ An entry for each topic instance
☐ Each cache entry might contain one or more samples; the number of samples is controlled by the History QoS Policy
  ☐ History.KeepLast(n) keeps the last n samples per instance
  ☐ History.KeepAll maintains all the samples, modulo resource constraints
☐ For a given instance:
  ☐ Samples from the same writer are inserted in write-order
  ☐ Samples from multiple writers are ordered using time-stamps (source or reception, depending on a configurable QoS Policy)
51. Reader/Writer Caches
Application Application Application
Copyright
2011,
PrismTech
–
All
Rights
Reserved.
DataWriter DataReader DataReader
...
OpenSplice DDS
DataWriter Cache DataReader Cache DataReader Cache
Network
Assumptions: DataReader configured with history depth of 2 and
DataWriter with default history depth of 1
52. Reader/Writer Caches
write( 1 ) Application Application
Copyright
2011,
PrismTech
–
All
Rights
Reserved.
[Diagram: Application → DataWriter → DataWriter Cache → Network → DataReader Caches → Applications.]

53.–63. Reader/Writer Caches
Copyright 2011, PrismTech – All Rights Reserved.
[Animation across slides 53–63: the application successively calls write(1), write(2), and write(3) on the DataWriter. Each new sample replaces the previous one in the DataWriter Cache, crosses the network, and is appended to every DataReader Cache, which retains the two most recent samples. After write(3), the writer cache holds sample 3 and each reader cache holds samples 2 and 3.]
Assumptions: DataReader configured with history depth of 2 and DataWriter with default history depth of 1
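The cache behaviour animated above can be sketched with plain Java collections. This is a minimal model of KEEP_LAST history semantics only, not the DDS API; `HistoryCache` and its `depth` parameter are illustrative names:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Minimal model of a KEEP_LAST history cache: keeps the `depth` most recent samples.
class HistoryCache<T> {
    private final int depth;
    private final Deque<T> samples = new ArrayDeque<>();

    HistoryCache(int depth) { this.depth = depth; }

    void add(T sample) {
        if (samples.size() == depth) samples.removeFirst(); // evict the oldest sample
        samples.addLast(sample);
    }

    List<T> contents() { return List.copyOf(samples); }
}

public class CacheDemo {
    public static void main(String[] args) {
        HistoryCache<Integer> writerCache = new HistoryCache<>(1); // default depth 1
        HistoryCache<Integer> readerCache = new HistoryCache<>(2); // depth 2

        for (int sample : new int[]{1, 2, 3}) {   // write(1), write(2), write(3)
            writerCache.add(sample);              // writer keeps only the latest
            readerCache.add(sample);              // "network" delivery to the reader
        }
        System.out.println(writerCache.contents()); // [3]
        System.out.println(readerCache.contents()); // [2, 3]
    }
}
```

Running the three writes reproduces the final frame of the animation: the depth-1 writer cache holds only sample 3, while the depth-2 reader cache holds samples 2 and 3.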
65. Reading Data Samples
☐ Samples can be read from the DataReader History Cache
☐ The action of reading a sample is non-destructive: samples are not removed from the cache
[Diagram: a read operation returns samples from the DataReader Cache while leaving the cache contents unchanged.]
66. Taking Data Samples
☐ Samples can be taken from the DataReader History Cache
☐ The action of taking a sample is destructive: samples are removed from the cache
[Diagram: a take operation returns samples from the DataReader Cache and removes them from the cache.]
67. Read vs. Take
☐ The read operation should be used to access the latest known value of topics that represent distributed state
☐ The take operation should be used to get the latest notification from a topic that represents an event
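The read/take contrast can be sketched with a plain Java list standing in for the DataReader cache. This is a model of the semantics, not the DDS API; `ReaderCache` and `deliver` are illustrative names:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of a DataReader history cache with read/take semantics.
class ReaderCache<T> {
    private final List<T> cache = new ArrayList<>();

    void deliver(T sample) { cache.add(sample); }

    // read: non-destructive — samples stay in the cache
    List<T> read() { return List.copyOf(cache); }

    // take: destructive — samples are removed from the cache
    List<T> take() {
        List<T> out = List.copyOf(cache);
        cache.clear();
        return out;
    }
}

public class ReadTakeDemo {
    public static void main(String[] args) {
        ReaderCache<String> rc = new ReaderCache<>();
        rc.deliver("s1");
        rc.deliver("s2");
        System.out.println(rc.read()); // [s1, s2] — cache untouched
        System.out.println(rc.read()); // [s1, s2] — still there
        System.out.println(rc.take()); // [s1, s2] — cache emptied
        System.out.println(rc.take()); // []      — nothing left
    }
}
```

The repeated read returning the same samples is why read suits distributed state (the latest value stays available), while take suits events (each notification is consumed exactly once).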
69. Cherry Picking in DDS
☐ DDS provides some very flexible mechanisms for selecting the data to be read:
☐ Data Content
☐ Data Status
☐ These mechanisms are composable
71. Filters and Queries
☐ DDS Filters control what gets into a DataReader Cache
☐ DDS Queries control what gets out of a DataReader Cache
☐ Filters are defined by means of ContentFilteredTopics
☐ Queries operate in conjunction with read operations
☐ Filters and Queries are expressed as SQL where clauses
[Diagram: samples pass through a Filter before entering the DataReader Cache; a Query is applied between the DataReader Cache and the Application.]
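The filter/query split can be sketched as two predicates applied at different points: one on ingress into the cache, one on the way out at read time. Plain Java predicates stand in for the SQL where clauses, and `FilteredReader` is an illustrative name, not the DDS API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Minimal model: a filter decides what enters the cache,
// a query decides what a read returns from the cache.
class FilteredReader<T> {
    private final Predicate<T> filter;   // plays the role of a ContentFilteredTopic
    private final List<T> cache = new ArrayList<>();

    FilteredReader(Predicate<T> filter) { this.filter = filter; }

    void deliver(T sample) {
        if (filter.test(sample)) cache.add(sample); // rejected samples never enter
    }

    List<T> read(Predicate<T> query) {              // query applied on the way out
        return cache.stream().filter(query).collect(Collectors.toList());
    }
}

public class FilterQueryDemo {
    record Shape(String color, int x, int y) {}

    public static void main(String[] args) {
        // Filter: only red shapes get into the cache (stands in for "color = 'RED'")
        FilteredReader<Shape> fr = new FilteredReader<>(s -> s.color().equals("RED"));
        fr.deliver(new Shape("RED", 10, 20));
        fr.deliver(new Shape("BLUE", 30, 40));   // filtered out on ingress
        fr.deliver(new Shape("RED", 150, 250));
        // Query: of the cached samples, read only those with x <= 100
        System.out.println(fr.read(s -> s.x() <= 100)); // [Shape[color=RED, x=10, y=20]]
    }
}
```

The design point the slide makes survives the simplification: a filter saves memory and bandwidth because rejected samples are never cached, while a query keeps all matching samples cached and merely selects among them per read.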
73. Sample, Instance, and View State
☐ The samples in the DataReader Cache carry meta-information which, among other things, describes the status of the sample and of its associated stream/instance
☐ The Sample State (READ, NOT_READ) distinguishes new samples from samples that have already been read
☐ The View State (NEW, NOT_NEW) distinguishes a new instance from an existing one
☐ The Instance State (ALIVE, NOT_ALIVE_DISPOSED, NOT_ALIVE_NO_WRITERS) tracks the life-cycle transitions of the instance to which a sample belongs
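A minimal sketch of how the Sample State supports "cherry picking": reading flips a sample from NOT_READ to READ, so a subsequent read that selects NOT_READ samples returns only what arrived in between. `StatefulCache` is an illustrative plain-Java model, not the DDS API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

enum SampleState { READ, NOT_READ }

// Minimal model: reading marks samples READ, so a second read
// selecting NOT_READ samples returns only what arrived in between.
class StatefulCache<T> {
    static class Entry<T> {
        final T data;
        SampleState state = SampleState.NOT_READ;
        Entry(T data) { this.data = data; }
    }
    private final List<Entry<T>> cache = new ArrayList<>();

    void deliver(T sample) { cache.add(new Entry<>(sample)); }

    List<T> read(SampleState wanted) {
        List<Entry<T>> hit = cache.stream()
            .filter(e -> e.state == wanted).collect(Collectors.toList());
        hit.forEach(e -> e.state = SampleState.READ); // reading flips the state
        return hit.stream().map(e -> e.data).collect(Collectors.toList());
    }
}

public class StateDemo {
    public static void main(String[] args) {
        StatefulCache<Integer> c = new StatefulCache<>();
        c.deliver(1);
        c.deliver(2);
        System.out.println(c.read(SampleState.NOT_READ)); // [1, 2]
        c.deliver(3);
        System.out.println(c.read(SampleState.NOT_READ)); // [3] — 1 and 2 already READ
    }
}
```

The View State and Instance State enumerations compose with the Sample State in the same way: a read can select on any combination of the three.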
75. List Comprehension in DDS
☐ When using the Scala API, DDS readers expose data as a LinearSeqOptimized, making it possible to use list comprehensions for quite a few tasks
☐ Example:
☐ dr read foreach (println)
☐ dr read map (transformer)
☐ ...
76. Examples
☐ Compute the average shape and write it as a Square sample:
val data = dr history
val s0 = new ShapeType("GRAY", 0, 0, 0)
dw ! scaleShape((data :\ s0)(sumShape), data.size)
☐ Filter samples:
(dr read) filter (s => s.x <= 100 && s.y <= 200)
77. High Performance Persistent Store with OpenSplice

78. OpenSplice Durability
☐ OpenSplice DDS provides a highly available, high-performance persistent store optimized for storing DDS data
☐ This store can be queried to retrieve data using content as well as temporal filters
☐ Example:
val topic = Topic[ShapeType](dp, "Circle")
val dr = DataReader[ShapeType](topic)
// ...
val query = Query("x < %0 AND y < %1", List("200", "300"))
val t0 = new Time(...)
val t1 = new Time(...)
// Retrieve all data for the Circle Topic that matches
// the query and has a timestamp within [t0, t1]
dr waitForHistoricalData(query, t0, t1)
80. OpenSplice Gateway
☐ The OpenSplice Gateway supports Apache Camel Connectors for Big Data Stores, making it very simple to stream data to/from HBase, CouchDB, etc.
☐ Example:
val circleURI = "ddsi:Circle:0/org.opensplice.demo.ShapeType"
val hbaseURI = "hbase:ishapes?mappingStrategyName=body&operation=CamelHBasePut"
// ...
// Put incoming circles into an HBase table
circleURI unmarshal(cdrData) process { e2d(_, "circle") } to(hbaseURI)
82. iShapes
[Diagram: the full data set flows over DDSI-RTPS into OpenSplice; an in-memory Reader Cache with a (push, _, big) subscription feeds in-memory analytics, and a (_, _, big) data window feeds a real-time analytics display. The OpenSplice Gateway bridges the same data to JMS, XMPP, REST, AMQP, and custom protocols.]
83. Shapes to HBase Tables
struct ShapeType {
  @Key
  string color;
  long x;
  long y;
  long shapesize;
};

create 'ishapes', 'color'

☐ Shapes are mapped into the HBase 'ishapes' table
☐ The row key is the topic name, e.g. 'circle', 'square', etc.
☐ The 'color' column family is used to distinguish between different instances of a given shape
☐ Different samples for a given instance are distinguished using HBase versions
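The mapping above can be sketched with nested Java maps standing in for the HBase table layout (row key → column:qualifier → version → value). This models the storage scheme only; it is not HBase client code, and the sample values are made up:

```java
import java.util.TreeMap;

public class ShapeTableDemo {
    // row key -> column family:qualifier -> version (timestamp) -> value
    static final TreeMap<String, TreeMap<String, TreeMap<Long, String>>> table = new TreeMap<>();

    // Store one ShapeType sample: row = topic name, column = color (the instance key),
    // version = sample timestamp.
    static void put(String topic, String color, long timestamp, String value) {
        table.computeIfAbsent(topic, t -> new TreeMap<>())
             .computeIfAbsent("color:" + color, c -> new TreeMap<>())
             .put(timestamp, value);
    }

    public static void main(String[] args) {
        put("circle", "RED", 1L, "x=10,y=20,size=30");
        put("circle", "RED", 2L, "x=15,y=25,size=30");  // newer sample, same instance
        put("circle", "BLUE", 1L, "x=50,y=60,size=40"); // different instance
        // Two versions of the RED circle instance:
        System.out.println(table.get("circle").get("color:RED").keySet()); // [1, 2]
        // Latest value for the RED circle:
        System.out.println(table.get("circle").get("color:RED").lastEntry().getValue());
    }
}
```

Note how the DDS key field (`color`) becomes the column qualifier and the sample timestamp becomes the HBase version, so the full history of an instance is recoverable from a single row.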
84. Shapes to HBase
85. Versions in HBase
86. Scale Out Techniques
☐ Scale out in OpenSplice DDS is achieved through data partitioning
☐ OpenSplice DDS provides two orthogonal mechanisms for data partitioning:
☐ Data Partitions
☐ Content Filters
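Partition-based matching can be sketched as a simple string-set intersection: a writer and a reader see each other only if they share at least one partition name. DDS also allows wildcard partition expressions, which this sketch omits; `matches` is an illustrative helper, not the DDS API:

```java
import java.util.Set;

public class PartitionDemo {
    // A writer and a reader communicate only when their partition sets intersect.
    static boolean matches(Set<String> writerPartitions, Set<String> readerPartitions) {
        return writerPartitions.stream().anyMatch(readerPartitions::contains);
    }

    public static void main(String[] args) {
        Set<String> writer = Set.of("sensors/north");
        System.out.println(matches(writer, Set.of("sensors/north"))); // true  — shared partition
        System.out.println(matches(writer, Set.of("sensors/south"))); // false — disjoint: no data flows
    }
}
```

This is what makes partitioning a scale-out mechanism: disjoint partitions split the data space so each node only receives, caches, and processes its own share.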
88. Concluding Remarks
☐ OpenSplice DDS provides Big Data applications with:
☐ An Ultra-High Performance In-Memory Big Data Store
☐ Ultra-High Performance Data Distribution
☐ A High Performance Persistent Store
☐ Connectivity to Big Data Stores
☐ OpenSplice DDS is an ideal fit for In-Memory Big Data Analytics applications

Collect | Store | Organize | Analyze | Share