The Data Distribution Service (DDS) is a standard for efficient and ubiquitous data sharing built upon the concept of a strongly typed, distributed data space. Its ability to scale from resource-constrained embedded systems to ultra-large-scale distributed systems has made DDS the technology of choice for applications such as Power Generation, Large-Scale SCADA, Air Traffic Control and Management, Smart Cities, Smart Grids, Vehicles, Medical Devices, Simulation, Aerospace, Defense and Financial Trading.
This two-part webcast provides an in-depth introduction to DDS – the universal data sharing technology. Specifically, we will introduce (1) the DDS conceptual model and data-centric design, (2) DDS data modeling fundamentals, (3) the complete set of C++ and Java APIs, (4) the most important programming, data modeling and QoS idioms, and (5) the integration between DDS and web applications.
After attending this webcast you will understand how to exploit DDS architectural features when designing your next system, how to write idiomatic DDS applications in C++ and Java, and which fundamental patterns you should adopt in your applications.
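The "strongly typed, distributed data space" at the heart of DDS can be illustrated with a toy, in-process sketch. This is plain Python, not any real DDS API: all names (`DataSpace`, `TemperatureReading`) are invented for illustration. The point is the model: writers publish typed, keyed samples into a topic; readers query the space by topic and key, never addressing a writer directly.

```python
from dataclasses import dataclass

@dataclass
class TemperatureReading:      # the topic's data type (strongly typed)
    sensor_id: str             # key field: identifies the instance
    celsius: float

class DataSpace:
    """Toy 'global data space': latest sample per (topic, key)."""
    def __init__(self):
        self._store = {}

    def write(self, topic, sample):
        # writers publish keyed samples into the space
        self._store[(topic, sample.sensor_id)] = sample

    def read(self, topic, key):
        # readers query by topic and key, decoupled from writers
        return self._store.get((topic, key))

space = DataSpace()
space.write("Temperature", TemperatureReading("hall", 21.5))
space.write("Temperature", TemperatureReading("hall", 22.0))  # supersedes prior value
print(space.read("Temperature", "hall").celsius)  # -> 22.0
```

A real DDS implementation adds discovery, a wire protocol, and per-instance histories, but the decoupled, keyed, typed data space is the same core idea.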
DDS is a very powerful technology built around a few simple, orthogonal concepts. Once you understand the core concepts you can get up to speed quickly and start exploiting its full power. On the other hand, if you haven't grasped the key abstractions you may not be able to exploit all the benefits that DDS can bring.
This presentation provides an introduction to the core DDS concepts and illustrates how to program DDS applications. The new C++ and Java APIs will be explained and used throughout the webcast for coding examples, giving you a chance to learn the new APIs from one of their main authors!
The Data Distribution Service for Real-Time Systems (DDS) is an Object Management Group (OMG) standard for publish/subscribe designed to address the needs of a large class of mission- and business-critical distributed real-time systems and systems of systems. The DDS standard was formally adopted in 2004 and, in less than five years from its inception, has experienced swift adoption in a wide variety of application domains. These application domains are characterized by the need to distribute high volumes of data with predictably low latencies, such as Radar Processors, Flying and Land Drones, Combat Management Systems, Air Traffic Management, High-Performance Telemetry, Large-Scale Supervisory Systems, and Automated Stocks and Options Trading. Along with wide commercial adoption, the DDS standard has been recommended and mandated as the technology for real-time data distribution by key administrations worldwide, such as the US Navy, the DoD Information-Technology Standards Registry (DISR), the UK MoD, and EUROCONTROL.
This two-part tutorial will cover most of the key aspects of DDS to ensure that you can proficiently start using it when designing or developing your next system. In brief, this tutorial will get you jump-started with DDS.
Introduced in 2004, the Data Distribution Service (DDS) has been steadily growing in popularity and adoption. Today, DDS is at the heart of a large number of mission- and business-critical systems, such as Air Traffic Control and Management, Train Control Systems, Energy Production Systems, Medical Devices, Autonomous Vehicles, Smart Cities and NASA's Kennedy Space Centre Launch System.
Considering the technological trends toward data-centricity and the current rate of adoption, tomorrow DDS will be at the heart of an incredible number of Industrial IoT systems.
To help you become an expert in DDS and exploit your skills in the growing DDS market, we have designed the DDS in Action webcast series. This series is a learning journey through which you will (1) discover the essence of DDS, (2) understand how to effectively exploit DDS to architect and program distributed applications that perform and scale, (3) learn the key DDS programming idioms and architectural patterns, (4) understand how to characterise DDS performance and configure it for optimal latency/throughput, (5) grow your system to Internet scale, and (6) secure your DDS system.
The DDS specification provides fine-grained control over the real-time behaviour, dependability, and performance of DDS applications by means of a rich set of QoS policies. The challenge for many DDS users is that the specification explains very clearly how each QoS policy controls a very specific aspect of data distribution, yet it provides no hints on how different policies should be composed to control complex properties, such as the consistency model, or to impose end-to-end real-time scheduling decisions. This half-day tutorial fills this gap by providing attendees with (1) an explanation of how the various QoS policies compose, and (2) a series of QoS-composition patterns that can be used to control macro-properties of an application, such as the consistency model.
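One QoS interaction worth knowing before composing policies is the requested/offered (RxO) rule: a reader only matches a writer if, for each RxO policy, what the writer offers is at least as strong as what the reader requests. The policy orderings below reflect DDS semantics; the code itself is an illustrative Python sketch, not any vendor's API.

```python
# "Strength" orderings for two RxO policies, per the DDS QoS model.
RELIABILITY = {"BEST_EFFORT": 0, "RELIABLE": 1}
DURABILITY = {"VOLATILE": 0, "TRANSIENT_LOCAL": 1, "TRANSIENT": 2, "PERSISTENT": 3}

def compatible(offered, requested):
    """A reader matches a writer only if, for each RxO policy,
    the offered level is >= the requested level."""
    return (RELIABILITY[offered["reliability"]] >= RELIABILITY[requested["reliability"]]
            and DURABILITY[offered["durability"]] >= DURABILITY[requested["durability"]])

writer_qos = {"reliability": "RELIABLE", "durability": "TRANSIENT_LOCAL"}
reader_qos = {"reliability": "BEST_EFFORT", "durability": "VOLATILE"}
print(compatible(writer_qos, reader_qos))  # -> True: writer offers more than asked
print(compatible(reader_qos, writer_qos))  # -> False: reader asks for more than offered
```

Composition questions start exactly here: reliability, durability, history depth and ordering jointly determine what consistency model the matched endpoints actually observe.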
SimD is a safe, productive and efficient C++ API for the OMG DDS. This presentation introduces the basic concepts of SimD and guides you through the steps required to write your first SimD application.
Presentation to the Robotics Task Force of the Object Management Group (OMG) introducing the members to the Data Distribution Service (DDS), another OMG-standard technology.
DDS Advanced Tutorial - OMG June 2013 Berlin Meeting, by Jaime Martin Losa
An extended, in-depth tutorial explaining how to fully exploit the standard's unique communication capabilities. Presented at the OMG June 2013 Berlin Meeting.
Users upgrading to DDS from a homegrown solution or a legacy messaging infrastructure often limit themselves to using its most basic publish-subscribe features. This allows applications to take advantage of reliable multicast and other performance and scalability features of the DDS wire protocol, as well as the enhanced robustness of the DDS peer-to-peer architecture. However, applications that do not use DDS's data-centricity miss out on many of its QoS-related, scalability and availability features, such as the KeepLast History Cache, Instance Ownership and Deadline Monitoring. As a consequence, some developers duplicate these features in custom application code, resulting in increased costs, lower performance, and compromised portability and interoperability.
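To make the "KeepLast History Cache" concrete: with KEEP_LAST history, the middleware retains only the most recent N samples per instance (per key), evicting older samples automatically instead of making the application manage that state. A toy Python sketch of the semantics (the class name is invented; real DDS configures this via the History QoS policy):

```python
from collections import deque

class KeepLastCache:
    """Toy per-instance history cache with DDS KEEP_LAST(depth) semantics:
    only the most recent `depth` samples of each key are retained."""
    def __init__(self, depth):
        self._depth = depth
        self._history = {}

    def write(self, key, sample):
        # deque(maxlen=depth) evicts the oldest sample automatically
        self._history.setdefault(key, deque(maxlen=self._depth)).append(sample)

    def read(self, key):
        return list(self._history.get(key, []))

cache = KeepLastCache(depth=2)
for v in (1, 2, 3):
    cache.write("track-42", v)
print(cache.read("track-42"))  # -> [2, 3]: sample 1 was evicted
```

Applications that re-implement this per-key last-value state in custom code are duplicating exactly what the middleware already provides.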
This tutorial will formally define the data-centric publish-subscribe model as specified in the OMG DDS specification and define a set of best-practice guidelines and patterns for the design and implementation of systems based on DDS.
The OMG has recently standardized a UML Profile for DDS. This brief tutorial, which was presented at the OMG RTWS 2009, provides you with an introduction to the standard.
Fundamental to any distributed system are communication patterns: point-to-point, request-reply, transactional queues, and publish-subscribe. Large distributed systems often employ two or more communication patterns. Using a single middleware that supports multiple communication patterns is a very cost-effective way of developing and maintaining large distributed systems. This talk will begin with an introduction to the Data Distribution Service (DDS) – an OMG standard – that supports data-centric publish-subscribe communication for real-time distributed systems. DDS separates state management and distribution from application logic and supports discoverable data models. The talk will then describe how RTI Connext Messaging goes beyond vanilla DDS and implements various communication patterns including request-reply, command-response, and guaranteed delivery. You will also learn how these patterns can be combined to create interesting variations when the underlying substrate is as powerful as DDS. We'll also discuss APIs for creating high-performance applications using the request-reply communication pattern.
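The standard way to layer request-reply on a publish-subscribe substrate is to use a request topic, a reply topic, and a correlation identifier so each requester picks out only its own replies. A minimal in-process Python sketch of that idea (the `Bus` class and topic names are invented stand-ins for the pub/sub substrate, not RTI's API):

```python
import itertools
import queue

class Bus:
    """Toy in-process pub/sub substrate."""
    def __init__(self):
        self._subs = {}
    def subscribe(self, topic, cb):
        self._subs.setdefault(topic, []).append(cb)
    def publish(self, topic, msg):
        for cb in self._subs.get(topic, []):
            cb(msg)

bus = Bus()
_ids = itertools.count()

# service side: subscribe to requests, publish correlated replies
bus.subscribe("Request", lambda m: bus.publish(
    "Reply", {"corr": m["corr"], "result": m["x"] * 2}))

def request(x):
    corr = next(_ids)
    inbox = queue.Queue()
    # accept only the reply whose correlation id matches this request
    bus.subscribe("Reply", lambda m: inbox.put(m) if m["corr"] == corr else None)
    bus.publish("Request", {"corr": corr, "x": x})
    return inbox.get(timeout=1)["result"]

print(request(21))  # -> 42
```

On a DDS substrate the same pattern inherits discovery, reliability and content filtering for free, which is why the talk argues the variations get interesting there.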
By John Breitenbach, RTI Field Applications Engineer
Contents
Introduction to RTI
Introduction to Data Distribution Service (DDS)
DDS Secure
Connext DDS Professional
Real-World Use Cases
RTI Professional Services
This presentation provides an overview of the initial submission to the OMG RFP on DDS Security. The presentation introduces the overall security model proposed for DDS and the protocols.
When bringing any new technology into an enterprise, security is of course a paramount concern. Let's go "under the hood" and examine in detail how to use data encryption in the Azure Storage Service.
Symantec Data Loss Prevention 11 simplifies the detection and protection of intellectual property. Symantec’s market-leading data security suite features Vector Machine Learning, which makes it easier to detect hard-to-find intellectual property, and enhancements to Data Insight that streamline remediation, increasing the effectiveness of an organization’s data protection initiatives.
Geek Sync | The Importance of Data Model Change Management, by IDERA Software
You can watch the replay for this Geek Sync webcast in the IDERA Resource Center: http://ow.ly/nuyN50A5dJi
In today’s development environments, it is of critical importance to ensure that data models and databases are aligned to the user stories and tasks being created. Data architects must proactively collaborate with DBAs and designers, and take the initiative to track data model changes and correlate them against development and database updates.
Join IDERA and Joy Ruff in this webinar to learn about these trends and considerations for implementing model change management in your enterprise.
About Joy Ruff: Joy is the product marketing manager for ER/Studio, IDERA’s flagship data modeling and architecture platform, plus several database management and security products. With nearly 25 years of experience in high-tech hardware and software, Joy enjoys communicating product value to customers.
OpenSplice DDS enables seamless, timely, scalable and dependable data sharing between distributed applications and network-connected devices. Its technical and operational benefits have propelled adoption across multiple industries, such as Defence and Aerospace, SCADA, Gaming, Cloud Computing, Automotive, etc.
If you want to learn about OpenSplice DDS or discover some of its advanced features, this webcast is for you!
In this two-part webcast we will cover all the aspects of architecting and developing OpenSplice DDS systems. We will look into Quality of Service policies, data selectors, concurrency and scalability concerns.
We will present the brand-new, recently finalized C++ and Java APIs for DDS, including examples of how these can be used with C++11 features. We will show how increasingly popular functional languages such as Scala can be used to efficiently and elegantly exploit the massive hardware parallelism provided by modern multi-core processors.
Finally, we will present some OpenSplice-specific extensions for dealing with very high volumes of data – several million messages per second.
This presentation provides 10 reasons why you should choose OpenSplice DDS as your OMG DDS-compliant technology. It analyzes standards compliance, technology, service, use cases and pedigree.
The OMG DDS standard has witnessed very strong adoption as the distribution middleware of choice for a large class of mission- and business-critical systems, such as Air Traffic Control, Automated Trading, SCADA, Smart Energy, etc.
The main reasons for choosing DDS lie in its efficiency, scalability, high availability and configurability, achieved through its more than 20 QoS policies. Yet all of these properties come at the cost of a relaxed consistency model and no strong guarantees over global invariants.
As a result, many architects have to devise, by themselves – assuming the DDS primitives as a foundation – the correct algorithms for classical problems such as fault-detection, leader election, consensus, distributed mutual exclusion, atomic multicast, distributed queues, etc.
In this presentation we will explore DDS-based distributed algorithms for many classical, yet fundamental, problems in distributed systems. For simplicity, we'll start with algorithms that ignore the presence of failures. Then we will (1) demonstrate how these algorithms can be extended to deal with failures, and (2) introduce Paxos as one of the fundamental algorithms for consensus and atomic broadcast.
Finally, we'll show how these classical algorithms can be used to implement useful extensions of the DDS semantics, such as multi-writer / multi-reader distributed queues.
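As a taste of the failure-free starting point mentioned above, consider leader election: once discovery gives every node the same view of the participant set (which DDS built-in discovery provides), each node can apply the same deterministic rule locally and all nodes agree without exchanging extra messages. A toy Python sketch under that assumption (node names are invented):

```python
def elect_leader(participants):
    """Toy failure-free leader election over a discovered participant set:
    every node applies the same deterministic rule (lowest id wins),
    so all nodes reach the same decision with no extra message rounds."""
    return min(participants)

# each node independently discovers the same participant set...
discovered = {"node-7", "node-3", "node-9"}
# ...and deterministically agrees on the same leader
print(elect_leader(discovered))  # -> 'node-3'
```

Handling failures is where the real work starts: views can diverge while a crash is being detected, which is exactly what motivates the liveliness-based failure detection and Paxos material in the presentation.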
Desktop, Embedded and Mobile Apps with Vortex Café, by Angelo Corsaro
In the past few years we have been experiencing an amazing proliferation of mobile and embedded platforms. Contemporary developers are increasingly faced with the challenge of writing applications that can run on desktop, mobile (e.g. Android), and low-cost embedded platforms (e.g. Raspberry Pi and BeagleBone). This is causing a rejuvenated interest in the Java platform as the means to achieve the holy grail of write once, run everywhere. With the availability of Java environments supporting almost any kind of device in several different form factors, the missing element of the picture is an effective way of enabling communication between them.
Vortex Café is a pure Java implementation of the OMG Data Distribution Service (DDS) that enables seamless, efficient and timely data sharing across many-core machines, mobile and embedded devices.
This presentation will (1) introduce the main abstractions provided by Vortex Café, (2) provide an overview of its architecture and explain how it exploits Staged Event-Driven Architectures to optimize its runtime depending on the target hardware, (3) provide an overview of the typical performance delivered by Vortex Café, and (4) get you started developing distributed Java and Scala applications with Vortex Café.
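The Staged Event-Driven Architecture (SEDA) mentioned above decomposes processing into stages, each with its own event queue and worker, so that each stage can be sized and scheduled independently for the target hardware. A minimal Python sketch of the idea (the `stage` helper and the two example stages are invented for illustration, not Vortex Café internals):

```python
import queue
import threading

def stage(fn, inbox, outbox):
    """One SEDA stage: an event queue plus a worker thread draining it."""
    def run():
        while True:
            item = inbox.get()
            if item is None:          # poison pill: shut the stage down
                outbox.put(None)
                return
            outbox.put(fn(item))
    threading.Thread(target=run, daemon=True).start()

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
stage(lambda x: x * 2, q1, q2)        # stage 1 (e.g. decode)
stage(lambda x: x + 1, q2, q3)        # stage 2 (e.g. dispatch)

for v in (1, 2, 3):
    q1.put(v)
q1.put(None)

results = []
while (item := q3.get()) is not None:
    results.append(item)
print(results)  # -> [3, 5, 7]
```

The queues decouple the stages, so on a many-core machine each stage runs concurrently, while on a small embedded target the same structure can be collapsed onto fewer threads.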
This presentation introduces Vortex by means of a running example. Throughout the presentation we show how Vortex makes it easy to build a micro-blogging platform à la Twitter.
Vortex is a platform that provides seamless, ubiquitous, efficient and timely data sharing across mobile, embedded, desktop, cloud and web applications. Today Vortex is the enabling technology at the core the most innovative Internet of Things and Industrial Internet applications, such as Smart Cities, Smart Grids, and Smart Traffic.
This two-part tutorial (1) introduces the key concepts of Vortex, (2) gets you started with using Vortex to efficiently exchange data across mobile, embedded, desktop, cloud and web applications, and (3) provides a series of best practices, patterns and idioms to get the best out of Vortex.
The only prerequisite to fully exploit this tutorial is a basic understanding of Java, C++ and JavaScript. Some knowledge of Scala and CoffeeScript is a plus.
This presentation explains the foundations of Stream Processing and shows how elegant Stream Processing Architectures can be built by using DDS and CEP in synergy.
Building Real-Time Web Applications with Vortex-Web – Angelo Corsaro
The Real-Time Web is rapidly growing and as a consequence an increasing number of applications require soft-real time interactions with the server-side as well as with peer web applications. In addition, real-time web technologies are experiencing swift adoption in traditional systems as a means of providing portable and ubiquitously accessible thin client applications.
In spite of this trend, few high level communication frameworks exist that allow efficient and timely data exchange between web applications as well as with the server-side and the back-end system. Vortex Web is one of the first technologies to bring the powerful OMG Data Distribution Service (DDS) abstractions to the world of HTML5 / JavaScript applications. With Vortex Web, HTML5 / JavaScript applications can seamlessly and efficiently share data in a timely manner amongst themselves as well as with any other kind of device or system that supports the standard DDS Interoperability wire protocol (DDSI).
This presentation will (1) introduce the key abstractions provided by Vortex Web, (2) provide an overview of its architecture and explain how Vortex Web uses Web Sockets and Web Workers to provide low latency and high throughput, and (3) get you started developing real-time web applications.
Connected Mobile and Web Applications with Vortex – Angelo Corsaro
The widespread availability of high-end mobile devices such as smartphones, tablets and phablets, along with the availability of browser-enabled devices, has established these platforms as one of the main targets for user interfaces. As a result, mobile and web applications now need to be easily “connected” to the rest of the system.
This presentation will (1) showcase how the Vortex Data Sharing Platform can be effectively and productively used to create connected mobile and web applications, and (2) take you through the steps required to use Vortex in mobile and web applications.
OpenSplice DDS v6 is a major leap forward with respect to the state of the art of DDS implementations; v6 is the first DDS implementation on the market to introduce (1) multiple deployment options, namely daemon-based and library-based, (2) multiple programming paradigms, such as Pub/Sub, Distributed Object Caches and Client/Server, and (3) universal connectivity to over 80 communication technologies via the new OpenSplice Gateway. All of this is combined with an Open Source model, an active community and a strong technology ecosystem.
This presentation introduces the coordination model at the foundation of Vortex and explains its foundational concepts and features. It then provides an overview of the various technological elements that implement the model and how they are deployed in IoT applications such as connected vehicles, smart cities, smart grids and connected medical devices.
Reactive architectures are emerging as the way to build systems that are responsive, scalable, resilient and event-driven. In other words, systems that deliver highly responsive user experiences with a real-time feel, and that are ready to be deployed on multicore and cloud computing architectures. The Reactive Manifesto (see http://www.reactivemanifesto.org/) captures the key traits that characterize reactive architectures.
The Data Distribution Service (DDS) embodies the principles enumerated by the Reactive Manifesto and provides a very good platform for building reactive systems. In this webcast I will (1) introduce the key principles of Reactive Architectures, (2) explain the DDS features that are essential to build reactive systems, and (3) introduce some programming techniques that remove inversion of control while keeping applications event-driven.
Vortex Lite brings DDS connectivity to resource-constrained embedded systems. As a first-class citizen of the Vortex platform, it can be used for peer-to-peer fog/edge computing between embedded devices and gateways, as well as for very efficient device-to-cloud data sharing. Vortex Lite has been designed with efficiency and portability in mind. This makes it the fastest DDS implementation on the market on enterprise-grade hardware and the most lightweight on embedded targets. Likewise, its architecture structurally facilitates porting across computing and networking stacks.
This presentation introduces Vortex Lite, provides an overview of its architecture and design choices, and reports on its performance. The webcast will also explain the role played by Lite within the Vortex family and how it can be used for both device-to-device (fog/edge computing) and device-to-cloud data sharing.
Building and Scaling Internet of Things Applications with Vortex Cloud – Angelo Corsaro
Cloud Messaging is one of the most critical elements at the core of any Internet of Things and Industrial Internet application. The degree of efficiency and connectivity provided by the cloud messaging technology usually drives the overall efficiency and reach of the entire system.
Vortex Cloud is a Cloud Messaging implementation that targets public as well as private clouds and enables embedded, mobile, web, enterprise and cloud applications to efficiently and securely share data across the Internet. Vortex Cloud has been designed from the ground up to address ease of connectivity, wire efficiency, scalability, elasticity and security.
This presentation will (1) introduce the Vortex Cloud architecture and explain how it provides elasticity and fault-tolerance, (2) explain the different deployment models supported for public-cloud, private-cloud and no-cloud environments, and (3) get you started developing a simple Internet of Things application.
Computer Science - Harvard and Von Neumann Architecture
The aspects of both architectures are highlighted throughout the presentation, along with their advantages and disadvantages.
PrismTech's Vortex is a platform that provides seamless, ubiquitous, efficient and timely data sharing across mobile, embedded, desktop, cloud and web applications. Today Vortex is the enabling technology at the core of the most innovative Internet of Things and Industrial Internet applications, such as Smart Cities, Smart Grids, and Smart Traffic.
This two part tutorial presentation (1) introduces the key concepts of Vortex, (2) gets you started with using Vortex to efficiently exchange data across mobile, embedded, desktop, cloud and web applications, and (3) provides a series of best practices, patterns and idioms to get the best out of Vortex.
RUSTing is not a tutorial on the Rust programming language.
I decided to create the RUSTing series as a way to document and share programming idioms and techniques.
From time to time I’ll draw parallels with Haskell and Scala; some familiarity with one of them is useful but not indispensable.
Best practices for long-term support and security of the device-tree – Alison Chaiken
Considerations in the design of Linux kernel device-tree source, maintenance of source repositories, and helpful tools for validation, source examination and over-the-air updates, particularly for vehicular and IVI applications.
How Secure Is Your Container? ContainerCon Berlin 2016 – Phil Estes
A conference talk at ContainerCon Europe in Berlin, Germany, given on October 5th, 2016. This is a slightly modified version of my talk first used at Docker London in July 2016.
A talk given at Docker London on Wednesday, July 20th, 2016. This talk is a fast-paced overview of the potential threats faced when containerizing applications, married to a quick run-through of the "security toolbox" available in the Docker engine via Linux kernel capabilities and features enabled by OCI's libcontainer/runc and Docker.
A video recording of this talk is available here: https://skillsmatter.com/skillscasts/8551-container-security
Groovy Domain Specific Languages - SpringOne2GX 2012 – Guillaume Laforge
Paul King, Andrew Eisenberg and Guillaume Laforge present on the implementation of Domain-Specific Languages in Groovy at the SpringOne2GX 2012 conference in Washington DC.
OSDC 2016 - Interesting things you can do with ZFS by Allan Jude & Benedict Reu... – NETWAYS
ZFS is the next-generation filesystem originally developed at Sun Microsystems. Available under the CDDL, it uniquely combines volume manager and filesystem into a powerful storage management solution for Unix systems, regardless of how big or small your storage requirements are. ZFS offers features, for free, that are usually found only in costly enterprise storage solutions. This talk will introduce ZFS and give an overview of its features, such as snapshots and rollback, compression, deduplication and replication. We will demonstrate how these features can make a difference in the datacenter, giving administrators the power and flexibility to adapt to changing storage requirements.
Real world examples of ZFS being used in production for video streaming, virtualization, archival, and research are shown to illustrate the concepts. The talk is intended for people considering ZFS for their data storage needs and those who are interested in the features ZFS provides.
Containers for Science and High-Performance Computing – Dmitry Spodarets
Within this talk, we will explore how Singularity liberates non-privileged users and host resources (such as interconnects, resource managers, file systems, accelerators, etc.), allowing users to take full control to set up and run applications in their native environments. We will also see how Singularity combines software packaging models with minimalistic containers to create very lightweight application bundles that can simply be executed and contained completely within their environment, or used to interact directly with the host file systems at native speeds. A Singularity application bundle can be as simple as a single binary application or as complicated as an entire workflow, and is as flexible as you need.
DataEngConf: Uri Laserson (Data Scientist, Cloudera) Scaling up Genomics with... – Hakka Labs
New DNA sequencing technologies are revolutionizing the life sciences by generating extremely large data sets. Traditional tools for processing this data will have difficulty scaling to the coming deluge of genomics data. We discuss how the innovations of Hadoop and Spark are solving core problems that enable scientists to address questions that were previously out of reach.
Useful Linux and Unix commands handbook – Wave Digitech
This article provides practical examples of the most frequently used commands in Linux / UNIX. Helpful for engineers, trainee engineers and software developers; handy notes for all Linux & Unix commands.
This was the opening presentation of the Zenoh Summit in June 2022. The presentation goes through the motivations that led to the design of the zenoh protocol and provides an introduction to its core concepts. This is the place to start to understand why you should care about zenoh and the ways in which it disrupts existing technologies.
The recording for this presentation is available at https://bit.ly/3QOuC6i
Zenoh is a rapidly growing Eclipse project that unifies data in motion, data at rest and computations. It elegantly blends traditional pub/sub with geo-distributed storage, queries and computations, while retaining a level of time and space efficiency that is well beyond any of the mainstream stacks. This presentation will provide an introduction to Eclipse Zenoh along with a crisp explanation of the challenges that motivated the creation of this project. We will go through a series of real-world use cases that demonstrate the advantages brought by Zenoh in enabling and optimising typical edge scenarios and in simplifying the development of distributed applications at any scale.
Data Decentralisation: Efficiency, Privacy and Fair Monetisation – Angelo Corsaro
A presentation given at the European H-Cloud Conference to motivate decentralisation as a means to improve energy efficiency, privacy, and the opportunity to monetise your digital footprint.
zenoh: zero overhead pub/sub store/query compute – Angelo Corsaro
Unifies data in motion, data in use, data at rest and computations.
It carefully blends traditional pub/sub with distributed queries, while retaining a level of time and space efficiency that is well beyond any of the mainstream stacks.
It provides built-in support for geo-distributed storages and distributed computations
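To make the key-addressing model concrete, here is a toy matcher (an illustrative sketch, not the actual zenoh implementation or API) for zenoh-style key expressions, where keys are '/'-separated chunks, '*' matches exactly one chunk and '**' matches any number of chunks, including zero:

```python
def keyexpr_matches(expr: str, key: str) -> bool:
    """Toy matcher for zenoh-style key expressions."""
    e, k = expr.split("/"), key.split("/")

    def match(i, j):
        if i == len(e):
            return j == len(k)
        if e[i] == "**":
            # '**' absorbs zero or more chunks of the key
            return any(match(i + 1, j2) for j2 in range(j, len(k) + 1))
        if j == len(k):
            return False
        if e[i] == "*" or e[i] == k[j]:
            return match(i + 1, j + 1)
        return False

    return match(0, 0)

print(keyexpr_matches("demo/**", "demo/sensors/temp"))   # True
print(keyexpr_matches("demo/*/temp", "demo/room1/temp")) # True
print(keyexpr_matches("demo/*", "demo/sensors/temp"))    # False
```

A subscriber declared on `demo/**` would thus see data published on any key under `demo`, which is the mechanism behind zenoh's unified treatment of pub/sub, storage and queries over the same key space.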
zenoh -- the ZEro Network OverHead protocol – Angelo Corsaro
This presentation introduces the key ideas behind zenoh -- an Internet scale data-centric protocol that unifies data-sharing between any kind of device including those constrained with respect to the node resources, such as computational resources and power, as well as the network.
Fog computing aims at providing horizontal, system-level abstractions to distribute computing, storage, control and networking functions closer to the user, along a cloud-to-thing continuum. Whilst fog computing is increasingly recognised as the key paradigm at the foundation of the Consumer and Industrial Internet of Things (IoT), most initiatives on fog computing focus on extending cloud infrastructure. As a consequence, these infrastructures fall short in addressing the heterogeneity and resource constraints characteristic of fog computing environments.
fog⌀5 (read as fog O-five or fog OS) is an Eclipse IoT project that is building a fog computing infrastructure from first principles. In other words, fog⌀5 has been designed to address the challenges induced by fog computing in terms of heterogeneity, decentralisation, resource constraints, geographical scale and security.
This webcast will introduce fog⌀5, motivate its architecture and building blocks, and provide a demonstration of fog⌀5 provisioning applications that span from the cloud to the things.
The video recording for this presentation is available at https://www.youtube.com/watch?v=Osl3O5DxHF8
Making the right data available at the right time, at the right place, securely and efficiently, whilst promoting interoperability, is a key need for virtually any IoT application. After all, IoT is about leveraging access to data – data that used to be unavailable – in order to improve the ability to react, manage, predict and preserve a cyber-physical system.
The Data Distribution Service (DDS) is a standard for interoperable, secure, and efficient data sharing, used at the foundation of some of the most challenging Consumer and Industrial IoT applications, such as Smart Cities, Autonomous Vehicles, Smart Grids, Smart Farming, Home Automation and Connected Medical Devices.
In this presentation we will (1) introduce the Eclipse Cyclone DDS project, (2) provide a quick intro that will get you started with Cyclone DDS, (3) present a few Cyclone DDS use cases, and (4) share the Cyclone DDS development road-map.
Fog Computing is a paradigm that complements and extends cloud computing by providing an end-to-end virtualisation of computing, storage and communication resources. As such, fog computing allows applications to be transparently provisioned and managed end-to-end. This presentation first motivates the need for fog computing, then introduces fog⌀5, the first and only Open Source fog computing platform!
Data Sharing in Extremely Resource Constrained Environments – Angelo Corsaro
This presentation introduces XRCE, a new protocol for very efficiently distributing data in resource-constrained (power, network, computation, and storage) environments. XRCE greatly improves the wire efficiency of existing protocols and in many cases provides higher-level abstractions.
Vortex II -- The Industrial IoT Connectivity Standard – Angelo Corsaro
The large majority of commercial IoT platforms target consumer applications and fall short in addressing the requirements characteristic of Industrial IoT. Vortex has always focused on addressing the challenges characteristic of Industrial IoT systems, and with the 2.4 release it sets a new standard!
This presentation will (1) introduce the new features in Vortex 2.4, (2) explain how Vortex 2.4 addresses the requirements of Industrial Internet of Things applications better than any other existing platform, and (3) showcase how innovative companies are using Vortex for building leading-edge Industrial Internet of Things applications.
Fog computing has emerged as a new paradigm for architecting IoT applications that require greater scalability, performance and security. This talk will motivate the need for Fog Computing, explain what it is, and show how it differs from other initiatives in Telco such as Mobile/Multi-Access Edge Computing.
Introduced in 2004, the Data Distribution Service (DDS) has been steadily growing in popularity and adoption. Today, DDS is at the heart of a large number of mission- and business-critical systems, such as Air Traffic Control and Management, Train Control Systems, Energy Production Systems, Medical Devices, Autonomous Vehicles, Smart Cities and NASA’s Kennedy Space Centre Launch System.
Considering the technological trends toward data-centricity and the rate of adoption, tomorrow DDS will be at the heart of an incredible number of Industrial IoT systems.
To help you become an expert in DDS and exploit your skills in the growing DDS market, we have designed the DDS in Action webcast series. This series is a learning journey through which you will (1) discover the essence of DDS, (2) understand how to effectively exploit DDS to architect and program distributed applications that perform and scale, (3) learn the key DDS programming idioms and architectural patterns, (4) understand how to characterise DDS performance and configure it for optimal latency/throughput, (5) grow your system to Internet scale, and (6) secure your DDS system.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Essentials of Automations: The Art of Triggers and Actions in FME – Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AI – Vladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 – Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
DevOps and Testing slides at DASA Connect – Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what Testing in DevOps is. We also had a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
The Art of the Pitch: WordPress Relationships and Sales – Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers, without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
A tale of scale & speed: How the US Navy is enabling software delivery from l... – sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Epistemic Interaction - tuning interfaces to provide information for AI support – Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... – DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Generative AI Deep Dive: Advancing from Proof of Concept to Production – Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
3. Tip #0
Domains, Partitions, Topics

Copyright 2010, PrismTech – All Rights Reserved.

- All DDS applications publish and subscribe data on a Partition belonging to a given Domain (e.g. Domain 123)
- Partitions (e.g. Partition "Telemetry") are defined by means of strings and can be matched with regular expressions
- If not explicitly specified, the default partition in the default domain is automatically chosen
4. Tip #0
Partitions Matching

- Partitions are defined by means of strings and can be matched with regular expressions

[Figure: a Domain containing partitions such as building-1.floor-1.room-1, building-1.floor-1.room-111, building-1.floor-3.room-11, building-1.floor-10.room-100 and building-1.floor-15.room-51, matched against the expressions "building-1.floor-3.room-51", "building-1.floor-1.room-*" and "building-1.floor-*.room-11?"]
6. Tip #1
Choosing a Domain

- In OpenSplice DDS v5.x a domain is selected by:
  - Defining the OSPL_URI environment variable
  - Passing a URI pointing at the domain XML configuration file at OpenSplice startup
  - Passing the URI of the configuration file as a string parameter of the DomainParticipantFactory::create_participant method
  - Passing the name of the domain, as specified in the configuration file, to the DomainParticipantFactory::create_participant method
  - Passing the empty string, representing the default domain, to the DomainParticipantFactory::create_participant method
7. Tip #1
Choosing a Domain

Defining Domain Configuration at Startup:

$ ospl start file:///some/path/myospl.xml

Domain specified via URI:

dpf.create_participant(
    "file:///some/path/myospl.xml",
    qos,
    listener,
    mask);

Default Domain:

dpf.create_participant(
    "",
    qos,
    listener,
    mask);

Domain specified via a Domain Name:

dpf.create_participant(
    "MyDomainNameAsSpecifiedOnTheXMLFile",
    qos,
    listener,
    mask);
8. Tip #1
Domains on OpenSplice v6.x

- The DDS specification did not originally define the type of the DomainId; as a result, vendors were free to choose their own types
- As the DDSI/RTPS specification defines the DomainId as an integer, it makes sense to make the DDS API uniform and use an integer DomainId
- As a result, starting with OpenSplice DDS v6.x, the domain is selected by specifying its associated id:

dpf.create_participant(
    15,
    qos,
    listener,
    mask);
9. Tip #2
Start OpenSplice First

- OpenSplice v5.x runs by default in a shared-memory + daemon configuration
- As such, if you forget to start the infrastructure, your application will fail at start-up
- Thus, always remember to run:

$ ospl start
10. Tip #3
Shared Memory Size

- The OpenSplice DDS shared memory size is defined in its configuration file. The size defined by the default configuration file is 10 MBytes
- Beware that different OSs have different limitations w.r.t. the maximum shared memory segment that can be allocated
- If you want to go beyond the OS limits you need to change the configuration of your kernel
11. Tip #3
Linux

- The default value for the maximum shared memory segment is 32 MBytes
- This default can be changed in several ways:
  (1) Adding this line to your /etc/rc.d/rc.local file:
      echo "your_max_shared_memory_size" > /proc/sys/kernel/shmmax
  (2) Changing the settings for the sys-limits (save the changes in /etc/sysctl.conf to maintain them across reboots):
      $ sysctl -w kernel.shmmax=yourMaxValue
12. Tip #3
Windows

- The default maximum size for shared memory segments on Windows is 2 GB
- To extend it, say to 3 GB, add the /3GB switch to boot.ini as shown below:

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows NT Workstation Version 4.00" /3GB
14. Tip #4
Topic Types & Keys

- Topic types can define some of their attributes as keys
- Yet, even when a topic type does not define a key, the keylist directive has to be provided -- just to tell the IDL compiler that this is a topic type
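As an illustration, a keyless topic type might be declared like this in IDL (ShutdownCommand is a hypothetical type invented here; the keylist directive is present but names no fields):

```
// Hypothetical keyless topic type: the keylist directive is still
// required so the IDL compiler treats this struct as a topic type,
// but it lists no key fields.
struct ShutdownCommand {
    long reason;
};
#pragma keylist ShutdownCommand
```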
25. Tip #7
Default Lifecycle Settings

- The WriterDataLifecycle QoS controls when instances are disposed. By default DDS disposes unregistered instances (autodispose_unregistered_instances = true)
- Automatically disposing an instance is perhaps not what you want to do when terminating your application, as this would remove persistent data for the given instance!
27. Tip #8
Understand the QoS Model

- DDS defines 22 QoS policies that can be applied to communication entities to control their local as well as end-to-end behaviour

[Figure: the policies grouped into RxO QoS and Local QoS -- DURABILITY, LIVELINESS, DESTINATION ORDER, TIME-BASED FILTER, HISTORY, OWNERSHIP, PARTITION, RESOURCE LIMITS, LIFESPAN, OWNERSHIP STRENGTH, PRESENTATION, RELIABILITY, DW LIFECYCLE, DR LIFECYCLE, USER DATA, DEADLINE, TOPIC DATA, GROUP DATA, LATENCY BUDGET, TRANSPORT PRIO, ENTITY FACTORY]

- Most of the QoS policies that control an end-to-end property follow the so-called Requested vs. Offered (RxO) model, based on which the QoS requested by the consumer should not exceed the QoS offered by the producer
28. Tip #9
History + Reliability Interplay

- The History QoS controls the number of samples that are maintained by DDS for a given topic
- DDS can keep the last n samples (KeepLast(n)) or keep all samples until they are taken by the application (KeepAll)
- The History setting has an impact on the reliability of data delivery as perceived by the application. Thus beware of your settings!
29.-31. Tip #9
History + Reliability Interplay

struct Counter {
    int cID;
    int count;
};
#pragma keylist Counter cID

QoS Settings:
    Reliability = Reliable
    History = KeepLast(1)
    History Depth = 1 on both DataWriter and DataReader (DDS default)

[Figure (animation over three slides): a DataWriter publishes samples for instances cID = 1, 2, 3 while a DataReader consumes them over the network. Because both caches keep only the last sample per instance, newly written samples overwrite older ones before the reader fetches them: even with Reliability = Reliable, the application may only see the latest sample of each instance]
32. Tip #10
Define Resource Limits

- DDS provides a QoS policy that allows you to control the amount of resources used by DataReaders and DataWriters
- By default, DDS does not impose any limit, with the result that if you have a buggy application or an asymmetry in your system you might end up consuming an unbounded amount of memory -- and, in the OpenSplice DDS case, filling the shared memory
- To avoid this problem, always set appropriate resource limits for your application by defining:
  - max_samples
  - max_instances
  - max_samples_per_instance
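A pseudocode sketch of setting the three limits on a DataReader follows; the field names follow the DCPS ResourceLimits QoS policy, while the limit values and the surrounding variables (sub, topic, listener, mask) are illustrative assumptions:

```
// Pseudocode sketch -- field names as in the DCPS ResourceLimits QoS;
// the limit values are illustrative only.
DataReaderQos qos;
sub->get_default_datareader_qos(qos);
qos.resource_limits.max_samples              = 4096;
qos.resource_limits.max_instances            = 64;
qos.resource_limits.max_samples_per_instance = 64;
DataReader* dr = sub->create_datareader(topic, qos, listener, mask);
```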
34.-36. Tip #11
Read vs. Take

struct Counter {
    int cID;
    int count;
};
#pragma keylist Counter cID

QoS Settings:
    History = KeepLast(k)

- DataReader::read iterates over the available samples/instances
- Samples are not removed from the local cache as a result of a read
- Read samples can be read again, by accessing the cache with the proper options (more later)

[Figure (animation over three slides): the DataReader cache holds samples for instances cID = 1, 2, 3; successive reads mark more and more samples as read, but all of them remain in the cache]
37.-39. Tip #11
Read vs. Take

struct Counter {
    int cID;
    int count;
};
#pragma keylist Counter cID

QoS Settings:
    History = KeepLast(k)

- DataReader::take iterates over the available samples/instances
- Taken samples are removed from the local cache as a result of a take

[Figure (animation over three slides): successive takes progressively drain the samples from the DataReader cache]
40. Tip #12
Sample, Instance, View State

- Along with data samples, DataReaders provide state information (via the SampleInfo) allowing you to detect relevant transitions in the life-cycle of data as well as of data writers
- Sample State (READ | NOT_READ): determines whether a sample has already been read by this DataReader or not
- Instance State (ALIVE, NOT_ALIVE_NO_WRITERS, NOT_ALIVE_DISPOSED): determines whether (1) writers exist for the specific instance, (2) no matched writers are currently available, or (3) the instance has been disposed
- View State (NEW, NOT_NEW): determines whether this is the first sample of a new (or re-born) instance

[Figure: a DataReader cache with History Depth = 2, showing the SampleInfo delivered alongside each sample]
41. Tip #13
Beware of Invalid Samples!

- For each data sample accessed via a read or take, DDS provides you with a SampleInfo
- The SampleInfo contains meta-information about the sample, such as its timestamp, lifecycle information, etc., but most importantly tells you whether the data is valid or not!
- Data is not valid when the sample you are receiving notifies you of things like an instance being unregistered or disposed
42. Tip #14
Reading Only "Fresh" Data

dr.read(samples,
        infos,
        LENGTH_UNLIMITED,        // read all available samples
        NOT_READ_SAMPLE_STATE,
        ANY_VIEW_STATE,
        ALIVE_INSTANCE_STATE);
43. Tip #15
Reading All Data

dr.read(samples,
        infos,
        LENGTH_UNLIMITED,        // read all available samples
        ANY_SAMPLE_STATE,
        ANY_VIEW_STATE,
        ALIVE_INSTANCE_STATE);
44. Tip #16
Getting Everything

dr.read(samples,
        infos,
        LENGTH_UNLIMITED,        // read all available samples
        ANY_SAMPLE_STATE,
        ANY_VIEW_STATE,
        ANY_INSTANCE_STATE);

NOTE: As explained in Tip #13, in this case you might get invalid data samples, and thus have to check their validity via the SampleInfo.valid_data attribute
45. Tip #17
Status vs. Read Condition

- Both a StatusCondition and a ReadCondition can be used to wait for data to be available on a DataReader
- The main difference is that a ReadCondition allows you to set the exact SAMPLE, VIEW and INSTANCE states for which the condition should trigger, while the StatusCondition triggers whenever a sample is received
46. Tip #17
Status vs. Read Condition -- Guidelines

- If all you really care about is a condition that triggers for the states ANY_SAMPLE_STATE, ANY_VIEW_STATE and ANY_INSTANCE_STATE, then use a StatusCondition, as this is more efficient than a ReadCondition
- If you are interested in a condition that triggers for a specific set of states, then use a ReadCondition
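A pseudocode sketch of the second case -- a ReadCondition that triggers only for fresh, alive data -- attached to a WaitSet; names follow the classic DCPS C++ API, and dr is assumed to be an existing DataReader:

```
// Pseudocode sketch: a ReadCondition restricted to not-yet-read samples
// of alive instances, waited on through a WaitSet.
ReadCondition* cond = dr->create_readcondition(NOT_READ_SAMPLE_STATE,
                                               ANY_VIEW_STATE,
                                               ALIVE_INSTANCE_STATE);
WaitSet waitset;
waitset.attach_condition(cond);

ConditionSeq active;
waitset.wait(active, DURATION_INFINITE);   // blocks until the condition triggers
```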
48. Tip #18
Return Memory Loans

- The DataReader read/take operations loan memory to the application when the length of the containers passed for storing samples and infos is zero
- In this case, the loaned memory must be returned via the return_loan operation!
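The loan pattern looks roughly as follows in pseudocode (names follow the classic DCPS C++ API; CounterSeq and process are illustrative assumptions):

```
// Pseudocode sketch of the loan pattern: zero-length sequences make
// read/take loan internal buffers to the application.
CounterSeq samples;
SampleInfoSeq infos;
dr->take(samples, infos, LENGTH_UNLIMITED,
         ANY_SAMPLE_STATE, ANY_VIEW_STATE, ANY_INSTANCE_STATE);
for (unsigned i = 0; i < samples.length(); ++i) {
    if (infos[i].valid_data) {
        process(samples[i]);           // hypothetical application callback
    }
}
dr->return_loan(samples, infos);       // give the loaned buffers back
```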
50. Tip #19
Beware of String Ownership

- The DDS C++ API takes ownership of the strings you pass
- As a result, you need to understand when it is necessary to "duplicate" a string
- To this end, DDS provides the DDS::string_dup call to facilitate this task

subQos.partition.name.length(1);
subQos.partition.name[0] = DDS::string_dup(read_partition);
52. Tip #20
Read, Write, Ask

- Read the manuals and, if possible, the specification
- Write your own code examples
- Don't be shy to ask questions on the OpenSplice mailing list
53. :: Connect with Us ::

- opensplice.com
- opensplice.org
- forums.opensplice.org
- opensplicedds@prismtech.com
- crc@prismtech.com
- sales@prismtech.com
- @acorsaro
- @prismtech
- youtube.com/opensplicetube
- slideshare.net/angelo.corsaro