This document provides a summary of a masterclass on building distributed real-time systems using the Data Distribution Service (DDS). The class covers DDS concepts and technology, including runtime services, development tools, and standards. It discusses how DDS enables a data-centric model and global data space to support high-performance, scalable, and reliable real-time systems that interact directly with the physical world.
Data-Centric and Message-Centric System Architecture, by Rick Warren
Presentation from April 2010 summarizing the principles of data-centric design and how they apply to DDS technology. Message-centric design is presented by way of contrast.
The Data Distribution Service for Real-Time Systems (DDS) is an Object Management Group (OMG) standard for publish/subscribe designed to address the needs of a large class of mission- and business-critical distributed real-time systems and systems of systems. The DDS standard was formally adopted in 2004 and, in less than five years from its inception, experienced swift adoption in a wide variety of application domains. These application domains are characterized by the need to distribute high volumes of data with predictable low latencies, such as Radar Processors, Flying and Land Drones, Combat Management Systems, Air Traffic Management, High Performance Telemetry, Large Scale Supervisory Systems, and Automated Stocks and Options Trading. Along with wide commercial adoption, the DDS standard has been recommended and mandated as the technology for real-time data distribution by key administrations worldwide, such as the US Navy, the DoD Information Technology Standards Registry (DISR), the UK MoD, and EUROCONTROL.
This two-part tutorial will cover most of the key aspects of DDS to ensure that you can proficiently start using it for designing or developing your next system. In brief, this tutorial will get you jump-started with DDS.
View On-Demand http://ecast.opensystemsmedia.com/403
Repeat Success, Not Mistakes; Use DDS Best Practices to Design Your Complex Distributed Systems
RTI Connext DDS is a powerful tool that lets you efficiently build and integrate complex distributed systems like no other technology – if you use it right. Learn how to get the most out of DDS and how to avoid common pitfalls when developing your system. We've developed RTI Connext best practices over many years and hundreds of customer projects. In this webinar, you will learn how to apply these best practices to use RTI Connext DDS in ways that will enable your system to scale effectively with optimal performance, while avoiding missteps that cause poor performance, non-determinism and scalability problems.
Making the right data available at the right time, in the right place, securely and efficiently, whilst promoting interoperability, is a key need for virtually any IoT application. After all, IoT is about leveraging access to data that used to be unavailable in order to improve the ability to react, manage, predict and preserve a cyber-physical system.
The Data Distribution Service (DDS) is a standard for interoperable, secure, and efficient data sharing, used at the foundation of some of the most challenging Consumer and Industrial IoT applications, such as Smart Cities, Autonomous Vehicles, Smart Grids, Smart Farming, Home Automation and Connected Medical Devices.
In this presentation we will (1) introduce the Eclipse Cyclone DDS project, (2) provide a quick intro that will get you started with Cyclone DDS, (3) present a few Cyclone DDS use cases, and (4) share the Cyclone DDS development road-map.
By John Breitenbach, RTI Field Applications Engineer
Contents
Introduction to RTI
Introduction to Data Distribution Service (DDS)
DDS Secure
Connext DDS Professional
Real-World Use Cases
RTI Professional Services
The Data Distribution Service (DDS) is a standard for efficient and ubiquitous data sharing built upon the concept of a strongly typed, distributed data space. Its ability to scale from resource-constrained embedded systems to ultra-large-scale distributed systems has made DDS the technology of choice for applications such as Power Generation, Large Scale SCADA, Air Traffic Control and Management, Smart Cities, Smart Grids, Vehicles, Medical Devices, Simulation, Aerospace, Defense and Financial Trading.
This two-part webcast provides an in-depth introduction to DDS – the universal data sharing technology. Specifically, we will introduce (1) the DDS conceptual model and data-centric design, (2) DDS data modeling fundamentals, (3) the complete set of C++ and Java APIs, (4) the most important programming, data modeling and QoS idioms, and (5) the integration between DDS and web applications.
After attending this webcast you will understand how to exploit DDS architectural features when designing your next system, how to write idiomatic DDS applications in C++ and Java, and which fundamental patterns you should adopt in your applications.
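The "strongly typed distributed data space" these abstracts describe can be illustrated without any DDS library. The toy Python sketch below (all names are illustrative and not part of any real DDS API) models a topic as a keyed collection of typed data objects, where writing a sample updates the instance identified by its key rather than appending to a message queue:

```python
# Toy sketch of a DDS-style "global data space" (illustrative only;
# real DDS APIs differ and no DDS library is used here).
from dataclasses import dataclass

@dataclass
class VehiclePosition:          # a strongly typed topic data type
    vehicle_id: str             # key field: identifies the instance
    lat: float
    lon: float

class Topic:
    """A named, keyed collection of data objects: the data space holds
    the latest sample per instance, not a queue of opaque messages."""
    def __init__(self, name, data_type, key_field):
        self.name, self.data_type, self.key_field = name, data_type, key_field
        self._instances = {}    # key value -> latest sample

    def write(self, sample):
        assert isinstance(sample, self.data_type)   # strong typing
        self._instances[getattr(sample, self.key_field)] = sample

    def read(self, key):
        return self._instances.get(key)

positions = Topic("VehiclePosition", VehiclePosition, "vehicle_id")
positions.write(VehiclePosition("v1", 37.4, -122.1))
positions.write(VehiclePosition("v1", 37.5, -122.2))  # updates instance "v1"
positions.write(VehiclePosition("v2", 40.7, -74.0))
```

This captures the essential contrast with message-centric middleware: readers observe the current state of each keyed instance rather than parsing a stream of messages.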
Presentation to the Robotics Task Force of the Object Management Group (OMG) introducing the members to the Data Distribution Service (DDS), another OMG-standard technology.
Even though the U.S. Department of Defense budget is shrinking and the country's military footprint worldwide is receding, the need for the warfighter to have accurate and actionable intelligence has never been more critical. Data from Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) systems such as radar, image-processing payloads on Unmanned Aerial Vehicles, and more will be used and fused together to provide commanders with real-time situational awareness. Each system will also need to embrace open architectures and the latest commercial standards to meet the DoD's performance, size, and cost requirements. This e-cast will discuss how embedded defense suppliers are meeting these challenges.
Applying MBSE to the Industrial IoT: Using SysML with Connext DDS and Simulink, by Gerardo Pardo-Castellote
The benefits of Model-Based Systems Engineering (MBSE) and SysML are well established. As a result, users want to apply MBSE to larger and more complex Industrial IoT applications.
Industrial IoT applications can be very challenging: They are distributed. They deploy components across nodes spanning from small Devices to Edge computers to the Cloud. They often need mathematically-complex software. Moreover, they have strict requirements in terms of performance, robustness, and security.
SysML can model requirements, system components, behavior, interactions, and more. However, SysML does not provide a robust way to connect components running across different computers, especially when the security and quality of service of individual data-flows matter. SysML also does not provide all the tools needed to model and generate the (mathematical) code for complex dynamic systems.
A new “DDS + Simulink” MagicDraw SysML plugin has been developed to address these needs. It brings to MagicDraw users the capabilities of Connext DDS from RTI and Simulink from MathWorks:
The OMG Data Distribution Service (DDS) is a secure and QoS-aware connectivity “databus”. DDS is considered the core connectivity framework for Software Integration and Autonomy by the Industrial Internet Consortium. Connext DDS is the leading implementation of the DDS standard, proven in thousands of critical deployments.
Simulink is a tool for modeling and implementing the code needed for complex dynamic systems. It is widely deployed in many application domains including Automotive, Robotics, and Control Systems.
The new MagicDraw plugin defines a “DDS profile” for SysML that can model a distributed application connected using the DDS databus. The plugin can also generate the artifacts that configure the DDS databus (Topics, Data Types, QoS, etc.) and the adapters to Simulink and native code (e.g. C++ or Java).
By integrating three best-of-class technologies (SysML, DDS and Simulink), it is now possible to do MBSE for a wide range of Industrial IoT applications.
DDS Advanced Tutorial - OMG June 2013 Berlin Meeting, by Jaime Martin Losa
An extended, in-depth tutorial explaining how to fully exploit the standard's unique communication capabilities. Presented at the OMG June 2013 Berlin Meeting.
Users upgrading to DDS from a homegrown solution or a legacy messaging infrastructure often limit themselves to its most basic publish-subscribe features. This allows applications to take advantage of reliable multicast and other performance and scalability features of the DDS wire protocol, as well as the enhanced robustness of the DDS peer-to-peer architecture. However, applications that do not use DDS's data-centricity do not take advantage of many of its QoS-related scalability and availability features, such as the KeepLast History Cache, Instance Ownership and Deadline Monitoring. As a consequence, some developers duplicate these features in custom application code, resulting in increased costs, lower performance, and compromised portability and interoperability.
This tutorial will formally define the data-centric publish-subscribe model as specified in the OMG DDS specification and define a set of best-practice guidelines and patterns for the design and implementation of systems based on DDS.
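Two of the data-centric features named above – the KeepLast history cache and instance ownership – can be sketched in miniature. The following toy Python model is illustrative only and not a real DDS API: each instance keeps a bounded history of recent samples, and writes from a writer weaker than the current owner are ignored:

```python
# Toy model of KEEP_LAST history and exclusive ownership (illustrative;
# real DDS middleware implements these as QoS policies, not this code).
from collections import defaultdict, deque

class KeepLastCache:
    def __init__(self, depth):
        self._history = defaultdict(lambda: deque(maxlen=depth))
        self._owner_strength = {}          # instance key -> owning strength

    def write(self, key, value, strength=0):
        # Exclusive ownership: reject writers weaker than the current owner.
        if strength < self._owner_strength.get(key, strength):
            return False
        self._owner_strength[key] = strength
        self._history[key].append(value)   # samples beyond `depth` drop off
        return True

    def read(self, key):
        return list(self._history[key])

cache = KeepLastCache(depth=2)
for v in (1, 2, 3):
    cache.write("sensor-A", v, strength=5)   # only the last 2 are kept
cache.write("sensor-A", 99, strength=1)      # weaker writer: rejected
```

With the middleware managing this state, application code never has to re-implement "latest N values per instance" or "primary/backup writer" logic itself – which is exactly the duplication the tutorial warns against.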
One of the major trends in data warehousing/data engineering is the transition from click-based ETL tools to using code for defining data pipelines. Nowadays, the vast majority of projects either start with a set of simple shell/bash scripts or with platforms such as Luigi or Apache Airflow, with the latter clearly becoming the dominant player. In the past 6 years, Project A also followed this approach when building data warehouses for more than 20 of its portfolio companies, and we are now open-sourcing the underlying infrastructure (https://github.com/mara). Basically, it is a lightweight, opinionated Airflow, with a focus on transparency and complexity reduction. In this talk, I will guide you through some of the design decisions behind the platform and some general learnings for setting up successful data engineering teams.
Talend Interview Questions and Answers | Talend Online Training | Talend Tuto..., by Edureka!
( Talend Training: https://www.edureka.co/talend-for-big-data)
This Edureka tutorial on Talend Interview Questions will help you learn the most frequently asked Talend questions and their answers, which will set you apart in the interview process. This video covers the following topics:
1. Talend MCQ
2. General Talend Questions
3. Talend for Data Integration Questions
4. Talend for Big Data Questions
Designing and Building a Graph Database Application – Architectural Choices, ..., by Neo4j
Ian takes a close look at design and implementation strategies you can employ when building a Neo4j-based graph database solution, including architectural choices, data modelling, and testing.
Property graph vs. RDF Triplestore comparison in 2020, by Ontotext
This presentation goes all the way from an intro to what graph databases are, to a table comparing RDF vs. property graphs, plus two diagrams presenting the market circa 2020.
Communication Patterns Using Data-Centric Publish/Subscribe, by Sumant Tambe
Fundamental to any distributed system are communication patterns: point-to-point, request-reply, transactional queues, and publish-subscribe. Large distributed systems often employ two or more communication patterns. Using a single middleware that supports multiple communication patterns is a very cost-effective way of developing and maintaining large distributed systems. This talk will begin with an introduction to the Data Distribution Service (DDS) – an OMG standard – that supports data-centric publish-subscribe communication for real-time distributed systems. DDS separates state management and distribution from application logic and supports discoverable data models. The talk will then describe how RTI Connext Messaging goes beyond vanilla DDS and implements various communication patterns including request-reply, command-response, and guaranteed delivery. You will also learn how these patterns can be combined to create interesting variations when the underlying substrate is as powerful as DDS. We’ll also discuss APIs for creating high-performance applications using the request-reply communication pattern.
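The idea of layering request-reply on top of publish-subscribe, as the talk describes, can be sketched with an in-process bus and correlation IDs. The Python below is a toy illustration only, not the RTI Connext Messaging API; topic names and classes are invented for the example:

```python
# Toy sketch: request-reply built on a publish-subscribe substrate by
# pairing a request topic with a reply topic and correlating by ID.
import itertools

class Bus:
    """Minimal in-process pub-sub bus: topic name -> subscriber callbacks."""
    def __init__(self):
        self._subs = {}
    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)
    def publish(self, topic, message):
        for cb in self._subs.get(topic, []):
            cb(message)

class Requester:
    """Publishes on <service>/request, matches replies by correlation ID."""
    def __init__(self, bus, service):
        self._bus, self._service = bus, service
        self._ids = itertools.count()
        self._replies = {}
        bus.subscribe(service + "/reply", self._on_reply)
    def _on_reply(self, msg):
        self._replies[msg["corr_id"]] = msg["payload"]
    def request(self, payload):
        corr_id = next(self._ids)
        self._bus.publish(self._service + "/request",
                          {"corr_id": corr_id, "payload": payload})
        return self._replies.pop(corr_id)   # synchronous in this toy model

def serve_square(bus, service):
    """A replier that answers each request with the payload squared."""
    def on_request(msg):
        bus.publish(service + "/reply",
                    {"corr_id": msg["corr_id"], "payload": msg["payload"] ** 2})
    bus.subscribe(service + "/request", on_request)

bus = Bus()
serve_square(bus, "square")
client = Requester(bus, "square")
```

The correlation ID is what lets many outstanding requests share one reply topic; a real middleware would add discovery, timeouts, and reliability on top of this skeleton.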
Watch the replay: http://ecast.opensystemsmedia.com/392
Looking to build a complex distributed system based on the latest and greatest technical innovations? The Data Distribution Service (DDS) standard from the Object Management Group (OMG) provides the software infrastructure for a diverse range of systems, from small networking appliances to massive wind farms. Since its adoption in 2004, the DDS standard and its implementations have evolved to address the needs of this broad application base. Attend this presentation to learn about 7 critical DDS innovations that will significantly improve the development of your next distributed system:
Type extensibility to support long-term system evolution
Rich communication patterns to simplify development and integration
Small footprint DDS implementation for resource-constrained platforms
Certification for safety-critical applications including avionics
Flexible, scalable and efficient security
Web Integration Service (HTTP / REST) interface
Integration with visual development environments: Simulink, Artisan Studio and LabVIEW
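Of these innovations, type extensibility is the easiest to illustrate: it lets a reader built against an older version of a data type interoperate with a writer using a newer one. The toy Python sketch below is illustrative only (real DDS implementations handle this via the DDS-XTypes specification, not code like this): unknown fields are ignored and missing fields take defaults:

```python
# Toy sketch of type extensibility (illustrative; not the DDS-XTypes API).
OLD_TYPE = {"id": 0, "speed": 0.0}          # the reader's view of the type

def deserialize(sample, type_defaults):
    """Map an incoming sample onto the reader's (possibly older) type."""
    out = dict(type_defaults)               # missing fields take defaults
    for field, value in sample.items():
        if field in out:                    # unknown fields are ignored
            out[field] = value
    return out

new_sample = {"id": 7, "speed": 3.5, "heading": 90}   # newer, extended type
old_sample = {"id": 8}                                 # older, smaller type
```

This tolerance in both directions is what allows long-lived systems to evolve their data types without a flag-day upgrade of every participant.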
Speaker: Bert Farabaugh, Worldwide Field Applications Engineering Manager
Bert Farabaugh is Worldwide Field Applications Engineering Manager at RTI. He works with customers to identify and develop solutions and design patterns tailored for their projects. Bert has over 16 years of experience developing networking protocols and communications design patterns from scratch for robotics and embedded systems. He has been a field applications engineer for the past 10 years, with hundreds of different applications in his portfolio.
The OMG has recently standardized a UML Profile for DDS. This brief tutorial, which was presented at the OMG RTWS 2009, provides you with an introduction to the standard.
High-level introduction to the OMG Data Distribution Service (DDS) standard and how it provides values beyond what is possible with traditional messaging middleware such as JMS or AMQP.
Introduction to Puppet Enterprise - Jan 30, 2019, by Puppet
If you're new to Puppet Enterprise, this is the webinar for you. You'll learn why thousands of companies rely on Puppet to automate the delivery and operation of their software, and see it in action with a live demo.
We'll cover how to use Puppet Enterprise to:
Discover what you have using Puppet Discovery
Orchestrate changes to infrastructure and applications
Continually enforce your desired state and remediate any unexpected changes
Get real-time visibility and reporting to prove compliance
Automatically build, test and promote Puppet code changes using Continuous Delivery for Puppet Enterprise
The Zero-ETL Approach: Enhancing Data Agility and Insight, by Safe Software
In the ever-evolving landscape of data management, Zero-ETL is an approach that is reshaping how businesses handle and integrate their data. This webinar explores Zero-ETL, a paradigm shift from the traditional Extract, Transform, Load (ETL) process, offering a more streamlined, efficient, and real-time data integration method.
We will begin with an introduction to the concept of Zero-ETL, including how it allows direct access to data in its native environment and real-time data transformation, providing up-to-date information with significantly reduced data redundancy.
Next, we'll take you through several demonstrations showing how Zero-ETL can deliver real-time data and enable the free movement of data between systems. We will also discuss the various tools that support all aspects of Zero-ETL, providing attendees with an understanding of how they can adopt this innovative approach in their organizations.
Lastly, the session will conclude with an interactive Q&A segment, allowing participants to gain deeper insights into how Zero-ETL can be tailored to their specific business needs and how they can get started today.
Join us to discover how Zero-ETL can elevate your organization's data strategy.
“Lights Out” Configuration using Tivoli Netcool AutoDiscovery Tools, by Antonio Rolle
Review why a CMDB is essential to and is the foundation of your BSM strategy
Outline the known challenges that require planning at the outset of a CMDB initiative
Drill down into the approach and lessons learned in the initial stages of a CMDB rollout for one of the largest financial institutions in North America
Introduction to Puppet Enterprise 10/03/2018, by Puppet
Register today and learn more about Puppet Enterprise
Join Puppet on Wednesday, 3 October 2018 at 9:00 a.m. PDT for our upcoming webinar, Introduction to Puppet Enterprise.
If you're new to Puppet Enterprise, this is the webinar for you. You'll learn why thousands of companies rely on Puppet to automate the delivery and operation of their software and see it in action with a live demo.
We'll cover how to use Puppet Enterprise to:
Gain situational awareness and drive change with confidence
Orchestrate changes to infrastructure and applications
Continually enforce your desired state and remediate any unexpected changes
Get real-time visibility and reporting to prove compliance
We will also explore our new products, Puppet Discovery and Puppet Pipelines, cover what’s new in 2018.1, and leave plenty of time to answer your questions.
Featured Speakers: Abir Majumdar, Sales Engineer, and Anthony Rodriguez, Sales Development.
Designing a Scalable Twitter - Patterns for Designing Scalable Real-Time Web ..., by Nati Shalom
Twitter is a good example of a next-generation real-time web application, but building such an application imposes challenges such as handling an ever-growing volume of tweets and responses, as well as a large number of concurrent users who continually *listen* for tweets from the users (or topics) they follow. During this session we will review some of the key design principles for addressing these challenges, including *NoSQL* alternatives and blackboard patterns. We will use Twitter as a use case while learning how to apply these principles to any real-time web application.
Why Should Nonprofits Care About Cloud Computing, by TechSoup Global
What is cloud computing and why should you understand it? This presentation defines the different types of cloud computing, discusses how it is impacting nonprofits, outlines some criteria for use, and mentions some challenges of which you should be aware.
Monitoring as an entry point for collaboration, by Julien Pivotto
In recent years, we have been building complex stacks made from lots of components, all of it backed by multiple teams. This talk will present how you can use monitoring to look at the business side and have everyone looking at the same dashboards, making cooperation a reality.
Spirent: Datum User Experience Analytics System, by Sailaja Tennati
Data Services:
Whether in the lab or the live network, Datum efficiently measures the user experience of data services. Evaluate user experience with a unified approach across all major mobile OS platforms and access technologies including LTE, 3G, and Wi-Fi.
Click here to find out more about Spirent Communication's User Experience Evaluation System Suite:
http://www.spirent.com/Products/User_Experience_Evaluation
The term cloud computing is being used more and more, but what is it and why should you understand it? In this free webinar we will explain what cloud computing means, define the different types, discuss how it is impacting nonprofits and libraries, and outline some criteria for use. The challenges of using the “cloud” will be discussed, as well as whether cloud computing will simplify your life and reduce software and IT staffing costs.
Hear from Anna Jaeger, Co-Director, GreenTech at TechSoup Global, and Peter Campbell, Nonprofit Technologist at Earthjustice, who will help you understand this topic in order to better communicate with your consultants, staff and board. This webinar is applicable for any size organization and ideal for decision makers who need to communicate about cloud computing with tech consultants, and who are interested in making more informed technology decisions.
At some point, organizations of all sizes will find some advantage in implementing cloud computing. It is inevitable, and we may ask ourselves what is actually holding us back from implementing it today: perhaps fear, a lack of resources, or the most common issue of convincing higher management of the underlying benefits. This presentation can be used to address the last issue.
Legacy monitoring and troubleshooting tools can limit visibility and control over your infrastructure and applications. Organizations must find monitoring and troubleshooting tools that can scale with the volume, variety and velocity of data generated by today’s complex applications in order to keep pace with business demands. Our upcoming webinar will discuss how Sumo Logic helped Scripps Networks harness cloud-native machine data analytics to improve application quality and reliability on AWS. Sumo Logic allows IT operations teams to visualize and monitor workloads in real-time, identify issues and expedite root-cause analysis across the AWS environment.
Join us to learn:
• How to migrate from traditional on-premises data centers to AWS with confidence
• How to improve the monitoring and troubleshooting of modern applications
• How Scripps Networks, a leading content developer, used Sumo Logic to optimize their transition to AWS
Who should attend: Developers, DevOps Director/Manager, IT Operations Director/Manager, Director of Cloud/Infrastructure, VP of Engineering
DDS Security Version 1.2 was adopted in 2024. This revision strengthens support for long-running systems, adding new cryptographic algorithms, certificate revocation, and hardening against DoS attacks.
From its first use case that enabled distributed communications for US Navy ships to the autonomous systems of today, the DDS family of standards has enabled new generations of applications to run reliably, rapidly and securely, regardless of distance or scale.
To commemorate the 20-year milestone, the DDS Foundation is creating presentations that highlight the 14 specifications in the DDS family of standards, along with selected real-world use cases.
This presentation introduces some of the original use-cases and experiments, along with a brief history of the Standards.
A recorded video of the presentation is available at this URL
https://www.brighttalk.com/webcast/12231/602966
Introduction to DDS: Context, Information Model, Security, and Applications - Gerardo Pardo-Castellote
Introduction to the Data-Distribution Service (DDS): Context and Applications.
This 50 minute presentation summarizes the main features of DDS including the information model, the type system, and security as well as how typical applications use DDS.
It was presented at the Canadian Government Information Day in Ottawa on September 2018.
There is also a video of this presentation at https://www.youtube.com/watch?v=6iICap5G7rw.
This Object Management Group (OMG) RFP solicits submissions identifying and defining mechanisms to achieve integration between DDS infrastructures and TSN networks. The goal is to provide all artifacts needed to support the design, deployment and execution of DDS systems over TSN networks.
The DDS-TSN integration specification sought shall realize the following functionality:
● Define mechanisms that provide the information required for TSN-enabled networks to calculate any network schedules needed to deploy a DDS system.
● Identify those parts of the set of the IEEE TSN standards that are relevant for a DDS-TSN integration and indicate how the DDS aspects are mapped onto, or related to, the associated TSN aspects. Examples include TSN-standardized information models for calculating system-wide schedules and configuring network equipment.
● Identify and specify necessary extensions to the [DDSI-RTPS] and [DDS-SECURITY] specifications, if any, to allow DDS infrastructures to use TSN-enabled networks as their transport while maintaining interoperability between different DDS implementations.
● Identify and specify necessary extensions to the DDS and DDS-XML specifications, if any, to allow declaration of TSN-specific properties or quality of service attributes.
A NEW ARCHITECTURE PROPOSAL TO INTEGRATE OPC UA, DDS & TSN.
Suppliers and end users need a complete solution to address the complexity of future industrial automation systems. These systems require:
• Interoperability to allow devices and independent software applications from multiple suppliers to work together seamlessly
• Extensibility to incorporate future large or intelligent systems
• Performance and flexibility to handle challenging deployments and use cases
• Robustness to guarantee continuity of operation despite partial failures
• Integrity and fine-grained security to protect against cyber attacks
• Widespread support for an industry standard
This document proposes a new technical architecture to build this future. The design combines the best of the OPC Unified Architecture (OPC UA), Data Distribution Service (DDS), and Time-Sensitive Networking (TSN) standards. It will connect the factory floor to the enterprise, sensors to cloud, and real-time devices to work cells. This proposal aims to define and standardize the architecture to unify the industry.
Technical overview of the DDS for Extremely Resource-Constrained Environments (DDS-XRCE) specification.
This specification was adopted by the OMG in March 2018.
Demonstrates interoperability of 5 independent products that implement the Data-Distribution Service (DDS) Security Standard
(https://www.omg.org/spec/DDS-SECURITY/).
Tests the following implementations: RTI Connext DDS, Twin Oaks Computing CoreDX DDS, Kongsberg InterComm DDS, ADLink Vortex DDS Cafe, and Object Computing Inc OpenDDS.
This demonstration was performed at the OMG Meeting held in Reston, VA, USA in March 2018
One of the most important challenges that system designers and system integrators face when deploying complex Industrial Internet of Things (IoT) systems is the integration of different connectivity solutions and standards. At RTI, we are constantly working to accelerate the Industrial IoT revolution. Over the past few years, we have developed standard connectivity gateways to ensure that DDS systems can easily integrate with other core connectivity frameworks.
This year, we developed a standard OPC UA/DDS Gateway, a bridge between two of the most well-known Industrial IoT connectivity frameworks. We are excited to announce that the gateway was just adopted by the Object Management Group (OMG).
In this webinar, we will dive deeper into the importance of choosing a baseline core connectivity standard for the Industrial IoT and how to ensure all system components are fully integrated. Attendees will also learn:
How the OPC UA/DDS Gateway specification was developed and how it works
How to leverage the Gateway to enable DDS and OPC UA applications to interoperate transparently
About the first standard connectivity gateway released with RTI Web Integration Service in Connext DDS 5.3
Gateways are a critical component of system interoperability and we will keep working to help companies accelerate Industrial IoT adoption.
This is the Beta 1 version of the OPC UA / DDS Gateway specification released by the Object Management Group in March 2018.
This specification defines a standard, vendor-independent, configurable gateway that enables interoperability and information exchange between systems that use DDS and systems that use OPC UA.
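As a purely illustrative sketch of the gateway idea (in Python; every class, method, and node/topic name below is invented for this example and is not the API defined by the OPC UA/DDS Gateway specification), a configurable mapping table can forward value changes from an OPC UA-style address space into DDS-style topic samples:

```python
# Hypothetical stand-ins for the two sides of the bridge. A real gateway
# would use an OPC UA client SDK and a DDS DataWriter; these in-memory
# classes only model the configurable node-to-topic mapping.
class FakeOpcUaAddressSpace:
    def __init__(self):
        self.nodes = {}          # node id -> latest value
        self.subscribers = []    # data-change callbacks

    def write(self, node_id, value):
        self.nodes[node_id] = value
        for callback in self.subscribers:
            callback(node_id, value)

class FakeDdsDataWriter:
    def __init__(self):
        self.published = []      # (topic, sample) pairs, in order

    def write(self, topic, sample):
        self.published.append((topic, sample))

class OpcUaToDdsGateway:
    """Forwards monitored OPC UA nodes onto DDS topics via a mapping table."""
    def __init__(self, address_space, writer, node_to_topic):
        self.writer = writer
        self.node_to_topic = node_to_topic
        address_space.subscribers.append(self.on_data_change)

    def on_data_change(self, node_id, value):
        topic = self.node_to_topic.get(node_id)
        if topic is not None:     # unmapped nodes are simply not bridged
            self.writer.write(topic, {"value": value})
```

In this sketch the mapping table plays the role of the gateway's configuration: only nodes listed in it are republished, which mirrors the "configurable" aspect of the specification.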
Data Distribution Service (DDS) is a family of standards from the Object Management Group (OMG) that provide connectivity, interoperability, and portability for Industrial Internet, cyber-physical, and mission-critical applications.
The DDS connectivity standards cover Publish-Subscribe (DDS), Service Invocation (DDS-RPC), Interoperability (DDS-RTPS), Information Modeling (DDS-XTYPES), Security (DDS-SECURITY), as well as programming APIs for C, C++, Java and other languages.
The OPC Unified Architecture (OPC UA) is an information exchange standard for Industrial Automation and related systems created by the OPC Foundation. The OPC UA standard provides an Addressing and Information Model for Data Access, Alarms, and Service invocation layered over multiple transport-level protocols such as Binary TCP and Web-Services.
DDS and OPC UA exhibit significant deployment similarities:
• Both enable independently developed applications to interoperate even when those applications come from different vendors, use different programming languages, or run on different platforms and operating systems.
• Both have significant traction within Industrial Automation systems.
• Both define standard protocols built on top of the TCP/UDP/IP Internet stacks.
The two technologies may coexist within the same application domains; however, while there are solutions that bridge between DDS and OPC UA, these are based on custom mappings and cannot be relied upon to work across vendors and products.
This is the DDS-XRCE 1.0 Beta specification adopted by the OMG March 2018.
The purpose of DDS-XRCE is to enable resource-constrained devices to participate in DDS communication, while at the same time allowing those devices to be disconnected for long periods of time but still be discoverable by other DDS applications.
DDS-XRCE defines a wire protocol, the DDS-XRCE protocol, to be used between an XRCE Client and XRCE Agent. The XRCE Agent is a DDS Participant in the DDS Global Data Space. The DDS-XRCE protocol allows the client to use the XRCE Agent as a proxy in order to produce and consume data in the DDS Global Data Space.
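The proxy relationship can be illustrated with a minimal sketch (Python; the class names, the request tuple format, and the in-memory dictionary standing in for the Global Data Space are all invented for this example and are not the DDS-XRCE wire protocol):

```python
class XrceAgent:
    """Stands in the DDS Global Data Space on behalf of constrained clients."""
    def __init__(self):
        self.global_data_space = {}   # topic -> latest sample (toy model)

    def handle(self, request):
        # A real agent would decode DDS-XRCE messages; here a request is
        # just an (operation, topic, payload) tuple.
        op, topic, payload = request
        if op == "WRITE":
            self.global_data_space[topic] = payload
            return ("OK", None)
        if op == "READ":
            return ("DATA", self.global_data_space.get(topic))

class XrceClient:
    """A resource-constrained device: no DDS stack, only talks to its agent."""
    def __init__(self, agent):
        self.agent = agent

    def publish(self, topic, payload):
        return self.agent.handle(("WRITE", topic, payload))

    def read(self, topic):
        return self.agent.handle(("READ", topic, None))[1]
```

The point of the sketch is the division of labor: the client carries no discovery or DDS protocol state, while the agent participates in the data space on its behalf.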
Demonstrates interoperability of 3 independent products that implement the Data-Distribution Service (DDS) Security Standard
(https://www.omg.org/spec/DDS-SECURITY/).
Tests the following implementations: RTI Connext DDS, Twin Oaks Computing CoreDX DDS, and Kongsberg InterComm DDS.
This specification, DDS-XTypes version 1.2, provides the following additional facilities to DDS [DDS] implementations and users:
* Type System. The specification defines a model of the data types that can be used for DDS Topics. The type system is formally defined using UML. The Type System is defined in section 7.2 and its subsections. The structural model of this system is defined in the Type System Model in section 7.2.2. The framework under which types can be modified over time is summarized in section 7.2.3, “Type Extensibility and Mutability.” The concrete rules under which the concepts from 7.2.2 and 7.2.3 come together to define compatibility in the face of such modifications are defined in section 7.2.4, “Type Compatibility.”
* Type Representations. The specification defines the ways in which types described by the Type System may be externalized such that they can be stored in a file or communicated over a network. The specification adds additional Type Representations beyond the one (IDL [IDL41]) already implied by the DDS specification. Several Type Representations are specified in the subsections of section 7.3. These include IDL (7.3.1), XML (7.3.2), XML Schema (XSD) (7.3.3), and TypeObject (7.3.4).
* Data Representation. The specification defines multiple ways in which objects of the types defined by the Type System may be externalized such that they can be stored in a file or communicated over a network. (This is also commonly referred to as “data serialization” or “data marshaling.”) The specification extends and generalizes the mechanisms already defined by the DDS Interoperability specification [RTPS]. The specification includes Data Representations that support data type evolution, that is, allow a data type to change in certain well-defined ways without breaking communication. Two Data Representations are specified in the subsections of section 7.4. These are Extended CDR (7.4.1, 7.4.2, and 7.4.3) and XML (7.4.4).
* Language Binding. The specification defines multiple ways in which applications can access the state of objects defined by the Type System. The submission extends and generalizes the mechanism currently implied by the DDS specification (“Plain Language Binding”) and adds a Dynamic Language Binding that allows applications to access data without compile-time knowledge of their types. The specification also defines an API to define and manipulate data types programmatically. Two Language Bindings are specified in the subsections of section 7.5. These are the Plain Language Binding and the Dynamic Language Binding.
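To make the data-representation idea concrete, here is a minimal sketch of CDR-style primitive alignment in Python. This is not the normative Extended CDR encoding; the little-endian layout, the helper names, and the optional appended member (standing in for a well-defined type evolution) are simplifying assumptions for illustration only:

```python
import struct

def cdr_align(buf, alignment):
    # CDR-style rule: each primitive is aligned to its own size,
    # counted from the start of the serialized body.
    pad = (-len(buf)) % alignment
    return buf + b"\x00" * pad

def serialize_member_struct(s_data, extra=None):
    # Little-endian encoding of: struct MemberStruct { short sData; }
    buf = cdr_align(b"", 2) + struct.pack("<h", s_data)
    if extra is not None:
        # A hypothetical appended 32-bit member an evolved type might add;
        # a reader of the old type could ignore the trailing bytes, which
        # is the kind of well-defined change that keeps types compatible.
        buf = cdr_align(buf, 4) + struct.pack("<i", extra)
    return buf
```

The alignment padding is what makes the encoding predictable across languages and platforms: every peer computes the same offsets from the type definition alone.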
This specification defines the Security Model and Service Plugin Interface (SPI) architecture for compliant DDS implementations. The DDS Security Model is enforced by the invocation of these SPIs by the DDS implementation. This specification also defines a set of builtin implementations of these SPIs.
* Authentication Service Plugin. Provides the means to verify the identity of the application and/or user that invokes operations on DDS. Includes facilities to perform mutual authentication between participants and establish a shared secret.
* AccessControl Service Plugin. Provides the means to enforce policy decisions on what DDS related operations an authenticated user can perform. For example, which domains it can join, which Topics it can publish or subscribe to, etc.
* Cryptographic Service Plugin. Implements (or interfaces with libraries that implement) all cryptographic operations including encryption, decryption, hashing, digital signatures, etc. This includes the means to derive keys from a shared secret.
* Logging Service Plugin. Supports auditing of all DDS security-relevant events.
* Data Tagging Service Plugin. Provides a way to add tags to data samples.
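The plugin architecture can be sketched as a set of abstract interfaces that a DDS implementation would invoke. This Python sketch is illustrative only; the method names and the sample token/allow-list policies are invented for the example and are not the normative SPI operations:

```python
from abc import ABC, abstractmethod

# Illustrative stand-ins for two of the SPIs; a DDS implementation would
# invoke these at participant creation and on each endpoint operation.
class AuthenticationPlugin(ABC):
    @abstractmethod
    def validate_identity(self, credentials) -> bool:
        """Verify the identity of an application joining the domain."""

class AccessControlPlugin(ABC):
    @abstractmethod
    def check_create_datawriter(self, identity, topic) -> bool:
        """Decide whether an authenticated identity may publish a topic."""

# Toy builtin implementations, showing how the SPI lets users swap policies
# without touching the middleware that calls them.
class StaticTokenAuthentication(AuthenticationPlugin):
    def __init__(self, valid_tokens):
        self.valid_tokens = set(valid_tokens)
    def validate_identity(self, credentials) -> bool:
        return credentials in self.valid_tokens

class AllowListAccessControl(AccessControlPlugin):
    def __init__(self, allowed_topics):
        self.allowed_topics = set(allowed_topics)
    def check_create_datawriter(self, identity, topic) -> bool:
        return topic in self.allowed_topics
```

The design point mirrors the specification: the middleware enforces the security model by calling the interfaces, while the concrete behavior (certificates, governance documents, crypto algorithms) lives entirely in the plugins.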
This document specifies the OMG Interface Definition Language (IDL). IDL is a descriptive language used to define data types and interfaces in a way that is independent of the programming language or operating system/processor platform.
The IDL specifies only the syntax used to define the data types and interfaces. It is normally used in connection with other specifications that further define how these types/interfaces are utilized in specific contexts and platforms.
This is the formal version 1.0 of the DDS Security specification released September 2016. OMG document number formal/2016-08-01.
DDS-Security defines the Security Model and Service Plugin Interface (SPI) architecture for compliant DDS implementations.
The DDS Security Model is enforced by the invocation of these SPIs by the DDS implementation. This specification also defines a set of builtin implementations of these SPIs.
* The specified builtin SPI implementations enable out-of-the-box security and interoperability between compliant DDS applications.
* The use of SPIs allows DDS users to customize the behavior and technologies that the DDS implementations use for Information Assurance, specifically customization of Authentication, Access Control, Encryption, Message Authentication, Digital Signing, Logging and Data Tagging.
This specification is a response to the OMG RFP "eXtremely Resource Constrained Environments DDS (DDS-XRCE)".
It defines a DDS-XRCE Service based on a client-server protocol between a resource-constrained, low-powered device (the client) and an Agent (the server) that enables the device to communicate with a DDS network and publish and subscribe to topics in a DDS domain. The specification's purpose and scope is to ensure that applications based on different vendors' implementations of the DDS-XRCE Service are compatible and interoperable.
This is the Joint submission by RTI, TwinOaks, and eProsima. Updated September 2017, OMG document number mars/2017-09-18.
DDS - The Proven Data Connectivity Standard for the Industrial IoT (IIoT) - Gerardo Pardo-Castellote
The next wave of Industrial Internet applications will connect machines and devices together into functioning, intelligent systems with capabilities beyond anything possible today. These systems fundamentally depend on connectivity and information exchange to derive knowledge and make "smart decisions". They require a much higher level of reliability and security than "consumer" IoT applications. OMG's Data Distribution Service for Real-Time Systems (DDS) is the premier open middleware standard directly addressing publish-subscribe communications for Industrial IoT applications. It provides a protocol that meets the demanding security, scalability, performance, and Quality of Service requirements of IIoT applications spanning connected machines, enterprise systems, and mobile devices. This presentation will use concrete use cases to introduce DDS and examine why energy, advanced medical, asset-tracking, transportation, and military systems choose to base their designs on DDS.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community, and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been easier to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Connector Corner: Automate dynamic content and events by pushing a button - DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... - Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
RTI Data-Distribution Service (DDS) Master Class 2011
1. DDS: A Next-Generation Approach to Building Distributed Real-Time Systems. 2011 Masterclass. Gerardo Pardo-Castellote, Ph.D., Co-chair OMG DDS SIG, CTO, Real-Time Innovations. [email_address] http://www.rti.com
91. How to Get Data? (Listener-Based)
// Listener creation and attachment
Listener* listener = new MyListener();
reader->set_listener(listener);

// Listener code
void MyListener::on_data_available(DataReader* reader) {
    TextSeq received_data;
    SampleInfoSeq sample_info;
    TextDataReader* treader = TextDataReader::narrow(reader);
    treader->take(&received_data, &sample_info, ...);
    // Use received_data
    printf("Got: %s", received_data[0]->contents);
}
92. How to Get Data? (WaitSet-Based)
// Creation of condition and attachment
Condition* foo_condition = treader->create_readcondition(...);
waitset->add_condition(foo_condition);

// Wait; returns when there is data (or timeout)
ConditionSeq active_conditions;
waitset->wait(&active_conditions, timeout);

FooSeq received_data;
SampleInfoSeq sample_info;
treader->take_w_condition(&received_data, &sample_info, foo_condition);
// Use received_data
printf("Got: %s", received_data[0]->contents);
99. IDL vs. XML: IDL Example
struct MemberStruct {
    short sData;
}; //@top-level false
typedef MemberStruct MemberStructType; //@top-level false
100. IDL vs. XML: XML Example
<?xml version="1.0" encoding="UTF-8"?>
<types xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:noNamespaceSchemaLocation="../rti_dds_topic_types.xsd">
    <struct name="MemberStruct" topLevel="false">
        <member name="sData" type="short"/>
    </struct>
    <typedef name="MemberStructType" type="nonBasic"
             nonBasicTypeName="MemberStruct" topLevel="false"/>
</types>
101. IDL vs. XSD: XSD Example
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:dds="http://www.omg.org/dds"
            xmlns:tns="http://www.omg.org/IDL-Mapped/"
            targetNamespace="http://www.omg.org/IDL-Mapped/">
    <xsd:import namespace="http://www.omg.org/dds"
                schemaLocation="rti_dds_topic_types_common.xsd"/>
    <xsd:complexType name="MemberStruct">
        <xsd:sequence>
            <xsd:element name="sData" minOccurs="1" maxOccurs="1" type="xsd:short"/>
        </xsd:sequence>
    </xsd:complexType>
    <!-- @topLevel false -->
    <xsd:complexType name="MemberStructType">
        <xsd:complexContent>
            <xsd:restriction base="tns:MemberStruct">
                <xsd:sequence>
                    <xsd:element name="sData" type="xsd:short" minOccurs="1" maxOccurs="1"/>
                </xsd:sequence>
            </xsd:restriction>
        </xsd:complexContent>
    </xsd:complexType>
    <!-- @topLevel false -->
</xsd:schema>
137. [Diagram] Secure DDS architecture: DDS applications share a Secure DDS Global Data Space over a Secure Transport and Secure OS, with middleware services for Credential Management, Authentication, Access Control, Data Tagging, and Encryption.
138. [Diagram] Secure DDS middleware internals: the application sits atop DDS Entities, a data cache, and a protocol engine; Authentication, Access Control, Data Encryption, and Data Tagging plugins interface with a crypto module (e.g., TPM), a secure transport (e.g., TLS), and a secure kernel (e.g., SE Linux, MILS), exchanging encrypted, tagged data with other DDS systems over the network.
Editor's Notes
Time: 45 minutes
Title: The Data Distribution Service (DDS) Standard: A Next-Generation Approach to Building Distributed Real-Time Systems
Abstract: DDS has been adopted worldwide by major air force, army, marine and navy programs as an open architecture standard for integrating real-time tactical systems with each other and with enterprise applications such as command and control systems. This breakout will introduce the DDS standard and show how it provides a net-centric, service-oriented approach to meeting the messaging and integration requirements of mission-critical embedded systems.
** Booth demo ** RTI will be demonstrating its real-time publish/subscribe middleware based on the Data Distribution Service (DDS) standard. DDS dramatically reduces software lifecycle costs by making it easy to develop, integrate and scale distributed real-time applications. DDS applications are loosely coupled and can communicate seamlessly across platforms, programming languages and network transports (including shared memory, backplane, LAN, WAN, wireless and satellite links). Supported operating systems include VxWorks, VxWorks MILS 2.0, Linux, Windows, Solaris and AIX. A small-footprint version is available for systems that require DO-178B certification.
Make the point that this precept has been fundamentally understood by the GVA, but the concept is empowering as a way to integrate larger systems of systems into a net-centric whole. In fact, the data model of the vehicle in Def Stan 23-09 can be assessed to determine what parts of its data set could and should be communicated to the wider net-centric environment. On the 2nd bullet point you can note: Otherwise how would crusty old Generals add the value they do in leading combat situations? (Joke: up to you if you use this)
Last slide was at one moment in time. Now, longer-term view… Example: with 12 apps, effort is order 12 vs. order 144: order of magnitude savings
Mostly for technical folks
DDS provides an infrastructure for integrating real-time applications. It also facilitates integrating real-time applications with non-real-time (enterprise) applications, such as command and control systems.
Work on the standard began in 2001 and version 1.0 was formally adopted in December 2004. RTI released the first commercial solution to comply with the standardized API in 2005.
Implementations: RTI*, PrismTech/Thales*, MilSOFT*, Twin Oaks*, OpenDDS, Gallium/Kongsberg, Boeing SoSCOE (*claim support for the wire protocol).
OCERA ORTE is RTPS only: http://www.ocera.org/download/components/WP7/orte-0.3.1.html
Applications that want to contribute information to the Global Data Space can declare their intent to publish the information. Applications that want to access portions of the Global Data Space can declare their intent to subscribe to the information.
Decoupling in several dimensions:
- Space (location): Each side does not need to know the location of the other side. They publish/subscribe to the shared "global data space."
- Redundancy: The same data may be subscribed to by multiple nodes, or written by multiple nodes. This is all managed transparently by the infrastructure.
- Time: The reception of data does not need to be synchronous with the writing. A subscriber may, if so configured, receive data that was written even before the subscriber joined the network.
- Platform: Applications do not have to worry about data representation, processor architecture, operating system, or even the programming language on the other side. It is possible, for example, to publish from a real-time node using the C language and subscribe from a Linux node running Java. Each side is isolated from the details of the other.
Mechanisms are in place to allow access to data only by specific applications/nodes.
Start first window. Publish one instance of each shape (i.e., topic).
Start second window. Subscribe to all three shapes. Point out: automatic discovery, peer-to-peer communication. Illustrates one-to-one communication.
Start third window. Subscribe to all three shapes. Illustrates one-to-many.
Start fourth window. Publish one of each shape, using different colors than are already being published. Illustrates many-to-many.
Click "Delete All" and then exit the first window. Notice how well-suited DDS is for dynamic and ad hoc systems. Applications could come and go without impacting other applications. This also provides fault tolerance. Also see how this makes it easy to insert new applications and technology into already deployed systems.
Note: keep the three other windows running. Will use them for showing content filter and time-based filter later.
In one of the two subscribing windows, delete all of the subscriptions; then subscribe to one shape with a content-filtered topic and to another shape with a time-based filter. Points:
- Applications have fine-grained control over which data is received. This optimizes performance and reduces/simplifies application logic.
- Filtering has no impact on either the publisher or the other subscriber. It is very loosely coupled: every application can specify its own requirements.
Delete all of the publications in the publishing window and all of the subscriptions in the non-filtering window. Publish a shape with Durability, Reliability, History=250 and Deadline=1000. (Publish a shape that is being subscribed to with a filter.) In the window with no subscriptions, enable "Show Reliability" and subscribe to the shape being published, with Durability, Reliability, History=250 and Deadline=2000. Points:
- Late-joining applications can get the state they need to start processing, including historical data that may be necessary to "prime" algorithms.
- Even though the QoS of the publisher was changed, the subscriber that was running with the old QoS is still getting data. Subscribers receive data as long as the requested QoS is no more demanding than the offered QoS.
In the subscribing window with the Deadline QoS set, view the Output pane and scroll to the bottom (<Control><End>). In the publishing window, click and hold on the shape so that it won't be published. Note the missed-deadline message. Point: applications are notified when timing constraints are not being met, so they can take corrective action rather than proceed erroneously.
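The request/offer matching rule in the demo (a Deadline=1000 writer matches a Deadline=2000 reader, but not vice versa) can be sketched in a few lines. This is an informal illustration of the DDS request-vs-offered model, not the DDS API; the function and dictionary layout are invented for this example.

```python
# Illustrative sketch of DDS request/offered QoS matching: a reader's
# requested QoS must be no more demanding than the writer's offered QoS.

RELIABILITY_RANK = {"BEST_EFFORT": 0, "RELIABLE": 1}
DURABILITY_RANK = {"VOLATILE": 0, "TRANSIENT_LOCAL": 1, "TRANSIENT": 2, "PERSISTENT": 3}

def qos_compatible(offered, requested):
    """True if a reader requesting `requested` can match a writer
    offering `offered` (deadline period in milliseconds)."""
    if RELIABILITY_RANK[offered["reliability"]] < RELIABILITY_RANK[requested["reliability"]]:
        return False  # reader wants stronger reliability than offered
    if DURABILITY_RANK[offered["durability"]] < DURABILITY_RANK[requested["durability"]]:
        return False  # reader wants stronger durability than offered
    # Deadline: the writer must promise updates at least as often as
    # the reader expects, so the offered period must be <= requested.
    if offered["deadline"] > requested["deadline"]:
        return False
    return True

writer = {"reliability": "RELIABLE", "durability": "TRANSIENT_LOCAL", "deadline": 1000}
reader = {"reliability": "RELIABLE", "durability": "TRANSIENT_LOCAL", "deadline": 2000}
print(qos_compatible(writer, reader))  # True: 1000 ms <= 2000 ms
```

In real DDS, an incompatible pairing simply does not match, and both sides are notified through status callbacks rather than failing silently.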
The first thing to notice is that the knowledge of your data model that was associated with the data stream in the data-centric technology disappears when you use a message-centric technology. That makes it much harder to develop a generic component such as the Web Integration Service, which must transform arbitrary data types to and from XML, downsample data based on content, and so on.
The first message arrives. It has the same structure as we saw before, except that without a known type definition, the type information must be embedded within the message itself, significantly increasing its size.
The second message arrives. It's in a totally different format than the first! This one is just a blob of binary-encoded data. Maybe the consuming application understands how to decode it, and maybe not. Each application connected to the network will have expectations about the formats of the messages it receives, but a messaging infrastructure can't enforce those expectations, so they have to be enforced by organizational policy: I write up a Word document that describes how you should format your messages and email it to you, and you have to follow my instructions. If you make a mistake, we'll have to debug it at integration time. In a data-centric approach, data-type enforcement is built in: developers work with typed objects in their programming languages, errors are detected when the code is compiled, before it's ever deployed, and runtime mismatches that do occur are detected automatically by the middleware. How do I describe a content-based filter on a binary blob? How do I transform it into another format? How do I map it into a database?
The third message arrives. It's in yet a third format: a plain text string. Because the messaging system doesn't have any concept of object lifecycle, each system has to define its own ad hoc scheme of sentinels: "create" messages, "dispose" messages, etc.
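The size cost of self-describing messages is easy to see concretely. The encodings below are stand-ins chosen for illustration (packed binary vs. JSON), not DDS's actual CDR wire format or any real messaging product's format: when the schema is agreed up front, only field values travel; when it is not, every message must carry field names and structure as well.

```python
# Illustrative comparison: typed (schema known in advance) vs.
# self-describing (schema embedded in every message) encodings.
import json
import struct

sample = {"id": 42, "x": 10.0, "y": 20.0}

# Data-centric stand-in: schema agreed up front, send packed values only
# (int32 plus two float64s, little-endian, no padding).
typed_payload = struct.pack("<idd", sample["id"], sample["x"], sample["y"])

# Message-centric stand-in: every message carries its own field names
# and structure, so each one is larger on the wire.
self_describing_payload = json.dumps(sample).encode()

print(len(typed_payload), len(self_describing_payload))
```

The gap grows with the number of fields and the rate of updates, which is why embedded type information matters for high-volume real-time data.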
This is more work, and it makes it much more difficult to leverage something you've built for one project on the next project. By comparison, the Web Integration Service takes advantage of the built-in lifecycle support in DDS; you saw that when tracks were marked with "X" or "?". And without any knowledge of your objects or their lifecycle, a messaging infrastructure can only support qualities of service that make sense across an entire topic: for example, time-to-live ("lifespan" in the language of DDS).
From the beginning, the data stream is associated with the schema of the data that will be propagated on that stream. Your applications already have some expectations; if you express them to a data-centric infrastructure, it can help you. For example, the infrastructure can use this schema to automatically transform data into other formats (this is how the Routing Service and Web Integration Service work). It can also dissect your data to filter on content (for example, "give me updates where x > 5").
"Key" means "this field establishes the identity of a unique object," like the primary key in a relational database table. In DDS, a key can be any number of fields of any type(s).
A new track you've never seen before: notice that since the type is already known, only the field values need to be sent, not the field names or types. An update to a track you've already seen. Another new track; notice that the key is different. A track you've seen before has gone away.
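The key-based lifecycle described above (new instance, update, disposal) can be sketched in a few lines. This is not the DDS API; the class and method names are invented to show how key fields let the infrastructure, rather than each application, tell a brand-new object apart from an update to an existing one.

```python
# Illustrative sketch (NOT the DDS API) of keyed instances: the key
# fields identify a unique object, so the infrastructure can track each
# object's lifecycle instead of treating every message as unrelated.

class KeyedTopic:
    def __init__(self, key_fields):
        self._key_fields = key_fields
        self._instances = {}  # key tuple -> latest sample

    def _key(self, sample):
        return tuple(sample[f] for f in self._key_fields)

    def write(self, sample):
        key = self._key(sample)
        state = "UPDATE" if key in self._instances else "NEW_INSTANCE"
        self._instances[key] = sample
        return state

    def dispose(self, sample):
        # "This object has gone away" -- no ad hoc sentinel message needed.
        self._instances.pop(self._key(sample), None)
        return "DISPOSED"

tracks = KeyedTopic(key_fields=["track_id"])
print(tracks.write({"track_id": 7, "x": 1.0}))    # NEW_INSTANCE
print(tracks.write({"track_id": 7, "x": 2.0}))    # UPDATE (same key)
print(tracks.write({"track_id": 9, "x": 0.0}))    # NEW_INSTANCE (different key)
print(tracks.dispose({"track_id": 7, "x": 2.0}))  # DISPOSED
```

This is exactly the behavior the Shapes demo showed when tracks appeared, moved and were marked as deleted.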
The effector is responsible for the transition algorithm. In this case the path is x, y, z, but it could have chosen a "curved" path x … z that never reached y.
Time: 45 minutes
Title: The Data Distribution Service (DDS) Standard: A Next-Generation Approach to Building Distributed Real-Time Systems
Abstract: DDS has been adopted worldwide by major air force, army, marine and navy programs as an open architecture standard for integrating real-time tactical systems with each other and with enterprise applications such as command and control systems. This breakout will introduce the DDS standard and show how it provides a net-centric, service-oriented approach to meeting the messaging and integration requirements of mission-critical embedded systems.
** Booth demo **
RTI will be demonstrating its real-time publish/subscribe middleware based on the Data Distribution Service (DDS) standard. DDS dramatically reduces software lifecycle costs by making it easy to develop, integrate and scale distributed real-time applications. DDS applications are loosely coupled and can communicate seamlessly across platforms, programming languages and network transports (including shared memory, backplane, LAN, WAN, wireless and satellite links). Supported operating systems include VxWorks, VxWorks MILS 2.0, Linux, Windows, Solaris and AIX. A small-footprint version is available for systems that require DO-178B certification.
So, what is RTI Routing Service? In a nutshell, Routing Service provides high-performance, real-time data forwarding and transformation across DDS domains, communities of interest and wide area networks (WANs), including firewall and NAT traversal. With its plug-in architecture to accommodate new transports and bridging functionality, it has been designed from the ground up to meet custom integration requirements such as bridging between new DDS applications and legacy systems, and increasing data security through controlled information flows. With the help of our Services team, custom integrations can be created easily and with low risk. My colleague Gordon will explain this further in a little while. As we're introducing our edition-based packaging, Routing Service will be available as a component of the new RTI Enterprise Edition. Later we'll give details about how to get access to Enterprise Edition and Routing Service at a significant discount.
Hands-On: show rtiddsgen -help output
- Short, easy to write; efficient; an OMG standard
- Autocompletion; XML-friendly, easy to extend; more powerful
- A first step toward interoperating with Web Services; some customers have many sources in XSD or WSDL
This is a great tool for seeing the organization of your distributed environment. The ability to change QoS settings on the fly and observe how they impact the overall environment is extremely useful. The ability to determine WHY a DataWriter is not talking to a specific DataReader is also very beneficial. The tool can show both node views and topic views of the system.
Note: only one document can be specified with the string_profile variable
How it is realized: HTTP directly to the data bus.
RTI is your best partner for a successful DDS deployment:
- Broadly proven commercial technology: >11 years of commercial availability. RTI is the de facto standard, with >70% market share, selected for nearly 500 unique applications, many mission-critical and many deployed.
- Industry-leading expertise and services capability: We know the DDS standard better than anyone and have the most experience putting DDS to use in real applications. We have a large team of senior, experienced engineers at your disposal, whether for training, consulting or as a partner in your development.
- Corporate focus and commitment: All of RTI's technology is built on DDS; we are completely committed to it, and we sold off our non-DDS business. Our DDS business is proven and financially strong; DDS is not an acquired or peripheral technology that we could drop or dispose of if it became financially expedient.
- Comprehensive infrastructure built around DDS: DDS is only part of your infrastructure and application; RTI provides the most comprehensive overall solution to your real-time application and data management requirements: development tools, database integration, real-time data recording, complex event processing and real-time data visualization dashboards, plus the largest set of partners for additional capabilities such as modeling, high-performance transports, real-time operating systems and JVMs.
- Superior architecture and implementation: Significantly higher performance (lower latency and higher throughput, with less overhead on your compute resources); much more fault tolerant and highly available, with no single points of failure and full redundancy for all services; the most flexibility for supporting additional transports, legacy or other non-DDS data types, and custom discovery requirements; best suited for resource-limited embedded systems; broadest availability across enterprise and embedded platforms.
- Quality by design: Mature, formal processes for design, development and quality assurance, plus investment in comprehensive user documentation. Proof point: 98% customer satisfaction, extraordinary in any industry.
In summary, RTI provides:
- Highest performance and fault tolerance, because of our superior peer-to-peer architecture
- Fastest time-to-market, leveraging our training, consulting, engineering services and high-quality support organization, superior documentation, most complete infrastructure and flexible implementation
- Lowest risk: proven commercial technology, the most experience and expertise with successful DDS deployments, quality processes and support, and corporate focus and commitment to DDS