Interoperability and Open Architecture. Current practice, but… What is it really? Why is it hard?
VERSUS: Interoperability by mediation. (APB == Advanced Processing Build)
So, it's all good, right? Not so fast. Transition out: So, what is missing? How do we get Open Architectures that really deliver on the promise?
Measure and require designed interoperability
When we talk about systems and data, we’re usually having conversations about interfaces, integration, and achieving interoperability. So now I’m going to talk a little about integration and interoperation of systems.
Using a Quality Attribute Methodology that supports the business and non-functional requirements. The Key Architectural Drivers of an Open System are: … How do we achieve these KADs? How do we achieve these in an Open Architecture?
Where do we see integration in our everyday lives? When I really need to charge my phone (I forget…quite often, really), I grab my cable and I plug it in. One end of the cable connects to the phone, and the other end to the power source. After a bit of time, the phone is charged. This simple example demonstrates an everyday integration of systems: a phone and the power system. In this case, the two interfaces are designed in such a way that they work together – no adapter or mediating component is required.
To successfully enable systems to interoperate, they require a matching of specifications or interface standards through some means. Say this power cable is for my cell phone and I’m in Spain, visiting a friend. I just arrived at my hotel, and my phone is dead. I grab my phone and my cable and go to plug it in and… oh. Right. Complete interface mismatch. These components were not designed in a way that allows for them to be integrated right away. But I really need to charge my phone! Since these systems were not designed to work together, a solution must be architected if I require them to interoperate… They can’t interoperate without some element of mediation.
Interoperability can be achieved through a mediation layer – here, an adapter – that allows the two systems to be integrated without requiring a change to either of their interfaces. I don't need a new power cable to charge my cell phone while I'm in Europe if I buy the adapter. In the integration example, I achieved interoperation without the need for the mediation component – the "magic box". In those cases where I require an element such as this to achieve interoperability, that interoperation is achieved by architecting a solution that allows me to integrate my systems together. Common examples include standards matching and adapters.
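The travel-adapter story maps directly onto the classic Adapter pattern in software. Below is a minimal sketch in Python; every class and method name is a hypothetical illustration, not a real API:

```python
# Interoperability by mediation, Adapter-pattern style.
# All names here are invented for illustration.

class EuropeanSocket:
    """The fixed interface we cannot change."""
    def supply_230v(self):
        return "230V AC, two round pins"

class USPhoneCharger:
    """Expects a flat-pin feed; also fixed."""
    def plug_in(self, flat_pin_power):
        return f"charging from {flat_pin_power}"

class TravelAdapter:
    """The mediation layer: neither side changes,
    the adapter translates one interface into the other."""
    def __init__(self, socket):
        self.socket = socket
    def as_flat_pin(self):
        raw = self.socket.supply_230v()
        return f"adapted({raw})"

socket = EuropeanSocket()
charger = USPhoneCharger()
print(charger.plug_in(TravelAdapter(socket).as_flat_pin()))
```

Neither the socket nor the charger is modified; the integration logic lives entirely in the adapter.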
A model is anything used in any way to represent something else. We use models to observe the effect of manipulating the original, without actually having to manipulate it. A really good model will capture all of the details we need to manipulate the original, and no more. On the left we have a picture of an actual 1967 Ford Mustang GT, and on the right a model of that same car. Let's say you have a child who is going through a phase where they're really into cars. And this child wants nothing more than what their dad has – a 1967 Ford Mustang GT. Now, I love my kids and I want to give them everything, just so I could see what marvelous things they'd do with it. However, I am not about to hand over the keys to a car to my toddler. I would give them a scaled, fit-for-purpose version, such as the model toy on the right. It has very little in the way of extras, but it is entirely sufficient AND safe to entertain my toddler.
A data model is a representation that describes the data about the things that exist in your domain. If you have a system – since systems operate on data – well, then you have a data model. If you're a system integrator, you deal with data models during your integration activities. Data models come in many different representations, and they express many different things in varying degrees of explicitness. Some data models capture information very unambiguously; others don't. But no matter where your data model falls on the spectrum, you can work with it to make it better.

Data models come in many flavors, and they're not all equal. Which is best for you will depend on your system's requirements and the function of the system, or component, that will use that data. Here we have three examples of models; most people have some familiarity with at least two of them.

The dictionary is a list of terms for a particular domain of knowledge. It contains a list of terms, as well as the definitions and pronunciations for those terms. Using a representation such as this – words alongside their meanings – we can communicate about the things that exist in our domain, and the meaning of those expressions, the words, is understood by those who use the same dictionary.

The Linnaean taxonomy is an example of a hierarchical data model: it shows us the conception, naming, and classification of organism groups. It represents information in a hierarchical format, such as a classification or categorization schema. Using a structure such as this, I can express that "this" is one of "those".

The last example is the periodic table of elements. From it, I can tell that gold has a certain weight and a certain number of protons… but I can't tell whether two elements will bond, or what they will form if they do, simply by looking at this table.
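To make the difference between these representations concrete, here is a small hypothetical sketch. The glossary and taxonomy contents are invented for illustration; the point is that a hierarchy can answer "is this one of those?" questions that a flat dictionary cannot:

```python
# 1. Dictionary-style model: terms and meanings, no relationships.
glossary = {
    "lion": "a large cat of the genus Panthera",
    "tiger": "a large striped cat of the genus Panthera",
}

# 2. Hierarchical model (taxonomy-style): "this" is one of "those".
taxonomy = {
    "Animalia": {
        "Chordata": {
            "Mammalia": {
                "Carnivora": {
                    "Felidae": {"Panthera": ["lion", "tiger"]}
                }
            }
        }
    }
}

def lineage(tree, target, path=()):
    """Walk the hierarchy and return the classification chain
    leading to `target`, or None if it isn't in the tree."""
    for key, value in tree.items():
        if isinstance(value, dict):
            found = lineage(value, target, path + (key,))
            if found:
                return found
        elif target in value:
            return path + (key,)
    return None

print(lineage(taxonomy, "lion"))
# → ('Animalia', 'Chordata', 'Mammalia', 'Carnivora', 'Felidae', 'Panthera')
```

The glossary can only tell you what "lion" means; the hierarchy can also tell you what a lion *is*.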
Per our requirements, we define a good data model to be one that captures, among other things, the semantics – the meaning – of the things that it represents in an unambiguous way. The process by which you generate a data model is something you need to consider, especially if you need a data model that helps you meet your key non-functional requirements.
A SoS is an appealing thing. It's an opportunity for reuse of technology and investments. It's a possibility for an entirely new capability to be born just by adhering to the right approach. And that approach needs to produce one key result: semantic interoperability.

In a single system, it's important to model your data just to meet the general requirements; but in a system of systems, if you do this properly, it can result in real, tangible benefits for years to come. Going about it the wrong way will produce long-lasting pains that are costly, especially during future integrations or as things change in your system. But this is avoidable.

A system of systems is made up of many constituent systems. Each one of those constituent systems brings with it its own set of requirements that the SoS must now support. Because of this, a SoS's set of requirements is actually the set that contains all of the requirements of all of the constituent systems, plus the additional requirement for semantic interoperability. As a SoS grows, so does the set of things that it needs to be capable of expressing – and while information from one system may need to be used by many other systems, this isn't a trivial 1:1 mapping of the systems' interface definitions. That approach does not produce a SoS that meets the extensibility requirement, among many others. Instead, each system will generate the appropriate representations to meet its own system requirements, and the system of systems will be responsible for the creation of mathematical constructs, described using a formal language for data modeling. It is from these constructs that the constituent systems will generate their required representations.
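The claim that a SoS requirement set is the union of its constituents' requirements plus semantic interoperability can be sketched in a few lines. The system names and requirement strings below are hypothetical:

```python
# Hypothetical constituent systems and their requirements.
constituent_requirements = {
    "radar":   {"track targets", "low latency"},
    "c2":      {"display tracks", "audit logging"},
    "weapons": {"track targets", "fail safe"},
}

# The SoS must support the union of all constituent requirements...
sos_requirements = set().union(*constituent_requirements.values())
# ...plus the one requirement the SoS itself introduces.
sos_requirements.add("semantic interoperability")

print(sorted(sos_requirements))
```

Note that shared requirements ("track targets") appear once in the union, but the semantic-interoperability requirement belongs to the SoS alone; no constituent system carries it.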
To achieve semantic interoperability between constituent systems, you need a new approach to dealing with your data. A SoS data model shall do the following:
• Meet the requirements of the constituent systems.
• Support the overarching requirement for semantic interoperability.
• Allow changes to be made to the model without requiring changes to the existing system and application interfaces that use it.
In order to do this, we need to adopt a formal approach. The components of this approach that we will talk about here are a formal language, a rigorous documentation methodology, and a formal process for construction of your model.
Transition (to Formal Language). When we talked about levels of interoperability before, it was pointed out that we needed to achieve semantic interoperability. If I need to be able to describe the data about the things I am trying to model in a way that captures the semantics, I need a language that is up to the task: a formal language for data modeling.

A formal language can be defined as a set of words over its alphabet. Sometimes the words are grouped into expressions, with rules and constraints applied for how to form an expression and which transformations are allowed. An expression created according to these rules is deemed "well formed". We've seen that this approach produces very real results, especially in the fields of mathematics and computer science. Highly structured programming languages, such as C, are so rigorous and formal that they have had great success in consistently capturing the logic of programming – demonstrated by the ability of compilers and tools to generate binaries from them.

What we need for system of systems data modeling is very similar. If we use a rigorous and formal language for data modeling, we can achieve those same benefits. One of the key benefits is the ability to build unambiguous expressions. Ambiguities in the meaning of an expression cause errors in systems, and we can't have that if we expect a system of systems to truly interoperate. Using a formal language for data modeling is a natural fit to satisfy this requirement. A commonly used formal language for software systems is UML, the Unified Modeling Language.
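To make "well formed" concrete, here is a toy recognizer for an invented two-operator expression grammar. Strings the grammar cannot derive are rejected, exactly the way a compiler rejects ill-formed source. The grammar itself is made up for illustration:

```python
# Toy grammar:  expr ::= digit | "(" expr ("+" | "*") expr ")"
# A string is "well formed" iff it can be derived from this grammar.

def well_formed(s):
    pos = 0
    def expr():
        nonlocal pos
        # Case 1: a single digit is a valid expression.
        if pos < len(s) and s[pos].isdigit():
            pos += 1
            return True
        # Case 2: a fully parenthesized binary expression.
        if pos < len(s) and s[pos] == "(":
            pos += 1
            if not expr():
                return False
            if pos < len(s) and s[pos] in "+*":
                pos += 1
            else:
                return False
            if not expr():
                return False
            if pos < len(s) and s[pos] == ")":
                pos += 1
                return True
        return False
    # The whole string must be consumed, not just a prefix.
    return expr() and pos == len(s)

print(well_formed("(1+(2*3))"))   # True: derivable from the grammar
print(well_formed("(1+2"))        # False: ill-formed
```

The recognizer gives a yes/no answer with no judgment calls, which is exactly the property we want a formal data-modeling language to have.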
UML is managed by the OMG, and as of 2000 it was accepted by the ISO as an industry standard for modeling software-intensive systems.

Conclusion: By first modeling my data in a way that generates these formal representations, I can subsequently generate any of the other representations, if they're the appropriate type for my particular system, as the result of a simple mapping or transformation. All of the information is there, and its meaning is explicit.
Forming expressions can be done in a handful of ways, but natural language, and ad-hoc representations in general, are prone to ambiguities. To illustrate this, consider an example. First we have a word problem, stated in natural language, and on the right, the same word problem stated in a more formal language: mathematics.

Pretend for a moment that you're in a class and this problem is on an exam. In the first statement of the problem – on the left – you would read the following: So, where do you start? When I was given a problem such as this one to solve, I'd start by defining my variables and reading the problem carefully to ensure I didn't miss something. Because, you know, teachers love to throw curveballs. In the end, this problem would have me walking up to the teacher's desk and asking for a clarification – the ambiguities are not only present, but they result in multiple solutions if you take them into account and try to solve anyway. Can I have full credit if one of my answers matches her test key? Probably not. But it wasn't my fault – she didn't clearly communicate.

Now, what if we stated this exact same problem using a more formal language, such as logical or mathematical constructs? Rather than state the problem as a human-readable string, I state it as a mathematical formalism. Mathematical formalisms are either well-formed (explicit) or not (ambiguous). If we generate our expressions using a rigorously defined set of transformation rules (a grammar), the result is always something that a computer can operate on and understand; this type of representation unambiguously captures meaning. As you can see, we can't remove the ambiguities, but they are clearly indicated – it is obvious you have multiple solutions. Having multiple solutions is equivalent to having multiple meanings, which is not OK.
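The slide's actual word problem isn't reproduced in these notes, so the following stand-in is hypothetical. Informally: "two numbers sum to 9 and differ by 1" – ambiguous about which number is larger – versus the formal statement x + y = 9, x - y = 1:

```python
# Ambiguous natural-language reading: "differ by 1" could mean
# x - y == 1 or y - x == 1, so the problem has two solutions.
ambiguous = [(x, y) for x in range(10) for y in range(10)
             if x + y == 9 and abs(x - y) == 1]
print(ambiguous)   # two valid interpretations

# Formal statement: x + y == 9 and x - y == 1. Exactly one solution;
# the meaning is pinned down by the formalism.
formal = [(x, y) for x in range(10) for y in range(10)
          if x + y == 9 and x - y == 1]
print(formal)
```

The formal version doesn't make the problem harder; it simply removes the reader's freedom to interpret, which is the property semantic interoperability depends on.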
To achieve semantic interoperability, the meaning must be completely clear, so the formulas or expressions are required to be well-formed AND understood! After all, a statement can be syntactically valid but semantically invalid. Conclusion: formalism is a continuum; to have a SoS, you need formalism, because you don't have complete control over the structure and content of all constituent systems.
The purpose of Data Model Documentation is to memorialize the decisions that were made during creation of the Data Model. These decisions include the Requirements or Use Cases that describe the functionality of the DM, as well as the Non-Functional Requirements that describe the behavior of the model over the life cycle of the system. It also records the Methodology that was used to construct the DM, as well as the resulting Model. Because a SoS contains many potentially competing forces, separated in time and by controlling organization, the effectiveness of the documentation is directly related to the degree of formalism and consistency that can be applied by different organizations, at different times, over the lifecycle of the SoS.
The third component of our formal approach was to specify a formal process. Because a SoS has many independent organizational stakeholders, each with independent control over a part of the SoS, the processes used to create and document the resulting SoS DM must enforce the high level of rigor that is required in both the model and the documentation.

______ ARCHIVE ______

On chart 33, second bullet, would say "end at the messages" to reinforce that the additional documentation is about the data, not the functions or the architecture.

Engineer a process to achieve the desired results. You need to engineer/architect your model such that the results from the modeling process help you achieve the desired outcome (no ad-hoc solutions), so that you can satisfy your requirements. Don't tweak the product of a flawed process; all you'll wind up with is a flawed product. Change the process.

Unfortunately, in the end, if you model using only these three things, it's insufficient: unless you formed each one of them using rigorous, repeatable, mathematical methods and procedures, you really only have an ad-hoc model with a bunch of unconnected stuff in it. There are no associations being modeled, so there is no context provided, and the model is not semantically explicit. You might also have noticed that anything ambiguously defined, or not defined at all, is a strike against the model. Why? Because ambiguities cause errors in data modeling, and for semantic interoperability to be achieved, we can't have that. Things that we need, but that aren't covered by simply having enough kinds of objects to build with, are: the meaning of the line, how we decide on the connections, and how we are then supposed to interpret them. Depending on how your system will use data, and what type of information you're using the model to represent, there are many options, but not every option works well for all circumstances.
In the end, we want this: the ability to take those things on the left that we need to model – including the system itself! – and document their required structures and behaviors, as well as their context within the system of systems. And we will do this using a formal approach. Then, when we have our formally described and formed SoS data model, one that has captured all of the information needed to ensure that the data has proper context so it can be interpreted properly by any other system wishing to interoperate with it, we can generate, from that, any representation that the constituent systems need. And because they're all from the same root, described in the same formal language, and derived using a set of rules, those data definitions will be interoperable. In addition to having our representations all fall out of that model, our other system documentation will also come out of this one place, because it's all important and relevant. You have to document the system. And the data. Not separately: together.

Why the language?
• Need to know what is well-formed, and what is not.
• Need to be able to extend the data model when things change (because they will!) without requiring all system elements to recompile (no N^2 integration!).
• Need to be able to be very expressive, but do so in a rigorous, formal, and repeatable way: can encode anything, can decode anything.

If I can describe my information in such a way, then I'm not integrating to a message set or a data definition per se; I'm integrating via a mediation component that performs transforms on the information in the system of systems. So when something new comes along, I don't have to integrate to that new application; I add another transform to a mediation component.
That way, I don't have to change my application interfaces, and the complexity that naturally arises from integration activities – from the need to achieve interoperability among all of these disparately developed things – is contained in one place, not bleeding throughout all the other code in my system.

Why the documentation?
• Documentation captures your decisions, your outlook, and the context of the system and the data that the system operates on to produce the desired outputs/outcomes.
• If you're going to integrate your system with another system, this information becomes very important to ensure that they are properly interoperating.
• Semantic interoperability cannot be achieved without the information that captures the context of the system being modeled as well.

Why the process?
Formal processes. Uses of data in systems: in the process, you see that we have structure, behavior, and context. These are all things about the systems that we NEED to capture in the SoS data model.
A data-centric integration solution to achieve semantic interoperability is important and achievable.

It is important because one of the only things I can guarantee about a SoS is that it will change. At some point, it will. And when that change happens, rather than have your system be broken by it, why not survive it? If I architect my data in a rigorous and formal manner – and since data is what the systems operate on – then any changes in the system are easily accommodated, because they manifest as changes to the information present in the SoS. If the changes are made in a rigorous and repeatable way, then by knowing the rules for formation and the abstract data model that all things in the SoS come from, I can simply transform and understand the data if need be. The data will have meaning. It will have context. It will be usable and understood. Letting your system be broken by something that is inevitable seems a bit silly, especially since we can anticipate that change and accommodate it by making some intelligent architecture and design decisions up front.

Here we can see legacy, current, and future systems – which is a reality – and they can technically interoperate via a protocol using a common infrastructure. We know how to do that. They can syntactically interoperate by using a common data structure. But how do we accommodate the systems where we can't change the interfaces? When they are incompatible? We need a mediation component. Achieving semantic interoperability relies on components such as this, especially since one of our requirements was that we must accommodate change without being broken by it (without having to make changes to existing interfaces).
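One way to sketch such a mediation component: every system registers a pair of transforms to and from a single abstract model, so adding a new system means one registration, not N^2 point-to-point integrations. All names and message formats below are hypothetical, not a real product API:

```python
# A toy mediation component: all transforms go through one
# abstract representation, so systems never integrate pairwise.

class Mediator:
    def __init__(self):
        self.to_abstract = {}    # system -> fn(native) -> abstract
        self.from_abstract = {}  # system -> fn(abstract) -> native

    def register(self, system, encode, decode):
        self.to_abstract[system] = encode
        self.from_abstract[system] = decode

    def translate(self, src, dst, native_msg):
        abstract = self.to_abstract[src](native_msg)
        return self.from_abstract[dst](abstract)

m = Mediator()
# Hypothetical legacy system speaks "lat,lon" strings...
m.register("legacy",
           encode=lambda s: dict(zip(("lat", "lon"),
                                     map(float, s.split(",")))),
           decode=lambda a: f"{a['lat']},{a['lon']}")
# ...while a hypothetical modern system speaks dicts.
m.register("modern",
           encode=lambda d: dict(d),
           decode=lambda a: {"lat": a["lat"], "lon": a["lon"]})

print(m.translate("legacy", "modern", "36.7,-4.4"))
```

When a third system comes along, it makes one `register()` call of its own; no existing interface changes, and the complexity stays inside the mediator.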
System Architecture for C4I Coalition Operations
Gordon A. Hunt – UDT 2013 Spain
Chief Engineer, RTI • UCS WG Sub-Committee Chair • Commander USN-R
– Open Architecture and Current Approaches
• A Coalition is a System of Systems
– Definitions and Examples
• Interoperability Architecture
– It is all about the Data
– How to capture and define its meaning
– Interoperability by Design
System of Systems
• A system of systems is a collection of task-oriented or dedicated systems that pool their resources and capabilities together to create a new, more complex system which offers more functionality and performance than simply the sum of the constituent systems
• Has a set of >[n+1] capabilities