Good afternoon! When I received the announcement for the Langley Formal Methods workshop, I was particularly interested in two of your objectives: first, to foster information exchange between researchers and practitioners from industry and academia, and second, to discuss directions for future work. When I submitted my presentation proposal to this workshop, the reviewers thought it was more of a “position statement,” and I think that’s a fair characterization, because I am not here today to share with you results of my own research in formal methods or testing and verification. I am here today because I believe the customers I support in the Department of Defense (DoD) NEED YOUR HELP. They need your help to apply appropriate testing and verification methods to an emerging category of capability solutions they are building called composable solutions. In other words, I am here today because…
COMPOSABLE COMMAND AND CONTROL (C2) NEEDS FORMAL METHODS.
For those of you who aren’t familiar with MITRE, we are a not-for-profit company that operates as a federally funded research and development partner. We work primarily with U.S. government agencies and military services. Due to the increasing emphasis on Homeland Security initiatives, we have also started interacting more with civilian agencies.
The opinions I express here are my own…NOT official positions of MITRE or the Department of Defense, which I’ll refer to hereafter as “DoD.” Also, I will NOT discuss any information that is classified or proprietary to MITRE or the government. Everything in this presentation is freely available in the public domain.
As I stated earlier, I am not here today to tell you about some new findings that provide a “shrink-wrapped” solution to what is a very challenging testing problem within DoD. As a matter of fact, I think I will raise many more questions than I can answer. And that’s good! I’m hoping you can answer them! And if not, at least that gives us many jumping off points for further dialog.
Here is a list of the take-aways you CAN expect from today’s briefing. I’ll go over some terminology to establish the context of the composable C2 testing problem, look at the kind of testing DoD is presently doing, and suggest some areas where improvements are needed. DETAILS: I’ll talk about service orientation and composability and why these two concepts are so important to command-and-control solutions for DoD. These concepts may be a review for some of you, but I think it’s important to ensure you understand how DoD uses these terms, how it views these concepts, and the context in which composable C2 solutions are being built. I’ll also show you some examples of the types of composable testing being done for DoD solutions. In my opinion it’s inadequate, and what concerns me is that few people seem to be very worried about this! And finally, I’ll identify some questions I believe are opportunities for subject matter experts such as yourselves to bring value to a very challenging situation for DoD. In order to save some time for questions and comments, I estimate I have a little less than 20 minutes to present, so I won’t be doing a deep dive on any of these topics but rather will make one or two key comments on each slide. My slide pack is annotated so you can review the slides and the notes pages in more detail later.
Here is an outline of the remainder of the briefing.
Let’s quickly go through a little background about composable C2.
The broad availability of services on the internet has really revolutionized the way we find and share and think about information. I’m sure that virtually everyone in this audience has used at least one of the services listed here at some time or another. DoD has witnessed this revolution on the internet, and they want to bring similar capabilities to 21st-century warfighters, only tailored to address their unique needs. And in particular, their command-and-control needs.
Here is the official DoD definition of what a service is. You’ll notice that the aspect of services on which DoD is particularly focused is that they should be modular, and “composable like legos.” This is the long-sought-after “plug-and-play” environment we started talking about back in the 1980s. We imagined then that there would someday be huge catalogs of reusable software components for building new systems, much like you can pick up a catalog of electronic components and order the piece parts to build new devices. This is what motivated the title of this briefing. DoD wants to migrate to an operational environment in which they can quickly “compose” solutions from existing service components (or “legos”) that they reuse, as opposed to the time-, cost- and labor-intensive process of reinventing or building a new solution “from scratch” every time.
DoD decision makers, from the comfort of their living rooms, can look out on the internet and see capabilities like this mashup, which fuses public domain information from services “in the wild” which may never even have been originally intended for use in such a context … DoD sees this and they want to be able to do the same thing for warfighters to respond to emerging contingency needs in a rapid and agile fashion.
For that reason, in the past decade DoD decided they wanted to migrate their information sharing infrastructure to a service oriented architecture (SOA). I’ve listed here the “official” definition for an SOA. I’m probably preaching to the choir when I point out that an SOA is not a product or a service orchestration, but rather an architectural style…nothing more and nothing less. Now, there are different ways to implement an SOA; DoD has CHOSEN to pursue a service-based approach. DoD wants SOA because they believe it will result in an explosion of agile capabilities for our warfighting decision makers – similar to what we see out on the internet.
Although I’ve never seen an authoritative definition, when DoD talks about composability, what they mean is listed in the blue box. In the literature, the term composability is meaningful at many layers of abstraction: components, subsystems, networked systems, and networks of networks. It also applies to policies, protocols, specifications, formal representations and proofs. As stated in the bullets, “composable solutions” are the ultimate end-state of a full-blown SOA framework, but please note there are alternative ways to accomplish composable solutions, just like there are alternative ways to implement an SOA. It just so happens that DoD has chosen to migrate to an SOA AND to build its COMPOSED SOLUTIONS by orchestrating appropriate SERVICES to provide capability to the warfighter.
There are at least two different kinds of testing or validation that are needed to support composable solutions. First, if we intend to reuse a component -- maybe in a context that the original implementer of that component never even imagined -- there need to be techniques to ensure that the component makes very few assumptions about how it will ultimately be employed. And for components that claim they meet this “composability” requirement, we need to be able to test and validate that the component satisfies that claim. Second, once we’ve assembled a composed solution, we need to test and validate that the composition performs the intended function. You can chain together services that fit in terms of “goes-ins-and-goes-outs,” but that doesn’t mean the chain will necessarily accomplish something useful. For example, you can assemble services that wait for a certain event, and then make something else happen when that event occurs. But if the orchestration is employed in a context where the trigger for which it is waiting CAN NEVER and WILL NEVER occur, it is useless in the context in which it’s employed, even if it “fits together” properly. So, you may be asking yourself…how is this any different from unit testing and integration testing? On a certain level, it’s not. The worry is, little formal testing of any kind is being done within DoD for rapidly composed solutions, so there is real danger that the quality of delivered warfighter capability will suffer.
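To make the distinction concrete, here is a minimal sketch in Python, with entirely hypothetical service and event names (nothing here comes from an actual DoD system), of the two checks: one that only verifies the “goes-ins-and-goes-outs” fit, and one that asks whether the composition’s trigger can ever occur in its deployment context.

```python
from dataclasses import dataclass
from typing import FrozenSet, List

@dataclass(frozen=True)
class Service:
    name: str
    consumes: FrozenSet[str]  # event types this service waits for
    produces: FrozenSet[str]  # event types this service emits

def interfaces_fit(chain: List[Service]) -> bool:
    """Syntactic check only: each downstream service's inputs are satisfied by
    what the services before it produce (plus the composition's external trigger)."""
    available = set(chain[0].consumes)  # the composition's external trigger
    for svc in chain:
        if not set(svc.consumes) <= available:
            return False
        available |= svc.produces
    return True

def useful_in_context(chain: List[Service], context_events: FrozenSet[str]) -> bool:
    """Contextual check: the external trigger the composition waits for can
    actually occur in the environment where it is deployed."""
    return set(chain[0].consumes) <= set(context_events)

# A chain that "fits together" but whose trigger never occurs in the context it
# is deployed to: interfaces_fit() is True, yet useful_in_context() is False.
watch = Service("watch_for_launch", frozenset({"missile_launch"}), frozenset({"alert"}))
notify = Service("notify_commander", frozenset({"alert"}), frozenset({"ack"}))
chain = [watch, notify]

print(interfaces_fit(chain))                                    # True
print(useful_in_context(chain, frozenset({"supply_request"})))  # False
```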
I’ll show you some examples, and see if you agree with me that the level of testing is probably inadequate, or at best “not thought through.”
Here is an example of how DoD is going about standing up composed capabilities. We start with the identification of some capability that the warfighter needs, and some ideas about the services and data that would be involved in implementing that capability. A Community of Interest – a coalition of stakeholders who understand the capability needs – then oversees the implementation of the services until they have a capability ready for demonstration. The composition is normally taken to a military exercise, and if it “works” for specific use cases, it’s considered “done.” There really isn’t any formalism involved in this at all. Why is this a problem? Aren’t we just trying to build faster, cheaper solutions? Well, YES, BUT from a command-and-control perspective, many C2 capabilities have life-critical consequences. No one is going to die if an order for more paperclips occasionally doesn’t process correctly, but if we have a service that provides the parameters to put a payload on a target, and it occasionally goes awry, this is NOT GOOD. DoD simply hasn’t thought through the differing levels of testing rigor that might be needed before fielding various types of capabilities, especially within C2. Sometimes playing in a demo just isn’t enough.
The next example exhibits a little more rigor. Net-Centric Diplomacy (NCD) is a Department of State initiative in the Horizontal Fusion Portfolio. NCD provides Department of State cable and biographic reports via a search web service that can be accessed by the Federated Search client. In this sense, NCD is an exemplar of a net-centric data provider based on web services (i.e., a composition). This COULD BE a C2 composition if the person of interest has tactical relevance. The NCD team did some very rigorous testing of their capability. Along the way, they made some general observations about testing web services that I’ve listed here … very insightful. You will note in particular that they concluded that testing something well takes time! ** The reason for the “starred” finding is: the deserialization of SOAP requests is far more processor-intensive than serving ordinary web pages; as a result, the number of requests that will cause a web service to fail is far lower than the number that would cause the web server itself to fail.
Here are more specific findings…you can look at the detailed notes later. But do NOTE: they were more focused on the performance aspects of the orchestration…whether it would be able to stand up to the service demands of potential users…vice functionality. They considered an error threshold of 15% acceptable. This is probably not tolerable in most C2 situations. So this is better testing than just playing in an exercise, but it may not be “fit for C2.” DETAILS: The NCD service could support up to 3 connections per second. Is this sufficient to serve all consumers? We don’t know, but at least this metric provides a quantitative parameter as a starting point for exploring that question. Another finding: a WSDL defines the interface to a service, but the valid use of that service is not specified. Example: a web service that implements a query syntax may allow for queries that are well-formed but semantically meaningless and highly recursive. This degrades the performance of that service for other, meaningful queries. In other words, incomplete understanding of a service when viewed through the “filter” of the WSDL can have a negative effect on the overall performance of that service, and it is difficult to ferret out and eliminate this with automated tools.
Round Trip Time (RTT): the time required for a request to be sent from a client, processed by the server, and returned.
Error: incorrect results or error messages received from the web service.
Connections per Second (CPS): the number of connections being sent to the web application each second.
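For illustration only, here is a minimal sketch of how these three metrics might be collected against a web service endpoint. The URL is a placeholder, and the thresholds simply echo the NCD numbers rather than representing recommended values.

```python
import time
import urllib.error
import urllib.request

def measure_service(url: str, requests: int = 100,
                    rtt_threshold_s: float = 90.0, error_threshold: float = 0.15) -> dict:
    """Issue sequential requests and report the three NCD-style metrics:
    mean round-trip time (RTT), error rate, and connections per second (CPS).
    The 90 s RTT and 15% error thresholds mirror the NCD numbers purely for illustration."""
    rtts, errors = [], 0
    start = time.monotonic()
    for _ in range(requests):
        t0 = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=rtt_threshold_s) as resp:
                resp.read()
        except (urllib.error.URLError, OSError):
            # HTTPError (4xx/5xx responses) is a subclass of URLError, so it is counted here too.
            errors += 1
        rtts.append(time.monotonic() - t0)
    elapsed = time.monotonic() - start
    mean_rtt = sum(rtts) / len(rtts)
    error_rate = errors / requests
    return {
        "mean_rtt_s": mean_rtt,
        "error_rate": error_rate,
        "cps": requests / elapsed,
        "within_thresholds": mean_rtt <= rtt_threshold_s and error_rate <= error_threshold,
    }

if __name__ == "__main__":
    # Hypothetical endpoint; substitute the web service under test.
    print(measure_service("http://example.org/search-service", requests=10))
```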
What must DoD do to adequately test composed C2 solutions? Some ideas are suggested on the next few slides. You’ll note a recurring theme is: DoD NEEDS YOUR HELP!
For starters, testing compositions can’t be regarded as a one-time “feel good” step done between development and deployment. For composed solutions, just playing the composition in one exercise is probably inadequate when there are life-and-death consequences associated with the correct functioning of that orchestration. DoD needs your help to better understand what is the right kind and amount of testing for compositions, RECOGNIZING that DoD wants to rapidly deploy them and not spend on the order of weeks or months in testing...and at the same time RECOGNIZING the RISK of not doing any real testing at all. In addition, DoD needs your help to better understand the implications of long-term maintenance of any of these orchestrations that become persistent capabilities within the C2 environment. What if one of the component services changes or is no longer available? What is the plan to back-fill that part of the value-chain? If there are components whose composability potential or reliability is questionable, how do we keep track of that and make sure those components are replaced if a better service comes along? Can this be automated? DoD needs your help to define what is required in a test environment that will support more robust testing of compositions but will also meet their agility and timeliness requirements for getting new capabilities fielded. EXAMPLES ARE PROVIDED ON THE NEXT TWO SLIDES. PLEASE LOOK THESE OVER ON YOUR OWN LATER, but note I can find no evidence that any existing C2 composition has been tested in similar environments.
Maybe something like ELBA. This work dates from 2006 and was motivated by the fact that there is no reliable way to predict the performance of complex applications (e.g., N-tier distributed applications) in a complex environment (e.g., data centers). ELBA’s creators recognized that analytical methods are hindered by the need to come up with parameterizations of highly complex environments. ELBA provides an infrastructure to generate and manage realistic experiments, along with the analytic tools to digest the observations. While doing something this elaborate may be more than what is needed for a short-lived composition, the principles of automated design / configuration / evaluation / tuning would apply very well to composed solutions that are expected to persist for long periods of time. Plus, an infrastructure like this could be extended to look at functional aspects of testing beyond just performance, and beyond the simplistic seams of the WSDLs, leveraging the rapidly deployable harness, perhaps by extending the testbed language (TBL) they use to describe the components and the staging environment. There’s a link to a good paper on ELBA if you’d like to learn more.
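ELBA’s actual tooling (Cauldron, Mulini, XTBL) is described in the linked paper; the sketch below only illustrates, in schematic Python with placeholder functions, the general shape of the automated design / configuration / evaluation / tuning loop that such an infrastructure automates.

```python
# A schematic of the automated design / configuration / evaluation / tuning loop
# embodied by ELBA-style staging environments. The deploy/execute/analyze/adjust
# callables are placeholders supplied by the caller, not ELBA APIs.

def run_staging_loop(spec, deploy, execute, analyze, adjust, max_iterations=5):
    """Repeatedly deploy a composition per its specification, run the test plan,
    analyze the observations, and adjust the spec until it meets its goals."""
    for _ in range(max_iterations):
        deployment = deploy(spec)           # provision services and monitoring
        observations = execute(deployment)  # run the staged experiment
        verdict = analyze(observations)     # digest metrics against goals
        if verdict["meets_goals"]:
            return spec, verdict
        spec = adjust(spec, verdict)        # tune configuration / policies
    return spec, verdict

if __name__ == "__main__":
    # Toy usage: "tune" a replica count until a fake throughput goal is met.
    spec = {"replicas": 1}
    final_spec, verdict = run_staging_loop(
        spec,
        deploy=lambda s: s,
        execute=lambda d: {"cps": 3.0 * d["replicas"]},
        analyze=lambda obs: {"meets_goals": obs["cps"] >= 9.0, "cps": obs["cps"]},
        adjust=lambda s, v: {"replicas": s["replicas"] + 1},
    )
    print(final_spec, verdict)
```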
The STARSHIP environment being used at the Army’s Electronic Proving Ground appears to have a premise similar to ELBA’s. It provides a full-fledged testing environment which is itself a composed solution, and it is tailored to testing distributed solutions, which is in keeping with the net-centric paradigm.
I’ve listed on the next two slides some of the things MITRE is doing to respond to the gaps in composable C2 testing. Engineers who are concerned about the problems of testing composable C2 solutions can engage through an email list and a community share site to exchange ideas. Composable C2 has also been declared a grand challenge problem in our internal research program, and I’ve listed here an excerpt from one of the proposals that is currently being competed in the MIP process that addresses composable C2.
Another potentially related effort is REACT. It supports “quick-and-dirty” early concept validation, and in this fiscal year the principal investigators are looking to extend their effort to handle “quick-and-dirty” testing of composed solutions; however, this has not yet been funded. There has also been some discussion on the MITRE list about the need to understand whether components derived from legacy systems are inherently of higher quality or more reliable than others simply because they have a “proven” pedigree, but this is all just talk at the moment.
So where do we go from here?
On this slide I’ve listed some of the areas in which DoD needs to be better informed regarding testing and validating composable C2 solutions, and they’re probably questions that will apply in domains outside of C2 as well. I’ll give you a few moments to read them through. One of the nearest-term questions I think DoD needs more information about is how to better use testing and verification techniques to characterize the “composability” potential of the individual services it is building with reuse in mind. DoD also needs to know how much testing must be done on a composition before it goes operational, to ensure we aren’t putting warfighters unnecessarily in harm’s way just to build things faster and cheaper…we still need to be SMART too!
This graphic exposes some of my fears / intuitions as an experienced software developer: my gut tells me that using any capability that has been subjected to little testing is risky. And that is what DoD is doing right now for composable C2. In a DoD context, I may be willing to accept that risk if either the capability will be used only infrequently, or it doesn’t have high loss-of-life potential. Similarly, the longer a composition is used without error, I may feel one of two ways, depending on how optimistic I am: (1) well, gee, this must be a highly reliable solution; or (2) oh, no, a huge failure is lurking just around the corner. This kind of Russian roulette with composable C2 solutions is a disservice to operational warfighters…they are the ones who will bear the consequences if we fail to be proactive as developers and testers.
Here are some specific “points-of-pain” where I believe formal methods can inform DoD about how to test composable C2 solutions SMARTER and get HIGHER QUALITY capabilities to the field FASTER. You, the subject-matter experts in this discipline, may see other opportunities as well. And I would really welcome hearing them from you!
Here are the four key points I’d like you to take away from this presentation. The most important point is listed first. We must recognize that DoD is determined to continue down the “composable solution” path to realize a full-blown SOA. For that reason it’s absolutely necessary to explore ways to help them do this in less of an ad hoc manner, by putting more rigorous testing and verification into practice. Why? Because often lives are at stake, especially in C2 contexts.
Here are some great starting points for learning more about state-of-the-practice with respect to composable solutions. If there are other bodies of work that I have missed or that you feel would be helpful in addressing DoD’s problems, I’d welcome hearing from you about them or exploring ideas for joint applied research because…
Composable C2 DOES need formal methods!...even if DoD doesn’t realize it yet!!
Thanks so much for your attention. In the remaining time, I’d be happy to hear your comments and to answer any questions you have.
Modules like procedures and services are examples of primitive components. Procedure calls and service invocations are examples of primitive connectors between two components. These connectors and the abstraction patterns listed as the first three bullets are elements of the technical realm of DoD’s transformation problem. Within the semantic realm, we must also realize that the components must interoperate at the level of meaning – the underlying ideas and concepts they support – in order to “plug-and-play” nicely with one another. Otherwise, we’re just chaining together services that fit at the seams but don’t really produce a meaningful value chain.
This slide illustrates the migration path DoD has laid out for accomplishing the SOA vision of ad hoc orchestrations of reusable services…you can read the detailed notes later. DoD is lurking around step 3. DETAILS: Systems evolve in predictable ways, and this slide illustrates DoD’s projected migration path to service-based capabilities. (1) The monolithic systems of a few decades ago gave way to (2) modular systems. We then started (3) web-enabling some of the modules (typically at the user interface), and (4) now we are generally trying to decouple systems into their constituent parts for deployment as reusable, “mobile” services. So DoD is somewhere between stages 3 and 4 on its path to realizing its SOA or net-centric information sharing environment. Ultimately, (5) DoD wants to be able to stand up ad hoc orchestrations of reusable, mobile services to deliver a capability on an as-needed basis … then disappear when that orchestration is no longer needed. DoD certainly isn’t there yet, but this is the objective net-centric vision.
I don’t want to say much about this graphic except that it is just a slightly different view of the evolutionary path depicted on the previous slide that was presented at a Gartner Group conference last year. As shown, DoD is about mid-way along this path.
Reuse Versus Reinvention: How Will Formal Methods Deal with Composable Systems?
Mary Ann Malloy, PhD
[email_address]
LFM 2008 Conference
As a public interest company, MITRE works in partnership with the U.S. government to address issues of critical national importance.
Disclaimers
- The views, opinions, and conclusions expressed here are those of the presenter and should not be construed as an official position of MITRE or the United States Department of Defense (DoD).
- All information presented here is UNCLASSIFIED, technically accurate, contains no critical military technology, and is not subject to export controls.
What you WILL NOT take away today
- A “shrink-wrapped” solution to DoD’s emergent testing challenges.
What you WILL take away today
- An understanding of what “service-orientation” and “composability” mean.
- Insight regarding how DoD is trying to build composable systems, but may not be testing them appropriately nor learning from the testing it does do.
- Ideas for formal methods / testing investigation paths that may improve the state of composable systems verification & testing.
Overview
- Background
- Examples
- Recent Work
- Challenges
- Summary
Who uses services? Answer: YOU do! … and DoD wants to!
[Slide graphic: examples of online services for BANKING, DIRECTIONS, TRAVEL, eCOMMERCE, and NEWS & INFORMATION]
What is a service?
“A mechanism to enable access to one or more capabilities, where the access is provided using a prescribed interface and is exercised consistent with constraints and policies as specified by the service description.” – DoD Net-Centric Services Strategy, May 2007
Characteristics of services:
- Modular; composable, much like “lego” building blocks
- Network-accessible
- Reusable
- Standards-based
- Distributed capabilities
What DoD sees… and wants!
[Slide graphic: a mashup of worldwide threats and incidents (airport, chemical, bridge, railway, bombs, etc.) with the ability to sort by type of incident, date, location, etc.; a listing of bomb-related events between 14 Feb 08 and 15 Feb 08; links to related news stories and a searchable database.]
What is “service-oriented architecture”?
“A paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains.” – OASIS Reference Model for Service-Oriented Architecture, October 2006
- An architectural style based on flexibly linked software components that leverage web standards and services
  - NOT a product
  - NOT a bunch of web services
What is composability?
What is a composable system?
- One that consists of recombinant atomic behaviors (components) selected and assembled to satisfy specific [new] processing requirements.
- A design principle dealing with interrelated components that do not make assumptions about the compositions that may employ them; they are “fit for the unforeseen.” – proposed definition
NOTE: Composability is meaningful at many layers of abstraction.
Composable solutions – the desired end-state of a full-scale SOA implementation – are the direction in which DoD, federal stakeholders & commercial enterprises are evolving their automation assets.
Testing principles
Testing for composability
- Ensure individual processing elements do not make undue assumptions about the composition
  - Code analysis or inspection for “hidden assumptions” or “out of band” dependencies
Testing the composition
- Validate the chosen composition of individual elements performs the desired functions.
- The “composition layer” is an additional one that must be tested separately.
  - A composition can be VALID yet still not do anything USEFUL with respect to the relevant CONTEXT
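As a purely illustrative example of what “inspection for hidden assumptions” might look like in its crudest form, the sketch below greps a service’s source for out-of-band dependencies. The patterns and the sample source are invented; a real assessment would need genuine static analysis rather than regular expressions.

```python
import re

# Naive heuristic scan for out-of-band dependencies (hard-coded hosts, environment
# variables, absolute file paths) that would tie a component to one deployment context.
HIDDEN_ASSUMPTION_PATTERNS = {
    "hard-coded host": re.compile(r"https?://[\w.-]+"),
    "environment variable": re.compile(r"os\.environ|getenv\("),
    "absolute file path": re.compile(r"[\"'](/|[A-Za-z]:\\)[^\"']+[\"']"),
}

def scan_for_hidden_assumptions(source: str):
    """Return (line number, finding label, offending line) tuples for a source string."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for label, pattern in HIDDEN_ASSUMPTION_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label, line.strip()))
    return findings

# Invented sample source for demonstration only.
sample = 'TARGET = "http://hardcoded-host.example.org/feed"\npath = "/opt/legacy/config.xml"\n'
for lineno, label, line in scan_for_hidden_assumptions(sample):
    print(f"line {lineno}: {label}: {line}")
```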
Typical DoD approach to testing compositions
[Slide diagram: a cycle that STARTS with a Capability Need, which drives the Service Needed and Data Needed; a Community Information Exchange Vocabulary and Service Implementations enable a Capability Demonstration and Capability Delivery; then DECLARE SUCCESS!]
Better DoD example: Net-Centric Diplomacy **
General findings from the testing of the NCD initiative of Horizontal Fusion:
- Many different types of interrelated testing are needed.
- Exhaustive testing is impossible.
  - Testing must still be iterative
  - It is time consuming!
  - Operationally specific test cases are needed
- Performance testing must focus on service dependencies vice user interface.
- The number of requests that will cause a web service to fail is far lower than for a web server.
- “Few realize the complexity that must be taken into account when attempting to quantitatively measure performance and reliability when dealing with web services.” – Derik Pack, SPAWAR System Center, 2005
** see http://www.dtic.mil/ndia/2005systems/thursday/pack2.pdf
Better DoD example: Net-Centric Diplomacy, concluded
Testing was conducted until “error thresholds” were reached:
- Round trip time (90 sec)
- Error (15%)
Specific findings:
- A mean of 3.06 Connections per Second could be achieved
- WSDLs define interfaces, but not valid service use
Is this practicable across all of DoD? DoD may need to stand up multiple access points for heavily used services / compositions; and the “sweet spot” will likely differ in times of war vice times of peace.
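One way to act on the “WSDLs define interfaces, but not valid service use” finding, sketched below with an invented query grammar and an arbitrary depth limit, is to place a semantic guard in front of the service that rejects queries which are well-formed per the interface but pathologically nested.

```python
# Hypothetical guard in front of a query service: the WSDL can confirm the query is a
# string, but not that the query is sensible. The grammar and depth limit are assumptions.

def nesting_depth(query: str) -> int:
    """Maximum parenthesis nesting depth of a boolean query expression."""
    depth = max_depth = 0
    for ch in query:
        if ch == "(":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == ")":
            depth = max(depth - 1, 0)
    return max_depth

def accept_query(query: str, max_depth: int = 5) -> bool:
    """Accept only queries whose nesting stays below a service-specific limit."""
    return nesting_depth(query) <= max_depth

print(accept_query("(name:smith AND (country:US OR country:CA))"))  # True
print(accept_query("(" * 50 + "name:smith" + ")" * 50))             # False: well-formed but pathological
```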
What DoD must create to “get there…”
- Loosely coupled, relevant, “right-sized” services that can be leveraged across continuously changing processes.
- New governance that can deal with complex management of distributed, loosely coupled, dynamically composable services.
- A better understanding of maintenance implications:
  - How long does it take? How will other components or clients be impacted?
  - Components with low or unknown MTBF should be highly accessible and easily replaceable…can this be automated?
- Rapidly deployable, virtual, continuous test environment
  - Examples provided on the next two slides …
… something like ELBA? **
1) Developers provide design-level specifications of model and policy documents (as input to Cauldron) and a test plan (XTBL).
2) Cauldron creates a provisioning and deployment plan for the application.
3) Mulini generates a staging plan from the input components referred to from the XTBL (dashed arrows).
4) Deployment tools deploy the application and monitoring tools to the staging environment.
5) The staging is executed.
6) Data from monitoring tools is gathered for analysis.
7) After analysis, developers adjust deployment specifications or possibly even policies and repeat the process.
** see http://www.static.cc.gatech.edu/systems/projects/Elba/pub/200604_NOMS_mulini.pdf
… or STARSHIP?
- Key component of the Electronic Proving Ground Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) tool kit for live distributed test environments.
- Provides a “threads-based” composable environment to plan, generate planning documents, verify configuration, initialize, execute, synchronize, monitor, control, and report the status of a sequence of activities.
- Freely available & customizable to any problem domain.
- Complexity may be a barrier.
POC: Ms. Janet McDonald, (520) 538-3575 (DSN 879), [email_address], PM ITTS IMO – Ft. Huachuca
What MITRE is doing
- c2c-composable-c2-list and Community Share site
- Composable C2 is a “Grand Challenge Problem” within the 2009 MITRE Innovation Program (MIP):
  - How to build reconfigurable components that can be mashed together in an agile fashion
    - Visualization and Analysis
    - Info Sharing
    - Interoperability and Integration
    - Resource Management to enable composability (of people, organizations, networks, sensors, platforms…)
    - Acquisition and Systems Engineering
    - Collaborative and Distributed C2
- Example proposal: Web Service Process Optimization
  - “Our hypothesis is that web service optimization can be realized through machine learning techniques and statistical methods. In this research we intend to find a computational solution to the problem of creating and maintaining web service processes.”
What MITRE is doing, concluded
- Resources for Early and Agile Testing
  - Recently showed how low-cost simulation games can create a simple, “good-enough” simulation capability to evaluate new concepts early in development and expose the most challenging issues.
  - REACT “Online”: a composed testing environment for composed solutions!
    - A loosely coupled simulation capability delivering dynamic flexibility for “quick look” experiments
- A series of brainstorming sessions on Composable C2
  - “Static” vs. “dynamic” composability viz legacy systems
  - Do services derived from a “proven capability” have lower or non-existent testing requirements?
Practical challenges
- Can we “right-size” testing as “fit-for-composition?”
  - Is composability binary? A sliding scale? When is it [not] OK to use “lower-rated” components?
  - Can we characterize the right amount of testing based on the anticipated longevity of the composition? Other factors?
- What metadata must be exposed to assess contextual validity of components in composition?
  - Should WSDL be enriched? Supplemented?
  - Can what constitutes valid compositions be expressed as rules? How narrowly / broadly?
- What thresholds / metrics are required? Nice to have?
  - Performance thresholds? Ongoing component health?
- Can we “borrow” ideas from other composability abstractions for applicability here?
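For example, composition-time rules could check service metadata that supplements the WSDL. The sketch below is a strawman: every field, threshold, and rule is a hypothetical placeholder for whatever DoD would actually standardize, not an existing metadata scheme.

```python
from dataclasses import dataclass

@dataclass
class ServiceMetadata:
    """Hypothetical per-service metadata exposed alongside the WSDL."""
    name: str
    composability_rating: float        # 0.0 (untested) .. 1.0 (rigorously tested for reuse)
    mean_time_between_failures_h: float
    max_classification: str            # e.g., "UNCLASSIFIED", "SECRET"

def composition_rules(services, mission_classification: str,
                      min_rating: float = 0.5, min_mtbf_h: float = 100.0):
    """Return rule violations for a proposed composition; an empty list means it passes."""
    order = ["UNCLASSIFIED", "SECRET", "TOP SECRET"]
    violations = []
    for svc in services:
        if svc.composability_rating < min_rating:
            violations.append(f"{svc.name}: composability rating below {min_rating}")
        if svc.mean_time_between_failures_h < min_mtbf_h:
            violations.append(f"{svc.name}: MTBF below {min_mtbf_h} h")
        if order.index(svc.max_classification) < order.index(mission_classification):
            violations.append(f"{svc.name}: not accredited for {mission_classification}")
    return violations

# Invented example composition.
proposed = [
    ServiceMetadata("federated_search", 0.8, 400.0, "SECRET"),
    ServiceMetadata("legacy_geocoder", 0.3, 50.0, "UNCLASSIFIED"),
]
for v in composition_rules(proposed, mission_classification="SECRET"):
    print(v)
```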
Levels of composability testing?
[Slide graphic: a notional plot of testing versus risk (e.g., loss of life), annotated with an “as-is for composable C2” marker.]
- Can we “rate” the composability of components?
- For a composition that will only be used a few times, can we tolerate higher risk?
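To illustrate the sliding-scale idea behind this graphic, the toy function below maps loss-of-life risk, expected use frequency, and longevity to a required level of testing rigor. The tiers and scoring are invented purely for illustration, not a proposed policy.

```python
# Toy rendering of "levels of composability testing": rigor scales with risk,
# frequency of use, and longevity of the composition. All numbers are invented.

def required_test_level(loss_of_life_risk: str, expected_uses: int, lifetime_days: int) -> str:
    score = {"low": 0, "medium": 2, "high": 4}[loss_of_life_risk]
    score += 1 if expected_uses > 10 else 0
    score += 1 if lifetime_days > 30 else 0
    if score >= 4:
        return "full verification + continuous monitoring"
    if score >= 2:
        return "functional + performance testing in a staged environment"
    return "demonstration in an exercise may suffice"

print(required_test_level("high", expected_uses=2,   lifetime_days=7))    # full verification...
print(required_test_level("low",  expected_uses=100, lifetime_days=365))  # functional + performance...
print(required_test_level("low",  expected_uses=3,   lifetime_days=7))    # demonstration...
```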
“Pressure-points” for formalisms
- How can the lessons-learned from the past inform the way ahead for extending formal methods to testing & verification of composable systems?
- Can we derive principles to compose systems in methodical, rather than ad-hoc ways, that will produce more satisfactory results?
- How can we handle partial and incremental specifications?
- How can we cope when building a composition with parts that make incompatible assumptions about their mutual interactions?
- What kinds of automated checking and analysis can we support?
Take-away points
- DoD will continue to deploy composed solutions to realize its SOA vision.
- Current testing focuses more on the level of service provided and less on how reliably the capability is delivered or whether it actually meets the need.
- Different levels of testing are probably appropriate for different contexts (“static” versus “dynamic,” use frequency, loss-of-life consequences).
- Automated environments are needed to test composed solutions targeted for rapid deployment.
Pointers to more information
- www.thedacs.com
  - Data & Analysis Center for Software: a repository of documents and tools in research areas including testing and reuse
  - Search for the latest results on: composable systems, composability, web service testing, composable C2
- www.peostri.army.mil/PRODUCTS/STARSHIP/
  - Starship II homepage
Observations about compositions
Technical realm:
- Solutions are built from primitive and composite components and connectors.
- Components and connectors can be described by interface and protocol specifications.
- Common patterns provide abstractions we may be able to exploit in design, development, analysis and testing.
Semantic realm:
- To deliver meaningful capability, the components must be composable with respect to their underlying ideas.
“Lines of Evolution” vision for DoD systems
1. A single system with a non-flexible hierarchical structure
2. A system consisting of several independently functional but integrated components
3. A capability is realized through a pre-defined orchestration of services
4. Reusable, “mobile” services
5. Services are orchestrated on an ad-hoc basis to deliver a capability…and then disappear
Another view: stages of SOA adoption **
1. Business Process Understanding: How is the work done?
2. IT Assessment: What IT assets exist supporting the business process?
3. SOA Design/Determination: What should be a service?
4. SOA Enablement (Java EE, .NET, federated data services): How will application and data services be developed and deployed?
5. Infrastructure (ESB, Registry, Management) Governance: How will services, applications, and people interact and communicate?
6. Process Orchestration/Composition: How will business processes and rules be developed and deployed?
[Slide annotation: “DoD is lurking around here,” roughly mid-way along this path.]
** Mark Driver, Optimizing Open Source and SOA Strategies, Gartner Application Architecture, Development & Integration Summit 2007, http://www.gartner.com/it/page.jsp?id=506878&tab=overview