The Seven Main Challenges of an Early Warning System Architecture

Presentation by J. Moßgraber, F. Chaves, S. Middleton, Z. Zlatev, and R. Tao on "The Seven Main Challenges of an Early Warning System Architecture" at ISCRAM 2013 in Baden-Baden.

10th International Conference on Information Systems for Crisis Response and Management
12-15 May 2013, Baden-Baden, Germany

  1. The Seven Main Challenges of an Early Warning System Architecture
     J. Moßgraber, F. Chaves (1), S. Middleton, Z. Zlatev (2), R. Tao (3)
     (1) Fraunhofer Institute of Optronics, System Technologies and Image Exploitation (IOSB), Germany
     (2) IT Innovation Centre, Southampton, UK
     (3) Department of Electronic Engineering, Queen Mary University of London, UK
  2. Early Warning System Architecture
     [Diagram: Sensors → "Magic" → Decision]
  3. Downhole Drilling
     • Purpose
       - Exploration and exploitation of oil and gas
       - Retrieval of geothermal energy
       - Controlled disposal of carbon dioxide
       - Scientific drilling
     • Safety constraints
       - Protect crew, equipment and the environment from injury, damage and pollution
       - Prevent crises
  4. Tsunami Warning Systems
  5. System-of-Systems
     • Operational / managerial independence of the elements: different governments and institutions such as warning centres, task forces, scientific institutions, data centres, …
     • Evolutionary development: integration of new sensor networks, analysis algorithms, …
     • Emergent behaviour: combines the knowledge of the parts
     • Geographic distribution: the Tsunami Early Warning System for the Euro-Mediterranean area spans more than 20 national and at least one regional centre
  6. System-of-Systems (cont.): Communication is the key!
     [Diagram: broker clusters at the 1st, 2nd and 3rd sites, linked to each other]
  7. The 7 Challenges
     1. Build a scalable communication layer for a SoS
     2. Build a resilient communication layer for a SoS
     3. Efficiently publish large volumes of semantically rich sensor data
     4. Scalable and high-performance storage of large distributed datasets
     5. Handling federated multi-domain heterogeneous data
     6. Discovery of resources in a geo-distributed SoS
     7. Coordination of work between geo-distributed systems
  8. 1) Build a scalable communication layer
     • Requirements: open, heterogeneous, standards-based, language- and OS-independent
     • Design decisions (a minimal publisher sketch follows this slide)
       - Scalable communication using a Message-Oriented Middleware (MOM) with a many-to-many model
       - A single MOM technology, Apache Qpid, which supports the Advanced Message Queuing Protocol (AMQP)
     • Discussion of the alternatives
       - API standards, e.g. JMS: not a wire-level standard like the AMQP supported by Qpid
       - Text-based protocols, e.g. STOMP: not a binary protocol, which is more scalable
       - Point-to-point systems, e.g. P2P or SOA: not a generic many-to-many publish-subscribe model
       - Proprietary MOMs: not open source, so harder to enhance and to instrument for monitoring
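
     As an illustration of the design decision above, a minimal AMQP publisher sketch using the Python bindings of Apache Qpid Proton; the broker URL, topic name and payload fields are invented for the example:

        from proton import Message
        from proton.handlers import MessagingHandler
        from proton.reactor import Container

        class SensorPublisher(MessagingHandler):
            """Publishes one sensor reading to a topic on an AMQP broker."""

            def __init__(self, url, address):
                super().__init__()
                self.url = url          # broker endpoint (assumed)
                self.address = address  # pub-sub topic (assumed)

            def on_start(self, event):
                conn = event.container.connect(self.url)
                event.container.create_sender(conn, self.address)

            def on_sendable(self, event):
                # Send once the broker has granted credit, then shut down.
                event.sender.send(Message(body={"sensor": "buoy-17", "level_m": 1.42}))
                event.sender.close()
                event.connection.close()

        Container(SensorPublisher("amqp://broker.example.org:5672",
                                  "sensors.sea-level")).run()
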
  9. 2) Build a resilient communication layer
     • Requirements: tolerate
       - broker failure, e.g. a broker crash due to a failure in the underlying system
       - link failure, e.g. low QoS on a communication link or a broken link
       - client failure, e.g. a failed subscriber client
     • Design decisions (a client-side sketch follows this slide)
       - Broker mirroring: messages are replicated to both the primary broker and a mirror broker
       - Overlay routing: messages are automatically switched to another overlay routing path according to link status
       - Durable queues: selected to temporarily store messages while a subscriber is down
     • Discussion of the alternatives: SOA and distributed stream processing have no inherent resilience
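
     A hedged sketch of the client side of those decisions: a Qpid Proton receiver given a failover URL list (primary plus mirror broker) and a named durable subscription, so the broker queues messages while the subscriber is down. Host names and the subscription name are assumptions:

        from proton.handlers import MessagingHandler
        from proton.reactor import Container, DurableSubscription

        class ResilientReceiver(MessagingHandler):
            def __init__(self, urls, address):
                super().__init__()
                self.urls = urls
                self.address = address

            def on_start(self, event):
                # urls= gives the container a failover list: if the primary
                # broker fails, the connection is retried against the mirror.
                conn = event.container.connect(urls=self.urls)
                # A named durable subscription keeps messages queued on the
                # broker while this client is offline (the durable queue above).
                event.container.create_receiver(conn, self.address,
                                                name="warning-centre-1",
                                                options=DurableSubscription())

            def on_message(self, event):
                print("alert:", event.message.body)

        Container(ResilientReceiver(["amqp://primary:5672", "amqp://mirror:5672"],
                                    "sensors.sea-level")).run()
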
  10. Unconventional / Human Sensors: a Twitter crawler
     [Comic illustration; source: http://xkcd.com]
  11. 3) Efficiently publish large volumes of semantically rich sensor data
     • Design decisions
       - Publish data to the MOM and metadata to a semantic registry: fast, with expressive metadata (see the sketch after this slide)
       - Combine different approaches, as there is no "one size fits all":
         - SWE O&M XML messages with the data embedded in them: slow, expressive metadata
         - Support for existing formats, e.g. WITS0 JSON-encoded messages: fast, limited metadata
         - Binary HDF5 / netCDF: fastest, limited metadata
     • Discussion of the alternatives
       - Database queries via an SSH tunnel: tight coupling of machines, SQL security issues
       - HTTP or SQL requests over the MOM: Qpid does not easily support this, SQL security issues
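
     A hedged sketch of the first decision: the AMQP message carries only a compact JSON payload plus a reference into the semantic registry, where the rich metadata lives. The registry URI scheme, property key and field names are invented for illustration:

        import json
        import uuid
        from proton import Message

        # The heavy semantic description stays in the registry; the message
        # carries the data plus a URI pointing at that description.
        sample_id = str(uuid.uuid4())
        metadata_uri = ("http://registry.example.org/sensors/buoy-17#"
                        + sample_id)  # hypothetical registry entry

        msg = Message(
            content_type="application/json",
            properties={"metadata": metadata_uri},  # cheap lookup key
            body=json.dumps({"t": "2013-05-12T10:15:00Z", "level_m": 1.42}))
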
  12. 4) Scalable and high-performance storage of large distributed datasets
     • Design decisions
       - Working datasets
         - Hybrid databases (OWLIM for metadata, MySQL for data): semantically rich and efficient queries, but clients must understand two protocols (SPARQL and SQL)
         - File storage for larger binary data: no size limits and any format, but the data is difficult to query
       - Archived datasets
         - HDF5 strategy for longer-term storage: a compressed format with embedded metadata (see the sketch after this slide)
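
     A minimal sketch of that archival strategy using h5py; the file, dataset and attribute names are illustrative assumptions:

        import h5py
        import numpy as np

        # Compressed HDF5 file with the descriptive metadata embedded
        # as attributes on the dataset itself.
        with h5py.File("buoy-17-2013-05.h5", "w") as f:
            ds = f.create_dataset("sea_level",
                                  data=np.zeros(86400),  # placeholder samples
                                  compression="gzip")
            ds.attrs["sensor"] = "buoy-17"
            ds.attrs["unit"] = "m"
            ds.attrs["sampling_rate_hz"] = 1.0
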
  13. 4) Scalable and high-performance storage of large distributed datasets (cont.)
     • Discussion of the alternatives
       - Triple stores: good metadata query support and standard SPARQL, but poor scalability
       - Relational databases: fast and standard SQL, but a rigid structure and poor metadata query support
       - NoSQL solutions (column stores etc.): suitable for our proposed hybrid solution; integration with Apache Cassandra is currently in progress
       - MapReduce solutions: good for distributed processing and efficient for very large datasets, but complex to set up (overkill for real-time data with short processing-time horizons)
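
     The hybrid design means a client speaks both protocols. A hedged sketch of that two-step query path; the endpoint, ontology prefix usage and table schema are assumptions, not the project's actual deployment:

        from SPARQLWrapper import SPARQLWrapper, JSON
        import mysql.connector

        # Step 1: SPARQL against the metadata store to find a data series.
        sparql = SPARQLWrapper("http://registry.example.org/sparql")
        sparql.setQuery("""
            PREFIX ssn: <http://purl.oclc.org/NET/ssnx/ssn#>
            SELECT ?series WHERE { ?s a ssn:Sensor ; ssn:observes ?series . }
        """)
        sparql.setReturnFormat(JSON)
        bindings = sparql.query().convert()["results"]["bindings"]
        series_uri = bindings[0]["series"]["value"]

        # Step 2: SQL against the data store to fetch the actual samples.
        db = mysql.connector.connect(host="data.example.org", user="reader",
                                     password="secret", database="observations")
        cur = db.cursor()
        cur.execute("SELECT ts, value FROM samples WHERE series_uri = %s "
                    "LIMIT 10", (series_uri,))
        for ts, value in cur.fetchall():
            print(ts, value)
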
  14. 5) Handling federated multi-domain heterogeneous data
     • Design decisions
       - A broker pattern for access to and transformation of data between domains: scalable, but slow, adding an extra "hop" to the workflow (see the sketch after this slide)
       - Domain ontologies in semantic registries: scalable, but requires a lookup step
       - Federated data queries: scalable, but extra aggregation work for the clients
     • Discussion of the alternatives
       - Data sources and/or applications map their data to one global ontology: efficient, but getting agreement between domains is not practical
       - Data sources and/or applications map locally between data models: does not scale, and inconsistencies are likely
       - Automatic ontology-alignment services: scalable, but difficult and error-prone, and slow, adding an extra "hop" to the workflow
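
     A toy sketch of such a translation broker: it renames one domain's fields into another domain's vocabulary before forwarding. Both sides of the field mapping are invented for illustration:

        # Mapping from a drilling-domain vocabulary to a warning-centre
        # vocabulary (hypothetical field names on both sides).
        DOMAIN_MAP = {
            "holeDepth": "depth_m",
            "mudFlowIn": "inflow_rate",
        }

        def translate(message: dict) -> dict:
            """Rename known fields via the map; pass unknown fields through."""
            return {DOMAIN_MAP.get(k, k): v for k, v in message.items()}

        # The broker would republish translate(msg) on the target domain's topic.
        print(translate({"holeDepth": 2150.0, "mudFlowIn": 3.2}))
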
  15. 6) Discovery of resources in a geo-distributed SoS
     • Design decisions
       - Multiple semantic registries hosted by the stakeholders: scalable and modular
       - Separation of frontend(s) and ontology store(s)
       - A shared ontology core (classes, relations, attributes and design patterns based on the SSN ontology, SSNO) with registry-specific sub-classes and individuals: flexibility for adaptation to different domains (see the sketch after this slide)
       - Data and services described in semantically rich ways, allowing search and browsing by both humans and machines: multiple interfaces for established protocols/standards
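
     A hedged sketch, using rdflib, of what a registry entry under that shared-core/sub-class split might look like; the registry namespace, class names and sensor URI are invented:

        from rdflib import Graph, Namespace, URIRef
        from rdflib.namespace import RDF, RDFS

        SSN = Namespace("http://purl.oclc.org/NET/ssnx/ssn#")    # shared core
        REG = Namespace("http://registry.example.org/tsunami#")  # hypothetical

        g = Graph()
        # Registry-specific sub-class, anchored in the shared SSN core.
        g.add((REG.SeaLevelSensor, RDFS.subClassOf, SSN.Sensor))
        # A concrete sensor (an individual) described against that sub-class.
        buoy = URIRef("http://registry.example.org/tsunami/buoy-17")
        g.add((buoy, RDF.type, REG.SeaLevelSensor))
        g.add((buoy, SSN.observes, REG.SeaLevelHeight))
        print(g.serialize(format="turtle"))
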
  16. 6) Discovery of resources in a geo-distributed SoS (cont.)
     • Discussion of the alternatives
       - Classical search engines and catalogues: many out-of-the-box solutions, but limited "semantic" search capabilities
       - Monolithic systems: optimized, with good performance for some types of applications and many additional tools, but with dependencies between components
       - One central semantic registry: no "synchronisation" of the ontology core necessary, but a bottleneck if the central registry is unavailable
  17. 7) Coordination of work between geo-distributed systems
     • Design decisions
       - Decision tables and an authoring tool for end users: self-documenting rule sets and intuitive, easy-to-use interfaces for non-IT experts, with many tools available; however, a comfortable mapping of "models" (sets of variables and rules) to ontology elements is not yet standardized or available (a toy decision table follows this slide)
       - Workflow engine(s) coordinating federated, message-based event processing: supports complex and rich choreographies and requirement-specific, flexible adaptation of (standard) workflows
     • Discussion of the alternatives
       - Ontology reasoning: powerful and expressive reasoning and rule systems, but they require a high level of expertise; performance and quality of results depend on the size and complexity of the ontology, and persistent storage of big ontologies is still a problem
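
     To make the decision-table idea concrete, a toy rule set mapping earthquake parameters to alert levels; the thresholds and level names are invented, not the project's operational rules:

        # Each row: (min_magnitude, max_depth_km, alert_level).
        # Rules are evaluated top to bottom; the first match wins.
        DECISION_TABLE = [
            (8.0, 100.0, "WATCH"),
            (6.5, 100.0, "ADVISORY"),
        ]

        def classify(magnitude: float, depth_km: float) -> str:
            for min_mag, max_depth, level in DECISION_TABLE:
                if magnitude >= min_mag and depth_km <= max_depth:
                    return level
            return "INFORMATION"  # default when no rule fires

        print(classify(7.9, 30.0))  # -> ADVISORY
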
  18. System-of-Systems
     [Diagram, repeated from slide 6: broker clusters at the 1st, 2nd and 3rd sites]
  19. Generic TRIDEC Architecture
     [Diagram: at each site, data sources publish real-time data through the MOM; a feeder caches/stores the data in a storage layer (historic and cached data); a semantic registry handles "register & request topic"; a workflow service steers processing services, invokes them, handles their results and sends notifications; processing services get cached data and parameters and write results; a user interface queries the storage and receives notifications; warnings go to downstream dissemination]
  20. Example Workflow for Tsunami Early Warning
  21. Generic TRIDEC Architecture
     [Same diagram as slide 19]
  22. Does it work?
     • The TRIDEC tsunami software has been deployed for testing purposes at the Instituto Português do Mar e da Atmosfera (IPMA) in Lisbon and at the Kandilli Observatory and Earthquake Research Institute (KOERI) in Istanbul
     • Two scenarios in the Europe-wide tsunami exercise NEAMWave2012 were successfully validated
     • New functionality was available for the first time: "centre-to-centre communication" via software systems between Turkey and Portugal, and eyewitness reports sent from mobile devices via apps
  23. Conclusion and Future Work
     • The main problems in designing the architecture of a SoS were presented:
       - it requires a scalable and resilient communication layer
       - large amounts of data need to be published and processed
       - it requires a scalable storage concept
       - the geo-distributed nature must be respected
     • Future work
       - improve the resilience and workload allocation of the MOM
       - improve the resilience of the semantic registry by providing a replication mechanism
       - research federated access to Big Data
  24. Acknowledgements
     The presented work was done in collaboration with the consortium of the TRIDEC project, which is supported by the European Commission under the 7th Framework Programme (ICT-2009.4.3 Intelligent Information Management, Project Reference: 258723).
