The document discusses the role of ontologies in supporting emergent middleware. Emergent middleware is dynamically generated distributed system infrastructure that enables interoperability in complex distributed systems.
Ontologies play a key role by providing meaning and reasoning capabilities to allow the right runtime choices to be made. They support various functions throughout an emergent middleware architecture, including discovery, composition, and mediation. Two experiments provide initial evidence of ontologies' potential role in middleware by enabling semantic matching and process mediation. However, challenges remain around generating ontologies and addressing interoperability between heterogeneous ontologies.
This document outlines a proposed system to filter unwanted messages from online social networks. It discusses the existing problems of misuse on social media platforms. The proposed system would use machine learning techniques like SVM for text categorization and identification of fake profiles to filter content by category (e.g. abusive, vulgar, sexual). It presents the system architecture as a three-tier structure and provides results of testing the filtering mechanism and classifier. The conclusion is that the "Filtered wall" system could address concerns around unwanted content on social media walls.
Heterogeneous Device-to-Device (D2D) mobile networks are characterised by frequent network disruption and unreliable delivery of messages by peers. Trust-based protocols have been widely used to mitigate security and performance problems in D2D networks. Despite several efforts by previous researchers to design trust-based routing for efficient collaborative networks, few studies focus on peers' neighbourhoods as a routing-metric element for a secure and efficient trust-based protocol. In this paper, we propose and validate a trust-based protocol that takes into account the similarity of peers' neighbourhood coefficients to improve routing performance in mobile HetNet environments. The results of this study demonstrate that peers' neighbourhood connectivity is a characteristic that can influence routing performance. Furthermore, our analysis shows that the proposed protocol forwards a message only to companions with a higher probability of delivering it, thus improving the delivery ratio, minimising latency, and mitigating the problem of malicious peers that use a packet-dropping strategy.
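As an illustration of the neighbourhood-similarity idea in this abstract, the sketch below ranks relay candidates by the Jaccard overlap of their neighbour sets. The coefficient, the graph, and the helper names are assumptions for illustration; the paper's exact metric may differ.

```python
# Hypothetical sketch: rank relay candidates by neighbourhood similarity.
# The coefficient used here is a Jaccard overlap of neighbour sets, an
# assumption; the paper's exact neighbourhood coefficient may differ.

def neighbourhood_similarity(neigh_a, neigh_b):
    """Jaccard overlap between two peers' neighbour sets."""
    a, b = set(neigh_a), set(neigh_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def pick_relay(current, candidates, adjacency):
    """Forward only to the candidate whose neighbourhood most resembles ours."""
    return max(candidates,
               key=lambda c: neighbourhood_similarity(adjacency[current],
                                                      adjacency[c]))

adjacency = {
    "A": ["B", "C", "D"],
    "B": ["A", "C", "D"],   # shares most neighbours with A
    "E": ["F", "G"],        # disjoint neighbourhood
}
print(pick_relay("A", ["B", "E"], adjacency))  # "B"
```

A peer whose neighbour set barely overlaps ours scores near zero and is skipped, which is how such a rule starves packet-dropping peers of traffic.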
DOMINANT FEATURES IDENTIFICATION FOR COVERT NODES IN 9/11 ATTACK USING THEIR ... (IJNSA Journal)
The document presents a framework called SoNMine that identifies key players in the 9/11 covert network using node behavioral profiles. It generates profiles by analyzing node behaviors based on path types extracted from the network's multi-relational structure. The framework identifies outlier nodes with dense connections or high communication as influential players. It also determines dominant features that help classify normal and outlier nodes more accurately.
A system to filter unwanted messages from OSN user walls (Gajanand Sharma)
The document presents a system to filter unwanted messages from user walls on online social networks. It uses machine learning techniques like text classification and radial basis function networks to categorize messages as neutral or non-neutral, and further classify non-neutral messages. Users can define custom filtering rules and blacklists to automatically filter messages on their walls based on content, user relationships, and other criteria. The system aims to give users more control over their timeline posts while maintaining flexibility.
Comprehensive Analysis on the Vulnerability and Efficiency of P2P Networks un... (ijp2p)
Peer-to-peer (P2P) systems are networks consisting of a group of nodes and can be as wide as the Internet. These networks require evaluation mechanisms and distributed control and configuration so that each peer (node) is able to communicate with other peers. P2P networks act as transport systems created to provide services such as searching, large-scale storage, content sharing, and supervision. Changes in configuration, possibly resulting from faults and failures or from natural node behaviour, are one of the most important features of P2P networks. Resilience to faults and failures, together with appropriate handling of threats and attacks, is a main requirement of most of today's communication systems and networks. Thus, since P2P networks can be used as an infrastructure and an alternative to many other communication networks, they have to be more reliable, accessible, and resilient to faults, failures, and attacks than the client/server approach. In this work in progress, we present a detailed study of the behaviour of various P2P networks under faults and failures, focusing on fault tolerance. We consider two different static failure scenarios: (a) a random strategy in which nodes or edges of the network are removed with equal probability and without any knowledge of the network's structure, and (b) a targeted strategy that uses some information about the nodes, in which the nodes with the highest degree are attacked first. By static faults we mean a situation where nodes or components become faulty before the network starts to operate, or during its operation, and remain faulty until the end of the work session. Our goal is to introduce various measures for analysing P2P networks and evaluating their vulnerability. The presented criteria can be used to evaluate the reliability and vulnerability of P2P networks under both random and targeted failures. There is no limit on the number and types of failures; the presented measures can be applied to different types of failures and even to a wide range of networks.
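The two static failure scenarios can be illustrated with a small self-contained experiment: remove either a random node or the highest-degree node from a hub-and-spoke graph and compare the size of the largest surviving connected component. This is an illustrative sketch, not the paper's measurement code; the graph is invented.

```python
# Illustrative sketch: compare random vs targeted node removal on a small
# hub-and-spoke graph by tracking the largest surviving connected component.
import random
from collections import deque

def largest_component(nodes, edges):
    """Size of the largest connected component, found via BFS."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            n = queue.popleft()
            size += 1
            for m in adj[n] - seen:
                seen.add(m)
                queue.append(m)
        best = max(best, size)
    return best

def remove_nodes(nodes, edges, victims):
    """Return the graph that survives after deleting the victim nodes."""
    keep = [n for n in nodes if n not in victims]
    return keep, [(u, v) for u, v in edges
                  if u not in victims and v not in victims]

# Star graph: hub 0 connected to spokes 1..9.
nodes = list(range(10))
edges = [(0, i) for i in range(1, 10)]

# Targeted attack: remove the highest-degree node (the hub).
t_nodes, t_edges = remove_nodes(nodes, edges, {0})
# Random failure: remove one arbitrary spoke instead.
r_nodes, r_edges = remove_nodes(nodes, edges, {random.choice(range(1, 10))})

print(largest_component(t_nodes, t_edges))  # 1: the star shatters
print(largest_component(r_nodes, r_edges))  # 9: hub keeps the rest connected
```

The gap between the two outcomes on the same topology is exactly the kind of vulnerability measure the abstract argues for.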
COMMUNITY DETECTION USING INTER CONTACT TIME AND SOCIAL CHARACTERISTICS BASED... (ijasuc)
Delay Tolerant Networks (DTNs) are networks in which node connectivity is opportunistic and an end-to-end path between a source and destination is not guaranteed most of the time. Hence, messages are transferred from source to destination via intermediate nodes on a hop-by-hop basis using the store-carry-forward paradigm. Due to the rapid advancement of handheld devices with wireless communication interfaces, such as smartphones and laptops carried by people, it will soon be possible to use DTNs for message dissemination without setting up infrastructure. Routing is challenging in DTNs because network connectivity is intermittent and a connection opportunity arises only when nodes come within transmission range of each other. The performance of a routing protocol depends on selecting an appropriate relay node that can deliver the message to the final destination in case the source and destination never meet. Human beings exhibit many social characteristics, such as friendship, community, similarity, and centrality, which a routing protocol can exploit when making forwarding decisions. The literature shows that using these characteristics improves the delivery probability of DTN routing protocols. Existing routing schemes detect communities using aggregated contact duration and contact frequency, which do not change over time. We propose community detection based on the Inter Contact Time (ICT) between node pairs, modelled with a power-law distribution, in which community members are added and removed dynamically. We also keep a single copy of each message in the entire network to reduce overhead. The proposed routing protocol, named Social Based Single Copy Routing (SBSCR), selects a suitable relay node from community members only, based on the social metrics of similarity and friendship taken together. ICTs in human mobility exhibit a power-law nature, which is used to detect the community structure at each node. A node maintains its own community and its similarity and friendship metrics with other nodes. Whenever a node has to select a relay, it chooses the community member with the higher value of the social metric. Simulations are conducted using the ONE simulator on real traces from campus and conference environments. SBSCR is compared with existing schemes, and the results show that it outperforms them in delivery probability and delivery delay with a comparable overhead ratio.
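A minimal sketch of the relay-selection step described above, assuming an equal-weight combination of similarity and friendship (the abstract does not give the exact weighting); all peers and metric values below are invented.

```python
# Hypothetical sketch of SBSCR-style relay selection: a node keeps per-peer
# social metrics and forwards its single message copy only to the community
# member with the highest combined score. The 50/50 weighting is an assumed
# value, not the paper's exact formula.

def social_score(similarity, friendship, w=0.5):
    """Combine the two social metrics into one score (weighting assumed)."""
    return w * similarity + (1 - w) * friendship

def select_relay(community, metrics):
    """Pick the community member with the highest combined social metric."""
    return max(community, key=lambda peer: social_score(*metrics[peer]))

community = {"B", "C", "D"}
metrics = {             # (similarity, friendship), both in [0, 1]
    "B": (0.9, 0.2),
    "C": (0.6, 0.7),    # best balanced score
    "D": (0.1, 0.8),
}
print(select_relay(community, metrics))  # "C"
```

Restricting the candidate set to community members is what keeps this single-copy: the message moves only when a strictly better-scoring community member is met.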
This document discusses human behavior's impact on opportunistic forwarding in delay-tolerant networks. It argues that current forwarding solutions focus on network structure and ignore human behavior dynamics. It also argues that delay-tolerant networking could enable low-cost communications in dense urban networks if it considers users' social and data similarities. The paper aims to show how performance could improve by taking human behavior into account in both low and high density networks, and how delay-tolerant networking could reduce communication costs in dense urban scenarios.
New prediction method for data spreading in social networks based on machine ... (TELKOMNIKA JOURNAL)
Information diffusion prediction is the study of the path of dissemination of news, information, or topics in structured data such as a graph. Research in this area focuses on two goals: tracing the information diffusion path and finding the members that determine the next path. The major problem of traditional approaches in this area is the use of simple probabilistic methods rather than intelligent methods. Recent years have seen growing interest in the use of machine learning algorithms in this field, and deep learning, a branch of machine learning, has increasingly been used for information diffusion prediction. This paper presents a machine learning method based on a graph neural network algorithm, which selects inactive vertices for activation based on the neighbouring vertices that are active in a given scientific topic. In essence, information diffusion paths are predicted through the activation of inactive vertices by active vertices. The method is tested on three scientific bibliography datasets: the Digital Bibliography and Library Project (DBLP), Pubmed, and Cora. The method attempts to answer the question of who will publish the next article in a specific field of science. Comparison of the proposed method with other methods shows 10% and 5% improved precision on the DBLP and Pubmed datasets, respectively.
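A drastically simplified, pure-Python stand-in for the activation idea in this abstract: an inactive vertex becomes active once enough of its neighbours are already active on a topic. The real method learns this rule with a graph neural network; the fixed threshold and the tiny graph below are purely illustrative.

```python
# Simplified stand-in for GNN-based diffusion prediction: activate an
# inactive vertex when its fraction of active neighbours reaches a fixed
# threshold. The actual paper learns this decision; this is illustrative.

def diffusion_step(adjacency, active, threshold=0.5):
    """One round: return the active set grown by newly activated vertices."""
    newly_active = set()
    for v, neighbours in adjacency.items():
        if v in active or not neighbours:
            continue
        frac = sum(n in active for n in neighbours) / len(neighbours)
        if frac >= threshold:
            newly_active.add(v)
    return active | newly_active

adjacency = {
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b"],
    "d": ["b"],
}
active = diffusion_step(adjacency, {"a"})  # b: 1/3 stays off, c: 1/2 fires
print(sorted(active))  # ['a', 'c']
```

Iterating this step traces a predicted diffusion path: the order in which vertices cross the threshold is the predicted order of future publications or shares.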
Content Based Message Filtering For OSNS Using Machine Learning Classifier (IJMER)
The document proposes a content-based message filtering system for online social networks (OSNs) using machine learning classifiers. It aims to filter unwanted messages from OSN user walls. The system uses a machine learning classifier to categorize messages and implements customizable filtering rules. It also includes a blacklist mechanism to block users who frequently post unwanted content. The architecture is divided into three layers: a social network manager layer, a content filtering layer using classifiers, and a graphical user interface layer. Filtering rules allow restricting messages based on sender attributes and relationships. Blacklist rules determine which users to block based on the percentage of their messages that violate rules.
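The blacklist rule described above, blocking a user once the share of their messages that violate the rules crosses a threshold, can be sketched as follows. The 30% threshold is an assumed value for illustration, not one stated in the document.

```python
# Sketch of a percentage-based blacklist rule: block a user once the share
# of their messages flagged as rule-violating reaches a threshold.
# The 30% cut-off is an assumption, not the document's actual parameter.

def should_blacklist(total_messages, violating_messages, threshold=0.3):
    """True when the violating fraction of a user's messages >= threshold."""
    if total_messages == 0:
        return False   # no history yet, nothing to judge
    return violating_messages / total_messages >= threshold

print(should_blacklist(10, 2))  # False: 20% is below the threshold
print(should_blacklist(10, 4))  # True: 40% of messages violate the rules
```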
Filtering Unwanted Messages from Online Social Networks (OSN) using Rule Base... (IOSR Journals)
Online Social Networks (OSNs) are today one of the most popular interactive media to share, communicate, and distribute a significant amount of human life information. In OSNs, information filtering can also be used for a different, more responsive function. This is owing to the fact that in OSNs it is possible to post or comment on other posts in particular public/private regions, generally called walls. Information filtering can therefore be used to give users the ability to automatically control the messages written on their own walls by filtering out unwanted messages. OSNs provide very little support to prevent unwanted messages on user walls. For instance, Facebook permits users to state who is allowed to insert messages on their walls (i.e., friends, defined groups of friends, or friends of friends). However, no content-based preferences are supported, and it is therefore not possible to prevent undesired messages, such as political or offensive ones, regardless of the user who posts them. The aim of this work is to propose and experimentally evaluate an automated system, called Filtered Wall (FW), able to filter unwanted messages from OSN user walls.
Spreading processes on temporal networks (Petter Holme)
This document discusses temporal networks and how temporal structures can impact dynamical processes on networks. It begins by describing different types of temporal networks including person-to-person communication, information dissemination, physical proximity, and cellular biology networks. It then discusses methods for analyzing temporal network structures like inter-event times and how bursty or heavy-tailed distributions can slow spreading compared to memory-less processes. The document also presents examples of how neutralizing temporal structures like inter-event times or beginning/end times can impact spreading simulations. Finally, it discusses how different temporal network datasets exhibit diverse temporal structures.
Temporal Networks of Human Interaction (Petter Holme)
Temporal networks provide a framework for modeling systems of interactions that occur between nodes over time. These networks capture both the topological structure of connections as well as the timing of interactions. Three key aspects of temporal networks discussed in the document are:
1) Temporal networks can be represented using contact sequences that capture when interactions occur between nodes, unlike static networks which only represent connections.
2) The temporal structure of interactions, such as patterns in the timing of contacts, can impact dynamical processes unfolding on the network like information or disease spreading.
3) Randomizing the timing of contacts in empirical temporal network data can alter dynamical processes, highlighting the importance of temporal structure beyond just topology.
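Point 3 can be demonstrated with a tiny deterministic susceptible-infected spread over a contact sequence: reordering contact times changes the outcome even though the static topology (who ever meets whom) is identical. The contact data below is invented for illustration.

```python
# Minimal illustration: a deterministic SI spread replayed over a contact
# sequence. Swapping contact times breaks the causal chain A -> B -> C,
# so the same static topology yields a different spreading outcome.

def si_spread(contacts, seed):
    """Infected set after replaying (u, v, t) contacts in time order."""
    infected = {seed}
    for u, v, t in sorted(contacts, key=lambda c: c[2]):
        if u in infected:
            infected.add(v)
        if v in infected:
            infected.add(u)
    return infected

original = [("A", "B", 1), ("B", "C", 2)]   # causal order: A -> B -> C
reordered = [("A", "B", 2), ("B", "C", 1)]  # same edges, B meets C too early

print(sorted(si_spread(original, "A")))   # ['A', 'B', 'C']
print(sorted(si_spread(reordered, "A")))  # ['A', 'B']: C is never reached
```

This is the effect that timing-randomization null models are designed to expose: any difference between empirical and time-shuffled runs is attributable to temporal structure, not topology.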
This document discusses predicting new friendships in social networks using temporal information. It describes research on predicting new links in social networks over time using supervised learning models trained on temporal features from past network interactions. The researchers used anonymized Facebook data over 28 months to train decision tree and neural network classifiers to predict new relationships, finding models using temporal information performed better than those without it.
An Sna-Bi Based System for Evaluating Virtual Teams: A Software Development P... (ijcsit)
The dependence of today's collaborative projects on knowledge acquisition and information dissemination emphasizes the importance of minimizing communication breakdowns. However, as organizations increasingly rely on virtual teams to deliver better and faster results, communication issues come to the forefront of project managers' concerns. This is particularly palpable in software development projects, which are increasingly virtual and knowledge-consuming as they require continuous generation and upgrading of shared information and knowledge. In a previous work, we proposed an SNA-BI based system (Covirtsys) that supplements the Analytics modules of the collaborative platform in order to offer a complementary analysis of communication flows through a network perspective. This paper concerns the application of this system to the virtual team of a software development project and shows how it can bring new insights that could help overcome communication issues among team members.
Complex Networks Analysis @ Universita Roma Tre (Matteo Moci)
This document discusses complex networks and their analysis. It provides a brief history of network analysis starting in the 18th century with Euler's work on the Seven Bridges of Königsberg problem. It then covers key topics like different types of networks, graph modeling approaches, measures to analyze networks, and applications of network analysis to domains like the web, social networks, and disease spreading. The document emphasizes that understanding network structure and interactions is important for studying complex systems and influences within networks.
1. The document discusses a proposed technique called Fuzzy Based Improved Mutual Friend Crawling (Fmfc) for crawling online social networks. It aims to reduce bias introduced by the time taken for crawling the whole network.
2. The technique crawls all users within the same community first before moving to the next community, allowing researchers to selectively obtain users belonging to the same community. This is compared to existing mutual friend crawling.
3. The paper also provides a literature review of existing crawling techniques and studies of complex network properties relevant to community detection in networks. Future work in overlapping communities and performance evaluation on very large networks is discussed.
Delay Tolerant Networking routing as a Game Theory problem – An Overview (CSCJournals)
This document discusses modeling delay tolerant networking (DTN) routing as a game theory problem. It provides background on DTN, which aims to enable communication in disrupted networks, and discusses how routing in DTN can be viewed as a strategic interaction between nodes. The document then gives an overview of game theory and different game forms. It proposes analyzing DTN routing as a game with Nash equilibrium, where nodes make forwarding decisions rationally based on beliefs about other nodes' actions.
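A toy instance of the game-theoretic view sketched above: two nodes each choose to Forward or Drop, and pure-strategy Nash equilibria are found by checking that neither node gains from a unilateral deviation. The payoff values are assumptions for illustration, not taken from the document.

```python
# Toy forwarding game: each node picks Forward or Drop; we enumerate
# pure-strategy Nash equilibria by checking that neither player can gain
# from a unilateral deviation. Payoff numbers are assumed for illustration.

ACTIONS = ["Forward", "Drop"]

# payoff[(a1, a2)] = (payoff to node 1, payoff to node 2)
payoff = {
    ("Forward", "Forward"): (3, 3),  # both deliver, both earn reputation
    ("Forward", "Drop"):    (0, 1),  # node 1 wastes energy
    ("Drop",    "Forward"): (1, 0),
    ("Drop",    "Drop"):    (1, 1),  # nothing delivered
}

def pure_nash_equilibria(payoff):
    """All action profiles where no player benefits from deviating alone."""
    equilibria = []
    for a1 in ACTIONS:
        for a2 in ACTIONS:
            u1, u2 = payoff[(a1, a2)]
            best1 = all(payoff[(b, a2)][0] <= u1 for b in ACTIONS)
            best2 = all(payoff[(a1, b)][1] <= u2 for b in ACTIONS)
            if best1 and best2:
                equilibria.append((a1, a2))
    return equilibria

print(pure_nash_equilibria(payoff))
# [('Forward', 'Forward'), ('Drop', 'Drop')]
```

Two equilibria coexist here: mutual cooperation and mutual defection. Incentive mechanisms in DTN routing are precisely about making the cooperative equilibrium the one nodes settle into.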
APPLICATION OF CLUSTERING TO ANALYZE ACADEMIC SOCIAL NETWORKS (IJwest)
This document discusses clustering academic social networks to analyze relationships between researchers. It proposes measuring similarity between researcher profiles based on attributes like research interests, publications, and co-authors. The profiles are represented using FOAF and RDF, with attributes like name, email, affiliation, interests, publications and coauthors. Similarities are calculated using measures like Euclidean distance, cosine similarity and Jaccard coefficient. Clustering profiles based on these similarities can simplify analysis of the large, dense social networks by identifying groups of related researchers.
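The similarity measures named above can be sketched directly: Jaccard overlap for set-valued attributes such as research interests, and cosine similarity for term-frequency vectors. The profile contents below are invented for illustration.

```python
# Sketch of the profile-similarity step: Jaccard overlap for set-valued
# attributes (research interests) and cosine similarity for term-frequency
# vectors. All profile data here is invented.
import math

def jaccard(a, b):
    """Overlap of two sets: |A & B| / |A | B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(u, v):
    """Cosine of the angle between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return dot / norm if norm else 0.0

interests_a = {"social networks", "clustering", "semantic web"}
interests_b = {"clustering", "semantic web", "ontologies"}
print(jaccard(interests_a, interests_b))  # 0.5: two shared of four total

# Publication keyword counts as simple term-frequency vectors.
print(cosine([1, 2, 0], [1, 2, 0]))  # 1.0: identical profiles
```

A pairwise matrix of such scores is the input a clustering algorithm needs to group related researchers.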
Community detection in complex networks has attracted growing interest and is the subject of several studies proposed to understand network structure and analyse network properties. In this paper, we give a thorough overview of different community discovery strategies, propose a taxonomy of these methods, and specify the differences between the suggested classes, which helps designers compare and choose the most suitable strategy for the various types of networks encountered in the real world.
2009-Social computing-First steps to netviz nirvana (Marc Smith)
This document summarizes two user studies that evaluated NodeXL, an open-source social network analysis tool integrated with Microsoft Excel, and its effectiveness for teaching SNA concepts. 21 graduate students with varying technical backgrounds used NodeXL to analyze online communities. The studies found that NodeXL was usable for a diverse range of users and its integrated metrics and visualizations helped spark insights and facilitated understanding of SNA techniques. Lessons learned can help educators, researchers, and developers improve SNA tools.
The document discusses the logos of several companies including EA Sports, Coca-Cola, Xbox 360, Manchester United, and Vodafone. It describes key design elements and symbolism for each logo such as the "E" in the EA Sports logo representing sports, the consistent use of the Coca-Cola logo since 1885, the "X" representing Xbox, and the speech mark in Vodafone representing phones. Overall, the document examines how these company logos effectively communicate their brands through simple yet symbolic design.
E-commerce provides several benefits such as access to global markets, 24/7 trading capabilities, and low start-up and running costs. However, it also faces weaknesses like lack of consumer trust, lack of human contact, and product description problems. These issues can be addressed by ensuring security, providing contact details, and carefully writing accurate descriptions. The technologies that support e-commerce include databases, browsers, web authoring software, and domain names. Promotion methods involve search engine optimization, forums, banners, loyalty programs, and direct marketing. Security relies on strong passwords, virus protection, and identity theft precautions.
This document discusses architectural integration styles for large-scale enterprise software systems. It proposes using architectural styles as a way to generalize common integration solutions at the enterprise system level, similar to how styles are used in traditional software architecture. The document defines key terms and presents a structure for describing architectural integration styles. It then describes several example styles, and presents a case study applying the style selection process to an energy company's system integration project. The goal is to provide an approach for selecting integration solutions based on the characteristics of existing systems and desired quality attributes.
The document discusses the importance of continuous improvement for business applications to maintain responsiveness to changing business needs and technologies. It identifies four key categories of continuous improvement: 1) business empowerment through tools that allow business users to directly change applications, 2) application enhancements and integration projects, 3) planned and unplanned maintenance, and 4) version upgrades for off-the-shelf applications. The document emphasizes that a continuous improvement strategy can both reduce maintenance costs and open new opportunities for business responsiveness compared to solely focusing on maintenance. It provides recommendations for balancing business empowerment with controls and avoiding building up technical debt through enhancement projects.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive function. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms.
Water waste reporting system - Synthesis stage (CRISLANIO MACEDO)
The document presents a system for reporting water waste, describing similar systems, the applicable legislation, the survey that was conducted, and the prototype that was developed. The survey assessed people's willingness to report waste and to use the system, and the prototype includes task and interaction models built with tools such as Moqups and Cacoo.
New prediction method for data spreading in social networks based on machine ... (TELKOMNIKA JOURNAL)
Information diffusion prediction is the study of the path of dissemination of news, information, or topics in structured data such as a graph. Research in this area focuses on two goals: tracing the information diffusion path and finding the members that determine the next path. The major problem of traditional approaches in this area is the use of simple probabilistic methods rather than intelligent methods. Recent years have seen growing interest in the use of machine learning algorithms in this field, and deep learning, a branch of machine learning, has been increasingly applied to information diffusion prediction. This paper presents a machine learning method based on the graph neural network algorithm, in which inactive vertices are selected for activation based on their neighbouring vertices that are already active in a given scientific topic. Essentially, in this method, information diffusion paths are predicted through the activation of inactive vertices by active vertices. The method is tested on three scientific bibliography datasets: the Digital Bibliography and Library Project (DBLP), PubMed, and Cora. The method attempts to answer the question of who will publish the next article in a specific field of science. Comparison of the proposed method with other methods shows 10% and 5% improved precision on the DBLP and PubMed datasets, respectively.
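The core idea the abstract describes can be shown with a minimal sketch: score each inactive vertex by the fraction of its neighbours that are already active, and predict the highest-scoring ones to activate next. This is only an illustration of the neighbour-activation principle; the paper's actual method uses a graph neural network, which is not reproduced here, and the `graph` data is made up.

```python
# Minimal sketch of neighbour-based activation prediction (illustrative
# only; the paper uses a graph neural network, not reproduced here).
# An inactive vertex is scored by the fraction of its neighbours that are
# already active on the topic; the highest-scoring vertices are predicted
# to activate next.

def predict_next_active(adjacency, active, k=1):
    """Return the k inactive vertices most exposed to active neighbours."""
    scores = {}
    for v, neighbours in adjacency.items():
        if v in active or not neighbours:
            continue
        scores[v] = sum(1 for u in neighbours if u in active) / len(neighbours)
    return sorted(scores, key=scores.get, reverse=True)[:k]

graph = {
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b", "d"],
    "d": ["b", "c", "e"],
    "e": ["d"],
}
print(predict_next_active(graph, active={"a", "b"}))  # ['c']: 2 of 3 neighbours active
```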
Content Based Message Filtering For OSNs Using Machine Learning Classifier (IJMER)
The document proposes a content-based message filtering system for online social networks (OSNs) using machine learning classifiers. It aims to filter unwanted messages from OSN user walls. The system uses a machine learning classifier to categorize messages and implements customizable filtering rules. It also includes a blacklist mechanism to block users who frequently post unwanted content. The architecture is divided into three layers: a social network manager layer, a content filtering layer using classifiers, and a graphical user interface layer. Filtering rules allow restricting messages based on sender attributes and relationships. Blacklist rules determine which users to block based on the percentage of their messages that violate rules.
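The blacklist rule described above, blocking users by the share of their messages that violate filtering rules, can be sketched as follows. The threshold value is an assumption for illustration; the summarized system makes such rules customizable.

```python
# Sketch of the blacklist rule described above: a user is blocked once the
# percentage of their messages violating the wall owner's rules exceeds a
# threshold. The 0.5 default is an illustrative assumption.

def should_blacklist(total_messages, violating_messages, threshold=0.5):
    """Block a user once the share of rule-violating messages exceeds the threshold."""
    if total_messages == 0:
        return False
    return violating_messages / total_messages > threshold

print(should_blacklist(10, 6))  # True: 60% of messages violated the rules
print(should_blacklist(10, 3))  # False
```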
Filtering Unwanted Messages from Online Social Networks (OSN) using Rule Base... (IOSR Journals)
Online Social Networks (OSNs) are today one of the most popular interactive media for sharing, communicating, and distributing a significant amount of human life information. In OSNs, information filtering can also be used for a different, more responsive, function. This is owing to the fact that in OSNs users can post or comment on other posts in particular public/private regions, generally called walls. Information filtering can therefore be used to give users the ability to automatically control the messages written on their own walls by filtering out unwanted messages. OSNs currently provide very little support to prevent unwanted messages on user walls. For instance, Facebook permits users to state who is allowed to insert messages on their walls (i.e., friends, defined groups of friends, or friends of friends). However, no content-based preferences are supported, so it is not possible to prevent undesired messages, such as political or offensive ones, regardless of who posts them. This paper proposes and experimentally evaluates an automated system, called Filtered Wall (FW), able to filter unwanted messages from OSN user walls.
Spreading processes on temporal networks (Petter Holme)
This document discusses temporal networks and how temporal structures can impact dynamical processes on networks. It begins by describing different types of temporal networks including person-to-person communication, information dissemination, physical proximity, and cellular biology networks. It then discusses methods for analyzing temporal network structures like inter-event times and how bursty or heavy-tailed distributions can slow spreading compared to memory-less processes. The document also presents examples of how neutralizing temporal structures like inter-event times or beginning/end times can impact spreading simulations. Finally, it discusses how different temporal network datasets exhibit diverse temporal structures.
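A standard way to quantify the bursty inter-event-time structure mentioned above is the burstiness coefficient of Goh and Barabási, B = (σ − μ)/(σ + μ) over the inter-event times. A small sketch, with toy event sequences invented for illustration:

```python
# The burstiness coefficient B = (sigma - mu) / (sigma + mu) over inter-event
# times (Goh & Barabasi): B -> -1 for perfectly regular signals, B ~ 0 for
# memory-less (Poisson-like) processes, and B -> 1 for highly bursty ones.
from statistics import mean, pstdev

def burstiness(event_times):
    gaps = [t2 - t1 for t1, t2 in zip(event_times, event_times[1:])]
    mu, sigma = mean(gaps), pstdev(gaps)
    return (sigma - mu) / (sigma + mu)

regular = [0, 1, 2, 3, 4, 5]          # evenly spaced events
bursty = [0, 1, 2, 3, 100, 101, 102]  # a burst, a long gap, another burst
print(burstiness(regular))      # -1.0 (perfectly regular)
print(burstiness(bursty) > 0)   # True
```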
Temporal Networks of Human Interaction (Petter Holme)
Temporal networks provide a framework for modeling systems of interactions that occur between nodes over time. These networks capture both the topological structure of connections as well as the timing of interactions. Three key aspects of temporal networks discussed in the document are:
1) Temporal networks can be represented using contact sequences that capture when interactions occur between nodes, unlike static networks which only represent connections.
2) The temporal structure of interactions, such as patterns in the timing of contacts, can impact dynamical processes unfolding on the network like information or disease spreading.
3) Randomizing the timing of contacts in empirical temporal network data can alter dynamical processes, highlighting the importance of temporal structure beyond just topology.
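The randomization in point 3 can be sketched as a time-shuffling null model: permuting the timestamps of an empirical contact sequence destroys temporal correlations (such as burstiness) while preserving the aggregate topology. The contact list and fixed seed below are illustrative assumptions.

```python
# Sketch of a time-shuffled null model: shuffle the timestamps of a contact
# sequence while keeping the edges, so dynamics on the shuffled data can be
# compared against the original to isolate the effect of temporal structure.
import random

def shuffle_times(contacts, seed=0):
    """contacts: list of (u, v, t) events; returns a copy with permuted times."""
    rng = random.Random(seed)
    times = [t for _, _, t in contacts]
    rng.shuffle(times)
    return [(u, v, t) for (u, v, _), t in zip(contacts, times)]

contacts = [("a", "b", 1), ("b", "c", 5), ("a", "c", 7), ("b", "c", 20)]
shuffled = shuffle_times(contacts)
# Same edges and same multiset of timestamps, but different pairings:
print(sorted(t for _, _, t in shuffled) == [1, 5, 7, 20])  # True
```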
This document discusses predicting new friendships in social networks using temporal information. It describes research on predicting new links in social networks over time using supervised learning models trained on temporal features from past network interactions. The researchers used anonymized Facebook data over 28 months to train decision tree and neural network classifiers to predict new relationships, finding models using temporal information performed better than those without it.
An SNA-BI Based System for Evaluating Virtual Teams: A Software Development P... (ijcsit)
The dependence of today's collaborative projects on knowledge acquisition and information dissemination
emphasizes the importance of minimizing communication breakdowns. However, as organizations are
increasingly relying on virtual teams to deliver better and faster results, communication issues come to the
forefront of project managers' concerns. This is particularly palpable in software development projects
which are increasingly virtual and knowledge-consuming as they require continuous generation and
upgrade of shared information and knowledge. In a previous work, we proposed an SNA-BI based system
(Covirtsys) that supplements the Analytics modules of the collaborative platform in order to offer a
complementary analysis of communication flows through a network perspective. This paper concerns the
application of this system on a software development project virtual team and shows how it can bring new
insights that could help overcome communication issues among team members.
Complex Networks Analysis @ Universita Roma Tre (Matteo Moci)
This document discusses complex networks and their analysis. It provides a brief history of network analysis starting in the 18th century with Euler's work on the Seven Bridges of Königsberg problem. It then covers key topics like different types of networks, graph modeling approaches, measures to analyze networks, and applications of network analysis to domains like the web, social networks, and disease spreading. The document emphasizes that understanding network structure and interactions is important for studying complex systems and influences within networks.
1. The document discusses a proposed technique called Fuzzy Based Improved Mutual Friend Crawling (FMFC) for crawling online social networks. It aims to reduce bias introduced by the time taken for crawling the whole network.
2. The technique crawls all users within the same community first before moving to the next community, allowing researchers to selectively obtain users belonging to the same community. This is compared to existing mutual friend crawling.
3. The paper also provides a literature review of existing crawling techniques and studies of complex network properties relevant to community detection in networks. Future work in overlapping communities and performance evaluation on very large networks is discussed.
Delay Tolerant Networking routing as a Game Theory problem – An Overview (CSCJournals)
This document discusses modeling delay tolerant networking (DTN) routing as a game theory problem. It provides background on DTN, which aims to enable communication in disrupted networks, and discusses how routing in DTN can be viewed as a strategic interaction between nodes. The document then gives an overview of game theory and different game forms. It proposes analyzing DTN routing as a game with Nash equilibrium, where nodes make forwarding decisions rationally based on beliefs about other nodes' actions.
APPLICATION OF CLUSTERING TO ANALYZE ACADEMIC SOCIAL NETWORKS (IJwest)
This document discusses clustering academic social networks to analyze relationships between researchers. It proposes measuring similarity between researcher profiles based on attributes like research interests, publications, and co-authors. The profiles are represented using FOAF and RDF, with attributes like name, email, affiliation, interests, publications and coauthors. Similarities are calculated using measures like Euclidean distance, cosine similarity and Jaccard coefficient. Clustering profiles based on these similarities can simplify analysis of the large, dense social networks by identifying groups of related researchers.
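The similarity measures named above can be illustrated on toy researcher profiles represented as sets of interest keywords (the summarized paper's FOAF/RDF profiles also cover publications and co-authors; the names and keywords below are invented):

```python
# Jaccard and cosine similarity over set-valued profile attributes, as used
# to compare researcher profiles before clustering. Profiles here are toy
# keyword sets for illustration.
import math

def jaccard(a, b):
    return len(a & b) / len(a | b)

def cosine(a, b):
    # Cosine similarity over binary set-membership vectors.
    return len(a & b) / math.sqrt(len(a) * len(b))

alice = {"ontologies", "semantic web", "clustering"}
bob = {"clustering", "semantic web", "databases", "graphs"}
print(round(jaccard(alice, bob), 3))  # 0.4 (2 shared terms / 5 distinct)
print(round(cosine(alice, bob), 3))   # 0.577
```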
Community detection in complex networks has attracted growing interest and is the subject of much research aimed at understanding network structure and analyzing network properties. In this paper, we give a thorough overview of different community discovery strategies, propose a taxonomy of these methods, and specify the differences between the suggested classes, helping designers compare and choose the most suitable strategy for the various types of networks encountered in the real world.
2009-Social computing-First steps to netviz nirvana (Marc Smith)
This document summarizes two user studies that evaluated NodeXL, an open-source social network analysis tool integrated with Microsoft Excel, and its effectiveness for teaching SNA concepts. 21 graduate students with varying technical backgrounds used NodeXL to analyze online communities. The studies found that NodeXL was usable for a diverse range of users and its integrated metrics and visualizations helped spark insights and facilitated understanding of SNA techniques. Lessons learned can help educators, researchers, and developers improve SNA tools.
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more detail or to submit your article, please visit www.ijera.com
The document discusses the logos of several companies including EA Sports, Coca-Cola, Xbox 360, Manchester United, and Vodafone. It describes key design elements and symbolism for each logo such as the "E" in the EA Sports logo representing sports, the consistent use of the Coca-Cola logo since 1885, the "X" representing Xbox, and the speech mark in Vodafone representing phones. Overall, the document examines how these company logos effectively communicate their brands through simple yet symbolic design.
E-commerce provides several benefits such as access to global markets, 24/7 trading capabilities, and low start-up and running costs. However, it also faces weaknesses like lack of consumer trust, lack of human contact, and product description problems. These issues can be addressed by ensuring security, providing contact details, and carefully writing accurate descriptions. The technologies that support e-commerce include databases, browsers, web authoring software, and domain names. Promotion methods involve search engine optimization, forums, banners, loyalty programs, and direct marketing. Security relies on strong passwords, virus protection, and identity theft precautions.
Lectura 3.5: Word normalization in Twitter with finite-state transducers (Matias Menendez)
This document describes a system for normalizing Spanish tweets using finite-state transducers. The system consists of three main components: 1) An analyzer that tokenizes tweets and identifies standard words, 2) Transducers that generate candidate words for out-of-vocabulary tokens by implementing linguistic models of spelling variation, 3) A statistical language model that selects the most probable sequence of words. The transducers are based on weighted finite-state automata and implement models of texting features such as logograms, shortenings, and non-standard spellings. An evaluation shows the system is effective at normalizing Spanish tweets.
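The pipeline described above, candidate generation for out-of-vocabulary tokens followed by selection with a language model, can be sketched in miniature. The expansion rules and word frequencies below are toy assumptions; the real system uses weighted finite-state transducers and a statistical language model.

```python
# Toy sketch of the normalisation pipeline: expand an out-of-vocabulary
# token into candidate standard words, then pick the candidate a (here,
# trivially simple unigram) language model scores highest.

LEXICON_FREQ = {"que": 100, "porque": 80, "queso": 5}  # toy unigram counts

def candidates(token):
    """Expand common texting shortenings/logograms into lexicon words."""
    rules = {"q": ["que"], "xq": ["porque"], "pq": ["porque"]}
    return rules.get(token.lower(), [token])

def normalise(token):
    # In-vocabulary or unknown tokens pass through unchanged.
    return max(candidates(token), key=lambda w: LEXICON_FREQ.get(w, 0))

print(normalise("q"))   # que
print(normalise("xq"))  # porque
```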
Best practice business analyse (KAM201302) (Crowdale.com)
Deutsche Essent GmbH is an energy company based in Düsseldorf, Germany with approximately 2,500 employees. It is a subsidiary of Essent N.V. and focuses on natural gas transport and storage, renewable energy trade and distribution, and partnering with municipal utilities. Deutsche Essent has majority stakes in utility companies and is expanding its wind and gas power generation capacity. It aims to grow its market position in electricity, gas, and renewable energy in Germany. Key opportunities for Deutsche Essent include further investing in wind and gas power, partnering with municipalities, and contributing to Germany's transition to renewable and low-carbon energy sources.
This document provides an overview of service oriented architecture (SOA) and directions for SOA. It introduces a conceptual framework for understanding software integration as different layers, from the communication layer to the presentation layer. It then evaluates existing SOA realization approaches like WS-* specifications, ebXML, semantic web services, and RESTful services based on this framework. The document concludes by outlining future directions in SOA to further simplify the problem of integration.
1) The document discusses challenges with achieving interoperability between ultra large scale systems due to heterogeneity in platforms, data, and semantics.
2) It proposes a three-layered model for interoperability using web service technologies and semantic web approaches to address these challenges.
3) Key aspects of interoperability discussed include different levels (e.g. syntactic, semantic), use of ontologies to provide common understandings and resolve conflicts, and semantic web service approaches like OWL-S that semantically annotate service descriptions.
This document discusses and compares several agent-assisted methodologies for developing multi-agent systems:
- It reviews Gaia, HLIM, PASSI, and Tropos methodologies, outlining their key models and phases. Gaia focuses on analysis and design, HLIM models internal and external agent behavior, and PASSI and Tropos incorporate UML modeling.
- It then proposes a new MAB methodology intended to address shortcomings of existing approaches. MAB includes requirements, analysis, design, and implementation phases and models such as use case maps and agent roles.
- Finally, it concludes that agent technologies represent a promising approach for developing complex software systems, but that matching methodologies to problem domains and developing princip
This document proposes a framework called the Context-Influence Framework for managing contexts in interactive system development. The framework defines contexts as collections of influences that affect system behaviors and users. Contexts are analyzed at the level of "Use Subsystems" which are specific combinations of users, tasks, and interfaces. Contexts can be divided into domains to capture information from different perspectives. Contexts are also organized hierarchically to allow representation at different levels of detail. The framework is intended to help communication between teams and allow contexts to be added without increasing complexity.
Concurrency Issues in Object-Oriented ModelingIRJET Journal
This document discusses concurrency issues in object-oriented modeling. It begins with an abstract that introduces the topic of finding a synthesis between concurrency and object models by analyzing representative concurrent object-oriented languages. The document then provides background on concurrency and object-oriented programming individually before discussing how they intersect and the issues that arise when combining them. Key concepts of concurrency like activities, parallelism, and communication are defined. Common language constructs for concurrency like co-routines and threads are also introduced.
Automated identification of sensitive information (Jeff Long)
October 21, 1999: "Using Ultra-Structure for Automated Identification of Sensitive Information in Documents". Presented at the 20th annual conference of the American Society for Engineering Management. Paper published in conference proceedings.
This document discusses the need for adaptive and dynamic software development that can adjust to changing runtime environments and fault conditions. It argues that traditional static approaches to fault tolerance, like using fixed levels of redundancy, are inadequate as the threat environment may vary. The document then introduces an adaptive data integrity tool that allows the level of redundancy to change dynamically based on faults detected at runtime. This provides an example of the new approach called for, termed "New Software Development," that is more adaptive, maintainable and reconfigurable like New Product Development concepts.
Here are the key points about using content-based filtering techniques:
- Content-based filtering relies on analyzing the content or description of items to recommend items similar to what the user has liked in the past. It looks for patterns and regularities in item attributes/descriptions to distinguish highly rated items.
- The item content/descriptions are analyzed automatically by extracting information from sources like web pages, or entered manually from product databases.
- It focuses on objective attributes about items that can be extracted algorithmically, like text analysis of documents.
- However, personal preferences and what makes an item appealing are often subjective qualities not easily extracted algorithmically, like writing style or taste.
- So while content-based filtering can reliably capture objective item attributes, it may miss the subjective qualities that actually drive user preferences.
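The bullets above can be condensed into a minimal sketch: build a profile from the attribute terms of items the user liked, then recommend the unseen item whose terms best overlap that profile. Raw term overlap is an illustrative simplification; real systems typically use weighted features such as TF-IDF.

```python
# Minimal content-based filtering sketch: score candidate items by the
# overlap between their attribute terms and the terms of previously liked
# items. Item names and terms are invented for illustration.

def recommend(liked_items, candidates, item_terms):
    profile = set().union(*(item_terms[i] for i in liked_items))
    def score(item):
        terms = item_terms[item]
        return len(terms & profile) / len(terms)
    return max(candidates, key=score)

item_terms = {
    "doc1": {"python", "tutorial", "beginner"},
    "doc2": {"python", "web", "framework"},
    "doc3": {"cooking", "recipes"},
}
print(recommend(["doc1"], ["doc2", "doc3"], item_terms))  # doc2
```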
This document discusses a model of the tradeoff between adaptability and structural stability in network organizations. The model considers a network organization with a fixed backbone structure and flexible links that can be rewired over time. The organization faces an environment that changes unpredictably, and nodes must decide whether to rewire their flexible links to new targets. Rewiring improves direct access but removes useful indirect links. The analysis finds that as environmental volatility increases, the optimal adaptability of the organization shifts from fully flexible to fully rigid, with no intermediate options being optimal. This has implications for debates around stability versus change in organizations.
A survey of techniques for achieving metadata interoperability (unyil96)
This document provides a survey of techniques for achieving metadata interoperability between heterogeneous metadata repositories. It begins by introducing the concept of metadata and identifying three key components: metadata instances, schemas, and schema definition languages. It then analyzes factors that can impede interoperability between distinct metadata descriptions, such as structural and semantic heterogeneities. Various techniques for establishing interoperability are categorized and described, with a focus on metadata mapping. Finally, the techniques are compared in terms of their ability to resolve different types of heterogeneities.
Java Abs Peer To Peer Design & Implementation Of A Tuple Space (ncct)
SEMANTIC INTEGRATION FOR AUTOMATIC ONTOLOGY MAPPING (cscpconf)
In the last decade, ontologies have played a key technological role in information sharing and agent interoperability across different application domains. In the semantic web domain, ontologies are used to face the great challenge of representing the semantics of data, in order to bring the web to its full power and hence achieve its objective. However, using ontologies as common and shared vocabularies requires a certain degree of interoperability between them. To meet this requirement, ontology mapping is an unavoidable solution. Indeed, ontology mapping builds a meta layer that allows different applications and information systems to access and share their information, after resolving the different forms of syntactic, semantic, and lexical mismatches. In the contribution presented in this paper, we integrate a semantic aspect based on an external lexical resource, WordNet, to design a new algorithm for fully automatic ontology mapping. This fully automatic character is the main difference between our contribution and most existing semi-automatic ontology mapping algorithms, such as Chimaera, Prompt, Onion, and Glue. To further enhance the performance of our algorithm, the mapping discovery stage is based on the combination of two sub-modules: the former analyses the concepts' names and the latter analyses their properties. Each of these two sub-modules is itself based on a combination of lexical and semantic similarity measures.
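The two-sub-module combination can be sketched as a weighted sum of a lexical score on concept names and an overlap score on their properties. The weights, the `difflib` string measure, and the concept data below are illustrative assumptions; the abstract's algorithm additionally uses WordNet-based semantic measures, not reproduced here.

```python
# Sketch of combining a name-based and a property-based similarity into one
# mapping score between two ontology concepts (weights are assumptions).
from difflib import SequenceMatcher

def name_similarity(a, b):
    # Lexical string similarity on concept names.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def property_similarity(props_a, props_b):
    # Jaccard overlap of the concepts' property sets.
    return len(props_a & props_b) / len(props_a | props_b)

def mapping_score(concept_a, concept_b, w_name=0.5, w_props=0.5):
    return (w_name * name_similarity(concept_a["name"], concept_b["name"])
            + w_props * property_similarity(concept_a["props"], concept_b["props"]))

car = {"name": "Car", "props": {"wheels", "engine", "colour"}}
auto = {"name": "Automobile", "props": {"wheels", "engine", "owner"}}
# Names differ lexically, but the shared properties keep the score up:
print(mapping_score(car, auto) > 0.3)  # True
```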
Data and Computation Interoperability in Internet Services (Sergey Boldyrev)
This document discusses the need for a framework to enable interoperability between heterogeneous cloud infrastructures and systems. It proposes representing data and computation semantically so they can be transmitted and executed across different environments. It also emphasizes the importance of analyzing system behavior and performance to achieve accountability and manage privacy, security, and latency requirements in distributed cloud systems.
Truly dependable software systems should be built with structuring techniques able to decompose the software complexity without
hiding important hypotheses and assumptions such as those regarding
their target execution environment and the expected fault- and system
models. A judicious assessment of what can be made transparent and
what should be translucent is necessary. This paper discusses a practical
example of a structuring technique built with these principles in mind:
Reflective and refractive variables. We show that our technique offers
an acceptable degree of separation of the design concerns, with limited
code intrusion; at the same time, by construction, it separates but does
not hide the complexity required for managing fault-tolerance. In particular, our technique offers access to collected system-wide information
and the knowledge extracted from that information. This can be used
to devise architectures that minimize the hazard of a mismatch between
dependable software and the target execution environments.
A Review on Evolution and Versioning of Ontology Based Information Systems (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This document provides a review of existing approaches to ontology evolution and versioning. It begins by defining ontologies and discussing why evolution and versioning are needed as ontologies are used in information systems. It then outlines some existing solutions for ontology evolution and version management, noting different languages used to conceptualize ontologies. Challenges of ontology versioning and evolution are discussed. Increased usage of ontologies in different domains is reviewed. Finally, some available tools for ontology change management are mentioned.
An agent based approach for building complex software systemsIcaro Santos
1) The document discusses an agent-based approach for developing complex software systems. It argues that agent-oriented approaches are well-suited for building distributed systems due to their ability to model complexity, interactions, and organizational relationships.
2) Complex systems inherently exhibit hierarchy, nearly decomposable subsystems, and changing interactions. An agent-based approach models a system as autonomous agents that can achieve objectives through flexible and decentralized interactions.
3) Key advantages of the agent approach include its use of agents, interactions, and organizations as natural abstractions to represent subsystems, components, and relationships in complex systems. It also allows runtime determination of interactions to reduce coupling between components.
A survey of peer-to-peer content distribution technologiessharefish
This document provides a survey of peer-to-peer content distribution technologies. It begins with defining key concepts of peer-to-peer computing and classifying peer-to-peer systems. The focus is on content distribution systems, which allow personal computers to function as a distributed storage medium for digital content. The document proposes a framework for analyzing nonfunctional characteristics and architectural designs of current peer-to-peer content distribution systems.
journal of object technology - context oriented programmingBoni
This document introduces Context-oriented Programming (COP) as a new programming technique to enable context-dependent computation. COP treats context explicitly and provides mechanisms to dynamically adapt behavior in reaction to context changes at runtime. The document discusses the motivation for COP, defines context and how it can influence behavior, and provides examples of COP implementations in different programming languages. COP aims to bring the same degree of dynamicity to behavioral variations as object-oriented programming brought to polymorphism.
Association Rule Mining Based Extraction of Semantic Relations Using Markov ...dannyijwest
Ontology may be a conceptualization of a website into a human understandable, however machine-
readable format consisting of entities, attributes, relationships and axioms. Ontologies formalize the
intentional aspects of a site, whereas the denotative part is provided by a mental object that contains
assertions about instances of concepts and relations. Semantic relation it might be potential to extract the
whole family-tree of a outstanding personality employing a resource like Wikipedia. In a way, relations
describe the linguistics relationships among the entities involve that is beneficial for a higher
understanding of human language. The relation can be identified from the result of concept hierarchy
extraction. The existing ontology learning process only produces the result of concept hierarchy extraction.
It does not produce the semantic relation between the concepts. Here, we have to do the process of
constructing the predicates and also first order logic formula. Here, also find the inference and learning
weights using Markov Logic Network. To improve the relation of every input and also improve the relation
between the contents we have to propose the concept of ARSRE.
Similar to Lectura 2.2 the roleofontologiesinemergnetmiddleware (20)
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for
seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
FREE A4 Cyber Security Awareness Posters-Social Engineering part 3Data Hops
Free A4 downloadable and printable Cyber Security, Social Engineering Safety and security Training Posters . Promote security awareness in the home or workplace. Lock them Out From training providers datahops.com
The Role of Ontologies in Emergent Middleware
G.S. Blair et al.
was of course unsustainable and very quickly distributed systems expanded in terms
of scale, level of heterogeneity and complexity of administrative control, leading to
the Internet-scale distributed systems that we are familiar with today. A number of
interoperability solutions emerged both in terms of proposed standards for
interoperability and solutions to bridging between standards. Distributed systems
have, however, continued to evolve, and we particularly note two important trends:
1. The level of heterogeneity has increased dramatically in recent years with
developments such as ubiquitous computing potentially coupled with enhanced
modes of interaction (for example using ad hoc networking), mobile computing
where an increasing range of mobile devices provide a window on to greater
distributed system services, and cloud computing where complex distributed
system services are offered in the greater Internet. We refer to this as extreme
heterogeneity, whereby the levels of heterogeneity significantly exceed the
previous generation of distributed systems in terms of the size and capabilities of
end system devices, the operating systems used by different devices, the style of
communication protocols employed to provide network-level interoperability, the
languages and indeed programming paradigms utilized, and so on. Some observers
refer to such systems as Systems of Systems [1], and this certainly captures rather
elegantly the complexity of the resultant system structures.
2. The level of dynamism in such systems has also increased significantly, partly as a
result of the trends noted above, for example the increasing mobility involved in
distributed systems has led to the need to support spontaneous interoperation
whereby devices interoperate with services that are discovered in a given location,
coupled with solutions that need to be intrinsically context-aware (including of
course location-aware access to services). The level of dynamism is also affected
by the need for more adaptive and/or autonomic approaches, again stemming from
the complexity of modern distributed systems.
The end result is that it is very difficult to achieve interoperability in such complex
distributed systems. Indeed, we can say that distributed systems are in crisis with no
principled solutions to interoperability for such complex and dynamic distributed
systems structures. Note that we can go further in this analysis and not just consider the
ability to interoperate but also the quality of service of interoperability solutions in terms
of a range of non-functional properties, for example related to security or dependability.
This is a very valid dimension to consider but is beyond the scope of this paper (we
return to this in the final section, and in particular our statements on future work).
It is interesting to note the definition of interoperability from Tanenbaum [2]:
“The extent by which two implementations of systems or components from different
manufacturers can co-exist and work together by merely relying on each other’s
services as specified by a common standard”
This definition emphasizes the role of a global, or at least common, standard and,
while this offers one solution to interoperability, it is not a realistic option for the
complex distributed systems of today. For example, competitive pressures have
inevitably led to competing standards emerging in the marketplace. Where standards
have reached a level of acceptance, for example with web services, it is recognized by
the community that they may be problematic for certain operating environments, for
example, ubiquitous systems. In addition, any given standard can very quickly
become a legacy system as time elapses and requirements evolve.
We argue that with the above pressures we need a fundamental re-think of
distributed systems. In particular, we advocate a solution whereby the necessary
middleware to achieve interoperability is not a static entity but rather is generated
dynamically as required by the current context. We refer to this as emergent
middleware. Furthermore, we investigate the key role of ontologies in supporting this
process and, in particular, in providing the ability to interpret meaning and associated
reasoning capabilities in generating emergent middleware. Ontologies have already
been studied in the context of distributed systems, most prominently in the semantic
web community, offering a means of interpreting the meaning of data or associated
services as they are dynamically encountered in the World-Wide Web. This however
limits the scope of ontologies to support the top-level access to data and services. We
are interested in a more comprehensive role for ontologies in supporting meaning and
reasoning in the distributed systems substrate which supports and enables access to
such services, i.e., in the middleware itself, offering a cross-cutting approach where
ontologies provide support to fundamental distributed systems engineering elements.
This paper focuses exclusively on the role of ontologies in supporting the concept
of emergent middleware (further discussion of the broader area of emergent
middleware can be found in [3]). More specifically, the aims of the paper are:
1. To investigate previous work on interoperability in the middleware community and
in the semantic web community with a view to seeking a unification between these
(to date) largely distinct areas of research;
2. To understand both the role and scope of ontologies in supporting key middleware
functions, particularly related to emergent middleware solutions;
3. To investigate more generally the role of ontologies within a general architecture
for emergent middleware.
The paper is structured as follows. Section 2 examines the interoperability-related
challenges associated with complex distributed systems and the associated responses
both from the middleware and the semantic web community. Section 3 moves into the
solution space, presenting the key components of an emergent middleware approach,
before charting the role of ontologies within this approach. Section 4 presents two
experiments, which together provide evidence of the key role ontologies can play in
different levels of a middleware architecture. Finally, Section 5 contains an overall
analysis and reflections over the experience of working with ontologies in emergent
middleware, including the identification of key areas of future work related to this area.
2 The Interoperability Problem Space: Challenges and Responses
The problem space for interoperability must consider differences in both i) applications and ii) middleware protocols. In each case, there will typically be differences in data and behaviour:
• Application data differs in terms of format and meaning, e.g., the data value of a price parameter can be defined in an object or XML document. It can also mean different things, e.g., the price is in Pounds versus Euros.
• Depending upon application interfaces, the behaviour may be significantly different, e.g., multiple operations of one interface performing the same functionality as a single operation of another.
• Middleware protocols providing the same communication abstraction may differ in the data format and type model, e.g., different RPC protocols capture data and types using different methods and formats.
• There now exists a broad range of communication abstractions (e.g., publish-subscribe, tuple spaces, message-orientation, group communication) offered by middleware protocols; these exhibit significant behavioural differences.
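To make the data heterogeneity concrete, consider a hypothetical price parameter exchanged by two applications: one serializes it as an XML fragment in Pounds, the other as an object in Euros. A purely syntactic comparison fails; only by interpreting the declared currency metadata can the values be reconciled. The message shapes and the exchange rate below are illustrative assumptions, not taken from the paper:

```python
import xml.etree.ElementTree as ET

# The same logical value as serialized by two heterogeneous applications.
xml_msg = '<price currency="GBP">10.00</price>'
obj_msg = {"price": {"amount": 11.60, "currency": "EUR"}}

# Illustrative exchange rate -- an assumption for this example only.
RATE_TO_EUR = {"GBP": 1.16, "EUR": 1.0}

def to_eur(amount, currency):
    """Normalize an amount to a common unit using declared currency metadata."""
    return amount * RATE_TO_EUR[currency]

xml_amount = float(ET.fromstring(xml_msg).text)
xml_currency = ET.fromstring(xml_msg).get("currency")

# Syntactically the two values disagree; semantically they coincide.
assert xml_amount != obj_msg["price"]["amount"]
assert abs(to_eur(xml_amount, xml_currency) - to_eur(11.60, "EUR")) < 1e-9
```

The point is that neither format carries enough information on its own; interoperability requires an agreed interpretation of the metadata.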
We now examine the responses to these challenges from two distinct communities
(the middleware and the semantic web communities) and investigate the extent to
which comprehensive application and middleware interoperability has been achieved.
2.1 Response from the Middleware Community
The first responses by the middleware community to address interoperability problems
proposed standards-based approaches, i.e., common protocols and interface description
languages. CORBA, DCOM, and web services are effective examples of this approach.
However, as previously described, such solutions are not suited to today’s highly
complex distributed systems that exhibit extreme heterogeneity and dynamic behaviour.
The second set of responses then looked at the challenges of heterogeneous middleware
protocols interoperating with one another. One example, the software bridge, acts as a one-to-one mapping between domains, taking messages from a client in one format and marshalling them into the format of the server middleware. As examples, the OMG
created the DCOM/CORBA Inter-working Specification [6]. OrbixCOMet is an
implementation of the DCOM-CORBA bridge, while SOAP2CORBA1 bridges SOAP
and CORBA middleware. Further, Model Driven Architecture advocates the generation
of such bridges to underpin deployed interoperable solutions. However, developing
bridges is a resource intensive, time-consuming task, which for universal interoperability
would be required for every protocol pair.
Alternatively, intermediary-based solutions take the ideas of software bridges
further; rather than a one-to-one mapping, the protocol or data is translated to an
intermediary representation at the source and then translated to the legacy format at
the destination. Enterprise Service Buses (ESB), INDISS [8], uMiddle [9] and SeDIM
[10] are examples that follow this philosophy, and these allow differences of both
behaviour and data to be overcome. However, this approach suffers from the greatest
common divisor problem: between two protocols, the intermediary covers only the subset where their behaviour matches, and they cannot interoperate beyond that subset. As the
number of protocols grows, this common divisor then becomes smaller, such that only
limited interoperability is possible.
1 http://soap2corba.sourceforge.net/
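The greatest common divisor effect can be sketched with hypothetical protocol feature sets: the behaviour an intermediary can translate is at best the intersection of what every bridged protocol supports, and that intersection can only shrink as protocols are added. The feature names below are invented for illustration:

```python
# Hypothetical behavioural feature sets for three middleware protocols.
rpc         = {"request_response", "typed_args", "exceptions"}
pub_sub     = {"request_response", "async_notify", "topics"}
tuple_space = {"request_response", "async_notify", "content_match"}

def common_divisor(*protocols):
    """Behaviour an intermediary representation can cover for all protocols:
    the intersection of their feature sets."""
    return set.intersection(*map(set, protocols))

assert common_divisor(rpc, pub_sub) == {"request_response"}
# Adding a third protocol can only shrink (never grow) the common subset.
assert common_divisor(rpc, pub_sub, tuple_space) <= common_divisor(rpc, pub_sub)
```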
A radically different response involved substitution solutions (e.g., ReMMoC [11]
and WSIF [12]); rather than bridging, these embrace the philosophy of speaking the
peer’s language. That is, they substitute the communication middleware to be the
same as the peer or server they wish to use. A local abstraction maps the behaviour
onto the substituted middleware. This approach allows interoperation among different
abstractions and protocols. However, as with software bridges this is particularly
resource consuming; every potential (and future) middleware must be developed such
that it can be substituted. Further, it is generally limited to client-side interoperability
with heterogeneous servers.
The limitation of all the above responses is that they ignore the heterogeneity of the
application, assuming that there are no differences, due to the adoption of a common
interface. In complex systems, this is clearly not the case.
2.2 Response from the Semantic Web Community
The semantic web community’s responses to the interoperability problem are based
upon the principles of reasoning about and understanding how different systems can
work together. Their key contribution is ontologies. An ontology is defined as a logic
theory, and more precisely as a tuple <A, L, P>, where A is a set of axioms, L is a
language in which to express these axioms, and P is a proof theory, that supports the
automatic derivation of consequences from the axioms. In turn, the proof theory P
allows us to derive consequences, which extract relations that have never been stated
explicitly, but that are implicit in the description of the systems. Ultimately, the proof
theory allows recognition of the deeper “semantic” similarity between structures that
are syntactically very different.
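As a minimal illustration of the ⟨A, L, P⟩ view, the sketch below uses subclass axioms as A, Python pairs as a stand-in for the language L, and a transitive-closure rule as a toy proof theory P that derives relations never stated explicitly. This is a deliberately simplified sketch, not a real reasoner:

```python
# Axioms A: explicitly stated subclass-of relations (toy example).
axioms = {("ColourPrinter", "Printer"), ("Printer", "OutputDevice")}

def derive(axioms):
    """A toy proof theory P: forward-chain the transitivity of subclass-of
    to a fixed point, deriving relations that are only implicit in A."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(known):
            for (c, d) in list(known):
                if b == c and (a, d) not in known:
                    known.add((a, d))
                    changed = True
    return known

derived = derive(axioms)
# This consequence was never stated, but is implicit in the axioms.
assert ("ColourPrinter", "OutputDevice") in derived
```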
The work in semantic web services demonstrates how ontologies can be used to
address interoperability problems at the application level. Specifically, ontologies have
been used during discovery to express the capabilities of services, as well as the requests
for capabilities; in this case, the proof theory recognizes whether a given capability fits a
given request. A number of semantic middleware technologies provide this ability, e.g.,
the Task Computing project [13], and the Integrated Global Pervasive Computing
Framework [14]. One important solution, EASY [15], implements efficient, semantic
discovery and matching to foster interoperability in pervasive networking environments.
Further, ontologies have been used during composition to address the problem of
application data interoperability, as well as the problem of recognizing whether the
conditions for executing the service indeed hold. The limitation of these responses lies
in the assumption of a specific middleware, namely web services. There is a need to
represent heterogeneous middleware and networking environments, which is almost
completely absent in the semantic web services work.
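In this discovery setting, an advertised capability fits a request when its concept is the requested one or a specialisation of it. A hedged sketch, with a hand-written subsumption table standing in for a real ontology and inference engine:

```python
# Hand-written subsumption facts standing in for ontology + inference engine.
subclass_of = {
    "ColourPrinter": {"ColourPrinter", "Printer", "OutputDevice"},
    "Printer": {"Printer", "OutputDevice"},
}

def capability_fits(advertised: str, requested: str) -> bool:
    """True when the advertised capability is the requested concept or a
    specialisation of it (the proof theory's 'fit' check, hard-coded here)."""
    return requested in subclass_of.get(advertised, {advertised})

# A client asking for any Printer is satisfied by a ColourPrinter service...
assert capability_fits("ColourPrinter", "Printer")
# ...but a plain Printer does not satisfy a request for a ColourPrinter.
assert not capability_fits("Printer", "ColourPrinter")
```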
Ontologies introduce a new meta-level, which can produce its own interoperability
problems. Heterogeneous ontologies push the interoperation problem one level up.
The computational complexity of the proof theories, which is often beyond
exponential, makes ontologies resource expensive. Finally, there is a problem of
generating the ontologies. The problems listed here are fundamental problems with
which the semantic web at large is grappling, and fortunately a number of partial
solutions exist that mitigate these problems. For example, ontology matching can be
used to address the problem of different ontologies, and smart and efficient inference
engines are now available. As a result, ontologies may be used effectively to
automatically address many interoperability problems.
2.3 Summary
It is clear that semantic technologies and interoperability middleware have mostly
been developed in isolation by distinct communities. The middleware community
made assumptions of common application interfaces and focused on middleware
behaviour and data heterogeneity. The semantic web community made the opposite
assumption, that there was a common middleware, and the solutions focused on
differences in application behaviour and data.
In our view, semantic technologies and interoperability middleware must be
comprehensively combined to enable emergent middleware, that is, on-the-fly
generation of the middleware that allows networked systems to coordinate to achieve
a given goal. Semantic technologies bring the necessary means to rigorously and
systematically formalize, analyze and reason about the behaviour of digital systems.
Semantic web service technologies have further highlighted the key role of process
mediation in an Internet-scale open network environment where business processes
get composed out of services developed by a multitude of independent stakeholders.
Then, in a complementary way, interoperability middleware solutions hint towards an
architecture of emergent middleware that mediates interaction among networked
systems that semantically match while possibly behaviourally mismatching, from the
application down to the network layer.
3 The Solution Space
The realisation of emergent middleware faces significant challenges, which we are in
particular investigating as part of the CONNECT project [3]: i) discovering what is
there in terms of application and middleware behaviour and data, ii) enhancing this
information using learning techniques, and iii) reasoning upon the required mediation
and synthesizing the resulting software to enable interoperability between
heterogeneous networked systems. In this section, we first introduce the architecture
of the generated emergent middleware, and then we present the ontology-based
models of the networked systems used by Enablers, i.e., active software entities that
collaborate to realise the emergent middleware. Finally, we describe the architecture
of Enablers that need to be deployed in the network toward allowing networked
systems to interact seamlessly.
3.1 Architecture of Emergent Middleware
Building upon previous interoperability middleware solutions [8, 10, 16], the
architecture of the emergent middleware (as shown in Fig. 1) decomposes into: (i)
message interoperability that is dedicated to the interpretation of messages
from/toward networked systems (listeners parse messages and actuators compose
messages) and (ii) behavioural interoperability that mediates the interaction protocols
run by the communicating networked systems by translating messages from one
protocol to the other, from the application down to the middleware and further to the
network layer.
However, interoperability can only be achieved based on the unambiguous
specification of networked systems’ behaviour, while not assuming any a priori
design-time knowledge about the given systems. This is where the key role of
semantic technologies, i.e., ontologies, comes into play. As discussed in the next
section, ontological concepts are employed to characterise the semantics of exchanged
messages, from the application down to the network layer, and thus allow the analysis
of and reasoning about the external actions performed by systems. This is a major step
in the realization of interoperability, since it allows the mediation of interaction
protocols at all layers, provided their respective functionalities semantically match.
Fig. 1. The emergent middleware architecture
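The split between message and behavioural interoperability can be sketched with hypothetical listener, mediator, and actuator roles: the listener parses a protocol-specific message into an abstract form, the mediator maps it into the peer's protocol vocabulary, and the actuator composes a concrete message. The message formats and the mapping table are illustrative assumptions, not the paper's design:

```python
import json
import xml.etree.ElementTree as ET

def xml_listener(raw: str) -> dict:
    """Listener: parse an XML-style request into an abstract message."""
    root = ET.fromstring(raw)
    return {"action": root.tag, "args": {c.tag: c.text for c in root}}

def mediate(msg: dict) -> dict:
    """Behavioural mediation: rename the action into the peer's vocabulary
    (a one-entry mapping table stands in for protocol translation logic)."""
    mapping = {"GetPrice": "price.lookup"}
    return {"method": mapping[msg["action"]], "params": msg["args"]}

def json_rpc_actuator(msg: dict) -> str:
    """Actuator: compose a concrete JSON-RPC-style message for the peer."""
    return json.dumps({"jsonrpc": "2.0",
                       "method": msg["method"],
                       "params": msg["params"]})

wire_in = "<GetPrice><item>doc-42</item></GetPrice>"
wire_out = json_rpc_actuator(mediate(xml_listener(wire_in)))
assert json.loads(wire_out)["method"] == "price.lookup"
```

The design point is that listeners and actuators isolate message-format concerns, so the mediator can reason over abstract actions only.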
3.2 Ontology-Based Networked System Model
The networked system model builds upon semantic technologies and especially
semantic web services ontologies [17]. Fig. 2 depicts key elements of the system
model with ontologies cross-cutting these elements. The model decomposes into:
• The Affordances (aka capabilities in OWL-S, http://www.w3.org/Submission/OWL-S/) provide a macroscopic view of networked system features. An affordance is specified using ontology concepts defining the semantics of its functionality and of the associated inputs and outputs. Essentially, the affordance describes the high-level roles a networked system plays, e.g., 'prints a document'. This allows semantically equivalent action-relationships/interactions with another networked system to be matched; in short, they are doing the same thing. Then, provided the matching of affordances that are respectively required and provided by two networked systems, it should be possible to synthesize an emergent middleware that allows the networked systems to coordinate toward the realization of the affordance despite possible mismatches in the messages they exchange and even their behaviour. In practice, networked systems do not advertise affordances but rather interfaces, as discussed below. Nevertheless, recent advances on learning techniques, combining solutions to the cohesion of system interfaces [18] and semantic knowledge inference [19], provide base ground that can be exploited to support the automated inference of affordances from interfaces, although this remains an area for future work.
• The Interface provides a refined or microscopic view of the system by specifying
finer actions or methods that can be performed by/on the networked system, and
used to implement its affordances. Each networked system is associated with a
unique interface. However, there exist many interface definition languages and
actually as many languages as middleware solutions. Nevertheless, existing
languages may easily be translated into a common IDL so as to allow the
matchmaking of interfaces [20]. Still, a major requirement and challenge are for
interfaces to be annotated with ontology concepts so that the semantics of
embedded actions can be reasoned upon. While this is already promoted by web
services standards (e.g., SA-WSDL3), it still remains an exception for
middleware solutions at large. Here too, research on advanced learning
techniques can lead to automated solutions to the semantic annotation of syntactic
interfaces [22].
• The Behaviour describes how the actions of the interface are co-ordinated to
achieve a system's affordance, and in particular how these are related to the
underlying middleware functions. The language used to specify the behaviour of
networked systems revolves around process algebra enriched with ontology
knowledge, so as to allow reasoning about their behavioural matching based on
the semantics of their actions, and subsequently support the generation of the
emergent middleware. Such behaviour description has been acknowledged as a fundamental element of system composition in open networks in the context of the Web (cf. the Web Services Conversation Language, www.w3.org/TR/wscl10/). However, in the vast majority of cases, networked systems do not advertise their behaviour. On the positive side, different techniques have emerged to learn the interaction behaviour of systems, either reactively or proactively [23, 24, 33]. Still, major research challenges remain in the area: existing techniques need to become more efficient and must be extended to handle, e.g., data and non-functional properties.
Fig. 2. The networked system model, relating a Networked System to its Interface, Behaviour, and Affordances, each Affordance being characterised by a Functionality with associated Inputs and Outputs
G.S. Blair et al.

3.3 Enablers for Emergent Middleware
The realization of emergent middleware is supported by cooperating core Enablers as
depicted in Fig. 3.
Fig. 3. The architecture of the emergent middleware Enablers
The Discovery Enabler receives both the advertisement messages and lookup
request messages that are sent within the network environment by the networked
systems. The enabler obtains this input by listening on known multicast addresses
(used by legacy discovery protocols), as common in interoperable service discovery
[25]. These messages are then processed, and information from the legacy messages is extracted. At this stage, the networked system model includes at least the interface description, which can be used to infer the ontology concepts associated with the affordance in case they are not specified. The semantic matching of affordances is
then performed to determine whether two networked systems are candidates to have
an emergent middleware generated between them. The semantic matching of
affordances is based on the subsumption relationship possibly holding between
concepts of the compared affordances [26]; briefly, the functionality of a required affordance matches a provided one if the former is subsumed by the latter. Other semantic relations such as sequence [29] or part-whole (see the W3C note at http://www.w3.org/2001/sw/BestPractices/OEP/SimplePartWhole/index.html) can also be beneficial to concept matching. On a match, the process of emergent middleware generation is
started; the current networked system model is sent to the Learning Enabler, which
adds more semantic knowledge to it. On completion of the model, the Discovery
Enabler sends this to the Synthesis Enabler.
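The subsumption rule stated above can be sketched with a toy concept hierarchy, where each concept points to its direct superconcepts; the concept names below are illustrative assumptions, not part of any standard ontology.

```python
# Sketch of subsumption-based matching of affordance functionalities.
# The mini concept hierarchy is an illustrative assumption.

# Each concept maps to its direct superconcepts.
SUPERCONCEPTS = {
    "BookService": [],
    "BookTrip": ["BookService"],
    "BookFlight": ["BookTrip"],
}

def is_subsumed_by(concept: str, other: str) -> bool:
    """True if `concept` equals or is a descendant of `other`."""
    if concept == other:
        return True
    return any(is_subsumed_by(sup, other)
               for sup in SUPERCONCEPTS.get(concept, []))

def affordances_match(required: str, provided: str) -> bool:
    """A required functionality matches a provided one if the former
    is subsumed by the latter (the rule stated in the text)."""
    return is_subsumed_by(required, provided)
```

A description-logic reasoner would of course derive subsumption from concept definitions rather than from an explicit parent table.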
More specifically, the Learning Enabler attaches semantic annotations to the interface, and uses active learning algorithms to dynamically determine the interaction behaviour associated with an affordance. Interaction behaviour learning is built upon the
LearnLib tool [27], and employs methods based on monitoring and model-based
testing of the networked systems. It takes the semantic annotations of the interface as
input, and returns the system’s behaviour description.
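The query-driven style of such active learning can be illustrated by a toy sketch that probes a black-box system stub with short action sequences and records which ones it accepts. This is only an illustration of membership-style queries, not the L*-family algorithms LearnLib implements, and the system stub and action names are assumptions.

```python
# Toy illustration of active behaviour learning: probe a black box with
# input traces and record accepted ones. Not LearnLib's algorithm.

from itertools import product

ALPHABET = ["SelectFlight", "MakeReservation"]

def system_accepts(trace: tuple[str, ...]) -> bool:
    """Hypothetical black box: a reservation is only accepted after
    a flight has been selected."""
    selected = False
    for action in trace:
        if action == "SelectFlight":
            selected = True
        elif action == "MakeReservation" and not selected:
            return False
    return True

def learn(max_len: int) -> set[tuple[str, ...]]:
    """Membership-query phase: enumerate short traces and keep the
    accepted ones as observations of the behaviour."""
    accepted = set()
    for n in range(1, max_len + 1):
        for trace in product(ALPHABET, repeat=n):
            if system_accepts(trace):
                accepted.add(trace)
    return accepted

observations = learn(2)
```

Real learners generalise such observations into an automaton and validate it with equivalence queries (model-based testing), rather than exhaustively enumerating traces.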
The role of the Synthesis Enabler is to take the completed networked system models
of two systems and then synthesize the emergent middleware that enables the networked
systems to coordinate on a given affordance. The emergent middleware specifically
implements the needed mediation between the protocols run by the systems to realize
the affordance, which are abstractly characterized by the behavioural description. The
synthesis of the mediator results from the automated behavioural matching of the two
protocols based on the ontological semantics of their actions. Briefly, the mediator defines the possible sequences of actions that translate semantic actions of one protocol into semantic actions of the other. Many approaches to behavioural
matching and related protocol mediation may be applied considering the state of the art
in the area [30, 31]. Basically, the solution to automated protocol mediation shall allow
for efficient mediator synthesis, while at the same time enabling interoperability beyond
current interoperability middleware solutions. In particular, protocol mediation shall
span all the targeted protocol layers, dealing with the semantics of both application and
middleware actions [28], as illustrated in the next section. An approach that is
particularly promising and that we are investigating lies in ontology-based model
checking [32]; this exploits the power of both ontologies to systematically reason about
the semantics of actions and model checking to systematically reason about the
compatibility of protocols. Still, the more flexible the compatibility check, the more complex the reasoning process. The challenge is then to find the appropriate tradeoffs so as to foster interoperability in open networks in a computationally tractable way.
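At its simplest, such behavioural matching can be pictured as exploring the product of two protocol automata, pairing actions that an ontology-derived map declares semantically equivalent. The two transition systems and the action map below are illustrative assumptions, not the CONNECT formalism.

```python
# Minimal sketch of ontology-guided behavioural matching: search the
# synchronous product of two protocol LTSs, synchronising on actions an
# ontology-derived map declares equivalent. Automata and map are assumptions.

# LTS: state -> list of (action, next_state)
CLIENT = {0: [("FindTrip", 1)], 1: [("ConfirmTrip", 2)], 2: []}
SERVICE = {0: [("SelectTrip", 1)], 1: [("Reserve", 2)], 2: []}

# Ontology-derived equivalence between client and service actions.
EQUIV = {"FindTrip": "SelectTrip", "ConfirmTrip": "Reserve"}

def compatible(client, service, final=(2, 2)) -> bool:
    """Depth-first search over the product automaton: the protocols are
    mediatable if some synchronised run reaches both final states."""
    stack, seen = [(0, 0)], set()
    while stack:
        state = stack.pop()
        if state == final:
            return True
        if state in seen:
            continue
        seen.add(state)
        c, s = state
        for a, c2 in client[c]:
            for b, s2 in service[s]:
                if EQUIV.get(a) == b:
                    stack.append((c2, s2))
    return False
```

A model checker plays the role of this search at scale, while ontology reasoning supplies the action-equivalence relation instead of a fixed table.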
Finally, the emergent middleware is deployed, with the resultant connector following
the architecture as depicted in Fig. 1, with listeners and actuators providing message
interoperability and the synthesized mediator dealing with behavioural differences and
translating the message content between heterogeneous message fields. Note that the listeners and actuators are automatically generated using the Starlink framework (http://starlink.sourceforge.net/).
While this section has focused on the core Enablers toward the generation of
emergent middleware, additional enablers are necessary to cope with the uncertainty
associated with emergent middleware. Indeed, the learning phase is a continuous
process where the knowledge about networked systems is being enriched over time,
which implies that emergent middleware possibly needs to adapt as the knowledge
evolves. Furthermore, it is important that emergent middleware respects the quality
requirements of networked systems regarding their interactions, which requires
appropriate dependability and security enablers.
The development of such enablers, from the supporting theory to concrete prototype implementation, is currently ongoing as part of the CONNECT EU project (http://connect-forever.eu/). Despite the tremendous challenges raised in unifying and combining
the principles of semantic technologies and interoperability middleware to enable
emergent middleware, we have been developing experimental enablers to validate this
vision. Our initial experiences with the use of ontologies within this broad solution
space are sketched in the next section; these further highlight the important role
ontologies have to play in realising our vision of emergent middleware.
4 Experiments
To provide initial insight into the benefits of using ontologies to support
interoperability, we now present two experiments that show how semantic
technologies can underpin the automatic generation of emergent middleware. The first
experiment examines the use of ontologies to address data and behavioural
heterogeneity at both application and middleware layers. The second experiment
demonstrates how ontologies are used to perform automated matching of message
fields to support interoperability at the network layer.
4.1 Reasoning about Interoperability at Application and Middleware Layers
This experiment illustrates the role of ontologies in handling heterogeneity both at
application and middleware layers. For this purpose, we consider two travel agency
systems that have heterogeneous application interfaces and are implemented using
heterogeneous middleware protocols (one is implemented using SOAP and the other
with HTTP REST). We use application-specific and middleware ontologies to reason
about the matching of both application and middleware behaviour.
The travel agencies example. The first networked system, called EUTravelAgency,
is developed as an RPC-SOAP web service. Thus, data is transmitted using SOAP
request and response envelopes transported using HTTP Post messages. The service
allows users to perform the following operations concurrently:
• Selecting a flight. The client must specify a destination, a departure and a return date. The service returns a list of eligible flights.
• Selecting a hotel. The client indicates the check-in and check-out dates. The service returns a list of rooms.
• Selecting a car to rent. The user indicates the period of rental and their preferred model of car. The service then proposes a list of cars.
• Making a reservation. Once the user has chosen a flight and/or a hotel room and/or a car, they confirm their reservation. The service returns an acknowledgment.
The interface signature for EUTravelAgency (abstracted from WSDL 2.0) is given
below, where we provide only the ontology concepts associated with the syntactic
terms embedded in the interface:
SelectFlight({destination, departureDate, returnDate}, flightList)
SelectHotel({checkIndate, checkOutdate, pref}, roomList)
SelectCar({dateFrom, dateTo, model}, carList)
MakeReservation({flightID, roomID, carID}, Ack)
The second system is called USTravelAgency and allows users to perform the
following two operations:
• Finding a trip. The client specifies a destination, departure and return date. The service finds a list of “packages” including a flight, a hotel room and a car.
• Making a reservation. The user selects a trip package and confirms it. The service acknowledges the reception of the selection.
The interface signature, again giving only the embedded ontology concepts, is abstracted as follows:
FindTrip({destination, departureDate, returnDate, needCar}, flightList)
ConfirmTrip(tripID, Ack)
The USTravelAgency service is implemented as a REST web service over the HTTP protocol. The findTrip operation is performed as an HTTP Get and the confirmTrip operation is performed using an HTTP Post, as shown below (the outputs of both service operations are formatted using JSON, http://www.json.org/):
GET http://ustravelagency.com/rest/tripervice/findTrip/{destination}/
{departureDate}/{returnDate}/{needCar}
POST http://ustravelagency.com/rest/tripervice/confirmTrip/{tripID}
A client of the EUTravelAgency cannot interact with the USTravelAgency, and
similarly a client developed for the USTravelAgency cannot communicate with the
EUTravelAgency due to the aforementioned heterogeneity dimensions:
• Application data. The EUTravelAgency refers to the Flight, Hotel and Car concepts, whereas the USTravelAgency makes use only of the Trip concept. Additionally, the EUTravelAgency specifies the departure and the return dates using Greenwich Mean Time (GMT), while the USTravelAgency uses Pacific Standard Time (PST) to describe them.
• Application behaviour. In the EUTravelAgency implementation, users can independently select a flight, a room and a car, whereas in the USTravelAgency implementation all of them are selected through a package.
• Middleware data format. The data exchanged in the EUTravelAgency implementation are encapsulated in SOAP messages, while the input data of the USTravelAgency are passed through a URL and the output data are formatted using JSON.
• Middleware behaviour. REST and RPC-SOAP are different architectural styles and induce heterogeneous control and communication models.
The travel agency ontology. The first step of the interoperability experiment between EUTravelAgency and USTravelAgency was to create the domain-specific ontology associated with the travel agency scenario (Fig. 4 illustrates an excerpt of this ontology). The ontology shows the relations holding among the various concepts defined in the interfaces of the two travel agencies. Note that the application-specific
ontology not only describes the semantics and relationships related to data but also the
semantics of the operations performed on data, such as FindTrip, SelectFlight,
SelectHotel, and SelectCar.
In the general case, the application ontology is not defined by the application developers but by domain experts, to reflect shared knowledge about a specific domain. Many ontologies have been developed for specific domains, e.g., Sinica BOW (Bilingual Ontological Wordnet, http://BOW.sinica.edu.tw/) for English-Chinese integration. In addition, work on ontology alignment enables dealing with the possible usage of distinct ontologies in the modelling of different networked systems from the same domain, as illustrated by the W3C Linking Open Data project (http://www.w3.org/wiki/SweoIG/TaskForces/CommunityProjects/LinkingOpenData).
Fig. 4. The travel agency ontology
Dealing with application-level heterogeneity. The travel agency ontology indicates how the Flight, Hotel and Car concepts are related to the Trip concept, including their individual attributes. Moreover, we can also use standard ontologies for translation, e.g., OWL-Time (http://www.w3.org/TR/owl-time/) can be used to resolve the time difference between GMT and PST.
Solving the application data mismatches is not sufficient. We also need to coordinate the actions of the networked systems in order to make them interoperate. Ontologies help establish the correspondence between actions. As illustrated in Fig. 4, FindTrip is defined as equivalent to the composition of the three operations SelectFlight, SelectHotel, and SelectCar. A mediator that ensures the coordination between the above operations can then be synthesized based on the semantic (subsumption) relations between them and the behaviour of the two networked systems. Moreover, since SelectFlight, SelectHotel, and SelectCar can be executed concurrently, we need to check all possible executions. Therefore, we rely on model checking, further extended with ontology reasoning capabilities, in order to exhaustively explore the state space and systematically guarantee the correctness of the synthesized mapping rules. As illustrated in Fig. 5, the mediator translates the
FindTrip action to the concurrent execution of the SelectFlight, SelectHotel and
SelectCar actions, and the MakeReservation action to the ConfirmTrip action. This
translation is further refined according to the underlying middleware of each
networked system as illustrated next.
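The behavioural translation just described can be sketched as follows: one FindTrip request is mapped to the concurrent execution of the three EU-side selections and the results recomposed into a single trip package. The EU-side stubs and their return values are hypothetical placeholders for the actual service calls.

```python
# Sketch of the mediator's behavioural translation: FindTrip maps to the
# concurrent SelectFlight/SelectHotel/SelectCar. Stubs are hypothetical.

import asyncio

async def select_flight(dest, dep, ret):
    return ["EU-flight-1"]   # placeholder for the EUTravelAgency call

async def select_hotel(check_in, check_out):
    return ["EU-room-7"]

async def select_car(date_from, date_to):
    return ["EU-car-3"]

async def mediate_find_trip(dest, dep, ret, need_car):
    """Translate one FindTrip into the concurrent EU selections and
    recompose the results as a single trip 'package'."""
    tasks = [select_flight(dest, dep, ret), select_hotel(dep, ret)]
    if need_car:
        tasks.append(select_car(dep, ret))
    results = await asyncio.gather(*tasks)
    return {"package": results}

package = asyncio.run(
    mediate_find_trip("Paris", "2012-05-01", "2012-05-08", True))
```

The synthesized mediator additionally validates all interleavings of these concurrent selections, which is where the model checking discussed above comes in.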
Dealing with middleware-level heterogeneity. To reason about the behavioural
matching of middleware, we have defined a middleware ontology that identifies
where sequences of protocol messages execute similar functionality. For example, the
request-response message sequence of CORBA is clearly equivalent to that of SOAP.
Yet, there may be cases where the relationship is semantically deeper, e.g., subscription in a publish-subscribe protocol may be equivalent to an RPC invocation (but only when they are performing similar application behaviour) [28].
In the travel agency scenario, the operations are implemented atop SOAP and
HTTP REST. The ontology specifies SOAP as a request followed by a synchronous
response. Similarly, REST is specified as four alternative synchronous message sends
and responses (Get, Post, Put, Delete). The ontology defines that a SOAP operation in
the general case is semantically equivalent to all four REST behaviours. Therefore, to
reason about interoperability, the application matching must be considered in tandem.
For example, in the FindTrip operation case, the protocol mediation is from SOAP to
Get, whereas in the ConfirmTrip case the protocol mediation is from SOAP to Post.
Fig. 5. Behavioural specification of the two travel agencies and the mediator
Another fundamental difference at the middleware level is the heterogeneity of messages, i.e., the complexity of translating SOAP data content into REST data content when the message formats differ. We investigate the use of ontologies to reason about this important problem in the second experiment.
4.2 Reasoning about Interoperability at the Network Layer
Devising solutions at the application and middleware level to enable any two systems to
interoperate does not suffice if they cannot properly exchange network messages. It is
imperative to understand and reason about the heterogeneous message formats of
protocols in such a way that message-level interoperability can be achieved on a broader
scale. We need systematic ways to dynamically capture the underlying differences of network packets and then generate the mapping between them. Ontologies provide the means to identify these semantic similarities and differences and thereby to derive the translation between messages automatically.
This experiment focuses on using ontologies to map between heterogeneous Vehicular Ad Hoc Network (VANET) protocols; this domain was chosen because the protocol behaviour is common (i.e., routing of messages to a destination), but there is a high level of heterogeneity at the packet level. A number of VANET protocols exist that follow different routing strategies: broadcast, position-based forwarding, trajectory-based forwarding, restricted directional flooding, content-based forwarding, cluster-based forwarding, and opportunistic forwarding. Hence, these protocols exhibit highly heterogeneous message formats owing to the vast array of routing strategies.
Fig. 6. Packet formats of BBR and Broadcomm packets
Interoperability between BBR and Broadcomm. The BBR protocol [34] is a broadcast routing protocol that keeps track of neighbouring nodes and broadcasts the packet at a set rate. The node lying on the border of the transmission range is designated to forward the packet further into the network; this node is selected according to the number of common neighbours it has with the source node. This value is represented as a CommonNeighbourNo field in the packet. Fig. 6 shows the format of BBR and Broadcomm packets. Broadcomm [35] is a position-based routing protocol, which keeps track of nodes through their geographical locations. This protocol divides the network into clusters and allocates one node in each cluster to be the cluster head. The latter is responsible for forwarding messages to the cluster members and to the nearest neighbour found outside the cluster. The behaviour of Broadcomm matches that of BBR to a certain degree, in the sense that both designate a node to disseminate messages further into the network. But the two differ in the way their messages are formulated, especially with the use of geographical coordinates in one protocol and not in the other.
Applying Ontologies. Given the differences in their message formats, direct interoperation between Broadcomm and BBR does not seem feasible.
However, if we can interpret both message formats and deduce their meaning, it is
possible to find a basis for comparison. As a result, we create a vehicular ontology for
the various routing strategies used in VANETs together with a definition of known
packet formats. The main idea is to use this ontology to classify packet formats under
the appropriate routing scheme and deduce how to enable this packet to interoperate
with another packet, i.e., construct the mappings that are part of the synthesized
mediator in the emergent middleware architecture. The presence of a reasoner engine
enables us to infer the meaning of a packet (as we discover middleware knowledge of
the networked system). As a result, the packet gets classified under the most
appropriate routing strategy. This classification is an important step as it helps to
establish a ground for comparison between packets belonging to different routing
categories. Part of the inferred ontology is displayed in Fig. 7, where the BBR packet
(BBRPacket) is ranked under IdentifiedPacket and MFRBroadcastPacket classes. The
requirements for MFRBroadcastPacket are the fields: CommonNeighbourNo and
NeighbourList. Since these fields form part of BBRPacket, the reasoner is able to
classify the latter under MFRBroadcastPacket. The IdentifiedPacket class denotes
that the packet contains known fields. In this way, incoming packets can be classified
by the ontology and be compared with existing packet formats. For example, assume
the incoming packet to be Broadcomm and the existing packet to be BBR.
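The classification step can be sketched as a field-presence check: a packet falls under a routing-strategy class when it carries all the fields that class requires. The MFRBroadcastPacket requirements come from the text; the position-based class requirements and the shared MessageID field are assumptions for illustration.

```python
# Sketch of the ontology classification step: classify a packet under a
# routing-strategy class when it has all required fields. The
# position-based requirements and MessageID field are assumptions.

REQUIRED_FIELDS = {
    "MFRBroadcastPacket": {"CommonNeighbourNo", "NeighbourList"},
    "PositionBasedPacket": {"LocationCoordinates", "ClusterHead"},
}

def classify(packet_fields: set[str]) -> list[str]:
    """Return every routing class whose required fields the packet has."""
    return [cls for cls, req in REQUIRED_FIELDS.items()
            if req <= packet_fields]

bbr = {"CommonNeighbourNo", "NeighbourList", "MessageID"}
broadcomm = {"LocationCoordinates", "TargetRoute", "ClusterHead",
             "MessageID"}
```

A description-logic reasoner performs the same inference declaratively, from class definitions such as "MFRBroadcastPacket ≡ Packet that hasFields CommonNeighbourNo and NeighbourList".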
Fig. 7. Inferred Vehicular Ontology
Field Matching. Once both packets are classified, they can be compared to each other through a mechanism embedded in the ontology: the use of SWRL rules and SQWRL query rules (see http://protege.cim3.net/cgi-bin/wiki.pl?SWRLTab and …/wiki.pl?SQWRL). These mechanisms add further reasoning to the classification process to enable field matching. As an example, the following SQWRL rule retrieves the fields from the BBR and Broadcomm packets and identifies the differences between them. To do so, it creates a collection of the fields of each packet using the SQWRL makeBag function and computes the differences with the SQWRL difference function. The SQWRL clauses, which provide the construction and manipulation operators required to execute SQWRL-based rules, are introduced within the SWRL rule by the separator character °, as can be seen in the example below.
BBRPacket(?b) ∧ hasFields(?b, ?f) ∧ Broadcomm(?p) ∧ hasFields(?p, ?pf) °
sqwrl:makeBag(?bag, ?f) ∧ sqwrl:makeBag(?bagt, ?pf) ° sqwrl:difference(?diff, ?bagt, ?bag) ∧
sqwrl:element(?e, ?diff) → sqwrl:selectDistinct(?p, ?e)
The result of this query gives the fields required for BBR to function as Broadcomm
and vice versa; the fields lacking in BBR would be LocationCoordinates,
TargetRoute and ClusterHead. Moreover, further classification is also possible
through the use of SWRL rules to reason about the data types of the fields. As an example, suppose we have a field x in the BBR packet of type <int> and a corresponding field y in Broadcomm of type <String>. In this case, we can make use of a SWRL rule to suggest a mapping between these two fields:
hasFields(BBR, ?x) ∧ hasType(?x, <int>) ∧ hasFields(Broadcomm, ?y) ∧ hasType(?y, <String>)
→ swrlb:MapIntToString(?x, ?y)
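The effect of the bag-difference query above can be mirrored with plain set operations, as sketched below. The field names missing from BBR come from the text; the shared MessageID field is an assumption introduced to make the intersection non-trivial.

```python
# Sketch mirroring the SQWRL makeBag/difference query with set
# operations: which Broadcomm fields is the BBR packet missing?
# The shared MessageID field is an assumption.

BBR_FIELDS = {"CommonNeighbourNo", "NeighbourList", "MessageID"}
BROADCOMM_FIELDS = {"LocationCoordinates", "TargetRoute",
                    "ClusterHead", "MessageID"}

# Equivalent of sqwrl:difference(?diff, ?bagt, ?bag): the fields BBR
# lacks in order to function as Broadcomm.
missing_in_bbr = BROADCOMM_FIELDS - BBR_FIELDS
```

The advantage of expressing this inside the ontology, rather than in ad hoc code, is that the difference is computed over inferred classifications, so it applies to any packet the reasoner can classify.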
The OWL language enhanced with the use of SWRL and SQWRL enables
comparison of two packets. The ontology can hence interpret the packet formats
through matching and suggest a possible mapping between them. For example, the
ontology can suggest that BBR lacks geographical coordinates in order to operate as
Broadcomm. This information is fundamental in determining how to enable mapping
between these two different types of packets. This is in itself a step forward towards
interoperability between different network packets; however, further research is
required into how ontologies can be used to generally identify mapping solutions that
resolve the differences between packets. Further details about the use of ontologies
within the domain of message-level heterogeneity are presented in [7].
5 Overall Reflections
Interoperability remains a fundamental problem for distributed systems due to the
increasing level of heterogeneity and dynamism of the networking environment. In
this paper, we have argued for a new approach to interoperability, i.e., emergent
middleware that is synthesized on the fly according to the behaviour of the associated
networked systems. A central element of our approach is the use of ontologies in the
middleware design so that middleware may dynamically emerge based on semantic
knowledge about the environment. Hence, while interoperability in the past has been
about making concessions, e.g., pre-defined standards and design decisions, emergent
middleware builds on the ability of machines to themselves reason about and tackle
the heterogeneity they encounter. Further, acknowledging that interoperability is, as
with many features of distributed systems, an end-to-end problem [5], emergent
middleware emphasizes that interoperability can only be achieved through a
coordinated approach involving application, middleware and network levels.
This paper has introduced the core elements of the emergent middleware vision,
i.e., ontologies and related Enablers to reason about and implement interoperability on
the fly. The architecture of Enablers outlined in Section 3 has provided a view of how
emergent middleware can be realised, with the associated technologies becoming available through the CONNECT project. This architecture illustrates the important roles of discovery, learning and synthesis in achieving our goals.
feature of the architecture is that ontologies have a cross-cutting role. The
experimental work reported in Section 4 has further illustrated the central role of
ontologies in supporting meaning and reasoning in distributed systems, not just at the
application level but also in the underlying distributed systems substrate, for
achieving interoperability in the highly heterogeneous and dynamic style of today’s
distributed systems. However, despite the latest advances in Enablers for emergent
middleware, significant challenges remain ahead as discussed below.
While emergent middleware relieves the burden of interoperability from the
middleware designers and developers, and fosters future-proof interoperability, its
general applicability is dependent upon the effectiveness of the supporting Enablers.
The latest results of CONNECT are encouraging in that they introduce base building
blocks for the Enablers, spanning automated support for discovery, learning and
synthesis. Small-scale experiments further demonstrate that Enablers may adequately be combined. Still, applicability to real-scale experiments remains an area for future work.
Realizing the central role of ontologies to allow machines to tackle interoperability
across time raises the issue of how large, comprehensive ontologies may be deployed
for interoperability in practice. At first sight, this basically depends on the
development of supporting ontologies by domain experts and hence on the
requirements of a given domain in terms of interoperability. For instance, it is
expected that the Internet of Things will lead to major ontology development. Another
consideration is the cost of processing large ontologies and, more specifically, the
efficacy of semantic tools, which keep improving over time given research in the area.
There is also considerable potential for core research on ontologies concerning the
role of fuzziness in supporting richer forms of reasoning [21], the possibility of
learning new ontological information and merging it with existing information as it
becomes available, and also dealing with heterogeneity in the ontologies themselves.
We have so far concentrated on the synthesis of mediators from scratch, while the
construction of mediators by composing existing ones would enable more efficient
synthesis and support self-adaptive emergent middleware. Ongoing CONNECT research on an algebra for mediators will provide us with the required foundations [4].
The inherent openness and probability of failure in emergent middleware solutions
raise important challenges. If the solution is to be deployed at Internet scale, then it
must be reliably able to produce correct mediators and also be secure against
malicious threats. Hence, dependability is a central research question; addressing it has to overcome the partial knowledge about systems as well as the arising security concerns. A
related concern is that of dealing with interoperability between fault tolerant systems
and in general with the heterogeneity of non-functional properties across systems.
Dedicated solutions are being investigated within CONNECT.
Furthermore, failure to generate emergent middleware in a specific context depends not only on the reliability of our solution but also, most importantly in open target environments, on the degree of incompatibility between candidate systems. For example, semantic matching may indicate that the semantic distance
between the application features of two systems is too great to be bridged. Precisely
evaluating the limitations of our approach in producing a result is an area of future
work; we are already studying aspects of this important issue within CONNECT.
Another interesting research direction for emergent middleware is that of involving
end-users in the synthesis process to inform the automated approach. For example,
end-users can assist semantic matching where ontology heterogeneity may lead
automated reasoning to ambiguous results. This raises various challenges, including
how to provide user-friendly interfaces to the emergent middleware internals.
In summary, this paper has argued that, given the increasing complexity of
contemporary distributed systems, both in terms of increasing heterogeneity and
dynamism, there is a need for a fundamental rethink of approaches to even the most
basic of problems, that is, interoperability. We advocate a new approach to
middleware, that of emergent middleware. This paper has looked at one key aspect of
emergent middleware, namely, the role of ontologies in supporting core underlying
middleware functions related to achieving interoperability. This leads to a fascinating
set of research challenges both in terms of understanding a given deployment
environment and also dynamically creating appropriate connectivity solutions. We
hope this paper has given a flavour of the potential of this approach and also some
real experimental evidence that the approach can work in selected aspects of
distributed systems. As a final comment, while CONNECT is addressing a number of the ongoing challenges, this is a vast and largely uncharted territory and we invite other researchers to join in the quest for suitable solutions for emergent middleware.
Acknowledgements. This work is carried out in the CONNECT project, a European
collaboration funded under the Framework 7 Future and Emerging Technologies
Programme (Proactive Theme on ICT Forever Yours): http://www.connectforever.eu.
References
1. Maier, M.W.: Architecting Principles for System of Systems. Systems Engineering 1(4),
267–284 (1998)
2. Van Steen, M., Tanenbaum, A.: Distributed Systems: Principles and Paradigms. Prentice-Hall (2001)
3. Bennaceur, A., Blair, G., Chauvel, F., Huang, G., Georgantas, N., Grace, P., Howar, F.,
Inverardi, P., Issarny, V., Paolucci, M., Pathak, A., Spalazzese, R., Steffen, B., Souville,
B.: Towards an Architecture for Runtime Interoperability. In: Margaria, T., Steffen, B.
(eds.) ISoLA 2010. LNCS, vol. 6416, pp. 206–220. Springer, Heidelberg (2010)
4. Autili, M., Chilton, C., Inverardi, P., Kwiatkowska, M., Tivoli, M.: Towards a Connector
Algebra. In: Margaria, T., Steffen, B. (eds.) ISoLA 2010. LNCS, vol. 6416, pp. 278–292.
Springer, Heidelberg (2010)
5. Saltzer, J.H., Reed, D.P., Clark, D.D.: End-to-end arguments in system design. ACM Trans.
Comput. Syst. 2(4), 277–288 (1984)
6. Object Management Group, COM/CORBA Interworking Spec. Part A & B (1997)
7. Nundloll, V., Grace, P., Blair, G.S.: The Role of Ontologies in Enabling Dynamic
Interoperability. In: Felber, P., Rouvoy, R. (eds.) DAIS 2011. LNCS, vol. 6723, pp. 179–
193. Springer, Heidelberg (2011)
8. Bromberg, Y.-D., Issarny, V.: INDISS: Interoperable Discovery System for Networked
Services. In: Alonso, G. (ed.) Middleware 2005. LNCS, vol. 3790, pp. 164–183. Springer,
Heidelberg (2005)
9. Nakazawa, J., Tokuda, H., Edwards, W., Ramachandran, U.: A Bridging Framework for
Universal Interoperability in Pervasive Systems. In: Proceedings of 26th IEEE
International Conference on Distributed Computing Systems (ICDCS 2006), Lisbon, Portugal (2006)
10. Cortes, C., Grace, P., Blair, G.: SeDiM: A Middleware Framework for Interoperable
Service Discovery in Heterogeneous Networks. ACM Transactions on Autonomous and
Adaptive Systems 6(1), Article 6 (2011)
11. Grace, P., Blair, G., Samuel, S.: A Reflective Framework for Discovery and Interaction in
Heterogeneous Mobile Environments. ACM SIGMOBILE Mobile Computing and
Communications Review 9(1), 2–14 (2005)
12. Duftler, M., Mukhi, N., Slominski, A., Weerawarana, S.: Web Services Invocation
Framework (WSIF). In: Proceedings of OOPSLA 2001 Workshop on Object Oriented
Web Services, Tampa, Florida (2001)
13. Masuoka, R., Parsia, B., Labrou, Y.: Task Computing – The Semantic Web Meets
Pervasive Computing. In: Fensel, D., Sycara, K., Mylopoulos, J. (eds.) ISWC 2003.
LNCS, vol. 2870, pp. 866–881. Springer, Heidelberg (2003)
14. Singh, S., Puradkar, S., Lee, Y.: Ubiquitous Computing: Connecting Pervasive Computing
Through Semantic Web. Information Systems and e-Business Management Journal (2005)
15. Ben Mokhtar, S., Preuveneers, D., Georgantas, N., Issarny, V., Berbers, Y.: EASY:
Efficient SemAntic Service Discovery in Pervasive Computing Environments with QoS
and Context Support. Journal of Systems and Software 81(5), 785–808 (2008)
16. Bromberg, Y., Grace, P., Reveillere, L.: Starlink: runtime interoperability between
heterogeneous middleware protocols. In: Proceedings of the 31st IEEE International
Conference on Distributed Computing Systems, Minneapolis, USA (June 2011)
17. Martin, D., Burstein, M., McDermott, D., McIlraith, S., Paolucci, M., Sycara, K.,
McGuinness, D.L., Sirin, E., Srinivasan, N.: Bringing semantics to web services with
OWL-S. World Wide Web Journal 10, 243–277 (2007)
18. Athanasopoulos, D., Zarras, A.: Fine-Grained Metrics of Cohesion Lack for Service
Interfaces. In: Proc. of ICWS 2011 (to appear, 2011)
19. Bennaceur, A., Johansson, R., Moschitti, A., Spalazzese, R., Sykes, D., Saadi, R., Issarny,
V.: Inferring affordances using learning techniques. In: International Workshop on Eternal
Systems, EternalS 2011 (2011)
20. Ben Mokhtar, S., Raverdy, P.-G., Urbieta, A., Cardoso, R.S.: Interoperable semantic and
syntactic service discovery for ambient computing environments. IJACI 2(4), 13–32
(2010)
21. Straccia, U.: A Fuzzy Description Logic for the Semantic Web. In: Sanchez, E. (ed.) Fuzzy
Logic and the Semantic Web, Capturing Intelligence, ch. 4, pp. 73–90. Elsevier (2006)
22. Heß, A., Kushmerick, N.: Learning to Attach Semantic Metadata to Web Services. In:
Fensel, D., Sycara, K., Mylopoulos, J. (eds.) ISWC 2003. LNCS, vol. 2870, pp. 258–273.
Springer, Heidelberg (2003)
23. Krka, I., Brun, Y., Popescu, D., Garcia, J., Medvidovic, N.: Using dynamic execution
traces and program invariants to enhance behavioral model inference. In: ICSE (2), pp.
179–182 (2010)
G.S. Blair et al. (p. 430)
24. Bertolino, A., Inverardi, P., Pelliccione, P., Tivoli, M.: Automatic synthesis of behavior
protocols for composable web-services. In: ESEC/SIGSOFT FSE, pp. 141–150 (2009)
25. Caporuscio, M., Raverdy, P.-G., Moungla, H., Issarny, V.: Ubisoap: A service oriented
middleware for seamless networking. In: ICSOC, pp. 195–209 (2008)
26. Baader, F., Calvanese, D., McGuinness, D.L., Nardi, D., Patel-Schneider, P.F.: The
Description Logic Handbook. Cambridge University Press (2003)
27. Merten, M., Steffen, B., Howar, F., Margaria, T.: Next Generation LearnLib. In: Abdulla,
P.A., Leino, K.R.M. (eds.) TACAS 2011. LNCS, vol. 6605, pp. 220–223. Springer,
Heidelberg (2011)
28. Issarny, V., Bennaceur, A., Bromberg, Y.-D.: Middleware-Layer Connector Synthesis:
Beyond State of the Art in Middleware Interoperability. In: Bernardo, M., Issarny, V.
(eds.) SFM 2011. LNCS, vol. 6659, pp. 217–255. Springer, Heidelberg (2011)
29. Drummond, N., Rector, A.L., Stevens, R., Moulton, G., Horridge, M., Wang, H.,
Seidenberg, J.: Putting OWL in order: Patterns for sequences in OWL. In: OWLED (2006)
30. Vaculin, R., Sycara, K.P.: Towards automatic mediation of OWL-S process models. In:
Proceedings of ICWS (2007)
31. Williams, S.K., Battle, S.A., Cuadrado, J.E.: Protocol Mediation for Adaptation in
Semantic Web Services. In: Sure, Y., Domingue, J. (eds.) ESWC 2006. LNCS, vol. 4011,
pp. 635–649. Springer, Heidelberg (2006)
32. Clarke Jr., E.M., Grumberg, O., Peled, D.A.: Model Checking. The MIT Press (1999)
33. Howar, F., Jonsson, B., Merten, M., Steffen, B., Cassel, S.: On Handling Data in Automata
Learning - Considerations from the Connect Perspective. In: Margaria, T., Steffen, B.
(eds.) ISoLA 2010, Part II. LNCS, vol. 6416, pp. 221–235. Springer, Heidelberg (2010)
34. Zhang, M., Wolf, R.: Border Node Based Routing Protocol for VANETs in Sparse and
Rural Areas. In: IEEE Globecom Autonet Workshop, Washington, pp. 1–7 (November
2007)
35. Durresi, M., Durresi, A., Barolli, L.: Emergency Broadcast Protocol for Inter-Vehicle
Communications. In: Proc. 11th International Conference on Parallel and Distributed
Systems (ICPADS 2005) Workshops, pp. 402–406 (2005)