This document proposes an approach called "OntoFrac-S" to handle the increasing number of ontologies being created for the semantic web. It suggests using fractals and multi-agent systems to implement the semantic web and link data in a way that accounts for the fractal and self-similar nature of data at different levels. Specifically, it argues that merely integrating local and global ontologies is not sufficient, and that ontologies should be viewed as relative concepts depending on the scale, with each local ontology potentially acting as a global ontology for lower-level sub-ontologies. The approach aims to apply concepts of semantic and ontological relativity using fractals to help build a semantically linked global graph while addressing cross-c
When Ostrom Meets Blockchain: Exploring the Potentials of Blockchain for Commons Governance by David Rozas (drozas@ucm.es), Antonio Tenorio-Fornés (antoniotenorio@ucm.es), Silvia Díaz-Molina (smdmolina@ucm.es), and Samer Hassan (shassan@cyber.harvard.edu)
Six Degrees of Separation to Improve Routing in Opportunistic Networks (ijujournal)
This document discusses using small-world network concepts for routing in opportunistic networks. It analyzes three real-world datasets representing contact graphs and finds they exhibit small-world properties with high clustering and short path lengths. The document proposes a simple routing algorithm that applies these findings and concludes it outperforms other algorithms in simulations by taking temporal contact factors into account.
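The two small-world diagnostics the summary mentions, high clustering and short path lengths, can be computed with plain breadth-first search. The toy contact graph below is invented for illustration; it is not one of the paper's datasets.

```python
from collections import deque

def clustering_coefficient(adj, v):
    """Fraction of pairs of v's neighbours that are themselves connected."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

def avg_shortest_path(adj):
    """Mean BFS distance over all connected ordered pairs."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# Toy contact graph: two tight clusters bridged by a single link.
adj = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
    "d": {"c", "e", "f"}, "e": {"d", "f"}, "f": {"d", "e"},
}
C = sum(clustering_coefficient(adj, v) for v in adj) / len(adj)
L = avg_shortest_path(adj)
# High C together with small L is the small-world signature.
```

A graph qualifies as small-world when C is much higher, and L about as short, as in a random graph with the same degrees; the paper's routing algorithm additionally weights contacts by recency, which this sketch omits.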
This document provides a comparative analysis of the two main hierarchical distributed hash table (DHT) designs: the homogeneous design and the superpeer design. It presents an analytical framework and cost model to evaluate them. The analysis reveals that, contrary to initial expectations, the costs incurred by the hierarchical superpeer design are not necessarily minimized. Key aspects of the two designs, such as load balancing, fault tolerance, and their respective advantages and disadvantages, are discussed. The document aims to help identify the better hierarchical DHT design for a given workload or application.
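A back-of-envelope version of such a cost comparison, invented here for illustration and much cruder than the paper's model, counts overlay hops per lookup: a flat Chord-style DHT needs about log2(N) hops, while a two-level superpeer design resolves lookups in the superpeer overlay of size S plus a hop up and a hop down.

```python
import math

def flat_lookup_hops(n):
    # Chord-style flat DHT: O(log2 N) overlay hops per lookup.
    return math.log2(n)

def superpeer_lookup_hops(n, s):
    # Two-level design: one hop leaf -> superpeer, a lookup in the
    # superpeer overlay of size s, then one hop back down.
    return 2 + math.log2(s)

n = 2 ** 20  # one million peers
for s in (2 ** 6, 2 ** 10, 2 ** 14):
    print(s, flat_lookup_hops(n), superpeer_lookup_hops(n, s))
```

On hop count alone the superpeer design always looks cheaper; the paper's point is that a fuller cost model, one that also charges superpeers for the maintenance and query load concentrated on them, erases this apparent advantage for some workloads.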
Fueling the future with Semantic Web patterns - Keynote at WOP2014@ISWC (Valentina Presutti)
I will claim that Semantic Web Patterns can drive the next technological breakthrough: they can be key to providing intelligent applications with sophisticated ways of interpreting data. I will picture scenarios of a possible not-so-distant future in order to support my claim. I will argue that current Semantic Web Patterns are not sufficient for addressing the envisioned requirements, and I will suggest a research direction for fixing the problem, which includes the hybridisation of existing computer science pattern-based approaches and human computing.
The purpose of the present scientific contribution is to investigate the emerging phenomenon of company networks from the standpoint of business economics. In particular, through an analysis of the theory of networks, the principal categories of business networks will be proposed, after the concept of the network itself has been defined. The proposed qualitative research represents a point of departure for the study of the network phenomenon in light of the current economic phase termed the “economy of knowledge”. The research questions are the following: Where does the theory of networks arise from? Can company networks be considered equivalent to knowledge networks?
New prediction method for data spreading in social networks based on machine ... (TELKOMNIKA JOURNAL)
Information diffusion prediction is the study of the path of dissemination of news, information, or topics through structured data such as a graph. Research in this area is focused on two goals: tracing the information diffusion path and finding the members that determine the next path. The major problem with traditional approaches in this area is their use of simple probabilistic methods rather than intelligent methods. Recent years have seen growing interest in the use of machine learning algorithms in this field, and deep learning, a branch of machine learning, has been increasingly applied to information diffusion prediction. This paper presents a machine learning method based on the graph neural network algorithm, which involves the selection of inactive vertices for activation based on the neighboring vertices that are active in a given scientific topic. Essentially, in this method, information diffusion paths are predicted through the activation of inactive vertices by active vertices. The method is tested on three scientific bibliography datasets: the Digital Bibliography and Library Project (DBLP), Pubmed, and Cora. The method attempts to answer the question of who will be the publisher of the next article in a specific field of science. Comparison of the proposed method with other methods shows 10% and 5% improved precision on the DBLP and Pubmed datasets, respectively.
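The activation idea in that abstract can be reduced to a toy scoring rule: rank each inactive vertex by the fraction of its neighbours already active on the topic. This is a crude stand-in for one round of GNN neighbourhood aggregation, not the paper's actual model; graph and names are invented.

```python
def activation_scores(adj, active):
    """Score each inactive vertex by the fraction of its neighbours
    already active on the topic (a stand-in for one round of GNN
    neighbourhood aggregation)."""
    scores = {}
    for v, nbrs in adj.items():
        if v in active or not nbrs:
            continue
        scores[v] = sum(1 for u in nbrs if u in active) / len(nbrs)
    return scores

# Toy co-authorship graph; 'a' and 'b' have already published on the topic.
adj = {"a": {"b", "c"}, "b": {"a", "c", "d"},
       "c": {"a", "b", "d"}, "d": {"b", "c"}}
scores = activation_scores(adj, active={"a", "b"})
# 'c' touches two active authors, 'd' only one, so 'c' ranks first.
```

A real GNN would learn the aggregation weights from labelled diffusion traces instead of using a fixed fraction, which is precisely what separates the paper's method from the "simple probabilistic" baselines it criticizes.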
This document discusses navigability in social tagging systems. It begins by defining social tagging systems and folksonomies. It then examines factors that influence navigability in social tagging systems like motivations for tagging. It analyzes how tag clouds and hierarchies can be used for navigation but notes that user interface constraints like tag cloud size and pagination can impair navigability. It concludes that certain popular approaches to tag clouds do not support navigability and new approaches are needed that consider the trade-off between semantic and navigational properties.
The keynote presentation at the 2nd Jordan International Conference on Computer Science and Engineering discusses three examples of modeling complex networks using computer science techniques:
1) Analyzing the link structure of websites using link structure graphs to understand site organization and user experience.
2) Studying messages exchanged in online social networks using machine learning to determine levels of agreement and disagreement.
3) Modeling complex biological systems like gene networks using stochastic pi-calculus to study multi-component system interactions.
The presenter is Dr. Natasa Milic-Frayling, a senior researcher at Microsoft Research Cambridge who leads research on information retrieval, machine learning, and user-centered design.
BookyScholia: A Methodology for the Investigation of Expert Systems (ijcnac)
Mathematicians agree that encrypted modalities are an interesting new topic in the field of software engineering, and systems engineers concur. In our research, we proved the deployment of consistent hashing, which embodies the intuitive principles of algorithms. Our focus in our research is not on whether the World Wide Web and SMPs are largely incompatible, but rather on presenting an analysis of interrupts (BookyScholia). Experiences with such a solution and active networks disconfirm that access points and cache coherence can synchronize to realize this mission. We would show that performance in BookyScholia is not an obstacle. The characteristics of BookyScholia, in relation to those of more seminal systems, are famously more natural. Finally, we would focus our efforts on validating that the UNIVAC computer can be made probabilistic, cooperative, and scalable.
SEMANTIC INTEGRATION FOR AUTOMATIC ONTOLOGY MAPPING (cscpconf)
In the last decade, ontologies have played a key technological role in information sharing and agent interoperability across different application domains. In the semantic web domain, ontologies are used to face the great challenge of representing the semantics of data, in order to bring the actual web to its full power and hence achieve its objective. However, using ontologies as common and shared vocabularies requires a certain degree of interoperability between them. To meet this requirement, mapping ontologies is a solution that cannot be avoided. Indeed, ontology mapping builds a meta layer that allows different applications and information systems to access and share their information, after resolving the different forms of syntactic, semantic, and lexical mismatches. In the contribution presented in this paper, we have integrated the semantic aspect, based on an external lexical resource, WordNet, to design a new algorithm for fully automatic ontology mapping. This fully automatic character is the main difference between our contribution and most existing semi-automatic ontology mapping algorithms, such as Chimaera, Prompt, Onion, Glue, etc. To further enhance the performance of our algorithm, the mapping discovery stage is based on the combination of two sub-modules: the former analyses the concepts' names and the latter analyses their properties. Each of these two sub-modules is itself based on a combination of lexical and semantic similarity measures.
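The two-sub-module combination described above can be sketched as a weighted mix of a name similarity (lexical edit distance, with a small synonym table standing in for the WordNet lookup) and a property similarity (Jaccard overlap). All names, weights, and the synonym pair are illustrative assumptions, not the paper's algorithm.

```python
def edit_distance(a, b):
    # Classic Levenshtein dynamic program, one row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def name_sim(a, b, synonyms=()):
    """Lexical similarity of concept names, short-circuited by a
    synonym set standing in for a WordNet lookup."""
    a, b = a.lower(), b.lower()
    if a == b or (a, b) in synonyms or (b, a) in synonyms:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

def prop_sim(props_a, props_b):
    """Jaccard overlap of the two concepts' property names."""
    pa, pb = set(props_a), set(props_b)
    return len(pa & pb) / len(pa | pb) if pa | pb else 0.0

def concept_sim(name_a, props_a, name_b, props_b, w=0.5, synonyms=()):
    # Weighted combination of the name and property sub-modules.
    return (w * name_sim(name_a, name_b, synonyms)
            + (1 - w) * prop_sim(props_a, props_b))

s = concept_sim("Car", ["maker", "year"],
                "Automobile", ["maker", "year", "colour"],
                synonyms={("car", "automobile")})
```

Here "Car" and "Automobile" share no characters, so without the semantic (synonym) component the lexical sub-module alone would wrongly rate them dissimilar; this is the motivation for combining both measure types.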
Computational Frameworks for Higher-order Network Data Analysis (Austin Benson)
1. The document discusses computational frameworks for analyzing higher-order network data, where interactions can involve more than two nodes. Real-world systems often involve higher-order interactions that are reduced to pairwise connections.
2. The author presents several datasets involving higher-order interactions and shows that predicting the formation of new higher-order connections is similar to link prediction but considers groups of nodes rather than individual links. Structural properties like edge density and tie strength influence the likelihood of simplicial closure.
3. Models are proposed to score open simplices based on structural features and predict which will transition to closed simplices. Accounting for higher-order structure provides new insights beyond traditional network analysis of pairwise connections.
This document discusses the evolution of the scholarly document from printed to digital forms, moving beyond hypertext paradigms to semantic publishing and linked data. It covers:
1) The transition from printed to digital documents and further to semantic publishing and linked data, which erodes the monolithic document notion.
2) How this affects science less than the humanities by changing the basic modes of document signification and conditions of comprehension.
3) The work of RTP-Doc in deconstructing the document notion in digital environments and reconstituting integrity and authenticity without the print analogy.
This document discusses various aspects of information and associativity as they relate to urban entanglements and the development of "wurban" things. It touches on how information is a multidimensional concept that is interpreted and context-dependent. Networks can develop through either logical/deterministic or associative/probabilistic growth, and associativity plays a key role in creating patterns and information from the interactions within dense urban environments. The document questions how future densification might develop in a more open, decentralized, and artful manner that accounts for performance and practice over rigid logics.
The document discusses the vision, architecture, and technology of the Semantic Web. It defines key concepts like semantics, ontology, RDF, and provides an overview of the Semantic Web stack and architecture. Examples of semantic web applications and technologies like SPARQL queries are also presented to illustrate how semantic markup allows machines to understand web content.
Higher-order link prediction and other hypergraph modeling (Austin Benson)
Higher-order link prediction and other hypergraph modeling can better model real-world systems composed of higher-order interactions that are often reduced to pairwise ones. Hypergraphs allow the modeling of interactions between more than two nodes, like groups of people collaborating, multiple recipients of emails, students gathering in groups, and drug compounds made of several substances.
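The information lost by reducing to pairwise edges is easy to demonstrate: represent a hypergraph as a list of node sets and clique-expand it. A three-author paper and three separate two-author papers project to the same pairwise graph. The data below is invented for illustration.

```python
from itertools import combinations

# A hypergraph as a list of hyperedges (sets of nodes), e.g. e-mail
# recipient groups or author lists.
hyperedges = [{"a", "b", "c"}, {"b", "c"}, {"c", "d"}]

def project_pairwise(hyperedges):
    """Clique-expand each hyperedge into pairwise edges; the group
    structure itself is lost in the projection."""
    edges = set()
    for e in hyperedges:
        edges.update(frozenset(p) for p in combinations(sorted(e), 2))
    return edges

edges = project_pairwise(hyperedges)

# A genuine 3-way interaction and three separate 2-way interactions
# project to identical pairwise graphs:
filled = project_pairwise([{"a", "b", "c"}])
hollow = project_pairwise([{"a", "b"}, {"b", "c"}, {"a", "c"}])
```

Since `filled == hollow`, no pairwise analysis can distinguish the two cases; keeping the hyperedges themselves is what makes the higher-order modeling in the talk possible.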
Challenging Issues and Similarity Measures for Web Document Clustering (IOSR Journals)
This document discusses challenging issues and similarity measures for web document clustering. It begins with an introduction to text mining and document clustering. It then reviews related work on similarity approaches and measures. Some key challenging issues in web document clustering are discussed, such as measuring semantic similarity between words and evaluating cluster validity. Various types of similarity measures are also described, including string-based measures like Jaro-Winkler distance and corpus-based measures like latent semantic analysis. The conclusion states that accurate clustering requires a precise definition of similarity between document pairs and discusses different similarity measures that can be used.
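One of the string-based measures named above, Jaro-Winkler distance, is compact enough to show in full. This follows the standard formulation (Jaro similarity plus a prefix boost of up to four characters); it is an independent sketch, not code from the surveyed paper.

```python
def jaro(s1, s2):
    """Jaro similarity: match count within a sliding window,
    discounted by transpositions."""
    if s1 == s2:
        return 1.0
    n1, n2 = len(s1), len(s2)
    if n1 == 0 or n2 == 0:
        return 0.0
    window = max(n1, n2) // 2 - 1
    m1, m2 = [False] * n1, [False] * n2
    matches = 0
    for i, c in enumerate(s1):
        lo, hi = max(0, i - window), min(n2, i + window + 1)
        for j in range(lo, hi):
            if not m2[j] and s2[j] == c:
                m1[i] = m2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Count matched characters that appear in a different order.
    t, k = 0, 0
    for i in range(n1):
        if m1[i]:
            while not m2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (matches / n1 + matches / n2 + (matches - t) / matches) / 3

def jaro_winkler(s1, s2, p=0.1, max_prefix=4):
    """Winkler's variant: boost pairs sharing a common prefix."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == max_prefix:
            break
        prefix += 1
    return j + prefix * p * (1 - j)
```

The classic textbook pair "MARTHA"/"MARHTA" scores about 0.961: six matches, one transposition, and a shared three-character prefix.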
TRANSFORMATION RULES FOR BUILDING OWL ONTOLOGIES FROM RELATIONAL DATABASES (csandit)
Relational databases (RDBs) are used as the backend database by most information systems. RDBs encapsulate the conceptual model and metadata needed in ontology construction. Schema mapping is the technique used by all existing approaches for building ontologies from RDBs. However, most of those methods use poor transformation rules that prevent advanced database mining for building rich ontologies. In this paper, we propose transformation rules for building OWL ontologies from RDBs that allow transforming all possible cases in RDBs into ontological constructs. The proposed rules are enriched by analyzing stored data to detect disjointness and totalness constraints in hierarchies, and by calculating the participation level of tables in n-ary relations. In addition, our technique is generic, so it can be applied to any RDB. The proposed rules were evaluated using a normalized and open RDB; the obtained ontology is richer in terms of non-taxonomic relationships.
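The simplest and most widely used subset of such rules, table to owl:Class, plain column to datatype property, foreign key to object property, can be sketched as a Turtle generator. Table names, prefixes, and the rule set itself are illustrative; the paper's full rules (disjointness, totalness, n-ary participation) go well beyond this.

```python
def table_to_owl(table, columns, foreign_keys):
    """Emit Turtle for a minimal rule set: the table becomes an
    owl:Class, plain columns become datatype properties, and foreign
    keys become object properties to the referenced table's class."""
    lines = [f":{table} a owl:Class ."]
    for col in columns:
        if col in foreign_keys:
            target = foreign_keys[col]
            lines.append(f":has{target} a owl:ObjectProperty ; "
                         f"rdfs:domain :{table} ; rdfs:range :{target} .")
        else:
            lines.append(f":{col} a owl:DatatypeProperty ; "
                         f"rdfs:domain :{table} .")
    return "\n".join(lines)

ttl = table_to_owl("Employee", ["name", "salary", "dept_id"],
                   {"dept_id": "Department"})
print(ttl)
```

Note this naive mapping already misses the cases the paper targets: it cannot tell a join table encoding an n-ary relation from an ordinary entity table, and it learns nothing from the stored rows.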
Simplicial closure and higher-order link prediction LA/OPT (Austin Benson)
- The speaker proposes a framework called "higher-order link prediction" to evaluate models of higher-order network data. This extends classical link prediction to predicting new groups of nodes that will form simplices.
- Analysis of datasets shows that many contain numerous "open triangles": sets of three nodes connected pairwise by edges but not appearing together in a simplex. A simple probabilistic model can account for the variation in open triangles.
- Simplicial closure probability depends on edge density and tie strength between nodes, for both 3-node and 4-node groups.
- For higher-order link prediction, the speaker evaluates score functions based on edge weights, structural properties, whole-network similarities, and machine learning to predict which open triangles will close.
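The simplest edge-weight score functions from the last bullet can be sketched directly: enumerate open triangles in a weighted pairwise projection and rank them by an aggregate of their edge weights. The harmonic mean and the toy weights below are illustrative choices, not the talk's benchmark code.

```python
from itertools import combinations

def open_triangles(weights):
    """Yield node triples whose three edges all exist; `weights` maps
    frozenset node pairs to edge weights (e.g. co-occurrence counts)."""
    nodes = sorted({v for e in weights for v in e})
    for u, v, w in combinations(nodes, 3):
        edges = [frozenset(p) for p in ((u, v), (u, w), (v, w))]
        if all(e in weights for e in edges):
            yield (u, v, w), edges

def harmonic_mean(xs):
    return len(xs) / sum(1.0 / x for x in xs)

def score_triangles(weights):
    # Score each open triangle by the harmonic mean of its edge
    # weights -- a tie-strength aggregate that rewards uniformly
    # strong ties over one strong and two weak ones.
    return {tri: harmonic_mean([weights[e] for e in edges])
            for tri, edges in open_triangles(weights)}

w = {frozenset(p): c for p, c in
     [(("a", "b"), 5), (("b", "c"), 5), (("a", "c"), 5),
      (("a", "d"), 1), (("b", "d"), 1)]}
scores = score_triangles(w)
```

The triangle (a, b, c), with three strong ties, outranks (a, b, d), matching the observation that both tie strength and edge density raise closure probability.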
Quality, Relevance and Importance in Information Retrieval with Fuzzy Semanti... (tmra)
We propose a framework for ranking information based on quality, relevance and importance, and argue that a socio-semantic contextual approach that extends topicality can lead to increased value of information retrieval systems. We use Topic Maps to implement our framework, and discuss procedures for calculating the resource ranking. A fuzzy neural network approach is envisioned to complement the process of manual metadata creation.
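A minimal sketch of ranking along those three dimensions: aggregate per-resource quality, relevance, and importance scores with a weighted mean. The weights, resources, and scores are invented; the paper itself envisions a fuzzy neural network, not a fixed linear rule, for this aggregation.

```python
def rank_score(quality, relevance, importance, weights=(0.3, 0.4, 0.3)):
    """Combine three [0, 1] dimensions into one ranking score with a
    weighted mean (illustrative stand-in for the fuzzy aggregation)."""
    wq, wr, wi = weights
    return wq * quality + wr * relevance + wi * importance

resources = {
    "doc1": (0.9, 0.4, 0.7),   # high quality, weak topical match
    "doc2": (0.6, 0.9, 0.5),   # strong topical match
}
ranked = sorted(resources, key=lambda r: rank_score(*resources[r]),
                reverse=True)
```

With relevance weighted highest, the topically stronger doc2 outranks the higher-quality doc1, showing how the weighting encodes the retrieval policy.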
This document provides an introduction to a seminar on hypertext and critical theory. It discusses several postmodern theorists and concepts that are relevant to understanding hypertext, including intertextuality, networks, and the readerly vs writerly text. It also examines how some key aspects of hypertext, such as its non-linear structure and links between lexias, relate to and realize ideas from postmodern critical theory. Finally, it raises questions about the effects of hypertext and whether hypertext technologies can achieve or go beyond postmodern goals.
ICPSR - Complex Systems Models in the Social Sciences - Lecture 4 - Professor... (Daniel Katz)
This document provides an overview of complex systems models in social sciences, focusing on network analysis and community detection methods. It discusses key concepts like directed vs undirected networks, weighted vs unweighted edges, and overlapping vs non-overlapping communities. It also notes important considerations like network resolution, computational complexity, and how community detection results depend on the specific context and questions being examined. A variety of examples are provided, including social networks defined by friendships or voting coalitions.
1. Semantic mapping involves connecting metadata structures like schemas and ontologies, as well as connecting vocabularies of values like knowledge organization systems and knowledge bases.
2. Significant progress has been made in mapping metadata structures to enable interoperability and data integration, though mapping knowledge organization systems remains a challenge due to the sparseness of links between concepts.
3. While automatic mapping tools have improved, manual mapping remains important and applications of mappings could be better categorized to inform mapping requirements and evaluation.
2013 Melbourne Software Freedom Day talk - FOSS in Public Decision Making (Patrick Sunter)
Slides from my talk at the Melbourne Software Freedom Day, 21st September 2013, on the topic of Free and Open Source Software (FOSS) in public decision-making, particularly in the policy areas of climate change and transportation.
This document summarizes research posters being presented at a computer science and electrical engineering department research review. It describes 8 posters presented by BS, MS, and PhD students. The posters cover topics such as identifying political affiliations in blogs, statistically weighted visualization hierarchies, voter verifiable optical-scan voting, predictive caching in mobile networks, generating statistical volume models, predicting appropriate semantic web terms, approximating online social network community structure, and utilizing semantic policies for managing BGP route dissemination.
Association Rule Mining Based Extraction of Semantic Relations Using Markov ... (dannyijwest)
An ontology is a conceptualization of a domain into a human-understandable, yet machine-readable, format consisting of entities, attributes, relationships, and axioms. Ontologies formalize the intensional aspects of a domain, whereas the extensional part is provided by a knowledge base that contains assertions about instances of concepts and relations. Using semantic relations, it would be possible to extract the whole family tree of a prominent personality using a resource like Wikipedia. In a way, relations describe the semantic relationships among the entities involved, which is beneficial for a better understanding of human language. Relations can be identified from the result of concept hierarchy extraction. The existing ontology learning process only produces the result of concept hierarchy extraction; it does not produce the semantic relations between the concepts. Here, we construct the predicates and first-order logic formulae, and find the inference and learning weights using a Markov Logic Network. To improve the relations for every input and also improve the relations between the contents, we propose the concept of ARSRE.
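The association-rule side of such a pipeline boils down to support and confidence over concept co-occurrences. The "sentences" and concepts below are invented for illustration; the paper's ARSRE method layers a Markov Logic Network on top of this.

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item in itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs, transactions):
    # conf(lhs -> rhs) = supp(lhs ∪ rhs) / supp(lhs)
    return support(lhs | rhs, transactions) / support(lhs, transactions)

# Toy "transactions": concepts co-occurring in parsed sentences.
sentences = [
    {"person", "birthplace", "city"},
    {"person", "birthplace"},
    {"person", "award"},
    {"city", "country"},
]
s = support({"person", "birthplace"}, sentences)
c = confidence({"person"}, {"birthplace"}, sentences)
```

A rule like person -> birthplace with high confidence suggests a candidate semantic relation between the two concepts; the MLN then assigns learned weights to such first-order formulae instead of trusting raw confidence.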
Ontology languages are used in modelling the semantics of concepts within a particular domain and the relationships between those concepts. The Semantic Web standard provides a number of modelling languages that differ in their level of expressivity and are organized in a Semantic Web Stack in such a way that each language level builds on the expressivity of the other. There are several problems when one attempts to use independently developed ontologies. When existing ontologies are adapted for new purposes it requires that certain operations are performed on them. These operations are currently performed in a semi-automated manner. This paper seeks to model categorically the syntax and semantics of RDF ontology as a step towards the formalization of ontological operations using category theory.
This document discusses navigability in social tagging systems. It begins by defining social tagging systems and folksonomies. It then examines factors that influence navigability in social tagging systems like motivations for tagging. It analyzes how tag clouds and hierarchies can be used for navigation but notes that user interface constraints like tag cloud size and pagination can impair navigability. It concludes that certain popular approaches to tag clouds do not support navigability and new approaches are needed that consider the trade-off between semantic and navigational properties.
The keynote presentation at the 2nd Jordan International Conference on Computer Science and Engineering discusses three examples of modeling complex networks using computer science techniques:
1) Analyzing the link structure of websites using link structure graphs to understand site organization and user experience.
2) Studying messages exchanged in online social networks using machine learning to determine levels of agreement and disagreement.
3) Modeling complex biological systems like gene networks using stochastic pi-calculus to study multi-component system interactions.
The presenter is Dr. Natasa Milic-Frayling, a senior researcher at Microsoft Research Cambridge who leads research on information retrieval, machine learning, and user-centered design.
BookyScholia: A Methodology for the Investigation of Expert Systemsijcnac
Mathematicians agree that encrypted modalities are an interesting new topic in the field
of software engineering, and systems engineers concur. In our research, we proved the
deployment of consistent hashing, which embodies the intuitive principles of algorithms.
Our focus in our research is not on whether the World Wide Web and SMPs are largely
incompatible, but rather on presenting an analysis of interrupts (BookyScholia).
Experiences with such solution and active networks disconfirm that access points and
cache coherence can synchronize to realize this mission. W woulde show that
performance in BookyScholia is not an obstacle. The characteristics of BookyScholia, in
relation to those of more seminal systems, are famously more natural. Finally,we would
focus our efforts on validating that the UNIVAC computer can be made probabilistic,
cooperative, and scalable.
SEMANTIC INTEGRATION FOR AUTOMATIC ONTOLOGY MAPPING cscpconf
In the last decade, ontologies have played a key technology role for information sharing and agents interoperability in different application domains. In semantic web domain, ontologies are efficiently used toface the great challenge of representing the semantics of data, in order to bring the actual web to its full
power and hence, achieve its objective. However, using ontologies as common and shared vocabularies requires a certain degree of interoperability between them. To confront this requirement, mapping ontologies is a solution that is not to be avoided. In deed, ontology mapping build a meta layer that allows different applications and information systems to access and share their informations, of course, after resolving the different forms of syntactic, semantic and lexical mismatches. In the contribution presented in this paper, we have integrated the semantic aspect based on an external lexical resource, wordNet, to design a new algorithm for fully automatic ontology mapping. This fully automatic character features the
main difference of our contribution with regards to the most of the existing semi-automatic algorithms of ontology mapping, such as Chimaera, Prompt, Onion, Glue, etc. To better enhance the performances of our algorithm, the mapping discovery stage is based on the combination of two sub-modules. The former
analysis the concept’s names and the later analysis their properties. Each one of these two sub-modules is
it self based on the combination of lexical and semantic similarity measures.
Computational Frameworks for Higher-order Network Data AnalysisAustin Benson
1. The document discusses computational frameworks for analyzing higher-order network data, where interactions can involve more than two nodes. Real-world systems often involve higher-order interactions that are reduced to pairwise connections.
2. The author presents several datasets involving higher-order interactions and shows that predicting the formation of new higher-order connections is similar to link prediction but considers groups of nodes rather than individual links. Structural properties like edge density and tie strength influence the likelihood of simplicial closure.
3. Models are proposed to score open simplices based on structural features and predict which will transition to closed simplices. Accounting for higher-order structure provides new insights beyond traditional network analysis of pairwise connections.
This document discusses the evolution of the scholarly document from printed to digital forms, moving beyond hypertext paradigms to semantic publishing and linked data. It covers:
1) The transition from printed to digital documents and further to semantic publishing and linked data, which erodes the monolithic document notion.
2) How this affects science less than the humanities by changing the basic modes of document signification and conditions of comprehension.
3) The work of RTP-Doc in deconstructing the document notion in digital environments and reconstituting integrity and authenticity without the print analogy.
This document discusses various aspects of information and associativity as they relate to urban entanglements and the development of "wurban" things. It touches on how information is a multidimensional concept that is interpreted and context-dependent. Networks can develop through either logical/deterministic or associative/probabilistic growth, and associativity plays a key role in creating patterns and information from the interactions within dense urban environments. The document questions how future densification might develop in a more open, decentralized, and artful manner that accounts for performance and practice over rigid logics.
The document discusses the vision, architecture, and technology of the Semantic Web. It defines key concepts like semantics, ontology, RDF, and provides an overview of the Semantic Web stack and architecture. Examples of semantic web applications and technologies like SPARQL queries are also presented to illustrate how semantic markup allows machines to understand web content.
Higher-order link prediction and other hypergraph modelingAustin Benson
Higher-order link prediction and other hypergraph modeling can better model real-world systems composed of higher-order interactions that are often reduced to pairwise ones. Hypergraphs allow the modeling of interactions between more than two nodes, like groups of people collaborating, multiple recipients of emails, students gathering in groups, and drug compounds made of several substances.
Challenging Issues and Similarity Measures for Web Document ClusteringIOSR Journals
This document discusses challenging issues and similarity measures for web document clustering. It begins with an introduction to text mining and document clustering. It then reviews related work on similarity approaches and measures. Some key challenging issues in web document clustering are discussed, such as measuring semantic similarity between words and evaluating cluster validity. Various types of similarity measures are also described, including string-based measures like Jaro-Winkler distance and corpus-based measures like latent semantic analysis. The conclusion states that accurate clustering requires a precise definition of similarity between document pairs and discusses different similarity measures that can be used.
TRANSFORMATION RULES FOR BUILDING OWL ONTOLOGIES FROM RELATIONAL DATABASEScsandit
Relational databases (RDBs) are used as the backend database by most information systems, and they encapsulate the conceptual model and metadata needed for ontology construction. Schema mapping is the technique used by all existing approaches for building ontologies from RDBs. However, most of those methods rely on poor transformation rules that prevent the advanced database mining needed to build rich ontologies. In this paper, we propose transformation rules for building OWL ontologies from RDBs that transform all possible cases in an RDB into ontological constructs. The proposed rules are enriched by analyzing stored data to detect disjointness and totalness constraints in hierarchies, and by calculating the participation level of tables in n-ary relations. In addition, our technique is generic, and hence can be applied to any RDB. The proposed rules were evaluated on a normalized, open RDB; the obtained ontology is richer in terms of non-taxonomic relationships.
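The paper's rules are richer than this, but the basic schema-mapping idea can be sketched as follows: tables become OWL classes, plain columns become datatype properties, and foreign keys become object properties. The schema and naming scheme below are invented for illustration, and the data-driven enrichment steps (disjointness, totalness, n-ary participation) are omitted:

```python
# Simplified RDB-to-OWL schema mapping (illustrative rule set only).
def rdb_to_owl(schema):
    """schema: {table: {"columns": [...], "fks": {column: target_table}}}
    Returns a list of (subject, predicate, object) triples."""
    triples = []
    for table, meta in schema.items():
        triples.append((table, "rdf:type", "owl:Class"))
        for col in meta["columns"]:
            if col in meta.get("fks", {}):
                # Foreign key -> object property linking the two classes.
                prop = f"has{meta['fks'][col]}"
                triples.append((prop, "rdf:type", "owl:ObjectProperty"))
                triples.append((prop, "rdfs:domain", table))
                triples.append((prop, "rdfs:range", meta["fks"][col]))
            else:
                # Ordinary column -> datatype property on the class.
                prop = f"{table}_{col}"
                triples.append((prop, "rdf:type", "owl:DatatypeProperty"))
                triples.append((prop, "rdfs:domain", table))
    return triples

schema = {
    "Employee": {"columns": ["name", "dept_id"], "fks": {"dept_id": "Department"}},
    "Department": {"columns": ["label"], "fks": {}},
}
triples = rdb_to_owl(schema)
```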
Simplicial closure and higher-order link prediction LA/OPTAustin Benson
- The speaker proposes a framework called "higher-order link prediction" to evaluate models of higher-order network data. This extends classical link prediction to predict new groups of nodes that will form simplices.
- Analysis of datasets shows that many have many "open triangles" of nodes connected by edges but not in a simplex. A simple probabilistic model can account for variation in open triangles.
- Simplicial closure probability depends on edge density and tie strength between nodes, both for 3 and 4-node groups.
- For higher-order link prediction, the speaker evaluates score functions based on edge weights, structural properties, whole-network similarities, and machine learning to predict which open triangles will close.
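The notion of an open triangle can be made concrete with a small, invented dataset: three nodes that are pairwise connected in the projected graph but never co-appear in a simplex of size three or more:

```python
# Finding open triangles in a toy simplicial dataset (data invented).
from itertools import combinations

simplices = [{"a", "b"}, {"b", "c"}, {"a", "c"}, {"c", "d"},
             {"a", "c", "e"}, {"a", "e"}, {"c", "e"}]

# Project to pairwise edges.
edges = set()
for s in simplices:
    for u, v in combinations(sorted(s), 2):
        edges.add((u, v))

nodes = sorted({v for e in edges for v in e})
closed = {frozenset(s) for s in simplices if len(s) >= 3}

# An open triangle: all three edges present, but no 3-node simplex.
open_triangles = []
for tri in combinations(nodes, 3):
    pairs = combinations(sorted(tri), 2)
    if all(p in edges for p in pairs) and frozenset(tri) not in closed:
        open_triangles.append(tri)
```

Here {a, c, e} is closed (it appeared as a simplex), while {a, b, c} is open; higher-order link prediction asks which open triangles will later close.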
Quality, Relevance and Importance in Information Retrieval with Fuzzy Semanti...tmra
We propose a framework for ranking information based on quality, relevance and importance, and argue that a socio-semantic contextual approach that extends topicality can lead to increased value of information retrieval systems. We use Topic Maps to implement our framework, and discuss procedures for calculating the resource ranking. A fuzzy neural network approach is envisioned to complement the process of manual metadata creation.
This document provides an introduction to a seminar on hypertext and critical theory. It discusses several postmodern theorists and concepts that are relevant to understanding hypertext, including intertextuality, networks, and the readerly vs writerly text. It also examines how some key aspects of hypertext, such as its non-linear structure and links between lexias, relate to and realize ideas from postmodern critical theory. Finally, it raises questions about the effects of hypertext and whether hypertext technologies can achieve or go beyond postmodern goals.
ICPSR - Complex Systems Models in the Social Sciences - Lecture 4 - Professor...Daniel Katz
This document provides an overview of complex systems models in social sciences, focusing on network analysis and community detection methods. It discusses key concepts like directed vs undirected networks, weighted vs unweighted edges, and overlapping vs non-overlapping communities. It also notes important considerations like network resolution, computational complexity, and how community detection results depend on the specific context and questions being examined. A variety of examples are provided, including social networks defined by friendships or voting coalitions.
1. Semantic mapping involves connecting metadata structures like schemas and ontologies, as well as connecting vocabularies of values like knowledge organization systems and knowledge bases.
2. Significant progress has been made in mapping metadata structures to enable interoperability and data integration, though mapping knowledge organization systems remains a challenge due to the sparseness of links between concepts.
3. While automatic mapping tools have improved, manual mapping remains important and applications of mappings could be better categorized to inform mapping requirements and evaluation.
2013 Melbourne Software Freedom Day talk - FOSS in Public Decision MakingPatrick Sunter
Slides from my talk at the Melbourne Software Freedom Day, 21st September 2013, on the topic of Free and Open Source Software (FOSS) in public decision-making, particularly in the policy areas of climate change and transportation.
This document summarizes research posters being presented at a computer science and electrical engineering department research review. It describes 8 posters presented by BS, MS, and PhD students. The posters cover topics such as identifying political affiliations in blogs, statistically weighted visualization hierarchies, voter verifiable optical-scan voting, predictive caching in mobile networks, generating statistical volume models, predicting appropriate semantic web terms, approximating online social network community structure, and utilizing semantic policies for managing BGP route dissemination.
Association Rule Mining Based Extraction of Semantic Relations Using Markov ...dannyijwest
An ontology is a conceptualization of a domain into a human-understandable yet machine-readable format consisting of entities, attributes, relationships and axioms. Ontologies formalize the intensional aspects of a domain, whereas the extensional part is provided by a knowledge base that contains assertions about instances of concepts and relations. With semantic relations it becomes possible, for example, to extract the whole family tree of a prominent personality using a resource like Wikipedia. In a way, relations describe the semantic relationships among the entities involved, which is beneficial for a better understanding of human language. Relations can be identified from the result of concept hierarchy extraction; however, the existing ontology learning process produces only the concept hierarchy, not the semantic relations between concepts. Here, we construct predicates and first-order logic formulas, and find inference and learning weights using a Markov Logic Network. To improve the relations of every input, and the relations between contents, we propose the concept of ARSRE. This method can find frequent items between concepts and convert existing lightweight ontologies into formal ones. The experimental results show good extraction of semantic relations compared to the state-of-the-art method.
Ontology languages are used in modelling the semantics of concepts within a particular domain and the relationships between those concepts. The Semantic Web standard provides a number of modelling languages that differ in their level of expressivity and are organized in a Semantic Web Stack in such a way that each language level builds on the expressivity of the other. There are several problems when one attempts to use independently developed ontologies. When existing ontologies are adapted for new purposes it requires that certain operations are performed on them. These operations are currently performed in a semi-automated manner. This paper seeks to model categorically the syntax and semantics of RDF ontology as a step towards the formalization of ontological operations using category theory.
Here are the key points about using content-based filtering techniques:
- Content-based filtering relies on analyzing the content or description of items to recommend items similar to what the user has liked in the past. It looks for patterns and regularities in item attributes/descriptions to distinguish highly rated items.
- The item content/descriptions are analyzed automatically by extracting information from sources like web pages, or entered manually from product databases.
- It focuses on objective attributes about items that can be extracted algorithmically, like text analysis of documents.
- However, personal preferences and what makes an item appealing are often subjective qualities not easily extracted algorithmically, like writing style or taste.
- So while content-based filtering can reliably recommend items that match these objective attributes, it struggles with the subjective qualities that often drive user preferences.
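A toy sketch of the idea (item attributes and names are invented): build a user profile from the attribute sets of liked items, then rank unseen items by Jaccard overlap with that profile:

```python
# Minimal content-based filtering over item attribute sets (invented data).
items = {
    "item1": {"sci-fi", "space", "adventure"},
    "item2": {"romance", "drama"},
    "item3": {"sci-fi", "robots"},
    "item4": {"space", "history"},
}
liked = ["item1", "item3"]  # the user's positive history

# User profile = union of the attributes of liked items.
profile = set().union(*(items[i] for i in liked))

def jaccard(a, b):
    """Overlap of two attribute sets in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Score unseen items against the profile and recommend the best match.
candidates = {i: jaccard(items[i], profile) for i in items if i not in liked}
best = max(candidates, key=candidates.get)
```

Note what the scoring cannot see: whether "item4" is actually well written is a subjective quality absent from its attribute set, which is the limitation the last point describes.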
The increased potential of ontologies to reduce human interference has a wide range of applications. This paper identifies requirements for an ontology development platform to enable an artificially intelligent web. To facilitate this process, RDF and OWL have been developed as standard formats for the sharing and integration of data and knowledge, where the knowledge takes the form of rich conceptual schemas called ontologies. Based on this framework, an architectural paradigm is put forward for ontology engineering and the development of ontology applications, together with a development portal designed to support ontology engineering, content authoring and application development, with a view to maximal scalability in the size and complexity of semantic knowledge and flexible reuse of ontology models and ontology application processes in a distributed and collaborative engineering environment.
SIX DEGREES OF SEPARATION TO IMPROVE ROUTING IN OPPORTUNISTIC NETWORKSijujournal
Opportunistic Networks are able to exploit social behavior to create connectivity opportunities. This paradigm uses pair-wise contacts for routing messages between nodes. In this context we investigated whether the "six degrees of separation" conjecture of small-world networks can be used as a basis to route messages in Opportunistic Networks. We propose a simple routing approach that outperforms some popular protocols in simulations carried out with real-world traces using the ONE simulator. We conclude that static graph models are not suitable for underlay routing approaches in highly dynamic networks such as Opportunistic Networks without taking account of temporal factors such as the time, duration and frequency of previous encounters.
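One simple way to encode such temporal factors, sketched here with an invented exponential-decay weighting (not the paper's actual protocol): score a candidate relay by how recently and how often it has encountered the destination:

```python
# Encounter-history relay scoring with exponential time decay (invented weighting).
import math

def relay_utility(encounters, now, half_life=3600.0):
    """encounters: past contact timestamps between a candidate relay and
    the destination. Recent and frequent contacts raise the utility."""
    return sum(math.exp(-(now - t) / half_life) for t in encounters)

now = 10_000.0
node_a = [9_800.0, 9_900.0]            # two very recent contacts
node_b = [1_000.0, 2_000.0, 3_000.0]   # more contacts, but long ago

# Prefer the relay whose contact history is fresher, not merely larger.
better = "A" if relay_utility(node_a, now) > relay_utility(node_b, now) else "B"
```

A purely static contact graph would favour node B (three edges versus two); weighting by recency reverses that choice, which is the point the conclusion makes.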
Information residing in relational databases and delimited file systems is inadequate for reuse and sharing over the web. These file systems do not adhere to commonly set principles for maintaining data harmony. For these reasons, such resources suffer from a lack of uniformity, as well as from heterogeneity and redundancy throughout the web. Ontologies have been widely used for solving such problems, as they help in extracting knowledge out of any information system. In this article, we focus on extracting concepts and their relations from a set of CSV files. These files serve as individual concepts and are grouped into a particular domain, called the domain ontology. Furthermore, this domain ontology is used for capturing CSV data, which is represented in RDF format while retaining links among files or concepts. Datatype and object properties are automatically detected from header fields, which reduces the user involvement needed to generate mapping files. A detailed analysis has been performed on Baseball tabular data, and the result shows a rich set of semantic information.
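The core CSV-to-RDF idea can be sketched with the standard library alone (the URIs and naming scheme are invented; the article's automatic datatype/object-property detection is more elaborate than this):

```python
# Lifting a CSV file into RDF-style triples (illustrative sketch only).
import csv
import io

def csv_to_triples(name, text, base="http://example.org/"):
    """Each file becomes a class, header fields become properties,
    and each row becomes an instance of the class."""
    rows = list(csv.DictReader(io.StringIO(text)))
    cls = base + name
    triples = [(cls, "rdf:type", "rdfs:Class")]
    for n, row in enumerate(rows):
        subj = f"{cls}/{n}"
        triples.append((subj, "rdf:type", cls))
        for field, value in row.items():
            triples.append((subj, base + field, value))
    return triples

data = "player,team\nRuth,Yankees\nMays,Giants\n"
triples = csv_to_triples("Baseball", data)
```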
Linked Data Generation for the University Data From Legacy Database dannyijwest
The Web was developed to share information among users through the internet as hyperlinked documents. Anyone who wants to collect data from the web has to search and crawl through those documents to fulfil their needs. The concept of Linked Data creates a breakthrough at this stage by enabling links within the data itself. So, besides the web of connected documents, a new web has developed for both humans and machines: the web of connected data, simply known as the Linked Data Web. Since it is a very new domain, very little work has been done so far, especially on publishing legacy data within a university domain as Linked Data.
Swoogle: Showcasing the Significance of Semantic SearchIDES Editor
The World Wide Web hosts vast repositories of information. Retrieving the required information from the Internet is a great challenge, since computer applications understand only the structure and layout of web pages and have no access to their intended meaning. The Semantic Web is an effort to enhance the Internet so that computers can process the information presented on the WWW, interpret it and communicate with it, to help humans find required essential knowledge. The application of ontologies is the predominant approach supporting the evolution of the Semantic Web. The aim of our work is to illustrate how Swoogle, a semantic search engine, helps make computers and the WWW interoperable and more intelligent. In this paper, we discuss issues related to traditional and semantic web searching, and outline how an understanding of the semantics of the search terms can be used to provide better results. The experimental results establish that semantic search provides more focused results than traditional search.
SVHsIEVs for Navigation in Virtual Urban Environmentcsandit
Many virtual reality applications, such as training, urban design or gaming, are based on a rich semantic description of the environment. This paper describes a new representation of semantic virtual worlds. Our model, called SVHsIEVs, provides a consistent representation of the following aspects: the simulated environment, its structure and its knowledge items using an ontology, and the interactions and tasks that virtual humans can perform in the environment. Our first main contribution is to show the influence of semantic virtual objects on the environment. Our second main contribution is to use this semantic information to manage the tasks of each virtual object. We propose to define each task by a set of attributes and relationships, which determines the links between attributes within tasks and the links to other tasks. The architecture has been successfully tested in 3D dynamic environments for navigation in virtual urban environments.
This document proposes a data model for managing large point cloud data while integrating semantics. It presents a conceptual model composed of three interconnected meta-models to efficiently store and manage point cloud data, and allow the injection of semantics. A prototype is implemented using Python and PostgreSQL to combine semantic and spatial concepts for queries on indoor point cloud data captured with a terrestrial laser scanner.
OntoSOC: Sociocultural Knowledge OntologyIJwest
This paper presents a sociocultural knowledge ontology (OntoSOC) modeling approach. The OntoSOC modeling approach is based on Engeström's Human Activity Theory (HAT). That theory allowed us to identify fundamental concepts and the relationships between them. A top-down process has been used to define the different sub-concepts. The modeled vocabulary permits us to organize data and to facilitate information retrieval by introducing a semantic layer into the social web platform architecture we plan to implement. This platform can be considered a «collective memory» and a Participative and Distributed Information System (PDIS) which will allow Cameroonian communities to share and co-construct knowledge on permanently organized activities.
A Semantic Web Primer: The History and Vision of Linked Open Data and the Web 3.0
There is a transformational change coming to the world-wide-web that will fundamentally alter how its vast array of data is structured, and as a result greatly enhance the way humans and machines interact with this indispensable resource. Given the inertia of existing infrastructure, this segue will be evolutionary as opposed to revolutionary, and indeed has been envisioned since the inception of the web. Come join us for a layman's look at the nature of the Web 3.0, its historical underpinnings, and the opportunities it presents.
A semantic framework and software design to enable the transparent integratio...Patricia Tavares Boralli
This document proposes a conceptual framework to unify representations of natural systems knowledge. The framework is based on separating the ontological nature of an object of study from the context of its observation. Each object is associated with a concept defined in an ontology and an observation context describing aspects like location and time. Models and data are treated as generic knowledge sources with a semantic type and observation context. This allows flexible integration and calculation of states across heterogeneous sources by composing their observation contexts and resolving semantic compatibility. The framework aims to simplify knowledge representation by abstracting away complexity related to data format and scale.
Ontology-Based Resource Interoperability in Socio-Cyber-Physical Systems ITIIIndustries
The paper proposes a core ontology of socio-cyberphysical systems for resource interoperability. The ontology comprises the main concepts and relationships which are identified as relevant to model such systems. The approach considers a socio-cyber-physical system comprising cyber space, physical space, and mental space. In the ontology, these spaces are represented by sets of resources. The ontology provides the resources with a common vocabulary to share information and services and therefore makes these resources interoperable. The core ontology is specialized for a socio-cyber-physical system embedded in robotics domain. Technology of online communities is proposed to be used for resource communication.
Searching for patterns in crowdsourced informationSilvia Puglisi
This document introduces crowdsourcing and discusses discovering patterns in crowdsourced data. It discusses defining the context of volunteered information on the internet in order to understand relationships between data. A network model is proposed where different types of context define nodes and relationships between context determine edges. Properties of small world networks are discussed including how they could be used to model relationships between crowdsourced data and evaluate data quality. Finally, applications to search ranking, privacy and security are briefly mentioned.
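One of the small-world properties mentioned, the local clustering coefficient (the fraction of a node's neighbour pairs that are themselves connected), is easy to compute directly; the graph below is invented for illustration:

```python
# Local clustering coefficient on a small undirected graph (invented data).
from itertools import combinations

adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}

def clustering(node):
    """Fraction of this node's neighbour pairs that share an edge."""
    neigh = adj[node]
    if len(neigh) < 2:
        return 0.0
    links = sum(1 for u, v in combinations(neigh, 2) if v in adj[u])
    return links / (len(neigh) * (len(neigh) - 1) / 2)

# Small-world graphs show high average clustering with short path lengths.
avg = sum(clustering(n) for n in adj) / len(adj)
```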
Lectura 2.2 the roleofontologiesinemergnetmiddlewareMatias Menendez
The document discusses the role of ontologies in supporting emergent middleware. Emergent middleware is dynamically generated distributed system infrastructure that enables interoperability in complex distributed systems.
Ontologies play a key role by providing meaning and reasoning capabilities to allow the right runtime choices to be made. They support various functions throughout an emergent middleware architecture, including discovery, composition, and mediation. Two experiments provide initial evidence of ontologies' potential role in middleware by enabling semantic matching and process mediation. However, challenges remain around generating ontologies and addressing interoperability between heterogeneous ontologies.
The document discusses the trends and advancements of Web 3.0, also known as the Semantic Web. Web 3.0 aims to make internet data machine-readable through standards that encode semantics and metadata. This allows data to be shared and reused across applications through common data formats and exchange protocols. Key technologies that enable the Semantic Web include Resource Description Framework, Web Ontology Language, and SPARQL query language. Challenges to the Semantic Web include dealing with the vastness of data, vagueness, uncertainty, inconsistency, and potential for deceit.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability while sacrificing security. This best-practices guide outlines steps users can take to better protect their personal devices and information.
Webinar: Designing a schema for a Data WarehouseFederico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, which means, denormalised databases where each table represents a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
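A minimal star schema along these lines can be sketched with SQLite from the Python standard library (table and column names are invented): denormalised dimension tables, a fact table at line-item granularity, and a typical aggregation over a dimension attribute:

```python
# Toy star schema: one fact table joined to denormalised dimensions.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, day TEXT, month TEXT);
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
    -- Fact table: one row per sale line item (the chosen granularity).
    CREATE TABLE fact_sales (
        date_id INTEGER REFERENCES dim_date(date_id),
        product_id INTEGER REFERENCES dim_product(product_id),
        quantity INTEGER,
        amount REAL
    );
    INSERT INTO dim_date VALUES (1, '2024-06-01', '2024-06'), (2, '2024-06-02', '2024-06');
    INSERT INTO dim_product VALUES (1, 'widget', 'tools'), (2, 'gadget', 'toys');
    INSERT INTO fact_sales VALUES (1, 1, 2, 20.0), (1, 2, 1, 15.0), (2, 1, 3, 30.0);
""")

# A typical DWH query: aggregate facts, sliced by a dimension attribute.
total_by_category = dict(con.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.category
""").fetchall())
```

Splitting `category` out into its own table would turn this into a snowflake, which the talk advises avoiding.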
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
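Stripped of its index structures, vector search reduces to ranking stored embeddings by similarity to a query embedding. Below is a brute-force, stdlib-only sketch of that core idea (Atlas Vector Search uses approximate nearest-neighbour indexes instead, and the toy vectors here are invented):

```python
# Exact (brute-force) vector search by cosine similarity (invented vectors).
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Pretend document embeddings (real systems use hundreds of dimensions).
docs = {
    "doc_tea":    [0.9, 0.1, 0.0],
    "doc_coffee": [0.8, 0.2, 0.1],
    "doc_rust":   [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]  # pretend embedding of the query "hot drinks"

# Rank every stored vector against the query; top results are most relevant.
ranked = sorted(docs, key=lambda d: cosine(docs[d], query), reverse=True)
```

Exact scanning is O(n) per query; the value of a vector index is answering the same ranking question approximately in sublinear time.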
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
1. (2011) xxx-yyy xxx
»OntoFrac-S« Ontogenesis of Semantic Web with Fractal Federation
Moving from the realm of WWW to GGG
Rolly Seth
Scientist Fellow (QHS)
Council of Scientific & Industrial Research, India,
rolly.seth@gmail.com, rollys@csir.res.in
Keywords: multi-agent, semantic, factotum, fractal, medical, semantic relativity, ontology, EHR, GGG
Abstract: The annals of history bear witness to a plethora of ontologies created for the realization
of the Semantic Web. In order to manage this outburst of ontologies, we propose "OntoFrac-S"
(Semantic-Ontology Fractals) as the way ahead for handling these ever-emergent 'Ontology
Management' requirements. The paper presents a conceptual model for the implementation of the
Semantic Web using 'Fractals' and multi-agents, as applicable to the next level of web systems,
that is, the GGG (Giant Global Graph). A generic approach applicable to all domains is proposed,
and its implementation in the medical domain using 'Factotum agents' is also explained. This paper
can be viewed as a base document for others to build upon towards a semantic world that adheres to
the concept of 'Semantic Relativity'.
1 Introduction
With the day-to-day proliferation of data coming from various heterogeneous sources, the adoption
of a robust 'Ontology Management' system is becoming the de-facto standard. In the present times,
like the data itself, ontologies come in assorted sizes, domains, etc. Diverse attempts have been
made to standardize these ontologies in order to accomplish Tim Berners-Lee's vision of the
Semantic Web [1] by making the data interoperable. Researchers are now concerned with linking the
heterogeneous ontologies even more than the 'data itself', because it is the 'concepts' relating
data that make this data interoperable. In the meanwhile, the father of the 'WWW', Tim Berners-Lee,
has coined yet another term for the next level of networking, which he has named the GGG (Giant
Global Graph) [13]. It highlights the importance of 'Linked Data' in the coming years, where
ontologies will play a more crucial role than ever.
Keeping pace with the latest developments around the world, it is widely accepted that a single
unified ontology would not be sufficient to cater to the ever-growing needs of users, as has been
expressed in [2], [3], [4], [5]. Over the years, the concept of local ontologies has emerged to
account for the fact that some concepts, interpretations, etc. are limited to small communities.
Human clusters prefer interacting in their locally accepted terms rather than following the lingua
franca of the world. This encumbrance hugely hampers the realization of the Semantic Web. However,
researchers have evolved various methodologies to address this concern, focusing on the mapping and
integration of local and global ontologies. Some of them are presented in [6], [7], [8], [9], [10],
[11], [12].
Although these provide sound solutions for local and global ontology mapping, hardly any of them
conceptualizes that 'local' and 'global' are merely relative terms, and that each 'local' ontology
would act as a global one for a cobweb of many more sub-local ontologies within it. This crucial
factor also needs to be addressed while envisioning a globally linked graph. This paper aims to
raise this issue and propose a solution in this regard.
The rest of the paper is organized as follows. Section 2 covers the related work. Section 3 gives a
brief overview of the relativity of global data and of addressing it using the fractal approach.
Section 4 explains our proposed approach, followed by the OntoFrac-S communication algorithm in
Section 5. Section 6 discusses semantic communication in the medical world using the relativity
concept, aided by a multi-agent system of Factotums. Section 7 highlights the importance of
OntoFrac-S, while Section 8 gives a brief overview of the implementation methodology. Section 9
presents the future work and Section 10 concludes the paper.
2 Related Work
Ontology mapping is not a new concept and has been discussed numerous times. The proposed
approaches for accessing globally linked data are mere integrations of local and global ontologies
to provide interoperability of RDF-tagged data. Others have addressed the issue by offering 'on the
fly Semantic Web Services' at the click of a button, or even without human intervention, through
the use of intelligent agents. So far, so good, but it is high time we acknowledge that the world
cannot be conceptualized by assuming that the integration of a single global/foundational ontology
with many local/domain ontologies exists at 'one level' only. The macrocosm in which we live is a
complex system with multiple layers, which might not be visible at first view but become more
evident as we zoom in. With regard to information, this complexity has already been justified by
proposing that data is fractal, or 'self-similar', in nature. In other words, at first view the
data might appear as a specific pattern, but as we zoom in on a specific area, we find that this
pattern reappears in a much more contracted area. This self-similar nature continues at different
levels along with a contraction factor. The papers [15], [16], [17] have highlighted the presence
of this fractal nature in information. Apart from information, Tim Berners-Lee in one of his recent
articles emphasizes seeing the web system itself as a 'fractal' [14]. In order to address the
ever-growing complexities of 'linked data', he proposes visualizing the web system as 'fractal
communities'.
With this view in mind, when we examine the ontological approaches that have evolved over the years
to unify global data, they do not seem to address this fractal nature. If data is fractal (as many
have argued), then the 'local' and 'global' terminology becomes a relative concept depending on the
amount of scaling applied to view the data. This warrants that merely conceptualizing the
integration of local and global schemas and ontologies at one level will not help us achieve the
vision of the GGG. At this juncture it is crucial to study the 'relativity of concepts' through a
layered approach, rather than adopting an unconditional approach to linking the data, which might
not lead us down the correct path. Philosophers like W.V. Quine and Noam Chomsky have already
addressed this issue, and theories as early as 1968 talk about 'Semantic Relativism' and
'Ontological Relativity' [18], [19]. However, these philosophies are often juxtaposed with
technology instead of being adopted in a federated approach. In the upcoming sections we shift our
paradigm onto this well-known but less-trodden path for interconnecting the globe. While at one end
research on language relativity has been given importance, others focus on theories like 'Fractal
Relativity' and 'Space Relativity', which are mainly concerned with time relativity as seen in the
physics and mathematics domains [20]. With such diverse views, 'Relativity' can itself be viewed as
a relative term. This paper, however, aims to apply the philosophical concepts of
ontological/semantic relativity to Semantic Web systems using fractal concepts. At first glance
this might sound like 'fractal relativity'; a deeper analysis, however, shows that the
above-mentioned 'Fractal Relativity' deals with space-time concepts, which this paper does not.
Instead, we intend to address the cross-cultural and language barriers in building the GGG, using a
multi-agent system and a fractal approach. In this regard, the similar works we could relate to are
[21], [22] and [3], which concern the manufacturing domain. They present a task model for
hierarchical fractals in order to accomplish a given task. However, they do not account for the
linguistic barriers between any two fractals, dividing fractals solely on the basis of the task at
hand, and they do not relate to ontologies or Semantic Web information systems.
With regard to the web system, many technological advancements have been made, and it has been
reiterated several times that the next wave of the web (the Semantic Web) will rely heavily on
multi-agents and Semantic Web Services, as can be seen in [23], [3], [24], [25]. Without a doubt,
the backbone of this new era will be provided by ontologies, which will define the relationships
and concepts between the meta-tagged data. The interoperability of this tagged data will further be
provided by ontology integration/mapping. Although these form a major part, another crucial concern
for interoperability is the relativity of data depending upon the frame of reference in which it is
required.
3 Fractals and the Semantic Relativity
The fractal is an age-old concept proposed by Mandelbrot in his paper [26]. He described various
natural structures, like mountains and clouds, through fractal geometry. As mentioned earlier, in
the recent past this fractal nature has also been found in web data itself [27], [28]. In order to
fully utilize the potential of these two, merging the 'geographic fractal' with the 'information
fractal' is of utmost importance. Fractals are structures or patterns which show self-similarity as
they are magnified, depicting the complexity hidden within them. Other features of fractals are
self-organization and self-regulation. Fractal geometry is best described using a power law.
Keeping this mathematical relation in mind, the fractal dimension D of a fractal is calculated as
D = log_s(B) [31], where s is the scaling factor, that is, the depth of zooming done, and B is the
branching factor, that is, the number of branches or sub-fractal patterns a given fractal is
divided into. The fractal dimension D is, in general, a non-integer number. Figure 1 illustrates
this fractal concept: we visualize the whole globe as a big fractal in which various sub-fractals
are present. The number of these sub-fractals at any level is the branching factor of that level,
and each sub-fractal would in turn have many sub-fractals.
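As a quick illustration of the dimension formula above (Python, with hypothetical numbers: a fractal that splits into 5 sub-fractals per 3x zoom), D = log_s(B) is simply the base-s logarithm of the branching factor:

```python
import math

def fractal_dimension(scaling_factor: float, branching_factor: float) -> float:
    """D = log_s(B): dimension of a self-similar pattern that splits into
    B sub-fractals each time we zoom in by the scaling factor s."""
    return math.log(branching_factor) / math.log(scaling_factor)

# Hypothetical fractal: 5 sub-fractals appear per 3x zoom.
D = fractal_dimension(3, 5)   # a non-integer, as the paper notes
```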
Figure 1: Fractal Communities
Each of the depicted branches/sub-fractals of a given fractal can be considered as its nodes. For
each level, a fractal value is assigned to each node in that level. Assuming the fractal value of
the top-level fractal node to be 1, the fractal value of the sublevels is computed as
F(v+1) = s * F(v), where s = C * B(v+1)^(-1/D) [31], and C is a constant such that 0 < C <= 1. The
fractal dimension D denotes the amount of information contained in that fractal, while the fractal
value F(v) can aid us in setting a dynamic abstraction level for viewing the information. Works
like [32], [33], [34] propose that these fractal values help us filter out unneeded content by
setting a fractal threshold: only information above this threshold is made visible to the user,
helping him find the correct information. These two concepts play a vital role in designing a
framework for semantic communication across the globe that takes 'Semantic Relativity' into
account. To explain in simple terms, 'Semantic Relativity' implies that the meaning of terms and
concepts does not change with a change in the frame of reference. An instance of the opposite, the
non-relativity of semantics, has been mentioned above, where the same term is inferred differently
by two people in different frames of reference. The same is depicted in Figure 2.
Here, for a person standing in the 1st frame of reference (the global view), the term 'local' would
mean something adopted in the two countries X and Y. However, for another person standing in the
2nd frame of reference (country X), the same term 'local' would mean something conceptualized
within two different states of the same country X. Semantic elucidation, by specifying the frame of
reference along with the concept in which a term has been conceptualized, is of utmost importance
to make the information interoperable. This proposition is explained in the next section.
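The fractal-value computation and threshold filtering described in this section can be sketched as follows (a minimal Python illustration; the uniform branching factor B, the constant C and the threshold are illustrative choices, not values fixed by the paper):

```python
def fractal_value(level: int, C: float, B: float, D: float) -> float:
    """F(v+1) = s * F(v) with s = C * B**(-1/D), starting from F(0) = 1
    at the top-level fractal node."""
    F = 1.0
    for _ in range(level):
        F *= C * B ** (-1.0 / D)
    return F

def visible(nodes: dict, threshold: float) -> list:
    """Filter: only nodes whose fractal value exceeds the threshold are shown."""
    return [name for name, fv in nodes.items() if fv > threshold]

# Hypothetical three-level view: deeper nodes get smaller fractal values,
# so a threshold hides the lowest level from the user.
nodes = {f"level-{v}": fractal_value(v, C=1.0, B=4, D=1.5) for v in range(3)}
shown = visible(nodes, threshold=0.3)
```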
4 Proposed Approach
One would agree that geographical divisions (which also denote the non-relativity of semantics)
cannot be constrained within Euclidean structures like circles, triangles, etc. The same is the
case with data, which drives us to explore the world of fractals. The ontological complexities that
have become an impediment to the fast-paced progress of the Semantic Web vision can best be
explained and simplified using fractals. Thus, in order to address these complexities, we propose
the 'OntoFrac-S' web, an acronym for 'Semantic-Ontology Fractals' (here, 'S' represents the
Semantic Web).
Figure 2: Example of Non-Semantic Relativity
The 'OntoFrac-S' federated Semantic Web system is conceptualized as a multi-layered, multi-agent
ontological system. The 'OntoFrac-S' fractal federation accounts for the fact that the world is
divided into various regions and sub-regions, each of which follows its own unique ontology. This
is denoted by several Frac-S (Semantic Fractals), each representing a specific geographical region.
Each Frac-S fractal would have a Fractal Focal Point (following a common upper-level local ontology
within that Frac-S), as shown in Figure 3.
[Figure 3 depicts four nested OntoFrac layers (Layer 1 through Layer 4), their Fractal Focal
Points, and the dynamic inter-fractal communication between them.]
Figure 3: OntoFrac-S
This Fractal Focal Point would be represented by an agent in the software system; it is not a
physical entity. Similar to OntoShells in GUN [35], each Frac-S would have a fractal profile: the
Frac-S focal agent would maintain an OntoFrac-S profile containing information about the
sub-fractals and agents in that fractal. Each fractal would have the autonomy to manage the work
inside its profile. A Fractal Focal Point would act as a black box for the outside world, sharing a
common or local ontology for that fractal. This would define only an upper-level ontology for the
fractals inside it. Note that the term 'foundational/upper-level ontology' is itself a relative
term, as mentioned earlier: the actual ontology referred to depends on the frame of reference one
finds oneself in. The granularity of the ontology would increase as the fractal layer level
increases. The fractal would act as an abstraction: one would not need to search for each and every
resource/agent; instead, the fractal focal point, aided by the fractal profile, would do that for
you.
Any outside agent (outside a given fractal) would have to contact the respective Fractal Focal
Point (Frac-S focal agent) before any interaction with the inside fractal world. Other
responsibilities of this Fractal Focal Point, alias the Frac-S focal agent, would be:
- Broadcasting advertised messages to relevant agents/sub-fractals within that fractal.
- Establishing a communication channel between two fractals through ontology mapping and
integration; it may be viewed as the entry point into the fractal.
- Maintaining information about local agents and sub-fractals in the fractal profile.
- Rejecting messages that are not relevant to that fractal profile.
- Acting as the entry/exit point for all global queries.
- Resolving local Frac-S queries.
- Flexibly managing information within its focus.
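A minimal sketch of these focal-agent responsibilities (the class and field names are illustrative, not a fixed API):

```python
class FracSFocalAgent:
    """Fractal Focal Point: the single entry/exit point of its frac-S."""

    def __init__(self, local_ontology: str, topics: set):
        self.local_ontology = local_ontology  # upper-level ontology of this frac-S
        self.topics = topics                  # scope recorded in the fractal profile
        self.agents = []                      # local agents / sub-fractal focal agents

    def register(self, agent_uri: str, keywords: set):
        """Record a local agent in the fractal profile."""
        self.agents.append({"uri": agent_uri, "keywords": keywords})

    def handle(self, message: dict):
        # Reject messages not relevant to this fractal profile.
        if message["topic"] not in self.topics:
            return "rejected"
        # Broadcast advertised messages to the relevant local agents.
        return [a["uri"] for a in self.agents
                if message["topic"] in a["keywords"]]
```

For example, a focal agent scoped to healthcare would reject a finance-topic message but forward a healthcare advertisement to the matching local agents.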
A 'frac-S' is considered to be a region containing a fixed amount of information I. An automatic
dynamic re-configuration would happen within the fractal when this information content I exceeds a
given threshold T, that is, becomes 'unstable', as seen in Figure 4. This re-configuration results
in the automatic creation of new sub-fractal regions within that fractal. Its effect is to make the
fractal 'stable' again by keeping the information content managed by any Frac-S focal point within
the given threshold, as shown in Figure 4.
Q) How would fractal re-configuration result in 'stability'?
A) As mentioned earlier, fractal re-configuration results in the creation of dynamic sub-fractals,
which themselves have the autonomy to manage the information inside them, thereby relieving the
immediate upper-level fractal of some responsibilities. An outside fractal would have to contact
the respective sub-Frac-S to refer to any information inside it. Thus, the issue of information
manageability is solved, and stability follows automatically, since each fractal then manages a
smaller amount of information (within T).
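The re-configuration rule above can be sketched as a recursive split (assuming, purely for illustration, that the information content divides evenly among a fixed number of sub-fractals):

```python
def reconfigure(info: float, threshold: float, branching: int = 2) -> dict:
    """Split an unstable frac-S (info > T) into autonomous sub-fractals,
    recursively, until every focal point manages at most T."""
    if info <= threshold:
        return {"info": info, "subs": []}      # already a stable frac-S
    share = info / branching                   # delegate evenly to sub-fractals
    return {"info": 0.0,
            "subs": [reconfigure(share, threshold, branching)
                     for _ in range(branching)]}

def leaves(frac: dict) -> list:
    """Information content managed at each leaf focal point."""
    if not frac["subs"]:
        return [frac["info"]]
    return [x for sub in frac["subs"] for x in leaves(sub)]
```

After reconfiguring, say, 10 units of information with T = 3, every leaf frac-S manages at most 3 units while the total information is preserved.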
Some would further ask how the adoption of a tree-like hierarchical ontological approach simplifies
things rather than complicating them, or why one should think about ontology relativity when
information is already relative. But assigning each Frac-S focal agent the local ontology of its
Frac-S simplifies the task of ontology management and further enhances interoperability, since
queries need only contact the respective OntoFrac-S focal agents.
Similar to the concept of subsets in mathematics and subclasses in computer science, human
communities inherit some global properties while adhering to exclusive regional properties. For
example, a person living in country C and state S inherits some characteristics from the country,
some from the state, and some others that are individual. The same should be replicated in the
information present in the Semantic Web system, backed by an equally compatible ontology; the
fractal approach allows us to address this feature. Consider a situation where a person says, "You
would have to take it from a nearby bank". The frame of reference defines which 'bank' the person
is referring to. Assuming Frame of Reference (FoR) = Context, the context can be thought of as
divided into two major components, physical and psychological. There are two ways of determining
these contexts: sensors and agents. Sensors are definitely effective, but embedding sensors
everywhere is not feasible. Local agents, that is, software programs, can replicate the same
function without incurring the much higher cost of physical sensors. In order to perform this task,
the OntoFrac-S model would be an aid. To ascertain the context, local agents (local to any fractal)
would be
[Figure 4 shows an unstable Frac-S (managed information content > threshold) undergoing dynamic
re-configuration by its Fractal Focal Point into a stable Frac-S (managed information content <
threshold), through the dynamic creation of sub-fractals, each with its own focal agent and the
autonomy to manage resources within it.]
Figure 4: Dynamic Reconfiguration of Frac-S.
supported by the respective Frac-S's ontology. Referring to our previous example of the bank, the
agent would first determine the bank's FoR before solving the task at hand, as seen in Figure 5.
This initial determination of the FoR before solving the problem is very important in order to find
the correct solution.
In the OntoFrac-S system, human society is assumed to be divided into nested fractals, and each of
its Frac-S fractals would have its own local terminology along with some concepts inherited from
upper-level fractals. An agent within a Frac-S would refer to the corresponding focal point (Frac-S
focal agent) to learn the local ontology that a particular resource follows. This would reduce the
burden of referring to the complex higher-level ontologies. In our example, in order to determine
the 'bank' context, any agent within the fractal would learn from Frac-S X that it follows the
Medical Institution ontology. This in turn would suggest that the person might be referring to a
'blood bank' instead of the river bank or a money bank, as shown in Figure 6. We say 'might'
because this context represents only a part of the physical context of Figure 5.
[Figure 5: the Frame of Reference (FoR) is the context, divided into a physical component, with
environmental and syntactical sub-components (determined using physical sensors, or agents
supported by the Frac-S ontology), and a psychological component (determined using agents).]
Figure 5: Determination of Frame of Reference
It might happen that, even though the person is in fractal X, he is referring to the ontology of
fractal Z, as shown in Figure 6. This might be the case if the person has to withdraw money from
the bank to pay the medical bills. This draws our attention to the sub-component 'psychological
context', which has not yet been taken into account. An agent would use learning algorithms to
decide which ontology the person refers to when he says 'bank'. Initial judgments are formed using
the syntactical structure of the physical context, in which the stated sentence is analysed to find
the context. If this does not help in arriving at a solution, the psychological context of the
person is checked by referring to the person's experience database. If both strategies fail to
provide a confident solution, the environment of the person is assumed to be the context, by
referring to the associated Frac-S focal point's ontology. Although initially the physical context
would dominate, with learning the psychological context would start playing a major role, denoting
the nearest Frac-S ontology that the person might be referring to. To search for this ontology, the
concerned agent contacts its Frac-S focal point. If no matching ontology is found, the Frac-S
queries its upper-level fractal to search for that ontology in the neighboring fractals. This
process continues iteratively, each step adopting a more global fractal search.
Thus, in our example, a person standing in the physical context of a hospital might refer to a
financial institution as 'bank'. Initially, the agent might not be able to infer this correctly if
the sentence does not clarify it, but the psychological memory/context built over time would help
identify that the person actually means the 'Financial Institution' ontology. On identifying the
context, the agent (inside Frac-S X) first queries its own Frac-S X focal point to know whether
such an ontology exists inside it. Since this query returns 'No', Frac-S X further queries the
Frac-S W focal point (another agent) to search for the 'Financial Institution' ontology in Frac-S
X's neighboring Frac-S fractals. Since Frac-S W contains such an ontology, it maps the 'Medical
Institution' and 'Financial Institution' ontologies, and a dynamic inter-fractal communication
chain is formed between Frac-S X and Frac-S W for further communication until the query has been
solved.
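The escalating lookup in this example can be sketched as follows (a toy in-memory model; real Frac-S focal agents would exchange messages rather than share objects, and the names are illustrative):

```python
def find_in_subtree(frac: dict, ontology: str):
    """Does this frac-S (or any of its sub-fractals) host the ontology?"""
    if ontology in frac["ontologies"]:
        return frac["name"]
    for sub in frac["subs"]:
        hit = find_in_subtree(sub, ontology)
        if hit:
            return hit
    return None

def resolve(frac: dict, ontology: str):
    """Ask the local focal point first; on failure, escalate to the parent so
    neighboring frac-S fractals are searched at ever more global levels."""
    node = frac
    while node is not None:
        hit = find_in_subtree(node, ontology)
        if hit:
            return hit
        node = node.get("parent")   # escalate the query one fractal level up
    return None

# Toy world: frac-S W hosts the financial ontology and contains the medical frac-S X.
X = {"name": "X", "ontologies": {"Medical Institution"}, "subs": [], "parent": None}
W = {"name": "W", "ontologies": {"Financial Institution"}, "subs": [X], "parent": None}
X["parent"] = W
```

An agent inside X asking for the 'Financial Institution' ontology fails locally and is answered by W, mirroring the inter-fractal chain described above.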
A similar process would be adopted if the statement were "You would have to take the money from a
nearby bank". Here, instead of the psychological context, the syntactical context (finding the
semantics of the sentence using its syntax) is of much higher value, as the sentence itself
indicates that the person is referring to the 'Financial Institution' ontology. But how is the
syntactical context formed? First, the syntactical structure (a sub-component of the physical
context, see Figure 5) of the sentence identifies the <Subject> and <Object> in the sentence. Then
a search (similar to the one given above) is made for an ontology in which the <Subject>
<Predicate> <Object> triple (<Bank> <Relation> <Money> in our case) occurs as an RDF tag.
Thereafter, an inter-Frac-S communication channel is established for further communication. In case
the syntactical structure is of no aid in forming an RDF triple, the other parts of the context are
evaluated to find an answer.
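A toy sketch of this triple lookup (a real system would use an NLP parser and an RDF store; the triples and names here are illustrative):

```python
def ontologies_with_triple(subject: str, obj: str, ontology_triples: dict) -> list:
    """Return the ontologies containing some <Subject> <Predicate> <Object>
    triple that links the given subject and object."""
    return [ont for ont, triples in ontology_triples.items()
            if any(s == subject and o == obj for s, _, o in triples)]

# Hypothetical RDF-style triples per ontology:
triples = {
    "Financial Institution": [("Bank", "stores", "Money")],
    "Water Bodies": [("Bank", "borders", "River")],
}

# Syntactical context for "take the money from a nearby bank":
hits = ontologies_with_triple("Bank", "Money", triples)
```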
[Figure 6 shows the sentence "You would have to take it from a nearby bank" uttered in a hospital,
with the surrounding fractals X, Y, Z, W and U each hosting their own ontology (Medical
Institution, Water Bodies/River, Financial Institution), so that 'bank' may denote a blood bank, a
river bank, or a money bank. The Fractal Focal Point / Frac-S agent determines the Frame of
Reference (with regard to the physical context) to be Hospital using its associated ontology.]
Figure 6: Different contexts of the word "Bank"
It is worth mentioning, as is clearly visible from the above examples, that a number of candidate
ontologies would be generated (with respect to different components and sub-components) when
determining a frame of reference. At different points in time, different components/sub-components
hold different priorities, and the ontology with the highest priority is selected as the Frame of
Reference. This priority can be determined by attaching a confidence coefficient to each of the
ontologies generated from the various components: FoR = ontology with the maximum of
(C1*Syntactical + C2*Environmental + C3*Psychological). As a rule of thumb, the syntactical context
is formed first, and if its confidence is > 0.8 the other contexts are not searched; this saves the
unnecessary overhead of searching for other possibilities when we already know which context the
person actually means. Only when not enough semantic information can be inferred from the
syntactical context are the other two components of the FoR calculated. We strongly urge that
personalization and filtering, which are usually considered the last module, should be considered
the first, to reduce the overload on the web system. Further, any new agent in OntoFrac-S would
only need to register with the nearby Frac-S fractal point instead of performing a global
registration, which makes the task easier. Figure 7 shows this registration process, where a basic
data set is enrolled with the Frac-S focal agent to ease the search process. As can be seen from
the figure, there is no need to register in the upper-level Frac-S fractals; a simple registration
suffices, as each Frac-S has the autonomy to manage the agents within it. Note that the last column
of the OntoFrac-S profile (Figure 7) is access rights: while registering, any resource must inform
its respective Frac-S focal point who may access it and under what conditions. Such a requirement
caters to the security concern of sharing data only with
authorized users. This requirement is of utmost importance in several cases, like EHRs (Electronic
Health Records), which, although readily available, should be accessed only by authorized users who
provide an encrypted code in their request/query. Note that any resource can be registered using
this process: apart from human beings, equipment, etc. can also be registered and have an agent
associated with it, which we here call a 'Factotum'. Although resources belong to various
categories, each would initially be registered by some person who provides the essential
registration details. This would later help in providing the so-called 'on the fly Semantic Web
Services' by various equipment, instruments, etc. Section 5 explains the step-by-step procedure
adopted in formalizing this decision.
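A minimal sketch of the confidence-weighted FoR selection described above (the weights C1-C3 are hypothetical; the paper fixes only the 0.8 short-circuit rule for the syntactical context):

```python
def frame_of_reference(syntactical: dict, environmental: dict,
                       psychological: dict, C=(0.5, 0.3, 0.2)) -> str:
    """FoR = ontology maximizing C1*syntactical + C2*environmental +
    C3*psychological; other contexts are skipped when syntax is confident."""
    best = max(syntactical, key=syntactical.get)
    if syntactical[best] > 0.8:        # rule of thumb: confident syntax wins
        return best
    candidates = set(syntactical) | set(environmental) | set(psychological)

    def score(ont: str) -> float:
        return (C[0] * syntactical.get(ont, 0.0)
                + C[1] * environmental.get(ont, 0.0)
                + C[2] * psychological.get(ont, 0.0))

    return max(candidates, key=score)
```

For instance, an unambiguous sentence yields a high syntactical score and decides the FoR on its own; otherwise the environmental and psychological scores tip the balance.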
[Figure 7 shows a person sending, via a GUI, a registration request of the form <Request Name,
Resource Name, Resource Type, Domain Ontology (if any), Keywords list, Access Rights>, e.g.
<Registration_Request, Dr. ABC, Human, Physician, http://www.abc.com/healthcare,
(healthcare, general_physician), All>, to the Frac-S focal agent. The focal agent adds a row to its
Frac-S Onto Profile (columns: Agent URI, Resource Name, Resource Type, Domain Ontology
(foundational/domain), Keywords, Access Rights) and returns the registration acknowledgement
<Successful, Associated_Agent_URI>.]
Autonomy and flexibility of Frac-S reduces the
complexity of registering
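The registration exchange of Figure 7 can be sketched with illustrative data structures (the field names follow the figure; the API itself is hypothetical):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class RegistrationRequest:
    resource_name: str          # e.g. "Dr. ABC"
    resource_type: str          # Human, equipment, ...
    domain_ontology: str        # URI of the domain ontology, if any
    keywords: Tuple[str, ...]
    access_rights: str          # who may access this resource, and when

class FracSOntoProfile:
    """Frac-S Onto Profile maintained by the focal agent."""

    def __init__(self, focal_uri: str, foundational_ontology: str):
        self.focal_uri = focal_uri
        self.foundational_ontology = foundational_ontology
        self.rows = {}                       # agent URI -> RegistrationRequest

    def register(self, agent_uri: str, req: RegistrationRequest):
        """Enroll a resource locally; no upper-level registration is needed."""
        self.rows[agent_uri] = req
        return ("Successful", agent_uri)     # registration acknowledgement
```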
5 OntoFrac-S Communication Algorithm
Due to the fractal nature of the regions divided into Frac-S, the adopted algorithm holds at any
scale. This aids the testing and implementation of the OntoFrac-S system, wherein a small-scale
system replicates the behaviour of the global linked-graph system. OntoFrac-S also provides the
flexibility of scaling up without incurring extra complexity. Let us look at the algorithm followed
for finding solutions to various queries in the global system.
The following notations are used in the algorithms:
/* */ : comments
-> : assignment/send
=> : processing LHS to find RHS
== : equals
{ } : group
A. OntoFrac-S Algorithm:
Step 1: Task Ti -> Agent Atg
/* Task Ti is assigned to an agent */
Step 2: Ti = (St1, St2, ..., Stn)
/* The agent divides the assigned task into sub-tasks */
Step 3: St1 -> Agent Atg
/* The agent takes up St1 to be solved */
Step 4: Calculation of the Frame of Reference (FoR):
a). Calculating the syntactical context:
i). St1 statement => <Si> and <Oi>
If (<Si> == undetermined) then
object_to_be_found = <Si>
If (<Oi> == undetermined) then
object_to_be_found = <Oi>
loop_var = 1
While (loop_var != n and object_to_be_found == "undetermined")
{
If (loop_var != i)
St_loop_var => object_to_be_found
loop_var = loop_var + 1
}
/* Identification of the subject <Si> and object <Oi> of St1 by first analysing St1 alone. If either <Si> or <Oi> cannot be determined from St1, all other sub-task statements are used to identify whichever of the two was not found */
If (<Oi> == undetermined or <Si> == undetermined) then
proceed to Step 4b
/* The Frame of Reference cannot be searched on the basis of the statement given */
else
goto Step 4a ii.
/* Both subject and object found for determination of the FoR */
ii). Bid_Proposal = <query_id, initiator agent URI, associated frac-S agent URI, query, deadline>
/* A bid proposal is formed to determine the location of the associated ontology in which the terms are used, so as to remove relativity */
iii). Bid_Proposal => Contract Net Protocol (CNP) (see the CNP algorithm below).
iv). If (winner == 1 and no_of_winners == 1) then
FoR = Onti
goto Step 5
else
goto Step 4b
/* If more than one ontology is found using the information given in the task statement, they are short-listed using the psychological context to arrive at a single FoR */
b). Calculating the psychological context:
i). If (winner == 1 and no_of_winners > 1) then
goto Step 4b iii
else
goto Step 4b ii
ii). Search (Exp_DB, <Si>/<Oi>, Sti) => Ontology Onti
/* Searching the experiential DB for the relevant ontology Onti to which the person might be referring, using the subject or object terms found in the taken sub-task. If both subject and object are not present in Sti, whichever of the two is available is used */
iii). If (ont_found == 1 and num_ont == 1) then
FoR = Onti
proceed to Step 5
else
goto Step 4c
/* If the FoR cannot be determined using the psychological context, proceed to the environmental context */
c). Calculating the environmental context:
i). If (num_ont > 1) then
Query (nearest_Frac-Si ontology, <Si>, <Oi>) -> Frac-Si Focal Agent
/* Querying the frac-S focal point (fractal agent) in which the initiator agent is present to find the nearest frac-S containing an ontology that has the required <Si>, <Oi> or both (whichever has been identified using the syntactical context) */
FoR = nearest_Onti
/* The nearest upper-level frac-S (in which the ontology is found) is considered to be the environment in which the person is present */
else
goto Step 4c ii
ii). If (winner == 0) then
Query (user, Sti) -> clarifications
/* If none of the three methods finds a single perfect solution for the determination of the FoR, the user is asked to clarify which FoR he is referring to in the current activity */
Wait (user_clarif == 0)
If (user_clarif == 1) then
proceed to Step 5.
Step 5: Query (ontology_integration, FoR frac-S, current frac-S)
/* Requesting the initiator frac-S focal point for ontology integration between the FoR ontology and the initiator frac-S ontology */
Wait (ont_int_under_process)
Step 6: Sub_task_Statement_Revision (Aci, FoR, integrated ontology)
/* Providing clarification in the activity statement using the finalized FoR; based on the ontology integration, the sentence is completed to make it a global query */
Step 7: Bid_Proposal = <query_id, initiator agent URI, associated frac-S agent URI, query (using the revised activity statement), deadline>
Step 8: CNP (Bid_Proposal)
Step 9: Dynamic inter-frac-S chain (present frac-S, selected bidder)
/* Establishment of a dynamic inter-frac-S chain with the selected bidder for further communication via ontology integration */
Step 10: Wait (query_solution != 1)
/* Waiting until the announced winner provides the query solution. Note that the selected winner can itself get the task/sub-task done by some other agent using this same process, but that is internal to the winner and is not included in this algorithm */
Step 11: Presentation (user, solution) -> GUI
/* Presentation of the information to the user by the initiator agent */
Step 12: If (person satisfied with solution) then
Update (Experience_DB)
/* Saving the query, solution and FoR details for future reference */
Step 13: If Sti is not the last sub-task, go to Step 4, else stop.
/* Repeat the same process for all other activities */
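The three-context cascade of Step 4 (syntactical context, then psychological context via the experience database, then environmental context, then user clarification) can be sketched in Python as follows. This is an illustrative sketch only; the function and parameter names are our own, not part of the OntoFrac-S specification:

```python
def determine_frame_of_reference(syntactic_candidates,
                                 experience_hits,
                                 environmental_nearest,
                                 ask_user):
    """Resolve the Frame of Reference (FoR) by trying each context in turn.

    syntactic_candidates: ontologies found via CNP from the statement itself
    experience_hits: ontologies matched in the experiential database
    environmental_nearest: ontology of the nearest enclosing frac-S, or None
    ask_user: callback that asks the user for clarification
    """
    if len(syntactic_candidates) == 1:      # Step 4a: a unique CNP winner
        return syntactic_candidates[0]
    if len(experience_hits) == 1:           # Step 4b: unique experiential match
        return experience_hits[0]
    if environmental_nearest is not None:   # Step 4c: nearest frac-S ontology
        return environmental_nearest
    return ask_user()                       # last resort: user clarification
```

Each stage is consulted only when the previous one fails to yield exactly one ontology, mirroring the goto structure of Steps 4a to 4c.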
B. Contract Net Protocol (CNP) Algorithm:
Step 1: Broadcast (Bid_Proposal) -> Frac-Si Focal Agent
/* Send the bid proposal to the frac-S focal point for broadcast */
Step 2: Frac-Si Focal Agent (Bid_Proposal) -> Agents / Sub-Frac-S Focal Agents
/* The frac-S focal point broadcasts the bid proposal to all other agents and sub-frac-Ss inside that frac-S */
Step 3: While (frac-S_focal_value < threshold)
Frac-Si Focal Agent (Bid_Proposal) -> upper-level Frac-S Focal Agents
/* The focal point also contacts its upper-level frac-S focal points (not contacted before with respect to this bid) to send the bid to adjacent and other upper-level frac-Ss using ontology mapping */
Step 4:
If (Frac-S Focal Agent (received_bid) == 1) then
{
Compare (Bid, Frac-S Onto Profile)
Accept_or_Reject (Bid_Proposal)
If (Accept_or_Reject (Bid_Proposal) == "Accept") then
Send (Bid_Proposal) -> concerned Sub-Frac-S Focal Agents, Agents
else
Reply (Bid_Not_Accepted) -> initiating Frac-Si Focal Agent
}
/* Each frac-S focal agent to which the bid proposal is sent compares the bid with its fractal profile and has the autonomy to accept broadcasting (if it deems the bid relevant) or to reject broadcasting within its fractal */
Step 5: Agent_Response = <query_id, (initiator agent URI, associated frac-S agent URI), bid acceptance (Y/N), confidence match, specialization, estimated time>
/* Reply by giving a bid response to the initiator agent using ontology mapping */
Step 6: If (time spent T > deadline time) then
goto Step 8.
Step 7: Bid evaluation (see part C of the algorithm)
Step 8: If (Bid_evalReply == "Successful") then
goto Step 10
/* Announcement of the successful bidder */
else
goto Step 9
Step 9: If (Bid_evalReply == "Unsuccessful") then
{ If (frac-S threshold > min_frac-S threshold) then
/* More upper-level fractals available */
{
frac-S threshold value = frac-S threshold value - decrease_factor
goto Step 3
/* To contact more upper-level frac-Ss */
}
else
Return (winner = 0)
}
Elseif (winner == 1 and no_of_winners > 1) then
Return (winner = 1, no_of_winners > 1, details of equal scorers)
Step 10: Inform (successful_bidder, terms of agreement)
Step 11: Wait (successful_bidder -> ack)
/* Wait for the acknowledgement from the successful bidder accepting the terms of the contract */
Step 12: If (bidder_ack == "Yes") then
{ Clear (bid_eval_buffer)
If (Agentx (submitted_bid) == 1) then
{
Broadcast (query_id, (initiator agent URI, associated frac-S agent URI), winner_details)
/* Announce the successful winner to all agents who submitted a bid */
}
Return (winner = 1, no_of_winners = 1, winner_details)
}
else
Bid_eval (bidder_ack = "No")
/* The bidder did not accept the contract */
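The escalation behaviour of Steps 3 and 9, where a failed evaluation lowers the frac-S threshold and contacts the next upper level, can be sketched as follows. The `FracS` class and its fields are hypothetical stand-ins for the focal-agent hierarchy, not an API defined in this paper:

```python
class FracS:
    """A frac-S region with a focal point: holds local agents and a parent."""
    def __init__(self, name, parent=None, agents=None):
        self.name = name
        self.parent = parent        # upper-level frac-S focal point, if any
        self.agents = agents or []  # bid functions of agents in this frac-S

    def collect_bids(self, proposal):
        # Each agent either answers the broadcast with a score or declines.
        bids = [agent(proposal) for agent in self.agents]
        return [b for b in bids if b is not None]

def contract_net(start, proposal, threshold, min_threshold, decrease_factor):
    """Broadcast locally first; on failure, lower the threshold and escalate."""
    level, t = start, threshold
    while level is not None and t >= min_threshold:
        bids = level.collect_bids(proposal)
        if bids:                       # a successful evaluation at this level
            return max(bids)           # stand-in for the bid-score ranking
        t -= decrease_factor           # Step 9: lower the threshold ...
        level = level.parent           # ... and contact the upper level
    return None                        # winner = 0: no successful bidder
```

The bottom-up order (local frac-S first, global levels only on failure) is what keeps the broadcast traffic bounded.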
C. Bid Evaluation Algorithm:
Step 1: If ((bid_eval == 1) and (bidder_ack == "No")) then
{ Remove (successful_bid -> bid_eval_buffer)
goto Step 8
/* Remove the unacknowledged bid from the bid evaluation buffer and re-evaluate */
}
else
goto Step 2
Step 2: bid_eval = bid_response1
Step 3: /* Check the received bid response's announcement identifier */
If (received_announcement_id == required_announcement_id) then
proceed to Step 4
else
reject bid
goto Step 6
Step 4: Bid_Response_sim_perct = Percenti
/* Assign to each bid response a similarity percentage between the announced bid and the received bid response */
Step 5: Bid_Response_Scorei = Percenti / Prpsd_Cmpltn_Timei
/* Assign a score to the bid using the formula: specialization similarity percentage / proposed task completion time */
Step 6: If (all_bids_evaluated == 1) then
goto Step 7
else
{
If (Bid_Response_Scorei > Highest_Score) then
{ Highest_Score = Bid_Response_Scorei
}
eval_bid = next_bid_response
goto Step 3
}
/* While not all proposals have been evaluated, take the next bid for evaluation */
Step 7: bid_eval_buffer=[All evaluated bids ]
Step 8: w_bid = bid_response1
no_of_winners = 0
While (w_bid != last_bid_response)
{
If ((Bid_Response_Scorei == Highest_Score) and (Bid_Response_Scorei > 80)) then
{ If (no_of_winners == 0) then
{ Successful_Bid = Bid_Responsei
winner = 1
no_of_winners += 1
}
else
Return (winner = 1, no_of_winners > 1, details of equal scorers)
}
w_bid = next_bid_response
}
/* The bid with the maximum score, provided the score is > 80, is assigned as the "successful bid" and winner = 1. However, if more than one bidder has the highest score, the details of all of them are sent to the agent */
Step 9: If (no_of_winners == 0) then
Return (bid_eval_response = "Unsuccessful")
/* If no unanimous bidder (having a score > 80) wins, an "Unsuccessful" message is sent to the initiator agent */
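Putting Steps 4 to 9 together: a bid response scores its specialization-similarity percentage divided by its proposed completion time, and wins only if it holds the unique highest score above 80. A minimal sketch, with illustrative names of our own:

```python
def evaluate_bids(responses):
    """responses: list of (bidder, similarity_percent, completion_time).

    Returns ('winner', bidder), ('tie', [bidders]) or ('unsuccessful', None),
    following the score rule similarity percentage / completion time, with a
    winning bid required to exceed a score of 80.
    """
    scored = [(sim / time, bidder) for bidder, sim, time in responses]
    if not scored:
        return ("unsuccessful", None)
    best = max(score for score, _ in scored)
    if best <= 80:                      # no bid clears the quality bar
        return ("unsuccessful", None)
    winners = [b for score, b in scored if score == best]
    if len(winners) > 1:                # equal top scorers: report them all
        return ("tie", winners)
    return ("winner", winners[0])
```

For example, a bid with 95% specialization similarity and a proposed completion time of 1.0 scores 95 and beats a 90% bid with the same completion time.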
5 Semantic Communication in
the Complex Multi-Agent Medical
World using OntoFrac-S
The Semantic Web will not prove to be a boon unless it aids humans in performing their various operations. The applications to which the 'GGG' is put will decide the fate of this next-level technology. One of the major areas pinning high hopes on this Globally Linked Graph is medical science. Long-envisioned concepts like tele-medicine will only blossom fully after the successful implementation of the Semantic Web. We therefore cite a real-world example from the medical world and show how such a situation would be efficiently managed by adopting the 'OntoFrac-S' methodology.
Physicians often need to consult fellow physicians to decide on a medical problem. They may also require assistance in referring to similar cases or in fetching medical data from a distance. To aid physicians in performing these tasks, agents have been considered a reliable option, as seen in [36], [37], [38]. Using this as a starting point for our example, we assume that a 'Factotum Agent' is associated with each physician. Here the word 'factotum' means an 'all-purpose assistant'. A factotum agent thus performs all necessary tasks for a physician in the web world and aids his efficient decision making. Much of the work on multi-agent systems [39], [40], [41] concentrates on the formation of a team of agents in which each agent performs a different task to fulfil a preset aim. Very few propose attaching agents to individuals. We have adopted this latter approach, as we strongly feel that in the Semantic Web each agent needs an individual identity, just like the resource itself. As proposed in the Semantic Web approach, each resource, be it a human, a thing, etc., has a URI associated with it. However, keeping in mind the security and management concerns of the resource, each resource also needs a 'brain'. This brain is provided by associating an exclusive agent with the resource: the agent grants access to authorized users, manages information, and provides flexibility, autonomy and collaboration in the web system. In doing so, the URI linked to a resource is efficiently managed by its respective agent. This agent, in the medical world, we have called a 'Factotum'.
Considering this Factotum Agent to be present in the OntoFrac-S world, let us see how real-world problems in the medical domain would be tackled. Our situation is as follows:
Situation: 'A patient approaches a physician to get diagnosed. He has been having a high fever for some days and has therefore got his medical tests done at a nearby hospital named "HSPTL" on the recommendation of a doctor named "DCTR". However, he has not collected his reports from the Pathology Section (PTHLGY) of the hospital. He now approaches the physician to get diagnosed.'
Situation management using OntoFrac-S: Having provided the detailed algorithm of OntoFrac-S, let us understand how this patient-physician situation would be handled in the Semantic Web using our proposed approach. On the arrival of the patient (Pati), the physician (Phi) assigns his associated Factotum Fac_i the task of 'first getting the EHR from the PTHLGY section of HSPTL and then getting suggestions from friend physicians on the possible illness of the patient using this EHR'. To do so, Phi commands Factotum Fac_i: "Get the report of Mr. ABC from the PTHLGY section of the nearby HSPTL hospital and then consult other physicians on the illness." After receiving the request from Phi, agent Fac_i in frac-S 'Fr-Oi' divides the process into:
Sub-task 1: getting the EHR of Mr. ABC, who is having a high fever, from the nearby HSPTL hospital;
Sub-task 2: using this EHR, consulting friend physicians on the illness of Mr. ABC, who is having a high fever.
In accomplishing each of the two sub-tasks, the following two major stages are encountered:
i) identifying the Frame of Reference in which the task has been allotted, in order to form a non-relative query;
ii) bidding and finding the solution using the non-relative query framed in i).
The first stage is divided into three steps:
i) searching the assigned task sentence to provide non-relativity in the query; if this does not succeed, follow step ii;
ii) searching the experience database of the physician; if this still does not succeed, follow step iii;
iii) finding the environmental context and querying the user/physician for clarifications (if required).
Following this methodology, in the first sub-task the term 'nearby' is vague and would sound different to people in different frames of reference. Thus, bidding on the identified sub-task cannot simply be performed. The query first has to be made non-relative by clearly identifying which HSPTL hospital the physician is referring to, since more than one hospital may bear the name 'HSPTL'. Clarity is therefore first sought in finding the exact URI of this hospital using the FoR part of the algorithm (as described in the previous section). Next, a non-relative query is formed by replacing the ambiguous term 'nearby' with the exact location (URI) of the HSPTL the physician is referring to. The Contract Net Protocol is then followed in an iterative frac-S order (from local to global) to search for an agent who would get this sub-task done. Having finished with sub-task 1, the Factotum proceeds to sub-task 2.
Here again, the two-stage process mentioned above is followed. This time the ambiguity lies in the phrase 'friend physicians'. This equivocalness is removed by first finding which physicians Phi is referring to. Here the syntactical sentence does not provide much help, and the FoR is generally found in step ii of stage 1 (that is, using the experience database). The physician's database will generally aid in finding out who this physician's friends are. Having found the FoR, a non-relative query is formed by clearly identifying the URIs of the physicians to whom the non-relative contract-net bid would be sent for identification of the illness.
Figure 8: A brief pictorial representation of the OntoFrac-S communication
Although we have explained the process in simpler terms, the rigorous algorithm given earlier would be followed for each of the divided sub-tasks.
One implicit prerequisite is worth mentioning here, keeping in view the security concerns that the medical world has regarding the Semantic Web: the patient provides a key to his EHR (much like a bank account number) to the physician he has come to for consultation. This key becomes part of the essential details provided to the successful bidder after signing an online contract, which stipulates that the EHR will not be used unlawfully.
Figure 9 shows the sequence diagram of how the task assigned to the factotum would be carried out.
7 OntoFrac-S Advantages
Let us see how some of the features essential for
the implementation of Semantic web will be provided
by OntoFrac-S approach:
Collaboration: Provided by using multi-agents called Factotums.
Autonomy: Each Frac-S fractal region has the autonomy to self-manage the agents within it and to coordinate with the sub-frac-S fractals it contains. Each also has the autonomy to follow its own local ontologies without standing in the way of interoperability and globalization.
Flexibility & Adaptability: Provided using the Contract Net Protocol, which forms dynamic frac-S chains depending on the task at hand. It also provides the flexibility of registering or removing any agent without disturbing the whole system; in other words, it provides the 'Re-configurability' option.
Interoperability: Using RDF tagging, ontology
mapping and integration
Context Awareness: Provided by finding Frame
of Reference before starting to solve the task and
framing a ‗non-relative‘ query for bidding
Intelligence: Using the experience database and
by providing context awareness feature.
Stability: Apart from the re-configurability
option mentioned above, stability is provided by
dynamic re-configuration of the frac-S (see
Figure 4) when information level crosses a given
threshold point.
Modularity: The global data is divided into frac-S modules, which increases manageability and encapsulation, as each global agent has to contact the respective frac-S focal agent, and it is up to that agent to decide whether to hide or disclose the data inside.
Efficiency: As each resource is attached to an agent, namely a factotum, the efficiency of queries increases over time through the experiential learning capabilities of the associated agent. Efficiency is further increased by searching fewer data locations, since only the respective focal points are contacted instead of all agents and resources.
Distributed-ness: OntoFrac-S adopts a distributed approach to providing interoperability between distributed data using heterogeneous ontology mapping.
Open Data and Accessibility: The main aim of the Semantic Web is to provide information from all across the web 'on a need-to-know basis'. The OntoFrac-S algorithm explains accessing this open data using dynamic chains. Quick accessibility is provided by the hierarchical frac-S search methodology (a bottom-up approach, from the local frac-S to more global ones), as less data has to be searched compared with the data of the whole globe.
Semantic Relativity: This little-known but essential feature for the successful implementation of the Semantic Web is provided by OntoFrac-S using the Frac-S fractal approach. So far this feature has hardly been addressed from a technological perspective in the Semantic Web, and it is high time this much-neglected yet crucial feature received attention.
Thus, OntoFrac-S provides an integrated solution for the implementation of the Semantic Web.
8 Implementation Methodology
Having discussed the conceptual framework, let us shift our focus to the strategic and technological perspective required for implementing this methodology.
As is well known, everything in the Semantic Web will be a resource accessed on the Giant Global Graph through a unique URI of the form http://www.abc.xyz/resource. However, although the URI is unique, many resources may share the same name, and it would be difficult to identify whether a given resource refers to context A or context B. We therefore make a simple proposition: prefixing the resource with the frac-S location in which it resides, as in http://www.abc.xyz/frac-Slocation/resource, helps easily identify the Frame of Reference in which that resource name is used. This is carried out by contacting the linked frac-S to learn which ontology it uses, which in turn reveals the context of the resource. Although a small change, it enables easier retrieval of information with higher accuracy from the GGG.
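The proposition above can be illustrated with a small helper. The host and path names below are illustrative only:

```python
def fracs_uri(host, fracs_location, resource):
    """Build a frac-S-prefixed URI: http://<host>/<frac-S location>/<resource>."""
    return "http://{}/{}/{}".format(host, fracs_location.strip("/"), resource)

# Two resources sharing the name 'pthlgy' get distinct URIs because their
# frac-S locations differ, which makes the Frame of Reference explicit:
uri_a = fracs_uri("www.abc.xyz", "healthcare/hsptl", "pthlgy")
uri_b = fracs_uri("www.abc.xyz", "education/hsptl", "pthlgy")
```

Resolving such a URI then only requires contacting the focal agent of the named frac-S location to learn which ontology it uses.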
Further, although various schemes for multi-agent communication have been proposed, we preferred the Contract Net Protocol for our communication. Some have pointed out the bandwidth requirements of the Contract Net Protocol. This shortcoming is mitigated in our approach: during the contract net, the agent does not need to send the broadcast message to each and every agent. The initiator agent (the factotum in our case) only needs to send the message to the fractal focal points of the respective regions, which in turn broadcast it within their fractal regions, reducing the bandwidth limitation.
Figure 9 shows the OntoFrac-S framework from the technological perspective. As can be seen in the figure, on receiving a task each agent sends it to the inference engine. The inference engine contains a natural-language-processing module that divides a task into sub-tasks using lexical measures; WordNet thesaurus ontologies can be used for this purpose. After the NLP module, the context-determination module is called. Once the context is determined, a query is generated for finding the solution. This query is sent as a SPARQL [42] query to the frac-S focal agent, which in turn generates a semantic SPARQL query after ontology mapping. Several lexical measures provide ways of ontology matching, such as n-gram similarity, Hamming distance and cosine similarity.
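Two of the lexical measures named above can be sketched over character bigrams. This is a generic illustration of n-gram and cosine similarity for label matching, not code from the paper:

```python
from collections import Counter
from math import sqrt

def ngrams(label, n=2):
    """Multiset of character n-grams of a lowercased label."""
    label = label.lower()
    return Counter(label[i:i + n] for i in range(len(label) - n + 1))

def ngram_similarity(a, b, n=2):
    """Shared n-grams divided by the n-gram count of the shorter label."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    shared = sum((ga & gb).values())          # multiset intersection size
    smaller = min(sum(ga.values()), sum(gb.values()))
    return shared / smaller if smaller else 0.0

def cosine_similarity(a, b, n=2):
    """Cosine of the angle between the two n-gram count vectors."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    dot = sum(ga[g] * gb[g] for g in ga)
    norm = sqrt(sum(v * v for v in ga.values())) * \
           sqrt(sum(v * v for v in gb.values()))
    return dot / norm if norm else 0.0
```

Such scores could feed, for instance, the 'confidence match' field of a bid response when comparing ontology labels.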
All inter-agent communication is held using ACL (Agent Communication Language). OWL, ACL and SPARQL are chosen here because they are established standards. Further, concerns about interoperability between EHRs need not be addressed by providing a global schema; instead, EHR ontologies are mapped on a need-to-know basis.
[Figure 9 (flattened diagram): the physician and patient interact through a GUI with the physician's Factotum agent. On receiving a task, the agent's inference engine (NLP, context determination, query generation) produces a SPARQL query that is sent to the Frac-S i focal point (agent). Using ACL communication and ontology matching against the Frac-S i and Frac-S j ontologies and the Frac-S j profile (knowledge base of first-level sub-frac-Ss and agents), a semantic SPARQL query is routed via the Sub-Frac-S j and hospital focal points to the hospital domain ontology and hospital EHR held in RDF triple stores. The result is returned as an ACL reply, and the physician's knowledge base and rule engine turn the illness information into a cure.]
Figure 9: OntoFrac-S Model
9 Future Work
One of the biggest advantages of this approach lies in implementation: a small area replicates the behaviour of the actual global space, owing to the properties of the fractal approach adopted. Thus, as mentioned earlier, even an implementation in a small university or area would be able to justify its usefulness for the whole globe. The next step could therefore be to implement this approach on a small scale. Figure 10 shows a sample GUI screen that a physician would have in front of him in order to interact with his associated agent, alias 'Factotum'.
10 Conclusion
Through this paper we want to emphasize that if human societies follow irregular patterns and are fractal in nature, then their replication on the web cannot be achieved merely by adopting a global-view approach to semantic interoperability. Ontologies defined by communities should be given their autonomy; simultaneously, the distributed-ness of the global society must not pose an obstacle to interoperability. The aggregation of these two concepts was provided in this paper through effective ontology management.
OntoFrac-S appears to be a promising approach for the successful implementation of the Semantic Web and offers a paradigm shift towards 'Semantic Relativity' in the Globally Linked Graph. Our aim was to highlight this less-trodden path of 'Semantic Relativity', which addresses cross-cultural and cross-geographical barriers using fractals. We strongly feel that this approach provides a missing link in the Semantic Web, and we hope that further technological research in this field will open the door to many more explorations in the years to come.
Figure 10: Sample GUI Screen in OntoFrac-S System for aiding Physicians using ‗Factotum Service‘
11 References
[1] Tim Berners Lee, The Semantic Web
http://www.scientificamerican.com/article.cfm?i
d=the-semantic-web
[2] A. Brasoveanu, A. Manolescu,
M.N.Spinu, Generic Multimodal ontologies for
Human-Agent Interaction, Int. J. of Computers,
Communication & Control, ISSN 1841-9836,
Vol. V(2010), No. 5, pp. 625-633
[3] I.F.Toma, Contributions to the Study of
Semantic Interoperability in Multi-Agent
Environments- An Ontology Based Approach,
Int. J. of Computers, Communication & Control,
ISSN-1841-9836, Vol. V(2010), No. 5, pp. 946-
952
[4] Ahmad Adel Abu Shareha and others,
Multimodal Integration (Image and Text) Using
Ontology Alignment, American Journal of
Applied Sciences 6 (6): 1217-1224,2009, ISSN
1546-9239
[5] Jérôme Euzenat, An API for ontology alignment, The Semantic Web – ISWC 2004, Springer
[6] Mike Uschold, Creating, integrating and
maintaining local and global ontologies,
Citeseerx
[7] Farshad Hakimpour, Andreas Geppert,
Resolving Semantic Heterogeneity in Schema
Integration: An ontology Based Approach
http://portal.acm.org/citation.cfm?id=505168.50
5196
[8] Namyoun Choi, Il-Yeol Song, and Hyoil
Han, A Survey on Ontology Mapping, SIGMOD
Record, Vol. 35, No. 3, Sep. 2006.
http://portal.acm.org/citation.cfm?id=116809
7
[9] H. Wache and others, Ontology-Based Integration of Information — A Survey of Existing Approaches, in: Proceedings of the IJCAI-01 Workshop: Ontologies and Information Sharing, Seattle, WA, 2001, pp. 108-117, www.let.uu.nl/~paola.monachesi/personal/papers/wache.pdf
[10] Isabel F. Cruz, Huiyong Xiao, Feihong
Hsu,An Ontology-based Framework for XML
Semantic Integration,
[11] Gerd Stumme Alexander Maedche,
Ontology Merging for Federated Ontologies on
the Semantic Web
[12] Diego Calvanese, Ontology of
integration and integration of ontologies
[13] Giant Global Graph
http://en.wikipedia.org/wiki/Giant_Global_Grap
h
[14] Tim Berners-Lee and Lalana Kagal,The
Fractal Nature, of the Semantic Web, AI
Magazine, Vol 29, No. ,
http://www.aaai.org/ojs/index.php/aimagazine/ar
ticle/viewArticle/2161
[15] Christopher C. Yang and other ,
Visualization of large category map for internet
browsing, Decision Support Systems
Volume 35, Issue 1, April 2003, Pages 89-102,
Elsevier
[16] Virgílio Almeida, On the Fractal Nature
of WWW and Its Application to Cache
Modelling, 1996
http://portal.acm.org/citation.cfm?id=859832
[17] Ravi Kumar, The Web and Social
Networks, IEEE, 2002, Volume: 35 Issue: 11,
Pages: 32 - 36
[18] Gilbert Harman, Quine‘s Semantic
Relativity
http://www.princeton.edu/~harman/Papers/Harm
an-Quine.pdf
[19] W.V. Quine, Ontological Relativity, The Journal of Philosophy, Vol. 65, No. 7 (April 4, 1968), pp. 185-212, http://www.jstor.org/pss/2024305
[20] Laurent Nottale, Scale Relativity, Fractal
Space-Time, and Quantum Mechanics, Chaos,
Solitons & Fractals, Volume 4, Issue 3, March
1994, Pages 361-388 Elsevier,
http://linkinghub.elsevier.com/retrieve/pii/09
60077994900515
[21] Kwangyeol Ryu, Agent-based fractal
architecture and modelling for distributed
manufacturing systems, International Journal of
Production Research, 2003, Taylor & Francis
Ltd., Vol. 41, No. 17, 4233-4255
[22] Kwangyeol Ryu, Modeling and
Specifications of Dynamic Agents in Fractal
Manufacturing System, Volume 52 Issue 2,
October 2003, Computers in Industry, ACM
http://portal.acm.org/citation.cfm?id=948693
[23] Huiying Gao, Qian Zhu, Semantic Web
based Multi-agent Model for the Web Semantic
Service Retrieval, 2009 Computer Network and
Multimedia Technology, 2009. CNMT 2009.
International Symposium on
page:1 – 4, IEEE
[24] C. Cubillos, Towards Open Agent
Systems Through Dynamic Incorporation, Int. J.
of Computers, Communications & Control,
ISSN 1841-9836, Vol. V (2010), No. 5, pp. 675-
683
[25] Ioan Dzitac,Artificial Intelligence +
Distributed Systems = Agents, Int. J. Computers,
Communications & Control, Vol IV(2009), No.
1, pp. 17-26
[26] B.B. Mandelbrot, Fractals and
the Geometry of Nature, W. H. Freeman and Co.,
1982, ISBN 0-7167-1186-9
[27] Gary William Flake and David M.
Pennock,Yahoo! Research Labs, Self-
organization, self-regulation and self-similarity
of fractal web, The Colours of Infinity, Clear
Press, UK (2004)
http://dpennock.com/papers/flake-colours-
2004-fractal-web.pdf,
http://www.springerlink.com/content/ut65570
532670154/
[28] Stephen Dill, Self- Similarity in the
Web, ACM Transactions on Internet
Technology, Vol. 2, No. 3, August 2002, Pages
205–223.
[29] G. Caldarelli , Fractal properties of
Internet, http://arxiv.org/abs/cond-mat/0009178
[30] Damin Xu, Fractal and Mobile Agent-
based Inter-enterprise Quality Tracking and
Control, 2008 IEEE international conference on
industrial technology (IEEE ICIT 2008)
[31] Fractal Dimension, Wikipedia
http://en.wikipedia.org/wiki/Fractal_dimension
[32] Koike, H. ,Fractal Approaches for
Visualizing Huge Hierarchies Proceedings of the
1993 IEEE Symposium on Visual Languages,
pp.55-60, IEEE/CS, 1993
[33] Hideki Koike, Fractal Views: A Fractal
Based Method for Controlling Information
Display, ACM Transaction on Information
Systems, Vol. 13, No. 3, July, pp.305-323,
ACM, 1995,
http://portal.acm.org/citation.cfm?id=203065
[34] Christopher C. Yang, Fractal
summarization for mobile devices to access large
documents on the web, WWW '03 Proceedings
of the 12th international conference on World
Wide Web, 2003
http://portal.acm.org/citation.cfm?id=775183
[35] Vagan Terziyan , Semantic Web
Services for Smart Devices in a ―Global
Understanding Environment‖
http://www.cs.jyu.fi/ai/papers/HCISWWA-
2003.pdf
[36] Barna Iantovics, CMDS Medical
Diagnosis System, Ninth International
Symposium on Symbolic and Numeric
Algorithms for Scientific Computing, 2007,
IEEE, pg 246
[37] Rosa M. Vicari, A multi-agent intelligent
environment for medical Knowledge, Artificial
Intelligence in Medicine 27 (2003) 335–366,
Elsevier
http://www.cs.usask.ca/faculty/sal426/SensorGri
d/docs/MED_MULTIAGENT/A%20multi-
agent%20intelligent%20environment%20for.pdf
[38] V. Alves, Agent Based Decision Support
System in Medicine , WSEAS Transactions on
Biology and Biomedicine,2005
http://repositorium.sdum.uminho.pt/handle/1
822/2721
[39] M. Pipattanasomporn, Multi-Agent
Systems in a Distributed Smart Grid: Design and
Implementation, Power Systems Conference and
Exposition, 2009. PSCE '09. IEEE/PES, March
2009
[40] Xianyi Cheng, Study of Multi-
Agent Information Retrieval Model in Semantic
Web, 2008 International Workshop on Education
Technology and Training & 2008 International
Workshop on Geoscience and Remote Sensing,
IEEE
[41] Huiying Gao , Semantic Web based
Multi-agent Model for the Web Service Retrieval,
The First International Symposium on Computer
Network and Multimedia Technology
2009,CNMT 2009, IEEE
[42] SPARQL,
http://www.w3.org/TR/rdf-sparql-query/