The document describes a mapping effort between the STW Thesaurus for Economics and the JEL Classification System. It conducted two iterative mapping runs using a semantic alignment tool called AMALGAME. In the second run, it enriched the subject categories and classes with additional descriptors, synonyms, and mappings to other vocabularies, which increased the number of potential mappings found. The approach demonstrated that vocabulary enrichment can facilitate mappings between structurally different knowledge organization systems in economics. Future work could designate STW as the access vocabulary to JEL classes.
FCA-MERGE: Bottom-Up Merging of Ontologies alemarrena
The document describes a new bottom-up method called FCA-MERGE for merging ontologies. It extracts instances from documents for each ontology to generate formal contexts. It then merges the contexts and computes a concept lattice using techniques from Formal Concept Analysis. This lattice provides a structural description of the merging process. The final merged ontology is then generated from the lattice with human guidance. FCA-MERGE circumvents the problem of finding instances classified in both ontologies by extracting instances from relevant documents.
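The core FCA step can be sketched with a toy formal context (the documents and ontology concepts below are invented): every pair (extent, intent) closed under the two derivation operators is a node of the merged concept lattice.

```python
from itertools import combinations

# Toy formal context standing in for instances extracted from documents:
# objects (instances) x attributes (ontology concepts they fall under).
context = {
    "doc1": {"Hotel", "Accommodation"},
    "doc2": {"Campground", "Accommodation"},
    "doc3": {"Hotel", "Accommodation", "City"},
}
objects = sorted(context)
attributes = sorted(set().union(*context.values()))

def intent(objs):
    """Attributes shared by all objects in objs."""
    sets = [context[o] for o in objs]
    return set(attributes) if not sets else set.intersection(*sets)

def extent(attrs):
    """Objects having all attributes in attrs."""
    return {o for o in objects if attrs <= context[o]}

# A formal concept is a pair (A, B) with extent(B) == A and intent(A) == B;
# closing every object subset enumerates all of them (fine at toy scale).
concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(objects, r):
        b = frozenset(intent(set(objs)))
        a = frozenset(extent(b))
        concepts.add((a, b))

for a, b in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[1]))):
    print(sorted(a), "<->", sorted(b))
```

Ordered by extent inclusion, these pairs form the concept lattice from which the merged ontology is derived with human guidance.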
SEMANTIC INTEGRATION FOR AUTOMATIC ONTOLOGY MAPPING cscpconf
In the last decade, ontologies have played a key technological role for information sharing and agent interoperability in different application domains. In the Semantic Web domain, ontologies are used to face the great challenge of representing the semantics of data, in order to bring the actual web to its full power and hence achieve its objective. However, using ontologies as common and shared vocabularies requires a certain degree of interoperability between them. To meet this requirement, ontology mapping is an unavoidable solution. Indeed, ontology mapping builds a meta layer that allows different applications and information systems to access and share their information, after resolving the different forms of syntactic, semantic, and lexical mismatch. In the contribution presented in this paper, we integrate a semantic aspect based on an external lexical resource, WordNet, to design a new algorithm for fully automatic ontology mapping. This fully automatic character is the main difference between our contribution and most existing semi-automatic ontology mapping algorithms, such as Chimaera, PROMPT, ONION, and GLUE. To enhance the performance of our algorithm, the mapping discovery stage combines two sub-modules: the former analyses the concepts' names and the latter analyses their properties. Each of these two sub-modules is itself based on a combination of lexical and semantic similarity measures.
This document provides an overview of a literature review on fuzzy conceptual data modeling. It discusses different types of imperfect information like imprecision, uncertainty, and vagueness that can be represented using fuzzy set theory. It then summarizes several fuzzy conceptual data models proposed in literature, including fuzzy ER/EER models, IFO models, and fuzzy UML models. The document aims to serve as an updated review of research on fuzzy conceptual data modeling.
The document compares two sets of operators - Michalski's knowledge transmutations from machine learning and Baker's dialogue transformations that model knowledge negotiation in dialogue. It finds some overlap between the operators but also key differences related to the number of agents involved, how strategical aspects are represented, and the relationship between uttered and internal knowledge. The authors discuss how fusing these operator sets could help model collaborative learning and develop human-machine learning systems.
ONTOLOGY BASED DOCUMENT CLUSTERING USING MAPREDUCE ijdms
Nowadays, document clustering is considered a data-intensive task due to the dramatic, fast increase in the number of available documents. Moreover, the features that represent those documents are also very numerous. The most common method for representing documents is the vector space model, which represents document features as a bag of words and does not represent semantic relations between words. In this paper we introduce a distributed implementation of bisecting k-means using the MapReduce programming model. The aim of our proposed implementation is to solve the problem of clustering data-intensive document collections. In addition, we propose integrating the WordNet ontology with bisecting k-means in order to exploit the semantic relations between words and enhance document clustering results. Our experimental results show that using lexical categories for nouns only improves the internal evaluation measures of document clustering and reduces the document features from thousands to tens of features. Our experiments were conducted using Amazon Elastic MapReduce to deploy the bisecting k-means algorithm.
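The bisecting split can be sketched as follows. This single-node version only mimics the MapReduce decomposition (point assignment as the map step, centroid recomputation as the reduce step) and uses invented toy vectors rather than document features:

```python
import random

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(points):
    n = len(points)
    return tuple(sum(xs) / n for xs in zip(*points))

def two_means(points, iters=20):
    """Plain 2-means: assignment is the 'map', the mean is the 'reduce'."""
    centroids = random.sample(points, 2)
    for _ in range(iters):
        clusters = ([], [])
        for p in points:  # map step: assign each point to its nearer centroid
            clusters[dist2(p, centroids[0]) > dist2(p, centroids[1])].append(p)
        if all(clusters):  # reduce step: recompute centroids
            centroids = [mean(c) for c in clusters]
    return [c for c in clusters if c]

def bisecting_kmeans(points, k):
    clusters = [points]
    while len(clusters) < k:
        # Split the cluster with the largest within-cluster scatter.
        worst = max(clusters, key=lambda c: sum(dist2(p, mean(c)) for p in c))
        clusters.remove(worst)
        clusters.extend(two_means(worst))
    return clusters

random.seed(0)
# Three well-separated toy blobs of 20 points each.
data = [(random.gauss(cx, 0.1), random.gauss(cy, 0.1))
        for cx, cy in [(0, 0), (5, 0), (0, 5)] for _ in range(20)]
clusters = bisecting_kmeans(data, 3)
print([len(c) for c in clusters])
```

In the actual MapReduce deployment the loop body of `two_means` is what runs across the cluster, with mappers emitting (nearest-centroid, point) pairs and reducers averaging them.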
1. Semantic mapping involves connecting metadata structures like schemas and ontologies, as well as connecting vocabularies of values like knowledge organization systems and knowledge bases.
2. Significant progress has been made in mapping metadata structures to enable interoperability and data integration, though mapping knowledge organization systems remains a challenge due to the sparseness of links between concepts.
3. While automatic mapping tools have improved, manual mapping remains important and applications of mappings could be better categorized to inform mapping requirements and evaluation.
Semantic Relatedness of Web Resources by XESA - Philipp Scholl, CROKODIl consortium
This document discusses using extended explicit semantic analysis (XESA) to measure semantic relatedness between short text snippets for recommendation purposes. It proposes enhancing ESA by incorporating additional semantic information from Wikipedia, such as article links and categories. An evaluation compares the performance of ESA, XESA using the article graph, XESA using categories, and a combination. The results show that XESA using the article graph improves over ESA by up to 9% and performs best for recommending related snippets.
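The plain-ESA baseline can be sketched with a toy three-article corpus standing in for Wikipedia (the article texts below are invented): each snippet is interpreted as a vector of TF-IDF weights over the articles, and relatedness is the cosine between those concept vectors. XESA would additionally weight in the article link graph and categories.

```python
import math
from collections import Counter

# Toy "Wikipedia" of three concept articles (invented text).
articles = {
    "Cat": "cat feline pet fur whiskers pet",
    "Dog": "dog canine pet fur bark",
    "Car": "car engine wheel road fuel",
}
df = Counter(w for text in articles.values() for w in set(text.split()))
N = len(articles)

def concept_vector(text):
    """Interpret a snippet as TF-IDF weighted affinity to each article."""
    tf = Counter(text.lower().split())
    vec = {}
    for name, body in articles.items():
        body_tf = Counter(body.split())
        vec[name] = sum(tf[w] * body_tf[w] * math.log(N / df[w])
                        for w in tf if w in body_tf)
    return vec

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

rel_pets = cosine(concept_vector("fur pet"), concept_vector("bark canine"))
rel_mixed = cosine(concept_vector("fur pet"), concept_vector("engine road"))
print(rel_pets, rel_mixed)
```

Two pet-related snippets overlap on the Cat and Dog concept dimensions and score well above the pet/car pair, which share no concepts at all.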
This document provides an introduction to design patterns. It begins by explaining what design patterns are, their benefits, and common elements of design patterns like name, problem, solution, and consequences. It then discusses different types of design patterns classified by purpose (creational, structural, behavioral) and scope (class, object). An example of applying the strategy pattern to a duck simulation is used to illustrate how patterns can solve common object-oriented design problems by separating variable aspects from those that remain the same. The document advocates programming to interfaces rather than implementations to avoid tight coupling and allow independent extension of behavior.
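The duck example can be sketched in Python roughly as follows (class and method names are illustrative, not the book's exact Java API): the variable behaviors are pulled out behind interfaces so each Duck is composed with interchangeable strategy objects instead of inheriting fixed behavior.

```python
from abc import ABC, abstractmethod

class FlyBehavior(ABC):
    @abstractmethod
    def fly(self) -> str: ...

class FlyWithWings(FlyBehavior):
    def fly(self): return "I'm flying!"

class FlyNoWay(FlyBehavior):
    def fly(self): return "I can't fly."

class QuackBehavior(ABC):
    @abstractmethod
    def quack(self) -> str: ...

class Quack(QuackBehavior):
    def quack(self): return "Quack!"

class Squeak(QuackBehavior):
    def quack(self): return "Squeak!"

class Duck:
    """Programs to the interfaces: holds behaviors rather than implementing them."""
    def __init__(self, fly_behavior: FlyBehavior, quack_behavior: QuackBehavior):
        self.fly_behavior = fly_behavior
        self.quack_behavior = quack_behavior

    def perform_fly(self): return self.fly_behavior.fly()
    def perform_quack(self): return self.quack_behavior.quack()

mallard = Duck(FlyWithWings(), Quack())
rubber_duck = Duck(FlyNoWay(), Squeak())
print(mallard.perform_fly())        # I'm flying!
print(rubber_duck.perform_quack())  # Squeak!
rubber_duck.fly_behavior = FlyWithWings()  # behavior swapped at runtime
print(rubber_duck.perform_fly())    # I'm flying!
```

Because `Duck` depends only on the behavior interfaces, new flying or quacking variants extend the system without touching any duck class, and behavior can even change at runtime.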
Semantic Conflicts and Solutions in Integration of Fuzzy Relational Databases ijsrd.com
This document discusses semantic conflicts that can occur when integrating fuzzy relational databases and proposes a methodology for resolving these conflicts. It identifies five new types of conflicts specific to fuzzy databases: membership degree conflicts, inconsistent attribute values, missing attributes, missing fuzzy attribute values, and attribute domain conflicts. The methodology resolves these conflicts in a specific order to minimize the time needed for integration. It aims to resolve fuzzy database conflicts within the context of resolving other general integration conflicts.
Information residing in relational databases and delimited file systems is inadequate for reuse and sharing over the web. These file systems do not adhere to commonly set principles for maintaining data harmony. For these reasons, web resources suffer from a lack of uniformity, from heterogeneity, and from redundancy. Ontologies have been widely used for solving such problems, as they help in extracting knowledge out of any information system. In this article, we focus on extracting concepts and their relations from a set of CSV files. These files serve as individual concepts and are grouped into a particular domain, called the domain ontology. Furthermore, this domain ontology is used for capturing CSV data, represented in RDF format while retaining links among files or concepts. Datatype and object properties are automatically detected from header fields, which reduces the user's involvement in generating mapping files. A detailed analysis has been performed on baseball tabular data, and the result shows a rich set of semantic information.
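A minimal sketch of the CSV-to-RDF idea, with invented file content and namespace, and a crude numeric check standing in for the paper's automatic datatype detection:

```python
import csv
import io

# Invented baseball CSV and namespace for illustration.
csv_text = """name,team,home_runs
Babe Ruth,Yankees,714
Hank Aaron,Braves,755
"""

BASE = "http://example.org/baseball/"  # hypothetical namespace

def row_to_turtle(i, header, row):
    """Emit one row as a Turtle resource; header fields become properties."""
    lines = [f"<{BASE}player/{i}> a <{BASE}Player> ;"]
    for field, value in zip(header, row):
        # Crude datatype detection: digits -> integer, digits with a dot -> decimal.
        if value.replace(".", "", 1).isdigit():
            obj = (f'"{value}"^^xsd:integer' if value.isdigit()
                   else f'"{value}"^^xsd:decimal')
        else:
            obj = f'"{value}"'
        lines.append(f"    <{BASE}{field}> {obj} ;")
    lines[-1] = lines[-1][:-1] + "."  # close the final statement
    return "\n".join(lines)

reader = csv.reader(io.StringIO(csv_text))
header = next(reader)
triples = [row_to_turtle(i, header, row) for i, row in enumerate(reader)]
turtle = ("@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .\n\n"
          + "\n\n".join(triples))
print(turtle)
```

A production pipeline would use an RDF library and smarter type inference, but the mapping of headers to properties and rows to resources is the essential step.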
This chapter describes different ways to represent learning designs and interventions. It discusses textual, visual, and data-based representations at micro, meso, and macro levels. Examples include content maps, task swimlanes, pedagogy profiles, outcomes maps, and principles matrices. The representations make explicit different aspects of the underlying design and can support practitioners, researchers, and software designers. Evaluation found the visual representations helped practitioners think beyond content to learner activities.
The increased potential of ontologies to reduce human interference has a wide range of applications. This paper identifies requirements for an ontology development platform to enable an artificially intelligent web. To facilitate this process, RDF and OWL have been developed as standard formats for the sharing and integration of data and knowledge, with the knowledge taking the form of rich conceptual schemas called ontologies. Based on this framework, an architectural paradigm is put forward for ontology engineering and the development of ontology applications, together with a development portal designed to support ontology engineering, content authoring, and application development, aiming at maximal scalability in the size and complexity of semantic knowledge and at flexible reuse of ontology models and ontology application processes in a distributed and collaborative engineering environment.
This chapter discusses different ways of representing learning interventions and designs. It identifies four main types of representations: verbal, textual, visual, and data-based. Examples of representations discussed include case studies, models, diagrams, and charts. Representations can have different formats, levels of granularity, and lenses. They can describe small activities or entire curriculums. Overall, the chapter analyzes different learning design representations and how they can be used to visualize and implement learning activities.
The document introduces the visual mapping sentence (MS) methodology for multifaceted research design and analysis. The MS is a visual representation that classifies research variables into facets. It guides hypothesis generation and systematic data collection/analysis. The MS links components of a research domain to suggest relationships between variables and facets. It maps all relevant variables and provides information about excluded variables. Two examples of MS are included in figures to illustrate their use. The MS methodology is presented as a valuable tool that can help address limitations of other research methods by emphasizing conceptual definition and structure of content areas.
CONTEXT-AWARE CLUSTERING USING GLOVE AND K-MEANS ijseajournal
ABSTRACT
In this paper we propose a novel method to cluster categorical data while retaining its context. Typically, clustering is performed on numerical data. However, it is often useful to cluster categorical data as well, especially when dealing with data in real-world contexts. Several methods exist which can cluster categorical data, but our approach is unique in that we use recent text-processing and machine learning advancements like GloVe and t-SNE to develop a context-aware clustering approach using pre-trained word embeddings. We encode words or categorical data into numerical, context-aware vectors that we use to cluster the data points with common clustering algorithms like k-means.
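A minimal sketch of the pipeline, with a tiny hand-made embedding table standing in for pre-trained GloVe vectors (real GloVe vectors are 50 to 300 dimensional, and the paper additionally applies t-SNE):

```python
import random

# Invented 2-D "embeddings": animals cluster near (0.9, 0.1),
# programming languages near (0.1, 0.9).
embeddings = {
    "cat":    (0.90, 0.10), "dog":  (0.80, 0.20), "horse": (0.85, 0.15),
    "python": (0.10, 0.90), "java": (0.20, 0.80), "rust":  (0.15, 0.85),
}

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=10):
    """Plain k-means on the embedding vectors."""
    random.seed(1)
    centroids = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: dist2(p, centroids[i]))].append(p)
        centroids = [tuple(sum(x) / len(g) for x in zip(*g)) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return groups

words = list(embeddings)
groups = kmeans([embeddings[w] for w in words], k=2)
clusters = [[w for w in words if embeddings[w] in g] for g in groups]
print(clusters)
```

Because the embeddings carry context, the categorical labels fall into semantically coherent clusters even though k-means itself only ever sees numbers.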
This document discusses different graphical representations that can be used in tutoring systems, including conceptual graphs, ontologies, and concept maps. It examines how each representation has been utilized in tutoring systems, including representing the domain or learning material, diagnosing student errors, and assessing student knowledge through visual concept mapping exercises. Concept maps, ontologies, and conceptual graphs each have their own characteristics that make them suitable for different uses in tutoring systems.
Visual representation and organization of knowledge have been utilized in different ways in tutoring systems to improve their usefulness. This paper concentrates on the use of various graphical formalisms, for example the conceptual graph, the ontology, and the concept map, in tutoring systems. The paper addresses how each formalism is used and what possibilities it offers to assist the student in education systems.
GRAPHICAL REPRESENTATION IN TUTORING SYSTEMS ijcsit
Visualizer for concept relations in an automatic meaning extraction system Patricia Tavares Boralli
This document discusses a visualizer interface that has been developed for an automatic meaning extraction (AME) system. The visualizer allows users to view concepts and their relationships extracted from text documents in a graph format. It maps concepts as nodes and relationships as edges. Users can search for concepts, view related concepts, and trace relationships back to the original text passages. The visualizer was created to help users interact with and understand the outputs of the AME system, which automatically extracts concepts and relations from documents across various domains.
The document describes latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA represents documents as random mixtures over latent topics, characterized by distributions over words. It is a three-level hierarchical Bayesian model where documents are generated by first sampling a per-document topic distribution from a Dirichlet prior, then repeatedly sampling topics and words from these distributions. LDA addresses limitations of previous models by capturing statistical structure within and between documents through the hierarchical Bayesian formulation.
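LDA's three-level generative story can be sketched directly; the vocabulary, topic count, and hyperparameters below are invented toy values.

```python
import random

random.seed(42)
K, V, alpha, beta = 2, 6, 0.5, 0.5  # topics, vocab size, Dirichlet priors
vocab = ["game", "team", "score", "stock", "market", "price"]

def dirichlet(dim, conc):
    """Sample a symmetric Dirichlet via normalized Gamma draws."""
    xs = [random.gammavariate(conc, 1.0) for _ in range(dim)]
    s = sum(xs)
    return [x / s for x in xs]

def categorical(probs):
    """Draw an index from a discrete distribution."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Corpus level: one word distribution phi_k per topic, drawn once.
phi = [dirichlet(V, beta) for _ in range(K)]

def generate_document(n_words):
    theta = dirichlet(K, alpha)  # document level: per-document topic mixture
    words = []
    for _ in range(n_words):     # word level: topic z, then word w from phi_z
        z = categorical(theta)
        w = categorical(phi[z])
        words.append(vocab[w])
    return theta, words

theta, doc = generate_document(8)
print(doc)
```

Inference in LDA runs this story in reverse, estimating `theta` and `phi` from observed documents; the sketch above only shows the forward generative process.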
Towards optimize-ESA for text semantic similarity: A case study of biomedical... IJECEIAES
Explicit Semantic Analysis (ESA) is an approach to measuring the semantic relatedness between terms or documents based on their similarity to documents of a reference corpus, usually Wikipedia. ESA has received tremendous attention in natural language processing (NLP) and information retrieval. However, ESA relies on a huge Wikipedia index matrix in its interpretation step, multiplying a large matrix by a term vector to produce a high-dimensional vector. Consequently, the interpretation and similarity steps are expensive, and ESA slows down because much time is lost in unnecessary operations. This paper proposes enhancements to ESA, called optimize-ESA, that reduce the dimensionality at the interpretation stage by computing semantic similarity within a specific domain. The experimental results show clearly that our method correlates much better with human judgement than the full ESA approach.
The object-oriented class is, in general, the most utilized element in programming and modeling. It is employed throughout the software development process, from early domain analysis phases to later maintenance phases. A class diagram typically uses elements of graph theory, e.g., boxes, ovals, and lines. Many researchers have examined the class diagram layout from different perspectives, including visibility, juxtaposability, and aesthetics. While software systems can be incredibly complex, class diagrams represent a very broad picture of the system as a whole, and the key to understanding such complexity is the use of tools such as diagrams at various levels of representation. This paper develops a more elaborate diagrammatic description of the class diagram that includes flows of attributes, thus providing a basic representation for specifying behavior and control instead of merely listing methods.
For non-grid 3D data like point clouds and meshes, and for inherently graph-based data.
Inherently graph-based data include for example brain connectivity analysis, scientific article citation networks, (social) network analysis, etc.
Alternative download link:
https://www.dropbox.com/s/2o3cofcd6d6e2qt/geometricGraph_deepLearning.pdf?dl=0
PROPERTIES OF RELATIONSHIPS AMONG OBJECTS IN OBJECT-ORIENTED SOFTWARE DESIGN ijpla
One of the modern paradigms for developing a system is object-oriented analysis and design. In this paradigm, there are several objects and each object plays some specific roles. After identifying the objects, the various relationships among them must be identified. This paper presents a literature review of relationships among objects. Mainly, the relationships are of three basic types: generalization/specialization, aggregation, and association. The paper presents five taxonomies for properties of the relationships: the first taxonomy is based on a temporal view, the second on structure, and the third on behavior; the fourth is based on a mathematical view, and the fifth relates to the interface. Additionally, the properties of the relationships are evaluated in a case study and several recommendations are proposed.
An introduction to compositional models in distributional semantics Andre Freitas
The document provides an overview of compositional distributional semantic models, which aim to develop principled and effective semantic models for real-world language use. It discusses using large corpora to extract distributional representations of word meanings and developing compositional models that combine these representations according to syntactic structure. Both additive and multiplicative mixture models as well as function-based models are described. Challenges including lack of training data and computational complexity are also outlined.
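The additive and multiplicative mixture models can be sketched on toy count vectors (the vectors and context dimensions below are invented): addition sums the two word vectors, while component-wise multiplication emphasizes the contexts the words share.

```python
import math

def add(u, v):
    """Additive composition: sum the vectors component-wise."""
    return [a + b for a, b in zip(u, v)]

def mult(u, v):
    """Multiplicative composition: component-wise product."""
    return [a * b for a, b in zip(u, v)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Invented context dimensions: [food, animal, finance]
old  = [2.0, 1.0, 1.0]
dog  = [1.0, 5.0, 0.0]
bank = [0.0, 0.0, 5.0]

old_dog_add  = add(old, dog)    # [3.0, 6.0, 1.0]
old_dog_mult = mult(old, dog)   # [2.0, 5.0, 0.0] -- finance dimension vanishes
print(cosine(old_dog_add, dog), cosine(old_dog_mult, dog))
```

Note how multiplication zeroes out the finance dimension absent from "dog", so the composed "old dog" is fully unrelated to "bank"; function-based models replace these fixed operations with learned functions attached to the syntactic structure.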
ConNeKTion: A Tool for Exploiting Conceptual Graphs Automatically Learned fro... University of Bari (Italy)
Studying, understanding, and exploiting the content of a digital library, and extracting useful information from it, require automatic techniques that can effectively support the users. To this aim, a relevant role can be played by concept taxonomies. Unfortunately, the availability of such resources is limited, and their manual building and maintenance are costly and error-prone. This work presents ConNeKTion, a tool for conceptual graph learning and exploitation. It allows conceptual graphs to be learned from plain text and enriched by finding concept generalizations. The resulting graph can be used for several purposes: finding relationships between concepts (if any), filtering the concepts from a particular perspective, keyword extraction, and information retrieval. A suitable control panel is provided for the user to carry out these activities comfortably.
Towards a Query Rewriting Algorithm Over Proteomics XML Resources CSCJournals
Querying and sharing Web proteomics data is not an easy task. Given that several data sources can be used to answer the same sub-goals in the global query, there can be many candidate rewritings. The user query is formulated using concepts and properties related to proteomics research (a domain ontology), while semantic mappings describe the contents of the underlying sources. In this paper, we propose a characterization of the query rewriting problem that associates a hypergraph with the semantic mappings. The generation of candidate rewritings can then be formulated as the discovery of the minimal transversals of this hypergraph. We exploit and adapt algorithms from hypergraph theory to find all candidate rewritings for a query answering problem. In future work, relevant criteria could help determine optimal and qualitative rewritings according to user needs and source performance.
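The minimal-transversal formulation can be sketched by brute force on an invented example, where each hyperedge lists the sources able to answer one sub-goal and a candidate rewriting is a minimal set of sources touching every hyperedge:

```python
from itertools import combinations

# Hypothetical query with three sub-goals, answerable by sources s1..s4.
hyperedges = [{"s1", "s2"}, {"s2", "s3"}, {"s3", "s4"}]
vertices = sorted(set().union(*hyperedges))

def is_transversal(t):
    """A transversal (hitting set) intersects every hyperedge."""
    return all(t & e for e in hyperedges)

# Enumerate by increasing size, keeping only minimal transversals:
# any smaller minimal transversal has already been found, so a candidate
# is minimal iff no recorded transversal is a proper subset of it.
transversals = []
for r in range(1, len(vertices) + 1):
    for combo in combinations(vertices, r):
        t = set(combo)
        if is_transversal(t) and not any(m < t for m in transversals):
            transversals.append(t)

print(sorted(sorted(t) for t in transversals))
```

Each minimal transversal here is one candidate rewriting, covering all sub-goals with no redundant source; dedicated hypergraph algorithms replace this exponential enumeration at realistic scale.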
Similar to The Missing Link. A Vocabulary Mapping Effort in Economics NKOS Workshop 2015
Semantic Conflicts and Solutions in Integration of Fuzzy Relational Databasesijsrd.com
This document discusses semantic conflicts that can occur when integrating fuzzy relational databases and proposes a methodology for resolving these conflicts. It identifies five new types of conflicts specific to fuzzy databases: membership degree conflicts, inconsistent attribute values, missing attributes, missing fuzzy attribute values, and attribute domain conflicts. The methodology resolves these conflicts in a specific order to minimize the time needed for integration. It aims to resolve fuzzy database conflicts within the context of resolving other general integration conflicts.
Information residing in relational databases and delimited file systems are inadequate for reuse and sharing over the web. These file systems do not adhere to commonly set principles for maintaining data harmony. Due to these reasons, the resources have been suffering from lack of uniformity, heterogeneity as well as redundancy throughout the web. Ontologies have been widely used for solving such type of problems, as they help in extracting knowledge out of any information system. In this article, we focus on extracting concepts and their relations from a set of CSV files. These files are served as individual concepts and grouped into a particular domain, called the domain ontology. Furthermore, this domain ontology is used for capturing CSV data and represented in RDF format retaining links among files or concepts. Datatype and object properties are automatically detected from header fields. This reduces the task of user involvement in generating mapping files. The detail analysis has been performed on Baseball tabular data and the result shows a rich set of semantic information.
This chapter describes different ways to represent learning designs and interventions. It discusses textual, visual, and data-based representations at micro, meso, and macro levels. Examples include content maps, task swimlanes, pedagogy profiles, outcomes maps, and principles matrices. The representations make explicit different aspects of the underlying design and can support practitioners, researchers, and software designers. Evaluation found the visual representations helped practitioners think beyond content to learner activities.
The increased potential of the ontologies to reduce the human interference has wide range of applications. This paper identifies requirements for an ontology development platform to innovate artificially intelligent web. To facilitate this process, RDF and OWL have been developed as standard formats for the sharing and integration of data and knowledge. The knowledge in the form of rich conceptual schemas called ontologies. Based on the framework, an architectural paradigm is put forward in view of ontology engineering and development of ontology applications and a development portal designed to support ontology engineering, content authoring and application development with a view to maximal scalability in size and complexity of semantic knowledge and flexible reuse of ontology models and ontology application processes in a distributed and collaborative engineering environment.
This chapter discusses different ways of representing learning interventions and designs. It identifies four main types of representations: verbal, textual, visual, and data-based. Examples of representations discussed include case studies, models, diagrams, and charts. Representations can have different formats, levels of granularity, and lenses. They can describe small activities or entire curriculums. Overall, the chapter analyzes different learning design representations and how they can be used to visualize and implement learning activities.
The document introduces the visual mapping sentence (MS) methodology for multifaceted research design and analysis. The MS is a visual representation that classifies research variables into facets. It guides hypothesis generation and systematic data collection/analysis. The MS links components of a research domain to suggest relationships between variables and facets. It maps all relevant variables and provides information about excluded variables. Two examples of MS are included in figures to illustrate their use. The MS methodology is presented as a valuable tool that can help address limitations of other research methods by emphasizing conceptual definition and structure of content areas.
CONTEXT-AWARE CLUSTERING USING GLOVE AND K-MEANS (ijseajournal)
ABSTRACT
In this paper we propose a novel method to cluster categorical data while retaining their context. Typically, clustering is performed on numerical data. However, it is often useful to cluster categorical data as well, especially when dealing with data in real-world contexts. Several methods exist that can cluster categorical data, but our approach is unique in that we use recent text-processing and machine-learning advances such as GloVe and t-SNE to develop a context-aware clustering approach using pre-trained word embeddings. We encode words or categorical data into numerical, context-aware vectors, which we then cluster using common algorithms such as K-means.
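The approach described above can be sketched in a few lines. The 2-d "embeddings" below are made-up stand-ins for pre-trained GloVe vectors (in practice you would look each word up in a GloVe file), and the k-means routine is a plain stdlib implementation, not the authors' code:

```python
# Toy sketch: cluster categorical values via word vectors + k-means.
import math
import random

EMBEDDINGS = {            # hypothetical 2-d "word vectors"
    "cat":    (0.9, 0.1),
    "dog":    (0.8, 0.2),
    "wolf":   (0.7, 0.3),
    "apple":  (0.1, 0.9),
    "banana": (0.2, 0.8),
}

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on a list of 2-d points; returns cluster labels."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center by Euclidean distance
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centers[c]))
        # update step: mean of each cluster's members
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return labels

words = list(EMBEDDINGS)
labels = kmeans([EMBEDDINGS[w] for w in words], k=2)
clusters = {}
for w, l in zip(words, labels):
    clusters.setdefault(l, []).append(w)
print(sorted(clusters.values(), key=len))
```

With these toy vectors, the animal words and the fruit words end up in separate clusters purely because their vectors are close in the embedding space.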
This document discusses different graphical representations that can be used in tutoring systems, including conceptual graphs, ontologies, and concept maps. It examines how each representation has been utilized in tutoring systems, including representing the domain or learning material, diagnosing student errors, and assessing student knowledge through visual concept mapping exercises. Concept maps, ontologies, and conceptual graphs each have their own characteristics that make them suitable for different uses in tutoring systems.
GRAPHICAL REPRESENTATION IN TUTORING SYSTEMS (ijcsit)
Visual representation and organization of knowledge have been utilized in various ways in tutoring systems to improve their usefulness. This paper focuses on the use of various graphical formalisms, for example the conceptual graph, the ontology, and the concept map, in tutoring systems. The paper addresses how each formalism is used and what possibilities it offers to assist the student in education systems.
Visualizer for concept relations in an automatic meaning extraction system (Patricia Tavares Boralli)
This document discusses a visualizer interface that has been developed for an automatic meaning extraction (AME) system. The visualizer allows users to view concepts and their relationships extracted from text documents in a graph format. It maps concepts as nodes and relationships as edges. Users can search for concepts, view related concepts, and trace relationships back to the original text passages. The visualizer was created to help users interact with and understand the outputs of the AME system, which automatically extracts concepts and relations from documents across various domains.
The document describes latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA represents documents as random mixtures over latent topics, characterized by distributions over words. It is a three-level hierarchical Bayesian model where documents are generated by first sampling a per-document topic distribution from a Dirichlet prior, then repeatedly sampling topics and words from these distributions. LDA addresses limitations of previous models by capturing statistical structure within and between documents through the hierarchical Bayesian formulation.
Towards optimize-ESA for text semantic similarity: A case study of biomedical... (IJECEIAES)
Explicit Semantic Analysis (ESA) is an approach to measuring the semantic relatedness between terms or documents based on their similarities to the documents of a reference corpus, usually Wikipedia. ESA has received tremendous attention in natural language processing (NLP) and information retrieval. However, ESA relies on a huge Wikipedia index matrix in its interpretation step, multiplying a large matrix by a term vector to produce a high-dimensional vector. Consequently, the interpretation and similarity steps are expensive, and much time is lost in unnecessary operations. This paper proposes an enhancement of ESA, called optimize-ESA, that reduces the dimensionality at the interpretation stage by computing semantic similarity within a specific domain. The experimental results show clearly that the method correlates much better with human judgement than the full ESA approach.
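The core ESA idea can be sketched on toy data. The three-document "corpus" below is a hand-made stand-in for the Wikipedia reference corpus, and raw counts stand in for proper TF-IDF weights; this is an illustration, not the paper's implementation:

```python
# Minimal ESA sketch: a term is represented by its association with
# each document of a reference corpus, and relatedness is the cosine
# between these "concept vectors".
import math

corpus = {  # hypothetical reference documents
    "Economics": "market price trade economy bank money",
    "Biology":   "cell gene protein organism species",
    "Finance":   "bank money market credit loan price",
}

def concept_vector(term):
    """Weight of the term in each reference document (raw counts here)."""
    return [doc.split().count(term) for doc in corpus.values()]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def esa(t1, t2):
    return cosine(concept_vector(t1), concept_vector(t2))

print(esa("bank", "money"), esa("bank", "gene"))
```

"bank" and "money" co-occur in the same reference documents and so come out highly related, while "bank" and "gene" share no documents and score zero; optimize-ESA's point is to shrink the set of reference documents to a specific domain so these vectors stay small.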
The object-oriented class is, in general, the most utilized element in programming and modeling. It is employed throughout the software development process, from early domain analysis phases to later maintenance phases. A class diagram typically uses elements of graph theory, e.g., boxes, ovals, lines. Many researchers have examined the class diagram layout from different perspectives, including visibility, juxtaposability, and aesthetics. While software systems can be incredibly complex, class diagrams represent a very broad picture of the system as a whole. The key to understanding such complexity is the use of tools such as diagrams at various levels of representation. This paper develops a more elaborate diagrammatic description of the class diagram that includes flows of attributes, thus providing a basic representation for specifying behavior and control instead of merely listing methods.
Geometric deep learning applies to non-grid 3D data such as point clouds and meshes, and to inherently graph-based data. Inherently graph-based data include, for example, brain connectivity analysis, scientific article citation networks, and (social) network analysis.
Alternative download link:
https://www.dropbox.com/s/2o3cofcd6d6e2qt/geometricGraph_deepLearning.pdf?dl=0
PROPERTIES OF RELATIONSHIPS AMONG OBJECTS IN OBJECT-ORIENTED SOFTWARE DESIGN (ijpla)
One of the modern paradigms for developing a system is object-oriented analysis and design. In this paradigm, there are several objects, and each object plays some specific roles. After identifying objects, the various relationships among the objects must be identified. This paper presents a literature review of relationships among objects. The relationships are mainly of three basic types: generalization/specialization, aggregation, and association. This paper presents five taxonomies for properties of the relationships. The first taxonomy is based on a temporal view, the second on structure, and the third on behavior. The fourth taxonomy takes a mathematical view, and the fifth relates to the interface. Additionally, the properties of the relationships are evaluated in a case study and several recommendations are proposed.
An introduction to compositional models in distributional semantics (Andre Freitas)
The document provides an overview of compositional distributional semantic models, which aim to develop principled and effective semantic models for real-world language use. It discusses using large corpora to extract distributional representations of word meanings and developing compositional models that combine these representations according to syntactic structure. Both additive and multiplicative mixture models as well as function-based models are described. Challenges including lack of training data and computational complexity are also outlined.
ConNeKTion: A Tool for Exploiting Conceptual Graphs Automatically Learned fro... (University of Bari, Italy)
Studying, understanding, and exploiting the content of a digital library, and extracting useful information from it, require automatic techniques that can effectively support the users. To this end, a relevant role can be played by concept taxonomies. Unfortunately, the availability of such resources is limited, and their manual building and maintenance are costly and error-prone. This work presents ConNeKTion, a tool for conceptual graph learning and exploitation. It allows conceptual graphs to be learned from plain text and enriched by finding concept generalizations. The resulting graph can be used for several purposes: finding relationships between concepts (if any), filtering concepts from a particular perspective, keyword extraction, and information retrieval. A suitable control panel is provided so that the user can comfortably carry out these activities.
Towards a Query Rewriting Algorithm Over Proteomics XML Resources (CSCJournals)
Querying and sharing Web proteomics data is not an easy task. Given that several data sources can be used to answer the same sub-goals in the global query, there can obviously be many candidate rewritings. The user query is formulated using concepts and properties related to proteomics research (a domain ontology). Semantic mappings describe the contents of the underlying sources. In this paper, we propose a characterization of the query rewriting problem that represents the semantic mappings as an associated hypergraph. The generation of candidate rewritings can then be formulated as the discovery of the minimal transversals of a hypergraph. We exploit and adapt algorithms available in hypergraph theory to find all candidate rewritings for a query answering problem. In future work, relevant criteria could help to determine optimal and qualitative rewritings according to user needs and source performance.
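The hypergraph step can be sketched with a brute-force enumeration of minimal transversals (minimal hitting sets). The edges below are a hypothetical example of which sources can answer which sub-goals; real algorithms from hypergraph theory are far more efficient than this illustration:

```python
# Toy sketch: candidate rewritings as minimal transversals of a
# hypergraph. Brute-force enumeration over subsets, smallest first.
from itertools import combinations

# hypothetical hypergraph: each edge lists the sources able to answer
# one sub-goal of the global query
edges = [{"S1", "S2"}, {"S2", "S3"}, {"S1", "S3"}]
vertices = set().union(*edges)

def is_transversal(candidate):
    """A transversal intersects every hyperedge."""
    return all(candidate & e for e in edges)

transversals = []
for r in range(1, len(vertices) + 1):
    for c in combinations(sorted(vertices), r):
        s = set(c)
        # keep s only if it hits every edge and contains no kept smaller set
        if is_transversal(s) and not any(t <= s for t in transversals):
            transversals.append(s)

print(transversals)
```

Because subsets are visited smallest first, any set that survives the subset check is minimal; each surviving set corresponds to one candidate combination of sources for rewriting the query.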
Similar to The Missing Link. A Vocabulary Mapping Effort in Economics NKOS Workshop 2015 (20)
This presentation by Katharine Kemp, Associate Professor at the Faculty of Law & Justice at UNSW Sydney, was made during the discussion “The Intersection between Competition and Data Privacy” held at the 143rd meeting of the OECD Competition Committee on 13 June 2024. More papers and presentations on the topic can be found at oe.cd/ibcdp.
This presentation was uploaded with the author’s consent.
This presentation by OECD, OECD Secretariat, was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
This presentation by Tim Capel, Director of the UK Information Commissioner’s Office Legal Service, was made during the discussion “The Intersection between Competition and Data Privacy” held at the 143rd meeting of the OECD Competition Committee on 13 June 2024. More papers and presentations on the topic can be found at oe.cd/ibcdp.
Gamify it until you make it: Improving Agile Development and Operations with ... (Ben Linders)
So many challenges, so little time. While we’re busy developing software and keeping it operational, we also need to sharpen the saw, but how? Gamification can be a way to look at how you’re doing and find out where to improve. It’s a great way to have everyone involved and get the best out of people.
In this presentation, Ben Linders will show how playing games with the DevOps coaching cards can help to explore your current development and deployment (DevOps) practices and decide as a team what to improve or experiment with.
The games that we play are based on an engagement model. Instead of imposing change, the games enable people to pull in ideas for change and apply those in a way that best suits their collective needs.
By playing games, you can learn from each other. Teams can use games, exercises, and coaching cards to discuss values, principles, and practices, and share their experiences and learnings.
Different game formats can be used to share experiences on DevOps principles and practices and explore how they can be applied effectively. This presentation provides an overview of playing formats and will inspire you to come up with your own formats.
This presentation by Professor Giuseppe Colangelo, Jean Monnet Professor of European Innovation Policy, was made during the discussion “The Intersection between Competition and Data Privacy” held at the 143rd meeting of the OECD Competition Committee on 13 June 2024. More papers and presentations on the topic can be found at oe.cd/ibcdp.
The importance of sustainable and efficient computational practices in artificial intelligence (AI) and deep learning has become increasingly critical. This webinar focuses on the intersection of sustainability and AI, highlighting the significance of energy-efficient deep learning, innovative randomization techniques in neural networks, the potential of reservoir computing, and the cutting-edge realm of neuromorphic computing. This webinar aims to connect theoretical knowledge with practical applications and provide insights into how these innovative approaches can lead to more robust, efficient, and environmentally conscious AI systems.
Webinar Speaker: Prof. Claudio Gallicchio, Assistant Professor, University of Pisa
Claudio Gallicchio is an Assistant Professor at the Department of Computer Science of the University of Pisa, Italy. His research involves merging concepts from Deep Learning, Dynamical Systems, and Randomized Neural Systems, and he has co-authored over 100 scientific publications on the subject. He is the founder of the IEEE CIS Task Force on Reservoir Computing, and the co-founder and chair of the IEEE Task Force on Randomization-based Neural Networks and Learning Systems. He is an associate editor of IEEE Transactions on Neural Networks and Learning Systems (TNNLS).
This presentation by Thibault Schrepel, Associate Professor of Law at Vrije Universiteit Amsterdam University, was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
This presentation by OECD, OECD Secretariat, was made during the discussion “Pro-competitive Industrial Policy” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/pcip.
This presentation by OECD, OECD Secretariat, was made during the discussion “The Intersection between Competition and Data Privacy” held at the 143rd meeting of the OECD Competition Committee on 13 June 2024. More papers and presentations on the topic can be found at oe.cd/ibcdp.
This presentation by Juraj Čorba, Chair of OECD Working Party on Artificial Intelligence Governance (AIGO), was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
Carrer goals.pptx and their importance in real life (artemacademy2)
Career goals serve as a roadmap for individuals, guiding them toward achieving long-term professional aspirations and personal fulfillment. Establishing clear career goals enables professionals to focus their efforts on developing specific skills, gaining relevant experience, and making strategic decisions that align with their desired career trajectory. By setting both short-term and long-term objectives, individuals can systematically track their progress, make necessary adjustments, and stay motivated. Short-term goals often include acquiring new qualifications, mastering particular competencies, or securing a specific role, while long-term goals might encompass reaching executive positions, becoming industry experts, or launching entrepreneurial ventures.
Moreover, having well-defined career goals fosters a sense of purpose and direction, enhancing job satisfaction and overall productivity. It encourages continuous learning and adaptation, as professionals remain attuned to industry trends and evolving job market demands. Career goals also facilitate better time management and resource allocation, as individuals prioritize tasks and opportunities that advance their professional growth. In addition, articulating career goals can aid in networking and mentorship, as it allows individuals to communicate their aspirations clearly to potential mentors, colleagues, and employers, thereby opening doors to valuable guidance and support. Ultimately, career goals are integral to personal and professional development, driving individuals toward sustained success and fulfillment in their chosen fields.
This presentation by Yong Lim, Professor of Economic Law at Seoul National University School of Law, was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
Why Psychological Safety Matters for Software Teams - ACE 2024 - Ben Linders.pdf (Ben Linders)
Psychological safety in teams is important; team members must feel safe and able to communicate and collaborate effectively to deliver value. It’s also necessary to build long-lasting teams since things will happen and relationships will be strained.
But, how safe is a team? How can we determine if there are any factors that make the team unsafe or have an impact on the team’s culture?
In this mini-workshop, we’ll play games for psychological safety and team culture utilizing a deck of coaching cards, The Psychological Safety Cards. We will learn how to use gamification to gain a better understanding of what’s going on in teams. Individuals share what they have learned from working in teams, what has impacted the team’s safety and culture, and what has led to positive change.
Different game formats will be played in groups in parallel. Examples are an ice-breaker to get people talking about psychological safety, a constellation where people take positions about aspects of psychological safety in their team or organization, and collaborative card games where people work together to create an environment that fosters psychological safety.
The Missing Link. A Vocabulary Mapping Effort in Economics NKOS Workshop 2015
1. The ZBW is a member of the Leibniz Association.
The Missing Link – A Vocabulary Mapping Effort in Economics
Andreas Oskar Kempf, Joachim Neubert, Manfred Faden
ZBW – Leibniz Information Centre for Economics – German National Library of Economics
14th European Networked Knowledge Organization Systems (NKOS) Workshop, September 18th, 2015
2. Introduction
Why do we do vocabulary mappings in general?
- Mappings enable an integrated search in a distributed search environment.
- Mappings translate search terms into the vocabulary of the target KOS.
3. Introduction
For what reason did we at ZBW do mappings in the past?
- … to offer an integrated search space for our search portal for economics, EconBiz, e.g. the Integrated Authority File,
- … to link the STW with other vocabularies for the development of semantic web applications.
4. Introduction
What is new about the current mapping effort?
Context: … increasing numbers of publications and decreasing personnel resources. … complementary approaches to conventional subject indexing are needed, i.a. the reuse of user-generated content.
5. Introduction
Current reuse scenario of user-generated content at ZBW, regarding working paper series:
- Verbal subject indexing: inclusion of author keywords into bibliographic records, if available.
- Classificatory subject indexing: inclusion of JEL classes into bibliographic records, if available.
6. Introduction
Future reuse scenario for a JEL – STW (systematic display) mapping effort:
- … building on the fact that economists are usually quite familiar with the JEL classification codes,
- … animating economists to use STW subject headings in order to provide a more fine-grained content description with a standardized vocabulary.
STW – Thesaurus for Economics (subject category system)
JEL – Journal of Economic Literature Classification System
7. Research question
Regarding the use case we have in mind: to what extent is a useful mapping between both KOS possible? Dealing with this question includes, on the one hand, a theoretical reflection on the structure of both KOS; on the other hand, it includes the presentation of a specific iterative semi-automatic mapping approach.
8. Outline
- Introduction
- Knowledge organization systems in economics
- Definition of interoperability and structural models for mapping
- Mapping process
- Empirical examples
- Results
- Conclusion and future outlook
9. JEL Classification System
Institutional background:
It is published by the American Economic Association (AEA), which publishes the American Economic Review and maintains the searchable database EconLit. The AEA Executive Committee regularly reports on changes to JEL classes in the American Economic Review.
10. JEL Classification
Structural characteristics:
It is a precombined classification system with a monohierarchical structure and polydimensional ordering principles.
Scope:
It represents an Anglo-American understanding of economics, mainly focusing on (national) economics [ger.: VWL].
11. STW Thesaurus for Economics
Institutional background:
Developed in cooperation, thanks to a project funded by the Ministry for Economy in the 1990s.
Scope:
It covers all economics-related subject areas and, on a broader level, the most important related subjects (e.g. social sciences).
12. STW Thesaurus for Economics
Structural characteristics:
STW is a polyhierarchical bilingual thesaurus.
Types of relations:
- equivalence relations, including synonyms and quasi-synonyms (UF),
- hierarchical relations, including broader (BT) and narrower terms (NT),
- associative relations, including related terms (RT).
Links to other vocabularies:
Mappings to GND, TheSoz, AGROVOC, (DBpedia).
13. STW subject categories
Structural characteristics:
The STW subject categories (497 in total) constitute a monohierarchical structure with polydimensional – for subthesauri V + B consistently subject-specific – ordering principles for vertical and horizontal subdivision.

Level       Subthesaurus V   Subthesaurus B
1st level           1                1
2nd level          15               10
3rd level          62               38
4th level          43               21
Total             121               70
14. JEL Classification vs. STW Subject Categories

Definition:
- JEL Classification: Class (ISO 25964-2: 3.10, „concept (3.17) or group of similar or related concepts (3.17) (sic!) used as a division or subdivision in a classification scheme (3.12).“)
- STW Subject Categories: Concept group (ISO 25964-2: 3.18, „group of concepts selected by some specified criterion…“)

Scope:
- JEL Classification: Domain-specific (USA, UK)
- STW Subject Categories: Domain-specific (GER > international). Here: restriction to the subthesauri V: Economics and B: Business economics.

Purpose:
- JEL Classification: All-embracing systematization of a discipline.
- STW Subject Categories: Systematization of the thesaurus vocabulary.

Structural characteristics:
- JEL Classification: precombined classification, monohierarchical, polydimensional ordering principles.
- STW Subject Categories: monohierarchical, polydimensional ordering principles.

Because of the structural heterogeneity between the two vocabularies, mapping relations are for the most part not expected to be relations of full equivalence. Rather, they are presumed to often consist of inexact equivalence relations.
15. Outline
- Introduction
- Knowledge organization systems in economics
- Definition of interoperability and structural models for mapping
- Mapping process
- Empirical examples
- Results
- Conclusion and future outlook
16. Definition of interoperability
ISO 25964: Thesauri and interoperability with other vocabularies. Developed by an international working group (2008-2013).
- Part 1: Thesauri for information retrieval (published 2011). Contains guidelines for establishing monolingual and multilingual thesauri.
- Part 2: Interoperability with other vocabularies (published 2013). Deals with mappings between thesauri and other types of vocabularies for information retrieval.
17. Definition of interoperability
From ISO 25964-2:2013(E):
3.38 interoperability: ability of two or more systems or components to exchange information and to use the information that has been exchanged.
NOTE: Vocabularies can support interoperability by including mappings to other vocabularies, by presenting data in standard formats and by using systems that support common computer protocols.
3.40 mapping, gerund (verbal noun): process of establishing relationships between the concepts (3.17) in one vocabulary and those of another.
3.41 mapping, noun: (product of mapping process) relationships between a concept (3.17) in one vocabulary and one or more concepts (3.17) in another.
18. Two different types of vocabularies
Structural unity: The mapped vocabularies have the same structure. The equivalence of the concepts of such vocabularies is expressed by their identical structural position in the vocabulary. All the relationships of the concepts correspond to each other (e.g. multilingual thesauri of public institutions).
Structural disunity: The mapped vocabularies do not have the same structure. Equivalence of concepts has nothing to do with their position in the vocabularies. The mapping process produces either exact equivalence pairs or inexact equivalence pairs.
Different types of equivalences:
- (Real) exact equivalence: =EQ
- Inexact equivalence: ~EQ (e.g. vocabularies that have emerged from different cultural communities)
- Partial equivalence: the concept is broader: BM („Broader Mapping“); the concept is narrower: NM („Narrower Mapping“)
- The concepts are somehow related: RM („Related Mapping“)
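These equivalence types line up naturally with the SKOS mapping properties, which the deck later uses to build and manage the mapping. A minimal sketch; the correspondence shown is a common convention rather than something taken from the slides:

```python
# Sketch: ISO 25964-2 equivalence types rendered as SKOS mapping
# properties (a conventional correspondence, not the project's code).
ISO_TO_SKOS = {
    "=EQ": "skos:exactMatch",    # (real) exact equivalence
    "~EQ": "skos:closeMatch",    # inexact equivalence
    "BM":  "skos:broadMatch",    # target concept is broader
    "NM":  "skos:narrowMatch",   # target concept is narrower
    "RM":  "skos:relatedMatch",  # concepts are somehow related
}

def as_skos_triple(source, relation, target):
    """Render one mapping as a SKOS statement in Turtle-like notation."""
    return f"{source} {ISO_TO_SKOS[relation]} {target} ."

# hypothetical identifiers, for illustration only
print(as_skos_triple("stw:V.02", "~EQ", "jel:D00"))
```

Keeping the mapping in this form lets structurally different vocabularies be linked without forcing full equivalence, which the previous slide argued is the expected case here.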
19. Structural models for mapping
Three different structural models for mapping across vocabularies (ISO 25964-2:2013(E)):
Model 1: Structural unity (6.2): „All the participating vocabularies share exactly the same structure of hierarchical and associative relationships between concepts…“ (e.g. multilingual thesauri).
Model 2: Direct-linked (6.3): The direct-linked model addresses linkages between two or more vocabularies that do not share the same structure. As well as differing in scope, language, and structure, the vocabularies may include other types of vocabulary (classification scheme, name authority list, etc.).
20. Structural models for mapping
Model 3: Hub structure (6.4): One vocabulary is designated as the „hub“, a comprehensive structure to which each of the other vocabularies is mapped as a „satellite“. The concepts of the different vocabularies are mapped only to the concepts of the one vocabulary which has the role of the hub. This model is appropriate if there is one vocabulary with a dominating position. (ISO 25964-2:2013(E))
Model 4: Selective mapping (6.5): In cases where only a small overlap is expected, it can be unnecessary to map the vocabularies comprehensively; vocabulary X and vocabulary Y are then mapped selectively in their area of overlap. In real applications, combinations of these types often occur and the boundaries may be blurred (see ibid., p. 20).
21. Outline
- Introduction
- Knowledge organization systems in economics
- Definition of interoperability and structural models for mapping
- Mapping process
- Empirical examples
- Results
- Conclusion and future outlook
22. Mapping process
Previous work:
Outdated mapping JEL > STW (descriptor level), from the KoMoHe project context (2004-2007). Mapping relations:
- equivalence relations (=)
- broader/narrower relations (>/<)
- associative relations (^)
- compound mappings (+)
- including a relevance rating (high, medium, low)
Outdated concordance STW (classification system) > JEL: on the third level of JEL classes, no specified mapping relations.

STW classification system      STW subject categories   JEL classes
V02-000 Microeconomics         V.02, V.02.05            B21, D00, …
V02-010 Household economics    V.02.01                  D10, D11, …
23. Mapping process
page 23
What is new?
- …mapping on the level of the STW subject category system
(instead of the STW classification level),
- …using a web-based interactive mapping platform,
- …using the SKOS vocabulary to build and manage the mapping
(note: this goes along with the assumption that both vocabularies can
be mapped bilaterally),
- …following an iterative mapping process of a first and a second
iteration, with an approach of vocabulary enrichment: additional
keywords (JEL) and subject headings from STW, together with
equivalent concept relations from past vocabulary mappings.
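Managing the mapping in SKOS essentially means emitting mapping triples; a minimal sketch (the URIs below are made-up placeholders, not the actual STW or JEL identifiers):

```python
# Emit a single SKOS mapping statement in Turtle syntax.
# The URIs used below are illustrative placeholders, not real identifiers.
def skos_triple(subject_uri: str, relation: str, object_uri: str) -> str:
    return f"<{subject_uri}> {relation} <{object_uri}> ."

line = skos_triple("http://example.org/stw/sc/V.02.01",
                   "skos:exactMatch",
                   "http://example.org/jel/D10")
print(line)
```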
24. - Introduction
- Knowledge organization systems in economics
- Definition of interoperability and structural models for mapping
- Mapping process
- Empirical examples
- Results
- Conclusion and future outlook
Outline
25. Empirical examples
Selection of STW subject categories:
STW subthesaurus V – Economics:
- V.02 – Microeconomics (1 subject category)
- V.02.01 – V.02.05 (5 s.c.)
- V.15 – Economic history (1 s.c., no subcategories)
STW subthesaurus B – Business economics:
- B.07 – Marketing (1 s.c.)
- B.07.01 – B.07.06 (6 s.c.)
- B.09 – Business information systems (1 s.c.)
- B.09.01 – B.09.03 (3 s.c.)
26. Empirical examples
Mapping procedure:
- Use of the interactive alignment server AMALGAME
(AMsterdam ALignment GenerAtion MEtatool)
- Upload of the STW (v 9.0) in SKOS:
http://zbw.eu/stw/versions/latest/download/about.de.html
- Upload of the JEL classification in SKOS:
http://zbw.eu/beta/external_identifiers/jel/about.en.html
- Exact language-dependent string match of STW subject categories
and JEL classes.
[Figure: AMALGAME mapping graph of the first run]
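The exact, language-dependent string match can be pictured as comparing (language, label) pairs; a toy sketch with invented data, not AMALGAME's actual implementation:

```python
# Toy exact string matcher: a candidate pair is produced only when two
# concepts share a normalized label in the same language.
def exact_match(stw_labels, jel_labels):
    """Both arguments: dict mapping (lang, label) -> concept id."""
    return [(stw_id, jel_labels[key])
            for key, stw_id in stw_labels.items()
            if key in jel_labels]

stw = {("en", "household economics"): "V.02.01",
       ("de", "haushaltsökonomik"): "V.02.01"}
jel = {("en", "household economics"): "D1"}
print(exact_match(stw, jel))  # → [('V.02.01', 'D1')]
```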
27. Empirical examples
Second run:
(Same selection of subject categories.)
Enrichment of STW subject categories and JEL classes:
- STW subject categories enriched by:
  - STW descriptors + synonyms
  - mapped (exactMatch) concepts from other vocabularies –
    descriptors + synonyms (GND, TheSoz, DBpedia, AGROVOC)
- JEL classes enriched by:
  - JEL keywords scraped from the JEL guide,
    German + English (if available)
[Figure: AMALGAME mapping graph of the second run]
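The enrichment step amounts to widening each concept's label set before matching, so the string matcher has more material to hit; a toy sketch (the labels are invented for illustration):

```python
# Toy enrichment: a concept's own labels are extended with descriptors,
# synonyms, and labels of exactMatch concepts from other vocabularies.
def enrich(own_labels, *extra_label_sets):
    enriched = set(own_labels)
    for extra in extra_label_sets:
        enriched.update(extra)
    return enriched

category = {"household economics"}
descriptors = {"household", "consumer behaviour"}  # invented examples
mapped_synonyms = {"haushaltsökonomik"}            # e.g. via a GND exactMatch
print(sorted(enrich(category, descriptors, mapped_synonyms)))
```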
29. - Introduction
- Knowledge organization systems in economics
- Definition of interoperability and structural models for mapping
- Mapping process
- Empirical examples
- Results
- Conclusion and future outlook
Outline
30. Results per subject category — columns:
Intell. eval. | 1st run: (=)/(+) candidates, certain overlap (~), wrong (-) | 2nd run: (=)/(+) candidates, certain overlap (~), wrong (-)
V.02 Microeconomics 5 1 (13) 1 3 7
V.02.01 Household economics 5 3 (21) 6 11
V.02.02 Theory of the firm 4 2 (10) 1 7
V.02.03 Welfare economics 6 3 (10) 2 5
V.02.04 Economics of information 4 2 (4) 2
V.02.05 Economy of time 7 1 (9) 8
V.15 Economic history 74 9 (15) 4 2
B.07 Marketing 3 1 (5) 4 2 (20) 1 1
B.07.01 Marketing management 6 (-)
B.07.02 Product Management 2 1 (15) 5 9
B.07.03 Pricing strategy 1 - (11) 2 9
B.07.04 Marketing communications 2 1 (2) 1
B.07.05 Distribution 1 - (12) 2 9
B.07.06 Market research 3 - (2) 1 1
B.09 Business information systems 1 - (5) 5
B.09.01 Information system components 1 - (4) 1 3
B.09.02 IS development and management 1 1 (6) 5
B.09.03 Corporate information systems 1 - (4) 4
31. - Introduction
- Knowledge organization systems in economics
- Definition of interoperability and structural models for mapping
- Mapping process
- Empirical examples
- Results
- Conclusion and future outlook
Outline
32. Conclusion and future outlook
- String matching can only generate mapping candidates; it is blind to
structural differences.
- The approach of vocabulary enrichment, including JEL keywords, STW
descriptors, synonyms, translations, and equivalent terms and their
synonyms from past vocabulary mappings, led to a substantial increase of
mapping candidates that are also included in the intellectual mapping.
Note: a new use case for already established vocabulary alignments.
- Vocabulary enrichment has also produced new mapping candidates that make
it worth revising the current intellectual mapping.
- Optional mapping procedure in the future:
the STW as access vocabulary to JEL classes.
33. Thank you for your attention!
Contact:
Dr. Andreas Oskar Kempf – a.kempf@zbw.eu
Joachim Neubert – j.neubert@zbw.eu
Manfred Faden – m.faden@zbw.eu
http://zbw.eu/stw
https://www.aeaweb.org/econlit/jelCodes.php
Unofficial multilingual/LOD version:
http://zbw.eu/beta/external_identifiers/jel
http://semanticweb.cs.vu.nl/amalgame/
34. References
BERTRAM, J. (2005) Einführung in die inhaltliche Erschließung. Ergon Verlag.
GÖDERT, W. (2010) Semantische Wissensrepräsentation und Interoperabilität.
In: Information – Wissenschaft & Praxis, p. 5-28.
HUBRICH, J. (2008) CRISSCROSS: SWD-DDC-Mapping. In: Mitteilungen der
VÖB 61, Nr. 3, p. 50-58.
ISO 25964 Thesauri and interoperability with other vocabularies. Part 1 (2011)
+ Part 2 (2013).
MAYR, P./PETRAS, V. (2007) Building a terminology network for search: the
KoMoHe project. Proceedings of the International Conference on Dublin Core
and Metadata Applications.
SCHEVEN, E. (2013) ISO 25964 and GND, LIS workshop 2013, Luxembourg.
http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/2683976