Bruce Durling introduces himself and his work helping various Clojure and technology communities in London. He explains how the Clojure library Incanter can be used for data science and provides examples of importing and manipulating data and creating plots. Durling also discusses future plans for Incanter and invites attendees to contribute to its development at monthly sprints.
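Incanter itself is a Clojure library, but the load-summarize-plot workflow the talk demonstrates can be sketched in Python with only the standard library. Everything here (the CSV data, the column names) is a hypothetical stand-in, not material from the talk:

```python
import csv
import io
import statistics

# Hypothetical CSV data standing in for a dataset one might load in Incanter
RAW = """year,passengers
1949,112
1950,145
1951,170
"""

def load_column(text, column):
    """Parse CSV text and return one numeric column as a list of floats."""
    rows = csv.DictReader(io.StringIO(text))
    return [float(r[column]) for r in rows]

values = load_column(RAW, "passengers")
print(statistics.mean(values))    # average of the column
print(max(values) - min(values))  # range, a simple summary statistic
```

In Incanter the same steps would be `read-dataset`, `$` for column selection, and `xy-plot` for charting; the point is only the shape of the workflow.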
Designing a GUI Description Language with Topic Maps (tmra)
The document proposes a GUI Description Language (GDL) that uses Topic Maps to generate configurable and domain-specific user interfaces. GDL aims to simplify Topic Maps for end users by defining default values, restricting actions, and automatically generating identifiers and layouts corresponding to the semantic meaning of the data domain. However, GDL also inserts an additional layer of processing between the user and the Topic Map engine. The document discusses the goals and features of GDL, and concludes that GDL can bridge users and Topic Map internals without limiting the ontology, while allowing customizable but not hard-coded user interfaces.
The Live Integration Framework aims to provide a unified view of information stored across heterogeneous data stores by using topic maps to semantically merge the data sources. It allows read-only access to integrated data without modifying the original systems. The framework uses a mapping file to define how data from different sources like relational databases are translated into topic map constructs. It is currently implemented for MySQL integration but the architecture supports integrating other data stores and technologies in the future.
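The mapping-file idea can be illustrated with a minimal sketch: a declarative mapping says which relational columns become a topic's identifier, names, and occurrences. The column names, URN scheme, and dictionary shapes below are illustrative assumptions, not the framework's actual format:

```python
# Hypothetical mapping: which columns become the topic's identifier,
# names, and occurrences (all names here are illustrative assumptions).
MAPPING = {
    "table": "person",
    "id_column": "id",
    "name_column": "full_name",
    "occurrence_columns": ["email", "birthdate"],
}

def row_to_topic(row, mapping):
    """Translate one relational row into a topic-map-like construct."""
    return {
        "subject_identifier": f"urn:{mapping['table']}:{row[mapping['id_column']]}",
        "names": [row[mapping["name_column"]]],
        "occurrences": {c: row[c] for c in mapping["occurrence_columns"]},
    }

row = {"id": 7, "full_name": "Ada Lovelace",
       "email": "ada@example.org", "birthdate": "1815-12-10"}
topic = row_to_topic(row, MAPPING)
```

Because the translation is read-only and driven entirely by the mapping, the original database never needs to change, which matches the framework's stated design.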
Evaluation of Instances Asset in a Topic Maps-Based Ontology (tmra)
The document discusses evaluating the information asset of topics in a topic maps ontology. It describes assigning partial weights to topics based on attribute richness and total weights based on surrounding topic descriptions. The user can set attribute weights and weights for three categories of associations. Normalizing total topic weights results in information asset values that can be used to rank search results based on usefulness to the user.
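The weighting-and-normalization scheme can be sketched concretely. The attribute names, association categories, and numeric weights below are invented for illustration; only the mechanism (user-set weights, per-topic sums, normalization to ranking scores) follows the abstract:

```python
def topic_weight(topic, attr_weights, assoc_weights):
    """Partial weight from attribute richness plus weighted association counts."""
    w = sum(attr_weights.get(a, 0.0) for a in topic["attributes"])
    for category, count in topic["associations"].items():
        w += assoc_weights.get(category, 0.0) * count
    return w

def normalize(weights):
    """Scale total weights so they sum to 1, giving information asset values."""
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}

# Hypothetical user-set weights and topics
attr_weights = {"name": 1.0, "description": 2.0}
assoc_weights = {"hierarchical": 1.0, "associative": 0.5, "instance": 0.25}
topics = {
    "t1": {"attributes": ["name", "description"], "associations": {"hierarchical": 2}},
    "t2": {"attributes": ["name"], "associations": {"associative": 2}},
}
weights = {t: topic_weight(d, attr_weights, assoc_weights) for t, d in topics.items()}
asset = normalize(weights)  # values sum to 1 and can rank search results
```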
With the ongoing development of TMCL, a standardized schema language for Topic Maps, it is necessary to develop tools for creating Topic Maps schemas. One approach could be the development of a comfortable text editor which provides syntax highlighting and auto-completion. Another approach would be a visual editor providing a diagram view and input masks for editing Topic Maps schemas, which is the topic of this paper.
The Effects of Topic Map Components on Serendipitous Information Retrieval (tmra)
The document discusses a study on how topic maps can facilitate serendipitous discovery of information. Ten participants searched for information using a topic map system and their experiences were analyzed. The analysis found that the association types within the topic map, which displayed related information, proved most effective at enabling participants to serendipitously discover relevant information. It also allowed them to clarify their information needs and refine their searches. However, improvements could be made to better semantically organize the information and relationships displayed in the topic map.
This document discusses potential mottos for the TMRA 2010 conference. It lists mottos from previous years' conferences and then suggests 14 possible mottos for 2010, including "Mashing", "Web 3.0", "Visible knowledge networks", and "Information wants to be a topic map". It also provides contact information for Dr. Lutz Maicher of the Topic Maps Lab at the University of Leipzig who is chairing the discussion on selecting a motto.
This document summarizes a presentation given by Dr. Barry Norton on knowledge graphs for data fusion. Some key points discussed include:
- Knowledge graphs can integrate data from various sources like video analytics, access control, sensors and background information to analyze related events.
- Milestone's video management software has the capability to recognize individuals across camera streams and correlate suspicious access control events with later cybersecurity incidents using a knowledge graph approach.
- The presentation discusses the history and applications of knowledge graphs, highlighting how they can provide benefits for security, transportation and other use cases when combined with video and sensor data from an Internet of Things environment.
The Web of Data: do we actually understand what we built? (Frank van Harmelen)
Despite its obvious success (largest knowledge base ever built, used in practice by companies and governments alike), we actually understand very little of the structure of the Web of Data. Its formal meaning is specified in logic, but with its scale, context dependency and dynamics, the Web of Data has outgrown its traditional model-theoretic semantics.
Is the meaning of a logical statement (an edge in the graph) dependent on the cluster ("context") in which it appears? Does a more densely connected concept (node) contain more information? Is the path length between two nodes related to their semantic distance?
Properties such as clustering, connectivity and path length are not described, much less explained by model-theoretic semantics. Do such properties contribute to the meaning of a knowledge graph?
To properly understand the structure and meaning of knowledge graphs, we should no longer treat knowledge graphs as (only) a set of logical statements, but treat them properly as a graph. But how to do this is far from clear.
In this talk, I report on some of our early results on some of these questions, but I ask many more questions for which we don't have answers yet.
The document describes Dedalo, a system that automatically explains clusters of data by traversing linked data to find explanations. It evaluates different heuristics for guiding the traversal, finding that entropy and conditional entropy outperform other measures by reducing redundancy and search time. Experiments on authorship clusters, publication clusters, and library book borrowings demonstrate Dedalo's ability to discover explanatory linked data patterns within a limited domain. Future work includes extending Dedalo to handle more complex datasets by addressing issues such as sameAs linking and use of literals.
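The entropy heuristics the Dedalo evaluation compares can be sketched in a few lines: a candidate explanation (a linked data property) scores well when knowing whether an item has the property removes uncertainty about its cluster membership, i.e. when the conditional entropy H(cluster | property) is low. The toy items and properties below are invented for illustration:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def conditional_entropy(items, in_cluster, has_property):
    """H(cluster | property): lower means the property better explains the cluster."""
    items = list(items)
    h = 0.0
    for value in (True, False):
        group = [i for i in items if has_property(i) == value]
        if not group:
            continue
        p_group = len(group) / len(items)
        p_in = sum(in_cluster(i) for i in group) / len(group)
        h += p_group * entropy([p_in, 1 - p_in])
    return h

items = list(range(4))

def in_cluster(i):   # the cluster we want to explain
    return i < 2

def good_prop(i):    # candidate property that matches the cluster exactly
    return i < 2

def bad_prop(i):     # candidate property unrelated to the cluster
    return i % 2 == 0
```

A perfect explanation gives conditional entropy 0; an uninformative one leaves the full bit of uncertainty, so ranking candidates by ascending conditional entropy prunes redundant paths early.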
Convolutional Neural Networks and Natural Language Processing (Thomas Delteil)
Presentation on Convolutional Neural Networks and their application to Natural Language Processing. In-depth walk-through of the Crepe architecture from Xiang Zhang, Junbo Zhao, and Yann LeCun, "Character-level Convolutional Networks for Text Classification," Advances in Neural Information Processing Systems 28 (NIPS 2015).
Loosely based on ODSC London 2016 talk: https://www.slideshare.net/MiguelFierro1/deep-learning-for-nlp-67182819
Code: https://github.com/ThomasDelteil/TextClassificationCNNs_MXNet
Demo: https://thomasdelteil.github.io/TextClassificationCNNs_MXNet/
(flattened pdf, no animation, email author for .pptx)
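The input encoding that character-level CNNs like Crepe use (one-hot "quantization" of each character against a fixed alphabet) can be sketched as follows. This is a simplified illustration: the actual Crepe model uses a roughly 70-character alphabet including punctuation, a sequence length of 1014, and feeds the text reversed, none of which is reproduced here:

```python
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"  # simplified; Crepe's is larger
CHAR_INDEX = {c: i for i, c in enumerate(ALPHABET)}

def quantize(text, length=16):
    """One-hot encode text into a length x |alphabet| matrix, as in
    character-level CNNs. Out-of-alphabet characters become all-zero rows;
    text is truncated or padded to `length`."""
    matrix = []
    for ch in text.lower()[:length].ljust(length):
        row = [0] * len(ALPHABET)
        if ch in CHAR_INDEX:
            row[CHAR_INDEX[ch]] = 1
        matrix.append(row)
    return matrix

m = quantize("NIPS 2015")
```

The resulting matrix is what the first 1-D convolutional layer slides over, treating characters the way an image CNN treats pixels.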
This document is a 12-page report summarizing a final project that uses text mining algorithms to analyze documents about the cultural impact of historic Chicago high-rise buildings. It describes collecting data from JSTOR, preprocessing the data using named entity recognition to identify people, organizations, locations, and other entities. Network graphs were created connecting entities that co-occur in sentences, and power iteration was used to determine important entities. Results were structured, plotted, and compared to bag-of-words analysis to evaluate cultural trends over time.
The document describes an approach called Odyssey for optimizing federated SPARQL queries. It involves computing concise statistics about links between triple patterns, called characteristic sets (CS), at a single location. These CSs capture joins and are connected to each other through characteristic pairs (CP). The approach uses these statistics to efficiently optimize query execution plans through dynamic programming. This leads to significant improvements in optimization and execution times compared to existing federated query optimization techniques.
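The core statistic, a characteristic set, is simply the set of predicates a subject appears with; subjects sharing a characteristic set can be counted together as a concise cardinality statistic. A minimal sketch with invented triples (this is the statistic itself, not Odyssey's full optimizer):

```python
def characteristic_sets(triples):
    """Group subjects by their characteristic set: the set of predicates they use."""
    preds = {}
    for s, p, o in triples:
        preds.setdefault(s, set()).add(p)
    cs = {}
    for s, ps in preds.items():
        cs.setdefault(frozenset(ps), []).append(s)
    return cs

# Hypothetical triples
triples = [
    ("alice", "name", "Alice"), ("alice", "knows", "bob"),
    ("bob", "name", "Bob"), ("bob", "knows", "alice"),
    ("book1", "title", "Odyssey"),
]
cs = characteristic_sets(triples)
# len(cs[some_set]) is the subject count the optimizer uses to estimate
# how many results a star-shaped join over those predicates can produce.
```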
Interpretation, Context, and Metadata: Examples from Open Context (Eric Kansa)
Presentation given at the International Data Curation Conference (#IDCC16) in Amsterdam, at the "A Context-driven Approach to Data Curation for Reuse" workshop (organized by Ixchel Faniel and Elizabeth Yakel) on Monday, February 22, 2016
An Incomplete Introduction to Artificial Intelligence (Steven Beeckman)
This is the releasable version of an internal presentation on artificial intelligence. It includes a brief history of AI, a mathematical approach to deep learning and an overview of some use-cases of deep learning.
Spellcheck: "General Adversarial Networks" are actually called "Generative Adversarial Networks".
Babar: Knowledge Recognition, Extraction and Representation (Pierre de Lacaze)
Babar is a research project in the field of Artificial Intelligence. It aims to bridge together Neural AI and Symbolic AI. As such it is implemented in three different programming languages: Clojure, Python and CLOS.
The Clojure component (Clobar) implements the graphical user interface to Babar. Examples of the Clojure Hiccup library and of interfacing Clojure to JavaScript will be presented. The Python module (Pybar) implements the web crawling and scraping and the Neural Network aspects of Babar. The Word Embedding and LSTM (Long Short-Term Memory) components of Pybar will be described in detail. Finally, the Common Lisp module (Lispbar) implements the Symbolic AI aspect of Babar. The latter includes an English Language Parser and Semantic Networks implemented as an in-memory Hypergraph.
We will present each of these components and target individual aspects with code examples. Specifically we will first present the web developments and Neural Networks components. Then the English Language parser will be examined in detail. We will also present the knowledge extraction aspect and bridge this with the Neural Network component.
Ultimately we will argue that what can be termed "Neural AI" and "Symbolic AI" are not at odds with each other but rather complement each other. In summary, Artificial Intelligence is not a question of "brain" or "mind", but rather a question of "brain" and "mind".
Topic Maps for improved access to and use of content in relational databases ... (tmra)
The document describes a case study using topic maps to improve access to content from a relational database of German variety lists. A topic maps-based web application was built on top of the relational data to offer subject-centric access through networked knowledge models, providing many access paths and perspectives not possible in the original data-centric interface. This increased the usability and answerability of questions over the restricted views of the original relational database interface.
In order to cope with large-scale topic maps that store a lot of information, it is necessary to utilize topic map databases. Although database management systems should provide users with external schema functions such as views, topic map databases lack such functions. In this paper, we propose a method of implementing a view function, based on the observation that a substructure of a topic map can itself be regarded as a topic map. To realize the idea, we developed an access control system based on the view function. Through an experiment measuring execution time, we confirmed that these functions work correctly and have little effect on the execution time.
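The key observation, that a substructure of a topic map is itself a topic map, is what makes views (and access control built on them) possible. A minimal sketch with an invented opera-themed map and an invented dictionary representation:

```python
def make_view(topic_map, allowed_types):
    """Extract the substructure visible to a view: topics of allowed types,
    plus associations whose players all remain visible. The result has the
    same shape as the input, i.e. it is itself a topic map."""
    topics = {t: d for t, d in topic_map["topics"].items() if d["type"] in allowed_types}
    assocs = [a for a in topic_map["associations"]
              if all(p in topics for p in a["players"])]
    return {"topics": topics, "associations": assocs}

# Hypothetical topic map with one topic a restricted user must not see
tm = {
    "topics": {
        "puccini": {"type": "composer"},
        "tosca": {"type": "opera"},
        "secret": {"type": "internal-note"},
    },
    "associations": [
        {"type": "composed", "players": ["puccini", "tosca"]},
        {"type": "annotates", "players": ["secret", "tosca"]},
    ],
}
view = make_view(tm, {"composer", "opera"})
```

Access control then reduces to handing each user the view (sub-map) their role permits, while queries run unchanged against the smaller map.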
1) A case study describes a Topic Maps-based web application that was built on top of a document-centric content management system (CMS) used for a website about a regional cluster of biotech companies.
2) The Topic Maps application improved usability by enabling subject-centric views of information rather than isolating related pieces of information across many documents. It allowed multiple access paths to information through different perspectives and views generated from the underlying topic map graph.
3) The Topic Maps application provided concise, one-click access to information about companies located in particular areas, active in specific fields, or related to other companies or projects, improving on the usability of isolating this information across many pages in the original CMS.
Subject Headings make information to be topic maps (tmra)
This paper reports efforts to build topic maps from Subject Headings (SHs) and discusses their practical use for organizing information and knowledge. SHs are often maintained by libraries and used in bibliographic records. SHs are thesauri and are well organized. Fortunately, some SHs are published on the Web. We transformed them into topic maps. Usually each subject in an SH has its own ID, which can play the role of a PSI. By keeping the relationships included in SHs, such as Broader-Narrower, Related, and USE-UF, in topic maps, information and knowledge can be linked together and organized according to the structure of the SHs. In other words, by using SHs, information and knowledge can easily be turned into topic maps.
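The transformation the paper describes can be sketched in a few lines: each heading becomes a topic whose ID serves as a PSI, and the BT/NT and RT relationships become associations. The PSI base URL, record format, and labels below are invented for illustration:

```python
def headings_to_topic_map(headings):
    """Turn subject-heading records into topics (heading IDs as PSIs) and
    associations for the Broader-Narrower and Related links they carry."""
    topics, assocs = {}, []
    for h in headings:
        # Hypothetical PSI base; real SHs published on the Web have their own URIs
        topics[h["id"]] = {"psi": f"http://example.org/sh/{h['id']}",
                           "name": h["label"]}
        for broader in h.get("broader", []):
            assocs.append(("broader-narrower", broader, h["id"]))
        for related in h.get("related", []):
            assocs.append(("related", h["id"], related))
    return topics, assocs

headings = [
    {"id": "sh1", "label": "Music"},
    {"id": "sh2", "label": "Opera", "broader": ["sh1"], "related": ["sh3"]},
    {"id": "sh3", "label": "Theatre"},
]
topics, assocs = headings_to_topic_map(headings)
```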
Inquiry Optimization Technique for a Topic Map Database (tmra)
This document proposes an inquiry optimization technique for topic map databases. It discusses using an object-oriented data model for topic map databases to improve query performance compared to a relational model. The document defines cost estimation formulas to help the database system select the optimal retrieval route, either following associations or searching by topic, when answering queries. An experiment is needed to evaluate the effectiveness of using these cost estimations to optimize queries of a topic map database.
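The route-selection idea can be illustrated with toy cost formulas. These formulas are invented stand-ins for the paper's actual cost estimations, which are not reproduced in the abstract; only the decision structure (estimate both routes, take the cheaper) follows the document:

```python
def cost_follow_associations(path_length, avg_fanout, access_cost=1.0):
    """Illustrative cost of navigating associations: objects touched along
    a path whose branching factor is avg_fanout (a geometric sum)."""
    return access_cost * sum(avg_fanout ** i for i in range(1, path_length + 1))

def cost_search_by_topic(n_topics, selectivity, access_cost=1.0):
    """Illustrative cost of searching topics directly: a scan scaled by
    the fraction of topics the predicate selects."""
    return access_cost * n_topics * selectivity

def choose_route(path_length, avg_fanout, n_topics, selectivity):
    """Pick the cheaper retrieval route, as the optimizer would."""
    a = cost_follow_associations(path_length, avg_fanout)
    b = cost_search_by_topic(n_topics, selectivity)
    return "associations" if a <= b else "topic-search"
```

Short, narrow association paths beat a scan over many topics; long, bushy paths lose to a selective topic search.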
Topic Merge Scenarios for Knowledge Federation (tmra)
This paper introduces a socio-technical infrastructure, described as a boundary infrastructure, based on improvements to existing and emerging Issue-based Information Systems (IBIS) conversation platforms.
1. The document discusses using the tmjs Topic Maps engine, written in JavaScript, for server-side applications like a PSI server.
2. Tmjs allows full Topic Maps processing in JavaScript and can operate on servers via Node.js.
3. A sample PSI server application is shown that uses tmjs and Node.js to serve Topic Map-based information about subjects from an HTTP request.
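tmjs itself is JavaScript running under Node.js; purely to illustrate the shape of such a PSI server, here is a hedged Python equivalent using the standard library. The PSI URIs, topic data, query-parameter name, and JSON response format are all invented:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hypothetical in-memory topic map keyed by PSI
TOPIC_MAP = {
    "http://psi.example.org/puccini": {"name": "Giacomo Puccini", "type": "composer"},
}

def resolve_psi(psi):
    """Look up a subject by its PSI, as a PSI server would."""
    return TOPIC_MAP.get(psi)

class PSIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /?psi=http://psi.example.org/puccini
        psi = parse_qs(urlparse(self.path).query).get("psi", [""])[0]
        topic = resolve_psi(psi)
        self.send_response(200 if topic else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(topic or {"error": "unknown subject"}).encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), PSIHandler).serve_forever()
```

The tmjs version would do the same resolution against a full Topic Maps engine rather than a plain dictionary.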
This document discusses modeling QTI (IMS Question and Test Interoperability) assessments in topic maps. QTI is used to share assessment content between systems but has changing specifications that are challenging to support. Embedding QTI questions and responses as topics within a topic map allows the content to be richer than QTI and supports generating QTI output. An example shows embedding gaps and sounds within a fill-in-the-blank question topic. Authoring tools can generically edit embedded topics. This technique is useful for other content like images, links, and videos. In conclusion, embedding topics solved their needs and is used extensively in their production systems.
The document discusses Hatana, a virtual merging engine that creates a unified view of information from multiple data sources by merging them on demand according to Topic Map standards. Hatana behaves like a topic map layer over the underlying sources, merging topics, associations, and other constructs virtually based on equality rules while maintaining the original sources. This allows related information to be queried and browsed together seamlessly.
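Virtual merging on equality rules can be sketched minimally: topics from different sources that share a subject identifier are presented as one merged topic, while the sources themselves are left untouched. The source data and dictionary shapes below are invented; real Topic Maps merging also unifies occurrences, associations, and other constructs:

```python
def virtual_merge(sources):
    """Present topics from several sources as one map: topics sharing a
    subject identifier merge; the underlying sources are not modified."""
    merged = {}
    for source in sources:
        for topic in source:
            key = topic["subject_identifier"]
            slot = merged.setdefault(key, {"subject_identifier": key, "names": set()})
            slot["names"] |= set(topic["names"])
    return merged

# Two hypothetical sources describing the same subject
source_a = [{"subject_identifier": "psi:puccini", "names": ["Giacomo Puccini"]}]
source_b = [{"subject_identifier": "psi:puccini", "names": ["Puccini, Giacomo"]}]
merged = virtual_merge([source_a, source_b])
```

Because the merge is computed on demand rather than materialized, queries see one seamless map while each source remains authoritative for its own data.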
Maiana is a platform for structured data developed by Lutz Maicher and Uta Schulze at the University of Leipzig. It allows users to manage, browse, query, and validate topic maps. Maiana is social in that it enables users to discuss resources, observe data sources, and follow other users. Data sources on Maiana can be kept private or shared publicly. The platform also includes an API and semantic search capabilities.
1. The document proposes using the Nintendo Wii Remote as an intuitive interface for interacting with web-based learning content, such as a topic map-based science learning website.
2. Specifically, it describes using the Wii Remote as a pointer for real-world interactions like selecting constellations, and as a navigation device for exploring 3D representations and the structure of the topic map.
3. Motions and buttons on the Wii Remote are mapped to navigating different aspects of the topic map and triggering content from the website in an immersive way, allowing students to intuitively explore related science topics.
Automatic semantic interpretation of unstructured data for knowledge management (tmra)
The document summarizes a demo of an automatic semantic analysis technique for knowledge discovery from unstructured data like Wikipedia articles. The demo shows a linked concept graph and linked data graph created by analyzing astronomy articles. It also discusses how the technique can be used for knowledge representation, discovery, navigation, and intelligence by linking isolated data and deriving a taxonomy. The technical solution takes a bottom-up approach using semantic data integration and analysis to dynamically create and update object and concept graphs in real-time from various data sources.
The document discusses putting Topic Maps to REST. It describes existing Topic Map APIs and their limitations. It then introduces Tropics, a proposed RESTful API for Topic Maps. Tropics would support resources like topics, associations, and search results. It advocates the HATEOAS principle to structure navigation between resources. The document outlines Tropics' proposed URI structure and status of implementation.
This document summarizes a presentation given by Dr. Barry Norton on knowledge graphs for data fusion. Some key points discussed include:
- Knowledge graphs can integrate data from various sources like video analytics, access control, sensors and background information to analyze related events.
- Milestone's video management software has the capability to recognize individuals across camera streams and correlate suspicious access control events with later cybersecurity incidents using a knowledge graph approach.
- The presentation discusses the history and applications of knowledge graphs, highlighting how they can provide benefits for security, transportation and other use cases when combined with video and sensor data from an Internet of Things environment.
The Web of Data: do we actually understand what we built?Frank van Harmelen
Despite its obvious success (largest knowledge base ever built, used in practice by companies and governments alike), we actually understand very little of the structure of the Web of Data. Its formal meaning is specified in logic, but with its scale, context dependency and dynamics, the Web of Data has outgrown its traditional model-theoretic semantics.
Is the meaning of a logical statement (an edge in the graph) dependent on the cluster ("context") in which it appears? Does a more densely connected concept (node) contain more information? Is the path length between two nodes related to their semantic distance?
Properties such as clustering, connectivity and path length are not described, much less explained by model-theoretic semantics. Do such properties contribute to the meaning of a knowledge graph?
To properly understand the structure and meaning of knowledge graphs, we should no longer treat knowledge graphs as (only) a set of logical statements, but treat them properly as a graph. But how to do this is far from clear.
In this talk, I report on some of our early results on some of these questions, but I ask many more questions for which we don't have answers yet.
The document describes Dedalo, a system that automatically explains clusters of data by traversing linked data to find explanations. It evaluates different heuristics for guiding the traversal, finding that entropy and conditional entropy outperform other measures by reducing redundancy and search time. Experiments on authorship clusters, publication clusters, and library book borrowings demonstrate Dedalo's ability to discover explanatory linked data patterns within a limited domain. Future work includes extending Dedalo to handle more complex datasets by addressing issues such as sameAs linking and use of literals.
Convolutional Neural Networks and Natural Language ProcessingThomas Delteil
Presentation on Convolutional Neural Networks and their application to Natural Language Processing. In-depth walk-through the Crepe architecture from Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
Loosely based on ODSC London 2016 talk: https://www.slideshare.net/MiguelFierro1/deep-learning-for-nlp-67182819
Code: https://github.com/ThomasDelteil/TextClassificationCNNs_MXNet
Demo: https://thomasdelteil.github.io/TextClassificationCNNs_MXNet/
(flattened pdf, no animation, email author for .pptx)
This document is a 12-page report summarizing a final project that uses text mining algorithms to analyze documents about the cultural impact of historic Chicago high-rise buildings. It describes collecting data from JSTOR, preprocessing the data using named entity recognition to identify people, organizations, locations, and other entities. Network graphs were created connecting entities that co-occur in sentences, and power iteration was used to determine important entities. Results were structured, plotted, and compared to bag-of-words analysis to evaluate cultural trends over time.
The document describes an approach called Odyssey for optimizing federated SPARQL queries. It involves computing concise statistics about links between triple patterns, called characteristic sets (CS), at a single location. These CSs capture joins and are connected to each other through characteristic pairs (CP). The approach uses these statistics to efficiently optimize query execution plans through dynamic programming. This leads to significant improvements in optimization and execution times compared to existing federated query optimization techniques.
Interpretation, Context, and Metadata: Examples from Open ContextEric Kansa
Presentation given at the International Data Curation Conference (#IDCC!6) in Amsterdam, at the "A Context-driven Approach to Data Curation for Reuse" workshop (organized by Ixchel Faniel and Elizabeth Yakel) on Monday, February 22, 2015
An Incomplete Introduction to Artificial IntelligenceSteven Beeckman
This is the releasable version of an internal presentation on artificial intelligence. It includes a brief history of AI, a mathematical approach to deep learning and an overview of some use-cases of deep learning.
Spellcheck: "General Adversarial Networks" are actually called "Generative Adversarial Networks".
Babar: Knowledge Recognition, Extraction and RepresentationPierre de Lacaze
Babar is a research project in the field of Artificial Intelligence. It aims to bridge together Neural AI and Symbolic AI. As such it is implemented in three different programming languages: Clojure, Python and CLOS.
The Clojure component (Clobar) implements the graphical user interface to Babar. Examples of the Clojure Hiccup library and interfacing Clojure to Javascript will be presented. The Python module (Pybar) implements the web crawling and scraping and the Neural Networks aspect of Babar. The Word Embedding and and LSTM (Long Short-Term Memory) components of Pybar will be described in detail. Finally the Common Lisp module (Lispbar) implements the Symbolic AI aspect of Babar. This latter includes an English Language Parser and Semantic Networks implemented as an in-memory Hypergraph.
We will present each of these components and target individual aspects with code examples. Specifically we will first present the web developments and Neural Networks components. Then the English Language parser will be examined in detail. We will also present the knowledge extraction aspect and bridge this with the Neural Network component.
Ultimately we will argue what can be termed "Neural AI" and "Symbolic AI" are at not at odds with each other but rather complement each other. In summary Artificial Intelligence is not a question of "brain" or "mind", but rather a question of "brain" and "mind".
Topic Maps for improved access to and use of content in relational databases ...tmra
The document describes a case study using topic maps to improve access to content from a relational database of German variety lists. A topic maps-based web application was built on top of the relational data to offer subject-centric access through networked knowledge models, providing many access paths and perspectives not possible in the original data-centric interface. This increased the usability and answerability of questions over the restricted views of the original relational database interface.
In order to cope with large-scale topic maps that store a lot of information, it is necessary to utilize topic map databases. Although, database management systems should provide users with external schema functions such as views, topic map databases do not have such functions. In this paper, we propose a method of implementing a view function, by focusing on the fact that the substructure of topic maps can be regarded as a topic map. In order to realize the idea, we developed an access control system based on the view function. Through an experiment to measure the execution time, we confirmed that these functions work correctly and have little effect on the execution time.
1) A case study describes a Topic Maps-based web application that was built on top of a document-centric content management system (CMS) used for a website about a regional cluster of biotech companies.
2) The Topic Maps application improved usability by enabling subject-centric views of information rather than isolating related pieces of information across many documents. It allowed multiple access paths to information through different perspectives and views generated from the underlying topic map graph.
3) The Topic Maps application provided concise, one-click access to information about companies located in particular areas, active in specific fields, or related to other companies or projects, improving on the usability of isolating this information across many pages in the
Subject Headings make information to be topic mapstmra
This paper reports the efforts to make topic maps from Subject Headings (SHs) and discuss practical use of them for organizing information and knowledge. SHs are often maintained by libraries and used in bibliographic records. SHs are thesauri and they are well organized. Fortunately some SHs are published on the Web. We transformed them to topic maps. Usually each subject in SHs has own ID. It can play PSI role. By keeping the relationships included in SHs such as Broader-Narrower, Related, USE-UF etc in topic maps, information or knowledge can be linked together and organized according to the structure of SHs. In other words, by using SHs information and knowledge can be topic maps easily.
Inquiry Optimization Technique for a Topic Map Databasetmra
This document proposes an inquiry optimization technique for topic map databases. It discusses using an object-oriented data model for topic map databases to improve query performance compared to a relational model. The document defines cost estimation formulas to help the database system select the optimal retrieval route, either following associations or searching by topic, when answering queries. An experiment is needed to evaluate the effectiveness of using these cost estimations to optimize queries of a topic map database.
Topic Merge Scenarios for Knowledge Federationtmra
This paper introduces a socio-technical infrastructure, described as a boundary infrastructure, based on improvements to existing and emerging Issue-based Information Systems (IBIS) conversation platforms.
1. The document discusses using the tmjs Topic Maps engine, written in JavaScript, for server-side applications like a PSI server.
2. Tmjs allows full Topic Maps processing in JavaScript and can operate on servers via Node.js.
3. A sample PSI server application is shown that uses tmjs and Node.js to serve Topic Map-based information about subjects in response to HTTP requests.
This document discusses modeling QTI (IMS Question and Test Interoperability) assessments in topic maps. QTI is used to share assessment content between systems but has changing specifications that are challenging to support. Embedding QTI questions and responses as topics within a topic map allows the content to be richer than QTI and supports generating QTI output. An example shows embedding gaps and sounds within a fill-in-the-blank question topic. Authoring tools can generically edit embedded topics. This technique is useful for other content like images, links, and videos. In conclusion, embedding topics met their needs and is used extensively in their production systems.
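The embedding idea can be sketched with a question topic whose body alternates between literal text and embedded gap topics. The topic shape below is an assumption for illustration, not the paper's actual schema; the point is that a gap topic can carry extra data (here, a sound) beyond what a plain QTI interaction holds.

```javascript
// Sketch of a fill-in-the-blank question topic with embedded gap topics.
const questionTopic = {
  type: "fill-in-the-blank",
  // Segments alternate between literal text and embedded gap topics,
  // which can carry richer data (sounds, feedback) than plain QTI.
  segments: [
    { text: "Topic maps were standardized as ISO/IEC " },
    { gap: { answer: "13250", sound: "beep.mp3" } },
    { text: "." },
  ],
};

// Render the question with blanks, as an authoring preview would;
// a QTI exporter would instead emit an interaction element per gap.
function renderWithBlanks(topic) {
  return topic.segments.map(s => (s.gap ? "____" : s.text)).join("");
}
```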
The document discusses Hatana, a virtual merging engine that creates a unified view of information from multiple data sources by merging them on demand according to Topic Map standards. Hatana behaves like a topic map layer over the underlying sources, merging topics, associations, and other constructs virtually based on equality rules while maintaining the original sources. This allows related information to be queried and browsed together seamlessly.
Maiana is a platform for structured data developed by Lutz Maicher and Uta Schulze at the University of Leipzig. It allows users to manage, browse, query, and validate topic maps. Maiana is social in that it enables users to discuss resources, observe data sources, and follow other users. Data sources on Maiana can be kept private or shared publicly. The platform also includes an API and semantic search capabilities.
1. The document proposes using the Nintendo Wii Remote as an intuitive interface for interacting with web-based learning content, such as a topic map-based science learning website.
2. Specifically, it describes using the Wii Remote as a pointer for real-world interactions like selecting constellations, and as a navigation device for exploring 3D representations and the structure of the topic map.
3. Motions and buttons on the Wii Remote are mapped to navigating different aspects of the topic map and triggering content from the website in an immersive way, allowing students to intuitively explore related science topics.
Automatic semantic interpretation of unstructured data for knowledge management (TMRA)
The document summarizes a demo of an automatic semantic analysis technique for knowledge discovery from unstructured data like Wikipedia articles. The demo shows a linked concept graph and linked data graph created by analyzing astronomy articles. It also discusses how the technique can be used for knowledge representation, discovery, navigation, and intelligence by linking isolated data and deriving a taxonomy. The technical solution takes a bottom-up approach using semantic data integration and analysis to dynamically create and update object and concept graphs in real-time from various data sources.
The document discusses putting Topic Maps to REST. It describes existing Topic Map APIs and their limitations. It then introduces Tropics, a proposed RESTful API for Topic Maps. Tropics would support resources like topics, associations, and search results. It advocates the HATEOAS principle to structure navigation between resources. The document outlines Tropics' proposed URI structure and status of implementation.
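Under HATEOAS, each resource representation carries the links a client needs to navigate onward, so clients never hard-code URI patterns. The JSON shape and URI layout below are illustrative assumptions, not Tropics' actual wire format.

```javascript
// Sketch of a HATEOAS-style JSON representation of a topic resource.
// The URI layout is hypothetical; Tropics defines its own structure.
function topicResource(baseUrl, topicId, associationIds) {
  return {
    id: topicId,
    links: [
      // "self" identifies this resource; the rest drive navigation.
      { rel: "self", href: `${baseUrl}/topics/${topicId}` },
      ...associationIds.map(a => ({
        rel: "association",
        href: `${baseUrl}/associations/${a}`,
      })),
    ],
  };
}

const res = topicResource("http://example.org/tm", "t42", ["a1", "a2"]);
```

A client follows the `association` links it finds in the response rather than constructing those URIs itself, which is the HATEOAS principle the paper advocates.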
Defining Domain-Specific Facets for Topic Maps With TMQL Path Expressions (TMRA)
The automatic generation of facets performs poorly for finely modeled ontologies, in which not all information concerning a single topic is available through occurrences and direct associations. In this paper, we present our approach of using TMQL path expressions for the definition of domain-specific facets by means of standards-based Topic Maps technologies. The generated facets must be evaluated, even though they are defined manually by a domain expert. We therefore propose metrics for the automatic evaluation of the defined facets, as well as a mechanism for using automatically stored user feedback.
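The core idea is that a facet value may lie several hops away from the faceted topic. The dotted-path notation and evaluator below are a simplified stand-in for TMQL path expressions, used only to show the traversal; real TMQL syntax is considerably richer.

```javascript
// Sketch: a facet defined by a path over a toy topic map, as a
// simplified stand-in for a TMQL path expression.
const topicMap = {
  book1: { title: "Dune", author: "herbert" },
  herbert: { name: "Frank Herbert", country: "USA" },
};

// Evaluate a path like "author.country" from a start topic, following
// topic references through the map at each step.
function evalPath(tm, topicId, path) {
  let current = tm[topicId];
  for (const step of path.split(".")) {
    if (current == null) return undefined;
    const value = current[step];
    current = tm[value] || value; // follow a reference if one exists
  }
  return current;
}

// A domain-specific facet: group books by the author's country, a value
// not reachable through the book's direct occurrences alone.
const facetValue = evalPath(topicMap, "book1", "author.country");
```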
The document outlines the schedule for a two-day Topic Maps tutorial. Day one includes talks on using Topic Maps for discourse semantics, developing ontologies and facet definitions, and Topic Maps tools and applications. Day two covers semantic integration approaches, integrating Topic Maps with content management systems, interpreting unstructured data, merging topic maps, and modeling learning standards. A poster session is also included on using the Wii remote for an educational website.
This document summarizes a PHP library called KBI Library that allows integration between PHP content management systems (CMS) and knowledge bases. The library acts as an information broker between the CMS and knowledge bases, enabling presentation of knowledge contained in knowledge bases through the CMS. It features a generic implementation to support standard operations and specific implementations for Ontopia knowledge bases. It also includes administration and editor interfaces for Joomla to manage remote sources and queries.
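The broker pattern the library uses, a generic knowledge-base interface plus source-specific implementations behind a single facade, can be sketched as follows. All names here are assumptions for illustration, and the actual KBI Library is written in PHP, not JavaScript.

```javascript
// Sketch of the information-broker pattern: a generic source interface,
// a concrete implementation, and a broker the CMS talks to.
class KnowledgeSource {
  query(q) { throw new Error("not implemented"); }
}

// A source-specific implementation; a real one would call a remote
// knowledge base (e.g. an Ontopia instance) over the network.
class InMemorySource extends KnowledgeSource {
  constructor(facts) { super(); this.facts = facts; }
  query(q) { return this.facts.filter(f => f.includes(q)); }
}

// The broker hides which registered source answers a query, so the CMS
// depends only on the broker's interface.
class Broker {
  constructor() { this.sources = []; }
  register(src) { this.sources.push(src); }
  query(q) { return this.sources.flatMap(s => s.query(q)); }
}

const broker = new Broker();
broker.register(new InMemorySource(["topic maps", "topic merging"]));
const hits = broker.query("merg");
```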
The document discusses Hatana, a virtual merging engine that creates a layer over multiple data sources and allows them to be queried and accessed as if they were a single topic map. Hatana merges the data sources on demand by creating virtual topics, associations, and other constructs according to the equality rules for topics maps. This allows different information sources to be merged without editing the original sources. Examples of merging participant data and opera information from different sources are provided.
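The equality rule at work here, topics sharing a subject identifier represent the same subject and must be merged, can be sketched in miniature. The sketch merges eagerly and assumes one subject identifier per topic for brevity, whereas Hatana merges virtually, on demand, and leaves the sources untouched.

```javascript
// Sketch of merging topics from two sources by subject-identifier
// equality, the core Topic Maps merging rule Hatana applies virtually.
function mergeBySubjectIdentifier(sourceA, sourceB) {
  const merged = new Map(); // subject identifier -> merged topic
  for (const topic of [...sourceA, ...sourceB]) {
    const existing = merged.get(topic.si);
    if (existing) {
      // Equal subject identifiers: union the names into one topic.
      existing.names = [...new Set([...existing.names, ...topic.names])];
    } else {
      merged.set(topic.si, { si: topic.si, names: [...topic.names] });
    }
  }
  return [...merged.values()];
}

// Participant data from two sources, viewed as one merged topic map.
const view = mergeBySubjectIdentifier(
  [{ si: "psi:maicher", names: ["Lutz Maicher"] }],
  [
    { si: "psi:maicher", names: ["L. Maicher"] },
    { si: "psi:garshol", names: ["Lars Marius Garshol"] },
  ],
);
```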
Designing a GUI Description Language with Topic Maps (TMRA)
This paper presents the concepts of a description language for designing graphical user interfaces (GUIs) for specific ontologies defined in Topic Maps.
AToM2 – a "web database" with Topic Maps roots (TMRA)
AToM2 is (1) an application framework for building semantically oriented projects (like encyclopaedias, legal systems, vocabularies, knowledge bases, sophisticated CMSs …), (2) a high-performance, usability-oriented, feature-rich web database, and (3) strongly influenced by Topic Maps concepts and slightly inspired by other semantic techniques and approaches.
Credits and Sources
Text: Johannes Payr, Wolfgang Glas
Presentation: Johannes Payr
Illustrations: Johannes Payr
Additional Images: ÖBB, Illwerke, Efkon, TIWAG