Talk at the Semantic Technology Conference, June 23, 2010, San Francisco.
The LOD cloud has potential applicability to many AI-related tasks, such as open-domain question answering, knowledge discovery, and the Semantic Web. An important prerequisite before the LOD cloud can enable these goals is allowing its users (and applications) to effectively pose queries to it and retrieve answers from it. This prerequisite, however, remains an open problem for the LOD cloud and has restricted it to "merely more data." To transform the LOD cloud from "merely more data" to "semantically linked data," plenty of open issues must be addressed. We believe this transformation can be achieved by addressing the shortcomings we identify: lack of conceptual descriptions of the datasets, lack of expressivity, and difficulties with querying.
How To Make Linked Data More than Data
1. How To Make Linked Data More than Data. Semantic Technology Conference 2010, June 23, 2010, San Francisco. Prateek Jain, Pascal Hitzler, Amit Sheth, Kno.e.sis: Ohio Center of Excellence on Knowledge-enabled Computing, Wright State University, Dayton, OH, http://www.knoesis.org. Peter Z. Yeh, Kunal Verma, Accenture Technology Labs, San Jose, CA.
2. What is Semantic Web Semantics? Semantic Web semantics: shareable (independent of your particular software), declarative (not dependent on imperative algorithms), computable (otherwise we don't gain much) meaning. You can do mashups without Semantic Web semantics. You can do information integration without Semantic Web semantics. You can do most things without Semantic Web semantics. But then it will be one-off, less scalable, less reusable.
3. What Is Semantic Web Semantics? The Semantic Web requires a shareable, declarative and computable semantics, i.e., the semantics must be a formal entity which is clearly defined and automatically computable. Ontology languages provide this by means of their formal semantics. Semantic Web semantics is given by a relation: the logical consequence relation. Note: this is considerably more than saying that the semantics of an ontology is the set of its logical consequences!
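To make "computable semantics" concrete, here is a minimal sketch using rdflib and owlrl (our choice of tooling for illustration, not something the talk prescribes): a single rdfs:subClassOf axiom lets a reasoner derive a fact that was never asserted, which is the logical consequence relation at work.

```python
from rdflib import Graph, Namespace, RDF, RDFS
import owlrl

EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.Conductor, RDFS.subClassOf, EX.Artist))  # schema: every Conductor is an Artist
g.add((EX.karajan, RDF.type, EX.Conductor))        # data: one stated fact

# Materialise the RDFS deductive closure, i.e. all logical consequences.
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

# The triple below was never asserted -- it is entailed by the shared schema.
print((EX.karajan, RDF.type, EX.Artist) in g)  # True
```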
4. In other words: we capture the meaning of information not by specifying its meaning directly (which is impossible) but by specifying, precisely, how information interacts with other information. We describe the meaning indirectly through its effects. An example (from LOD) of unintended errors when adequate semantics is not used: LinkedMDB links to the DBpedia URI for Hollywood as the value of its country property.
7. Example: GovTrack. "Nancy Pelosi voted in favor of the Health Care Bill." [Graph figure: Vote:2009-887 vote:hasOption Votes:2009-887/+ (rdfs:label "Aye"); Votes:2009-887/+ vote:votedBy people/P000197 (name "Nancy Pelosi"); Vote:2009-887 vote:hasAction Bills:h3962 (dc:title "H.R. 3962: Affordable Health Care for America Act"; "On Passage: H R 3962 Affordable Health Care for America Act").] Where is the semantics?
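A hedged reconstruction of the graph in the figure, written with rdflib; the URIs and property names follow the slide's labels and are illustrative, not necessarily GovTrack's actual vocabulary. The point: nothing in these triples tells a machine that this particular chain of links means "voted in favor of".

```python
from rdflib import Graph, Literal, Namespace, RDFS, URIRef

VOTE = Namespace("http://example.org/vote/")  # illustrative namespace
DC = Namespace("http://purl.org/dc/elements/1.1/")

g = Graph()
vote = URIRef("http://example.org/votes/2009-887")
aye = URIRef("http://example.org/votes/2009-887/+")
pelosi = URIRef("http://example.org/people/P000197")
bill = URIRef("http://example.org/bills/h3962")

g.add((vote, VOTE.hasOption, aye))
g.add((aye, RDFS.label, Literal("Aye")))
g.add((aye, VOTE.votedBy, pelosi))
g.add((pelosi, VOTE.name, Literal("Nancy Pelosi")))
g.add((vote, VOTE.hasAction, bill))
g.add((bill, DC.title, Literal("H.R. 3962: Affordable Health Care for America Act")))

# Every triple is machine-readable, yet "Nancy Pelosi voted in favor of the
# Health Care Bill" is only recoverable by a human reader -- no schema axiom
# or rule makes the meaning of the hasOption/votedBy/hasAction chain computable.
```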
8. Don’t get us wrong: Linked Open Data is great, useful, cool, and a very important step. But if we stay semantics-free, Linked Open Data will be of limited usefulness!
9. The Semantic Data Web Layer Cake. To leverage LoD, we require schema knowledge: application-type driven (reusable for the same kind of application), less messy than LoD (as required by the application), overarching several LoD datasets (as required by the application), ... [Layer-cake figure: many applications sit on a layer of schemas; the schemas sit on the (less messy) Linked Open Data layer, which in turn sits on the messy, human-eyes-only traditional Web content.]
11. Schema on top of the LOD Cloud. The obvious solution is to create an ontology capturing the relationships on top of the LOD schema datasets: perform a matching of the LOD schemas using state-of-the-art ontology matching tools, or map the datasets to an upper-level ontology which can capture the relationships. Considering the size, heterogeneity and complexity of LOD, the aim is at least to produce results which can be curated by a human being.
15. Most current systems excel on the Ontology Alignment Evaluation Initiative benchmark: they are tuned to perform on the established benchmarks, but do not seem to work well in less constrained, non-preselected cases.
18. LOD has so far emphasized the number of instances, not the number of meaningful relationships.
20. Step 1: Enrich Schemas. BLOOMS – Bootstrapping-based Linked Open Data Ontology Matching System.
21. Step 1: Semantic Enrichment. BLOOMS – Bootstrapping-based Linked Open Data Ontology Matching System. At the highest level of abstraction, our approach takes in two different ontologies and tries to match them using the following steps: (1) using the Alignment API to identify direct correspondences; (2) using the categorization of concepts via Wikipedia; (3) running a reasoner on the results found in step (2) and directly on the ontologies.
22. Creation of the Wikipedia Category Hierarchy. We utilize the Wikipedia Web service to identify the matching concepts. Thus, for the term Conductor the following senses are obtained: Electrical Conductor, Conducting, Conductor_(album), Conductor (architecture), Mr. Conductor, Conductor (ring theory). These terms correspond to Wikipedia articles for the concepts in the ontology.
23. Build Category Tree. The next step utilizes the Web service for identifying Wikipedia categories, to build the Wikipedia category tree. For Conductor's senses (Electrical conductor, Conducting, Conductor (album)), this yields categories such as cat:Occupations_in_music, cat:Musical_Terminology, cat:Musical_Notation and cat:Music_performance.
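A sketch of how such category lookups can be done today against the public MediaWiki API; this is our own illustration, as the slides don't specify which Wikipedia Web service BLOOMS called.

```python
import requests

def wikipedia_categories(title: str) -> list[str]:
    """Return the (non-hidden) categories of one Wikipedia article."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "categories",
            "titles": title,
            "clshow": "!hidden",  # skip hidden maintenance categories
            "format": "json",
        },
        timeout=10,
    )
    pages = resp.json()["query"]["pages"]
    return [c["title"] for p in pages.values() for c in p.get("categories", [])]

# A category *tree* is built by repeating this lookup on each category page,
# level by level (BLOOMS stops at level 4; see the Editor's Notes below).
print(wikipedia_categories("Conducting"))
```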
24. For each sense of concept c, match it against the different possible senses of c'. For example, the sense Conducting of Conductor (cat:Occupations_in_music, cat:Music_performance) is compared against Artist (cat:Arts_occupations).
25. Connected Classes. Using the position of the categories, identify the relationships: Conducting is-a Artist, via categories such as cat:Music_performance, cat:Occupations_in_music and cat:Arts_occupations (Wikipedia categorization as a taxonomy: Ponzetto & Strube, 2007). This helps in identifying, approximately, the relationship between the various concepts.
26. Disconnected Classes. Some senses do not relate to each other: Conductor_(transportation) (cat:Bus_Transport, cat:Transportation_occupations, cat:Transportation) shares no categories with Artist (cat:Occupations_in_music, cat:Arts_occupations). This helps in identifying disconnected relationships.
27. Equivalent Classes. Some senses are identical to each other: Lady_Finger and Okra share the categories cat:Abelmoschus, cat:Hibisceae and cat:Malvoideae. This helps in identifying equivalence relationships.
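The three cases above (connected, disconnected, equivalent) can be summarised as a decision on the overlap of the two category trees. The toy sketch below flattens each tree to a set of category names and uses simple containment ratios; the real BLOOMS measure compares trees rather than sets and its threshold is user-tunable (see the Editor's Notes), so treat this only as an illustration of the decision logic.

```python
def classify(src_cats: set[str], tgt_cats: set[str], threshold: float = 0.5) -> str:
    """Toy tree-overlap decision between two senses (BLOOMS-style, simplified)."""
    shared = src_cats & tgt_cats
    if not shared:
        return "disconnected"                 # e.g. Conductor_(transportation) vs Artist
    src_overlap = len(shared) / len(src_cats)
    tgt_overlap = len(shared) / len(tgt_cats)
    if src_overlap >= threshold and tgt_overlap >= threshold:
        return "equivalent"                   # e.g. Lady_Finger vs Okra
    if tgt_overlap >= threshold:
        return "src is-a tgt"                 # target's (broader) tree sits inside source's
    if src_overlap >= threshold:
        return "tgt is-a src"
    return "related"

conducting = {"Occupations_in_music", "Music_performance", "Arts_occupations"}
artist = {"Arts_occupations"}
print(classify(conducting, artist))  # "src is-a tgt": Conducting is-a Artist
```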
28. LOD Schema Alignment using BLOOMS. Testing was done on 10 different pairs of LOD schemas.
29. Linked Schemas: DBpedia Ontology, Music Ontology Schema, Jamendo, MusicBrainz, DBTunes, Geonames, SWC, Pisa, IEEE, BBC Program, ACM, FOAF, SIOC, AKT Portal Ontology.
31. Step 2: Integrated Access / Federated Querying. LOQUS: Linked Open Data SPARQL Querying System.
32. Federated Querying. Transform a query and broadcast it, with the appropriate syntax, to a group of disparate and relevant datasets. Merge the results collected from the datasets. Present them succinctly, in a unified format with the least duplication. Automatically sort the merged result set.
33. Federated Querying Challenges. The user is required to have intimate knowledge of the datasets' domains. The user needs to understand the exact structure of the datasets. For each relevant dataset, the user needs to form a separate query. Entity disambiguation has to be performed on similar entities. Retrieved results have to be processed and merged.
34. Querying Federated Sources. Identify artists whose albums have been tagged as punk, and the population of the places they are based near.
36. Querying the Datasets. Music Ontology: give me artists with punk as genre and their locations. Geonames data: give me the identifier used by the Census Bureau for geographic locations. Census data: give me population figures of geographical entities.
37. LOQUS: Linked Open Data SPARQL Querying System. Users can pose federated queries without having to know the exact structure of, and links between, the different datasets. LOQUS automatically maps the user's query to the relevant datasets using a mapping repository created with BLOOMS, executes the individual queries, and merges the results into a single, complete answer.
38. Traditionally, to Retrieve Results the user has to query the music data, geographic data and census data separately, perform disambiguation, perform union and join, and process the results.
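For concreteness, a sketch of the manual workflow this slide criticises, using SPARQLWrapper; the endpoint URLs and the property names in the queries are placeholders (the real Music Ontology, Geonames and Census vocabularies differ).

```python
from SPARQLWrapper import SPARQLWrapper, JSON

def run(endpoint: str, query: str) -> list[dict]:
    """Run a SELECT query against one endpoint and return its JSON bindings."""
    sw = SPARQLWrapper(endpoint)
    sw.setQuery(query)
    sw.setReturnFormat(JSON)
    return sw.query().convert()["results"]["bindings"]

# Sub-query 1 (music data): punk artists and where they are based.
artists = run("http://example.org/music/sparql", """
    PREFIX mo:   <http://purl.org/ontology/mo/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?artist ?place WHERE {
      ?artist mo:genre ?g ;
              mo:based_near ?place .
      ?g rdfs:label "punk" .
    }""")

# Sub-query 2 (census data): population of each place. The user must thread
# the join variable (?place) through by hand, disambiguate URIs that denote
# the same place, and then union/merge the partial results manually.
for row in artists:
    place = row["place"]["value"]
    population = run("http://example.org/census/sparql", f"""
        SELECT ?pop WHERE {{ <{place}> <http://example.org/census/population> ?pop . }}""")
```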
39. LOQUS Architecture. A single source of reference consisting of mappings to the specific LOD datasets. A module to identify concepts contained in the query and perform the translations to the LOD cloud datasets. A module to split the query, mapped to LOD dataset concepts, into sub-queries corresponding to the different datasets. A module to execute the queries remotely, process the results and deliver the final result to the user.
40. Querying using LOQUS. The user looks up the mapping repository to identify concepts of interest and formulates the query: identify artists whose albums have been tagged as punk, and the population of the places they are based near. LOQUS decomposes the query into sub-queries and routes each to the appropriate dataset: "give me artists with punk as genre and their locations" to the music data, "give me the identifier used by the Census Bureau for geographic locations" to the geographic data, and "give me population figures of geographical entities" to the census data.
41. Querying Using LOQUS. Results for the sub-queries are returned to LOQUS from the music, geographic and census datasets.
42. LOQUS Processes Partial Results. The partial results are processed by LOQUS for union, join and disambiguation.
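The union/join step amounts to a relational join on the shared query variable plus de-duplication. Below is a minimal sketch of that merge (our own illustration, not LOQUS's actual implementation), assuming bindings have already been flattened to plain dicts.

```python
def join_bindings(left: list[dict], right: list[dict], var: str) -> list[dict]:
    """Hash-join two lists of SPARQL-style variable bindings on a shared variable."""
    index: dict[str, list[dict]] = {}
    for row in right:
        index.setdefault(row[var], []).append(row)
    joined = []
    for row in left:
        for match in index.get(row[var], []):
            joined.append({**row, **match})  # merge the two partial rows
    return joined

music = [{"artist": "ex:Ramones", "place": "ex:NYC"}]
census = [{"place": "ex:NYC", "population": "8400000"}]
print(join_bindings(music, census, "place"))
# [{'artist': 'ex:Ramones', 'place': 'ex:NYC', 'population': '8400000'}]
```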
43. Results are Returned to the User. LOQUS combines the results and presents them back to the user.
48. Conclusions, continued. BLOOMS is one approach for semi-automatically linking different ontologies: a new approach to ontology mapping that leverages knowledge in DBpedia. A more semantic LOD cloud can enable more intelligent applications, such as open question answering. LOQUS shows how enriched schemas can enable automatic federated queries, making LOD significantly more useful.
49. References. Prateek Jain, Pascal Hitzler, Peter Z. Yeh, Kunal Verma, Amit P. Sheth: Linked Data is Merely More Data. AAAI Spring Symposium "Linked Data Meets Artificial Intelligence", March 22-24, 2010. Prateek Jain, Kunal Verma, Pascal Hitzler, Peter Z. Yeh, Amit P. Sheth: "LOQUS: Linked Open Data SPARQL Querying System".
50. Thanks! This work is funded primarily by NSF Award IIS-0842129, titled "III-SGER: Spatio-Temporal-Thematic Queries of Semantic Web Data: a Study of Expressivity and Efficiency". More at Kno.e.sis – Ohio Center of Excellence on Knowledge-enabled Computing: http://knoesis.org
Editor's Notes
For each concept in the ontology, do a text search using the Wikipedia Web service, and use the results to identify the articles related to these terms. Once the different terms are identified, build their category trees. The category trees are built up to level 4; beyond that, the category tree is too abstract and not very useful for this particular purpose of ontology matching.
Take the category of each of these senses and compare them. For example, for Conductor, its different senses would be Conducting, Conductor_(album) and so on. Try to compare each of these senses to each other; thus the sense Conducting is being matched here to the term Artist.
Wikipedia categorization has been demonstrated to work as a taxonomy in: Ponzetto, S.P., Strube, M.: Deriving a large scale taxonomy from Wikipedia. In: AAAI'07: Proceedings of the 22nd National Conference on Artificial Intelligence, AAAI Press (2007) 1440–1445. The overlap of the two categorization trees helps us determine the relationship between the trees. The overlap is a numerical amount (threshold) which can be specified by the user, following a rough heuristic: (1) if the two ontologies to be matched are from similar domains, such as the AKT Reference Ontology and the Semantic Web Ontology (publication domain), use a higher threshold, meaning the terms require a tighter integration; (2) if utilizing an upper-level ontology, the terms will be abstract, hence use a lower threshold. It also depends on the kind of results the user wants to obtain: for high precision and low recall, choose a high threshold; for low precision and high recall, choose a low threshold.
Some senses do not relate to each other at all: they do not share any common categories or instances.
Wikipedia, since it is rich in language and terms, can help in identifying matches which can't be found using normal syntactic tools.
System 1: Alignment API. System 2: OMViaUO. Our approach actually outperforms 5 different state-of-the-art systems published in the recent past.
1. The Linked Open Data Cloud isn't complete in terms of its linkage. 2. There is the possibility to add many more meaningful connections, motivated in the direction from schema to instance (common sense) rather than the other way round; unfortunately, as of now, the other way round dominates. 3. Using common reasoning, made possible through distributed and approximate reasoning, it is possible to identify and clean the LOD Cloud; a lot of the messiness can be thrown away.