This document discusses computing commonalities between SPARQL conjunctive queries. It defines the concept of a least general generalization (lgg) of queries: a most specific query that generalizes, i.e. is entailed by, each of the input queries. The document presents definitions for the lgg of basic graph pattern queries in SPARQL with respect to a set of RDF entailment rules and RDFS constraints. It focuses on computing the lgg of two queries, from which the lgg of many queries follows by iteratively taking the lgg of query pairs. The goal is to study the computation of lggs in the conjunctive fragment of SPARQL, with applications such as query optimization and query recommendation.
The document discusses defining and computing the least general generalization (lgg) of RDF graphs and SPARQL queries. It introduces the concepts of RDF graphs, entailment between graphs, and materializing implicit triples using RDFS and RDF entailment rules. The document outlines contributions in defining and computing the lgg in RDF and SPARQL, and reporting on experiments using datasets like DBpedia and LUBM.
This document discusses using decipherment techniques to improve machine translation when parallel data is scarce. It presents an overview of machine translation pipelines and notes that performance drops when parallel data is limited. The document proposes using monolingual data to improve machine translation in real-world scenarios with limited parallel data. It outlines contributions including fast, accurate decipherment of over 1 billion tokens with 93% accuracy, and using decipherment to improve machine translation for domain adaptation and low-resource languages.
The document discusses finding commonalities between RDF graphs by computing their least general generalization (lgg). It defines an lgg of RDF graphs as a generalization that is entailed by all input graphs under the RDF entailment rules, and that itself entails any other such generalization. The document focuses on computing the lgg of two RDF graphs, which can be used to iteratively find the lgg of multiple graphs. An example is provided to illustrate the lgg of two sample RDF graphs.
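The construction behind an lgg can be made concrete with a small anti-unification sketch: terms that agree are kept, and each differing pair of terms is consistently replaced by the same fresh variable. The following is a minimal Python sketch under simplifying assumptions (plain set-of-triples graphs, no entailment rules applied); the function names are illustrative, not the paper's own.

```python
# Minimal sketch of the lgg of two RDF graphs via anti-unification.
# Graphs are sets of (subject, predicate, object) triples.

def lgg_term(t1, t2, variables):
    """Generalize a pair of terms: equal terms are kept; a differing
    pair is consistently mapped to the same fresh variable."""
    if t1 == t2:
        return t1
    if (t1, t2) not in variables:
        variables[(t1, t2)] = f"?v{len(variables)}"
    return variables[(t1, t2)]

def lgg(g1, g2):
    """Pairwise-generalize every triple of g1 against every triple of g2."""
    variables = {}
    result = set()
    for s1, p1, o1 in g1:
        for s2, p2, o2 in g2:
            result.add((lgg_term(s1, s2, variables),
                        lgg_term(p1, p2, variables),
                        lgg_term(o1, o2, variables)))
    return result

g1 = {(":alice", ":knows", ":bob")}
g2 = {(":carol", ":knows", ":bob")}
print(lgg(g1, g2))  # {('?v0', ':knows', ':bob')}
```

Reusing the same variable for the same pair of terms is what makes the result *least* general: using a fresh variable everywhere would also generalize both graphs, but would lose shared joins.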
Dependent Haskell has long been desired by the community of Haskell programmers. The goal of this project is to make the core language of Haskell, known as System FC, dependently typed, as a step towards dependent Haskell.
This is a work-in-progress project. As a small step towards our final goal, this talk focuses on coercion quantification. Coercion quantification is necessary to support homogeneous equality, which simplifies the core language and is important for the meta-theory of a dependently typed core.
Coercion quantification is interesting both for people working on the core language and for Haskell users. For GHC hackers, the patch to the core formalization is worth attention: adding coercion quantification involves refactoring many files across the compilation pipeline and introduces several subtleties. For Haskell users, coercion quantification opens up new questions in the design space of source Haskell, which requires a non-trivial extension of the constraint solver. We would like Haskell users to tell us whether this feature is desired in their development.
In this talk, we will share the high-level storyline of the dependently typed core and our low-level progress in implementing coercion quantification, discuss the design space involved, and seek feedback from the broader community.
How can a key be securely exchanged over a public channel?
This is not a symmetric algorithm;
it is an asymmetric algorithm.
It is also known as Internet Key Exchange.
It is fundamental to many protocols, including SSH, IPsec, and SMTPS.
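The exchange described above can be sketched as a Diffie-Hellman-style protocol, in which each party publishes a value derived from a private exponent and both arrive at the same shared secret. This is an illustrative toy: the parameters below are far too small for real security, where large safe primes or elliptic curves are used.

```python
# Toy Diffie-Hellman-style key exchange over a public channel.
# WARNING: demo-sized numbers only; never use parameters this small.

p, g = 23, 5                 # public: prime modulus and generator
a, b = 6, 15                 # private exponents, never transmitted

A = pow(g, a, p)             # Alice publishes A = g^a mod p
B = pow(g, b, p)             # Bob publishes   B = g^b mod p

shared_alice = pow(B, a, p)  # Alice computes (g^b)^a mod p
shared_bob   = pow(A, b, p)  # Bob computes   (g^a)^b mod p

assert shared_alice == shared_bob
print(shared_alice)          # both parties derive the same secret: 2
```

An eavesdropper sees only p, g, A, and B; recovering the secret requires solving a discrete logarithm, which is what makes the scheme asymmetric.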
Mechanized Proofs for a Recursive Authentication Protocol (Lawrence Paulson)
Presented at the 10th Computer Security Foundations Workshop, 1997.
One of the first papers concerning the inductive approach to verifying cryptographic protocols, demonstrated on a variable-length multi-party protocol.
The document describes an approach called Odyssey for optimizing federated SPARQL queries. It involves computing concise statistics about links between triple patterns, called characteristic sets (CS), at a single location. These CSs capture joins and are connected to each other through characteristic pairs (CP). The approach uses these statistics to efficiently optimize query execution plans through dynamic programming. This leads to significant improvements in optimization and execution times compared to existing federated query optimization techniques.
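The notion of a characteristic set can be sketched quickly: the CS of a subject is the set of predicates it occurs with, and subjects sharing the same CS are counted together, giving a compact statistic over the data. A minimal Python sketch (illustrative names, not Odyssey's actual code):

```python
# Sketch of computing characteristic sets (CSs) from a triple list:
# the CS of a subject is the set of its predicates; identical CSs
# are counted together as a concise statistic.
from collections import defaultdict

def characteristic_sets(triples):
    preds = defaultdict(set)
    for s, p, o in triples:
        preds[s].add(p)          # collect predicates per subject
    counts = defaultdict(int)
    for ps in preds.values():
        counts[frozenset(ps)] += 1
    return dict(counts)

triples = [
    (":a", ":name", '"Ann"'), (":a", ":email", '"a@x"'),
    (":b", ":name", '"Bob"'), (":b", ":email", '"b@x"'),
    (":c", ":name", '"Cid"'),
]
# counts: {:name, :email} occurs for 2 subjects, {:name} for 1
print(characteristic_sets(triples))
```

Because a CS summarizes all star-joins around a subject, cardinality estimates for joins over these predicates can be read off the counts instead of being probed at the remote endpoints.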
The Optimum Clustering Framework: Implementing the Cluster Hypothesis (yaevents)
The document proposes a framework for optimum document clustering based on the cluster hypothesis. It defines a cluster metric called pairwise precision that evaluates how well a clustering groups together documents that are relevant to the same queries. The metric considers the number of document pairs that are both relevant or both irrelevant to a query within each cluster. The framework aims to find the clustering that maximizes this metric to optimally satisfy the cluster hypothesis. The document outlines experiments to test the framework and examine whether it leads to improved clustering over traditional methods.
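One hedged reading of the pairwise metric described above can be made runnable: for each cluster, count document pairs that agree on relevance to a query (both relevant or both irrelevant), normalized by the number of pairs. This is an illustrative reconstruction, not the paper's exact formula.

```python
# Illustrative pairwise agreement metric over a clustering, for one
# query: the fraction of within-cluster document pairs that are both
# relevant or both irrelevant to the query.
from itertools import combinations

def pairwise_agreement(clusters, relevant):
    agree = total = 0
    for cluster in clusters:
        for d1, d2 in combinations(cluster, 2):
            total += 1
            if (d1 in relevant) == (d2 in relevant):
                agree += 1
    return agree / total if total else 1.0

clusters = [["d1", "d2", "d3"], ["d4", "d5"]]
relevant = {"d1", "d2"}   # documents relevant to the query
print(pairwise_agreement(clusters, relevant))  # 0.5
```

A clustering that satisfies the cluster hypothesis perfectly would score 1.0 on every query, since no cluster would mix relevant and irrelevant documents.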
Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine Translation (Jungo Kasai)
- Non-autoregressive machine translation (NAR MT) is a recent alternative to autoregressive (AR) MT that allows for faster parallel generation but often has lower accuracy.
- Previous work has found that NAR MT speeds up generation compared to AR MT on a GPU but underperforms in accuracy; this paper reexamines the speed-accuracy tradeoff.
- The paper finds that varying the depth allocation between encoders and decoders, applying knowledge distillation to AR baselines, and measuring maximum batch speed rather than single-sentence speed improves the speed and accuracy of AR MT models relative to NAR MT models.
The document presents research on access strategies for network caching. It introduces the data store selection problem of determining which data stores to access based on indicators to minimize miss costs and access costs. The paper proposes modeling this as a knapsack problem and provides three approximation algorithms - DSKnap, DSPot, and DSPP. An evaluation on a real Wikipedia trace and CDN topology shows the DSKnap algorithm outperforms existing heuristics in total access costs across different miss rates and number of accessed locations.
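The knapsack framing can be illustrated with a generic 0/1 knapsack dynamic program over data stores: each store has an access cost and an indicator-derived expected value of a hit, and we pick a subset within an access-cost budget. This is a sketch under assumed inputs, not the paper's DSKnap algorithm itself.

```python
# Generic 0/1 knapsack sketch of data store selection: maximize
# expected hit value subject to a total access-cost budget.

def select_stores(stores, budget):
    """stores: list of (name, access_cost, expected_hit_value)."""
    # dp[c] = (best value, chosen stores) with total access cost <= c
    dp = [(0.0, frozenset())] * (budget + 1)
    for name, cost, value in stores:
        for c in range(budget, cost - 1, -1):  # iterate costs downward
            cand = (dp[c - cost][0] + value, dp[c - cost][1] | {name})
            if cand[0] > dp[c][0]:
                dp[c] = cand
    return dp[budget]

stores = [("s1", 2, 0.9), ("s2", 3, 0.7), ("s3", 2, 0.5)]
best_value, chosen = select_stores(stores, 4)
print(best_value, sorted(chosen))  # best subset within access budget 4
```

With budget 4 the DP prefers s1 and s3 (combined cost 4, value 1.4) over the single higher-value store s1 alone, showing why a greedy pick by value can be suboptimal.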
Hybrid Acquisition of Temporal Scopes for RDF Data (Anisa Rula)
Information on the temporal interval of validity for facts described by RDF triples plays an important role in a large number of applications. Yet, most of the knowledge bases available on the Web of Data do not provide such information in an explicit manner. In this paper, we present a generic approach which addresses this drawback by inserting temporal information into knowledge bases. Our approach combines two types of information to associate RDF triples with time intervals. First, it relies on temporal information gathered from the document Web by an extension of the fact validation framework DeFacto. Second, it harnesses the time information contained in knowledge bases. This knowledge is combined within a three-step approach comprising matching, selection, and merging. We evaluate our approach against a corpus of facts gathered from Yago2, using DBpedia and Freebase as input and different parameter settings for the underlying algorithms. Our results suggest that we can detect temporal information for facts from DBpedia with an F-measure of up to 70%.
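The merging step can be illustrated with a standard interval merge, collapsing overlapping candidate validity intervals for a fact into maximal ones. This is a generic sketch, not DeFacto's actual implementation.

```python
# Merge overlapping candidate validity intervals into maximal
# intervals (a standard sorted-sweep interval merge).

def merge_intervals(intervals):
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # overlaps the previous interval: extend it
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Candidate temporal scopes for one fact, e.g. years an office was held:
print(merge_intervals([(1999, 2003), (2001, 2005), (2008, 2010)]))
# [(1999, 2005), (2008, 2010)]
```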
Financial Contracts (FCs) specify rights and obligations by which parties are legally bound. Hence, effective management of FCs is vital. The Domain-Specific Language (DSL) approach provides a method of defining the rights and obligations of contracts using a fixed and precisely defined set of combinators and observables. As a result, any contract can be composed from a fixed set of symbols, and contract management becomes efficient and effective. The Haskell Contract Combinator Library (HCCL) is the driving force behind the DSL approach in the finance sector.
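The combinator idea can be sketched in a few lines. HCCL itself is a Haskell library; for brevity this sketch is in Python, and the constructors below are simplified stand-ins for the fixed set of combinators, not HCCL's actual API.

```python
# Illustrative contract-combinator sketch: a fixed set of constructors
# from which arbitrary contracts are composed, plus one interpreter.
from dataclasses import dataclass

@dataclass(frozen=True)
class One:            # receive one unit of a currency now
    currency: str

@dataclass(frozen=True)
class Scale:          # multiply a contract's payoff by a constant
    factor: float
    contract: object

@dataclass(frozen=True)
class And:            # hold both sub-contracts
    left: object
    right: object

def value(c):
    """Toy valuation: every currency unit is worth 1."""
    if isinstance(c, One):
        return 1.0
    if isinstance(c, Scale):
        return c.factor * value(c.contract)
    if isinstance(c, And):
        return value(c.left) + value(c.right)
    raise TypeError(c)

# "Receive 100 GBP and 50 USD", built from the fixed combinator set:
contract = And(Scale(100, One("GBP")), Scale(50, One("USD")))
print(value(contract))  # 150.0
```

The point of the design is that `value` is just one interpreter; the same contract term can also be pretty-printed, risk-analyzed, or scheduled, because contracts are data, not code.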
This document discusses model driven software development using Eclipse and Xtext. It describes Xtext as a domain specific language development framework based on Eclipse, the Eclipse Modeling Framework (EMF), and ANTLR parser generator. It provides an overview of the history and users of Xtext, and discusses how to generate code from models defined in an external DSL using Xtext.
This short presentation covers the computational complexity of Perl 5 regexes, the experimental features introduced to Perl 5 later on, and parsing expression grammars in Perl 6. It shows some examples of how PEGs can be used for exploratory data parsing.
Navigating and Exploring RDF Data using Formal Concept Analysis (Mehwish Alam)
In this study we propose a new approach based on Pattern Structures, an extension of Formal Concept Analysis, to support exploration of Linked Data through concept lattices. It takes RDF triples and RDF Schema based on user requirements and provides a single navigation space resulting from several RDF resources. This navigation space provides interactive exploration over RDF data and allows the user to visualize only the part of the data that is of interest to her.
I used these slides for an introductory lecture (90min) to a seminar on SPARQL. This slideset introduces the semantics of the RDF query language SPARQL.
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (Po-Chuan Chen)
The document describes the RAG (Retrieval-Augmented Generation) model for knowledge-intensive NLP tasks. RAG combines a pre-trained language generator (BART) with a dense passage retriever (DPR) to retrieve and incorporate relevant knowledge from Wikipedia. RAG achieves state-of-the-art results on open-domain question answering, abstractive question answering, and fact verification by leveraging both parametric knowledge from the generator and non-parametric knowledge retrieved from Wikipedia. The retrieved knowledge can also be updated without retraining the model.
Many experts believe that ageing can be delayed; this is one of the main goals of the Institute of Healthy Ageing at University College London. I will present the results of my lifespan-extension research, in which we integrated publicly available gene databases in order to identify ageing-related genes. I will show what challenges we met and what we have learned about the process of ageing.
Ageing is one of the fundamental mysteries in biology, and many scientists are starting to study this fascinating process. I am part of the research group led by Dr Eugene Schuster at the UCL Institute of Healthy Ageing. We experiment with Drosophila and Caenorhabditis elegans by modifying their genes in order to create long-lived mutants. The results of our experiments are quantified using high-throughput microarray analysis. Finally, we apply information technology in order to understand how the ageing process works. I will show how we mine microarray data in order to find the connections between thousands of genes and how we identify candidates for ageing genes.
We are interested in building a better understanding of gene functions by harnessing the large quantity of experimental microarray data in public databases. Our hope is that after understanding the ageing process in simpler organisms we will be able to apply this knowledge to humans.
Cross-referencing expression levels in thousands of genes and hundreds of experiments turned out to be a computationally challenging problem, but Hadoop and the Amazon cloud came to our rescue. In this talk I will present a case study based on our use of R with Amazon Elastic MapReduce and will give background on our bioinformatics challenges.
These slides were presented at ApacheCon Europe 2012:
http://www.apachecon.eu/schedule/presentation/3/
This document discusses personalised search for the social semantic web. It introduces Datalog± as a language for representing ontologies and preferences, and describes three frameworks for representing qualitative, quantitative, and conditional preferences in Datalog±:
1. PP-Datalog± combines Datalog± with partial qualitative preferences and probabilistic uncertainty. It defines preference combination operators and an algorithm for top-k queries.
2. GPP-Datalog± generalizes PP-Datalog± to handle group preferences from multiple users with and without probabilistic uncertainty. It defines operators for merging single-user preferences and aggregating preferences of a group.
3. Challenges include preference merging when user preferences disagree with probabilistic
Inductive Triple Graphs: A Purely Functional Approach to Represent RDF (Jose Emilio Labra Gayo)
Slides of my presentation at the 3rd International Workshop on Graph Structures for Knowledge Representation, part of the International Joint Conference on Artificial Intelligence, Beijing, China, 4 August 2013.
Information access over linked data requires determining the subgraph(s) in linked data's underlying graph that correspond to the required information need. Usually, an information access framework can retrieve richer information by checking a large number of possible subgraphs. However, checking a large number of possible subgraphs increases information access complexity, which makes information access frameworks less effective. Many contemporary linked data information access frameworks reduce this complexity by introducing different heuristics, but they then suffer in retrieving richer information; other frameworks disregard the complexity altogether. A practically usable framework, however, should retrieve richer information with lower complexity. In linked data information access, we hypothesize that pre-processed statistics of linked data can be used to efficiently check a large number of possible subgraphs. This helps to retrieve comparatively richer information with lower data access complexity. A preliminary evaluation of our proposed hypothesis shows promising performance.
RSP-QL*: Querying Data-Level Annotations in RDF Streams (keski)
This document proposes an extension to RSP-QL called RSP-QL* that allows querying of statement-level annotations in RDF streams. RSP-QL* uses the RDF* model, which allows embedding RDF triples as the subject or object of other triples. This provides an efficient way to represent statement-level metadata in RDF. The semantics of RSP-QL are extended to support RSP-QL* patterns, which can include basic graph patterns, named graphs, windows and other operators. Future work includes adding more functionality to the RDF* model, prototyping an implementation, and evaluating performance.
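The RDF* idea that RSP-QL* builds on, a triple embedded as the subject or object of another triple, is easy to model with nested tuples. A minimal illustrative sketch (the representation is ours, not the proposal's implementation):

```python
# Model RDF*-style embedded triples as nested tuples: a triple may
# itself appear as the subject (or object) of another triple.

base = (":alice", ":knows", ":bob")

# Statement-level annotation: the triple above, with a confidence value.
annotated = (base, ":confidence", 0.9)

def is_embedded(triple):
    """True if the subject or object is itself a triple."""
    s, _, o = triple
    return isinstance(s, tuple) or isinstance(o, tuple)

print(is_embedded(base))       # False
print(is_embedded(annotated))  # True
```

Compared with standard RDF reification, which needs four extra triples per annotated statement, the embedded form keeps the metadata attached to the statement it describes, which is what makes it attractive for high-rate streams.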
RuleML2015: Learning Characteristic Rules in Geographic Information Systems (RuleML)
We provide a general framework for learning characterization rules of a set of objects in Geographic Information Systems (GIS), relying on the definition of distance-quantified paths. Such expressions specify how to navigate between the different layers of the GIS, starting from the target set of objects to characterize. We have defined a generality relation between quantified paths and proved that it is monotonic with respect to the notion of coverage, allowing us to develop an interactive and effective algorithm to explore the search space of possible rules. We describe GISMiner, an interactive system that we have developed based on our framework. Finally, we present our experimental results from a real GIS about mineral exploration.
We present a new version of the data model stRDF and the query language stSPARQL for the representation and querying of geospatial data. The new versions of stRDF and stSPARQL use OGC standards to represent geometries where the original version of stSPARQL used linear constraints. In this sense stSPARQL is a subset of the recent standard GeoSPARQL proposed by OGC. We discuss the implementation of the system Strabon which is a storage and query evaluation module for stRDF/stSPARQL and the corresponding subset of GeoSPARQL. We study the performance of Strabon experimentally and show that it scales to very large data volumes.
The document discusses graph algorithms and their implementation using MapReduce. It describes how transitive closure, PageRank, and other graph algorithms can be computed in a distributed manner using MapReduce. While graph processing with MapReduce has challenges, systems like Pregel and Apache Hamburg aim to provide easier programming models for graph algorithms on large datasets.
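The map/reduce shape of such a graph algorithm can be sketched in plain Python with PageRank: the map phase distributes each node's rank along its outgoing edges, and the reduce phase sums the contributions per node. This is a single-machine sketch of the computation's structure, not a distributed implementation.

```python
# One PageRank iteration in map/reduce style: map emits rank
# contributions along edges; reduce sums them per target node.
from collections import defaultdict

def pagerank_step(graph, ranks, damping=0.85):
    # map: each node sends rank / out-degree to each outgoing neighbor
    contributions = defaultdict(float)
    for node, targets in graph.items():
        for t in targets:
            contributions[t] += ranks[node] / len(targets)
    # reduce: sum contributions per node and apply the damping factor
    n = len(graph)
    return {node: (1 - damping) / n + damping * contributions[node]
            for node in graph}

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = {node: 1 / 3 for node in graph}
for _ in range(20):                 # iterate to (near) convergence
    ranks = pagerank_step(graph, ranks)
print(ranks)
```

The need to re-emit the whole graph structure on every such iteration is exactly the overhead that systems like Pregel avoid by keeping vertex state resident between supersteps.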
Compiler Components and their Generators: Traditional Parsing Algorithms (Guido Wachsmuth)
This document discusses parsing algorithms for compilers. It begins with an overview of topics to be covered, including lexical analysis, parsing algorithms like predictive and LR parsing, grammar classes, and an assignment on implementing a MiniJava compiler. It then covers predictive parsing in more detail, including how to generate parsing tables from grammars and how to use these tables in a predictive parsing automaton. Finally, it discusses LR parsing and how it can handle issues like left recursion that predictive parsing cannot. It provides an example of an LR parsing step involving expression evaluation.
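Table-driven predictive parsing can be illustrated with a tiny LL(1) parser for balanced parentheses; the grammar and table here are illustrative examples, not the lecture's MiniJava assignment.

```python
# Tiny table-driven predictive (LL(1)) parser for the grammar
#   S -> ( S ) S | epsilon
# The parsing table maps (nonterminal, lookahead) to a production.

TABLE = {
    ("S", "("): ["(", "S", ")", "S"],   # expand S on lookahead '('
    ("S", ")"): [],                     # S -> epsilon
    ("S", "$"): [],                     # S -> epsilon at end of input
}

def parse(tokens):
    tokens = list(tokens) + ["$"]       # append end-of-input marker
    stack = ["$", "S"]                  # start symbol on top
    pos = 0
    while stack:
        top = stack.pop()
        look = tokens[pos]
        if top == look:                 # terminal: match and advance
            pos += 1
        elif (top, look) in TABLE:      # nonterminal: expand via table
            stack.extend(reversed(TABLE[(top, look)]))
        else:                           # no table entry: syntax error
            return False
    return pos == len(tokens)

print(parse("(())()"))  # True
print(parse("(()"))     # False
```

The table is what makes the parser "predictive": one symbol of lookahead selects the production, so no backtracking is needed. The lecture's point about left recursion is visible here too: a left-recursive production would put its own nonterminal back on top of the stack without consuming input.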
The Optimum Clustering Framework: Implementing the Cluster Hypothesisyaevents
The document proposes a framework for optimum document clustering based on the cluster hypothesis. It defines a cluster metric called pairwise precision that evaluates how well a clustering groups together documents that are relevant to the same queries. The metric considers the number of document pairs that are both relevant or both irrelevant to a query within each cluster. The framework aims to find the clustering that maximizes this metric to optimally satisfy the cluster hypothesis. The document outlines experiments to test the framework and examine whether it leads to improved clustering over traditional methods.
Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine Transl...Jungo Kasai
- Non-autoregressive machine translation (NAR MT) is a recent alternative to autoregressive (AR) MT that allows for faster parallel generation but often has lower accuracy.
- Previous work has found that NAR MT speeds up generation compared to AR MT on a GPU but underperforms in accuracy; this paper reexamines the speed-accuracy tradeoff.
- The paper finds that varying the depth allocation between encoders and decoders, applying knowledge distillation to AR baselines, and measuring maximum batch speed rather than single-sentence speed improves the speed and accuracy of AR MT models relative to NAR MT models.
The document presents research on access strategies for network caching. It introduces the data store selection problem of determining which data stores to access based on indicators to minimize miss costs and access costs. The paper proposes modeling this as a knapsack problem and provides three approximation algorithms - DSKnap, DSPot, and DSPP. An evaluation on a real Wikipedia trace and CDN topology shows the DSKnap algorithm outperforms existing heuristics in total access costs across different miss rates and number of accessed locations.
Hybrid acquisition of temporal scopes for rdf dataAnisa Rula
Information on the temporal interval of validity for facts described by RDF triples plays an important role in a large number of applications. Yet, most of the knowledge bases available on the Web of Data do not provide such information in an explicit manner. In this paper, we present a generic approach which addresses this drawback by inserting temporal information into knowledge bases. Our approach combines two types of information to associate RDF triples with time intervals. First, it relies on temporal information gathered from the document Web by an extension of the fact validation framework DeFacto. Second, it harnesses the time information contained in knowledge bases. This knowledge is combined within a three-step approach which comprises the steps matching,
selection and merging. We evaluate our approach against a corpus of facts gathered from Yago2 by using DBpedia and Freebase as input and different parameter settings for the underlying algorithms. Our results suggest that we can detect temporal information for facts from DBpedia
with an F-measure of up to 70%.
Financil Contracts (FCs) specify rights and obligations that parties are legally
bind.Hence effective management of FCs is vital.Domain Specific Language (DSL)
approach provides a method of defining rights and obligations of contracts using fixed
and precisely defined set of combinators and observables.As a result, any contract can
be composed using fixed set of symbols, the contract management becomes efficient and effective.The Haskell Contract Combinator Library (HCCL) is the driving forcebehind the DSL approach in finance sector
This document discusses model driven software development using Eclipse and Xtext. It describes Xtext as a domain specific language development framework based on Eclipse, the Eclipse Modeling Framework (EMF), and ANTLR parser generator. It provides an overview of the history and users of Xtext, and discusses how to generate code from models defined in an external DSL using Xtext.
This short presentation draws on the computational complexity of Perl 5 regexes, the experimental fetures introduced to P5 later on and the pattern expression grammars in Perl 6. It shows some examples of how PEGs can be used for data exploratory parsing.
Navigating and Exploring RDF Data using Formal Concept AnalysisMehwish Alam
In this study we propose a new approach based on Pattern Structures, an extension of Formal Concept Analysis, to provide exploration over Linked Data through concept lattices. It takes RDF triples and RDF Schema based on user requirements and provides one navigation space resulting from several RDF resources. This navigation space provides interactive exploration over RDF data and allows user to visualize only the part of data that is interesting for her.
I used these slides for an introductory lecture (90min) to a seminar on SPARQL. This slideset introduces the semantics of the RDF query language SPARQL.
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks.pdfPo-Chuan Chen
The document describes the RAG (Retrieval-Augmented Generation) model for knowledge-intensive NLP tasks. RAG combines a pre-trained language generator (BART) with a dense passage retriever (DPR) to retrieve and incorporate relevant knowledge from Wikipedia. RAG achieves state-of-the-art results on open-domain question answering, abstractive question answering, and fact verification by leveraging both parametric knowledge from the generator and non-parametric knowledge retrieved from Wikipedia. The retrieved knowledge can also be updated without retraining the model.
Many experts believe that ageing can be delayed, this is one of the main goals of the the Institute of Healthy Ageing at University College London. I will present the results of my lifespan-extension research where we integrated publicly available genes databases in order to identify ageing related genes. I will show what challenges we met and what we have learned about the process of ageing.
Ageing is one of the fundamental mysteries in biology and many scientists are starting to study this fascinating process. I am part of the research group led by Dr Eugene Schuster at UCL Institute of Healthy Ageing. We experiment with Drosophila and Caenorhabditis elegans by modifying their genes in order to create long-lived mutants. The results of our experiments are quantified using high-throughput microarray analysis. Finally we apply information technology in order to understand how the ageing process works. I will show how we mine microarrays data in order to find the connections between thousands of genes and how we identify candidates for ageing genes.
We are interested in building a better understanding of genes functions by harnessing the large quantity of experimental microarray data in the public databases. Our hope is that after understanding the ageing process in simpler organisms we will be able to apply this knowledge in humans.
Cross-referencing expressions levels in thousands of genes and hundreds of experiments turned out to be a computationally challenging problem but Hadoop and Amazon cloud came to our rescue. In this talk I will present a case study based on our use of R with Amazon Elastic MapReduce and will give background on our bioinformatics challenges.
These slides were presented at ApacheCon Europe 2012:
http://www.apachecon.eu/schedule/presentation/3/
This document discusses personalised search for the social semantic web. It introduces Datalog± as a language for representing ontologies and preferences, and describes three frameworks for representing qualitative, quantitative, and conditional preferences in Datalog±:
1. PP-Datalog± combines Datalog± with partial qualitative preferences and probabilistic uncertainty. It defines preference combination operators and an algorithm for top-k queries.
2. GPP-Datalog± generalizes PP-Datalog± to handle group preferences from multiple users with and without probabilistic uncertainty. It defines operators for merging single-user preferences and aggregating preferences of a group.
3. Challenges include preference merging when user preferences disagree with probabilistic
Inductive Triple Graphs: A purely functional approach to represent RDFJose Emilio Labra Gayo
Slides of my presentation on 3rd International Workshop on Graph Structures for Knowledge Representation, part of the International Joint Conference on Artificial Intelligence, Beijing, China. 4 August 2013
Information access over linked data requires to determine
subgraph(s), in linked data's underlying graph, that correspond to the required information need. Usually, an information access framework is able to retrieve richer information by checking of a large number of possible subgraphs. However, on the ecking of a large number of possible subgraphs increases information access complexity. This makes information access frameworks less eective. A large number of contemporary linked data information access frameworks reduce the complexity by introducing dierent heuristics but they suer on retrieving richer information. Or, some frameworks do not care about the complexity. However, a practically usable framework should retrieve richer information with lower complexity. In linked data information access, we hypothesize that pre-processed data statistics of linked data can be used to eciently check a large number of possible subgraphs. This will help to retrieve comparatively richer information with lower data access complexity. Preliminary evaluation of our proposed hypothesis shows promising performance.
RSP-QL*: Querying Data-Level Annotations in RDF Streamskeski
This document proposes an extension to RSP-QL called RSP-QL* that allows querying of statement-level annotations in RDF streams. RSP-QL* uses the RDF* model, which allows embedding RDF triples as the subject or object of other triples. This provides an efficient way to represent statement-level metadata in RDF. The semantics of RSP-QL are extended to support RSP-QL* patterns, which can include basic graph patterns, named graphs, windows and other operators. Future work includes adding more functionality to the RDF* model, prototyping an implementation, and evaluating performance.
RuleML2015: Learning Characteristic Rules in Geographic Information SystemsRuleML
We provide a general framework for learning characterization
rules of a set of objects in Geographic Information Systems (GIS) relying
on the definition of distance quantified paths. Such expressions specify
how to navigate between the different layers of the GIS starting from
the target set of objects to characterize. We have defined a generality
relation between quantified paths and proved that it is monotone with
respect to the notion of coverage, thus allowing us to develop an interactive
and effective algorithm to explore the search space of possible rules. We
describe GISMiner, an interactive system that we have developed based
on our framework. Finally, we present our experimental results from a
real GIS about mineral exploration.
We present a new version of the data model stRDF and the query language stSPARQL for the representation and querying of geospatial data. The new versions of stRDF and stSPARQL use OGC standards to represent geometries where the original version of stSPARQL used linear constraints. In this sense stSPARQL is a subset of the recent standard GeoSPARQL proposed by OGC. We discuss the implementation of the system Strabon which is a storage and query evaluation module for stRDF/stSPARQL and the corresponding subset of GeoSPARQL. We study the performance of Strabon experimentally and show that it scales to very large data volumes.
The document discusses graph algorithms and their implementation using MapReduce. It describes how transitive closure, PageRank, and other graph algorithms can be computed in a distributed manner using MapReduce. While graph processing with MapReduce has challenges, systems like Pregel and Apache Hamburg aim to provide easier programming models for graph algorithms on large datasets.
Compiler Components and their Generators - Traditional Parsing Algorithms (Guido Wachsmuth)
This document discusses parsing algorithms for compilers. It begins with an overview of topics to be covered, including lexical analysis, parsing algorithms like predictive and LR parsing, grammar classes, and an assignment on implementing a MiniJava compiler. It then covers predictive parsing in more detail, including how to generate parsing tables from grammars and how to use these tables in a predictive parsing automaton. Finally, it discusses LR parsing and how it can handle issues like left recursion that predictive parsing cannot. It provides an example of an LR parsing step involving expression evaluation.
In this paper we establish bounds on the extreme characteristic roots of nLap(G) and sLap(G) via their traces, and also bounds on the n-th characteristic roots of nLap(G) and sLap(G). M M Jariya, "Results on Characteristic Vectors", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd28006.pdf Paper URL: https://www.ijtsrd.com/mathemetics/other/28006/results-on-characteristic-vectors/m-m-jariya
Merged Talk: A Verified Optimizer for Quantum Circuits & Verified Translation... (Robert Rand)
The document describes a verified optimizer for quantum circuits called VOQC. VOQC is implemented in Coq and consists of 8000 lines of code. It performs optimizations like gate propagation, cancellation, and merging on quantum circuits represented using a Small Quantum Intermediate Representation (SQIR). The optimizations are formally verified to be semantics-preserving by proving properties like circuits having the same effect on basis states. An example shows X and Z gate propagation rules and their associated proofs. The goal is to build a fully verified compiler stack for quantum programs.
An optimal and progressive algorithm for skyline queries slide (WooSung Choi)
The document presents an optimal and progressive algorithm for processing skyline queries using an R-tree index. It discusses two strategies - recursive nearest neighbor queries and a branch and bound skyline algorithm. The recursive NN query approach requires additional processing to eliminate duplicate results for higher dimensions, while the branch and bound skyline algorithm prunes non-skyline points during traversal to directly generate the skyline without duplicates. The algorithm processes the R-tree in a best-first manner by maintaining a priority queue of tree nodes ordered by their minimum possible skyline size.
The Persistent Homology of Distance Functions under Random Projection (Don Sheehy)
Given n points P in a Euclidean space, the Johnson-Lindenstrauss lemma guarantees that the distances between pairs of points is preserved up to a small constant factor with high probability by random projection into O(log n) dimensions. In this paper, we show that the persistent homology of the distance function to P is also preserved up to a comparable constant factor. One could never hope to preserve the distance function to P pointwise, but we show that it is preserved sufficiently at the critical points of the distance function to guarantee similar persistent homology. We prove these results in the more general setting of weighted k-th nearest neighbor distances, for which k=1 and all weights equal to zero gives the usual distance to P.
Understanding distributed calculi in Haskell (Pawel Szulc)
The document discusses distributed calculi and the pi-calculus in particular. It begins with an overview of the pi-calculus syntax including processes, input/output prefixes, parallel composition, restriction and replication. Examples are given to demonstrate communication between processes using input/output prefixes. Structural congruence and reduction rules are also covered. The document concludes with an example of modeling a simple ping-pong protocol in pi-calculus.
This presentation by OECD, OECD Secretariat, was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
This presentation was uploaded with the author’s consent.
XP 2024 presentation: A New Look to Leadership (samililja)
Presentation slides from the XP2024 conference, Bolzano, IT. The slides describe a new view of leadership and combine it with anthro-complexity (aka Cynefin).
Career goals.pptx and their importance in real life (artemacademy2)
Career goals serve as a roadmap for individuals, guiding them toward achieving long-term professional aspirations and personal fulfillment. Establishing clear career goals enables professionals to focus their efforts on developing specific skills, gaining relevant experience, and making strategic decisions that align with their desired career trajectory. By setting both short-term and long-term objectives, individuals can systematically track their progress, make necessary adjustments, and stay motivated. Short-term goals often include acquiring new qualifications, mastering particular competencies, or securing a specific role, while long-term goals might encompass reaching executive positions, becoming industry experts, or launching entrepreneurial ventures.
Moreover, having well-defined career goals fosters a sense of purpose and direction, enhancing job satisfaction and overall productivity. It encourages continuous learning and adaptation, as professionals remain attuned to industry trends and evolving job market demands. Career goals also facilitate better time management and resource allocation, as individuals prioritize tasks and opportunities that advance their professional growth. In addition, articulating career goals can aid in networking and mentorship, as it allows individuals to communicate their aspirations clearly to potential mentors, colleagues, and employers, thereby opening doors to valuable guidance and support. Ultimately, career goals are integral to personal and professional development, driving individuals toward sustained success and fulfillment in their chosen fields.
This presentation by OECD, OECD Secretariat, was made during the discussion “Competition and Regulation in Professions and Occupations” held at the 77th meeting of the OECD Working Party No. 2 on Competition and Regulation on 10 June 2024. More papers and presentations on the topic can be found at oe.cd/crps.
Mastering the Concepts Tested in the Databricks Certified Data Engineer Assoc... (SkillCertProExams)
• For a full set of 760+ questions, go to
https://skillcertpro.com/product/databricks-certified-data-engineer-associate-exam-questions/
• SkillCertPro offers detailed explanations for each question, which helps to understand the concepts better.
• It is recommended to score above 85% in SkillCertPro exams before attempting the real exam.
• SkillCertPro updates exam questions every 2 weeks.
• You will get lifetime access and lifetime free updates.
• SkillCertPro assures a 100% pass guarantee on the first attempt.
This presentation by Thibault Schrepel, Associate Professor of Law at Vrije Universiteit Amsterdam University, was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
This presentation by Juraj Čorba, Chair of OECD Working Party on Artificial Intelligence Governance (AIGO), was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
Collapsing Narratives: Exploring Non-Linearity • a micro report by Rosie Wells
Insight: In a landscape where traditional narrative structures are giving way to fragmented and non-linear forms of storytelling, there lies immense potential for creativity and exploration.
'Collapsing Narratives: Exploring Non-Linearity' is a micro report from Rosie Wells.
Rosie Wells is an Arts & Cultural Strategist uniquely positioned at the intersection of grassroots and mainstream storytelling.
Their work is focused on developing meaningful and lasting connections that can drive social change.
Please download this presentation to enjoy the hyperlinks!
Why Psychological Safety Matters for Software Teams - ACE 2024 (Ben Linders)
Psychological safety in teams is important; team members must feel safe and able to communicate and collaborate effectively to deliver value. It’s also necessary to build long-lasting teams since things will happen and relationships will be strained.
But, how safe is a team? How can we determine if there are any factors that make the team unsafe or have an impact on the team’s culture?
In this mini-workshop, we’ll play games for psychological safety and team culture utilizing a deck of coaching cards, The Psychological Safety Cards. We will learn how to use gamification to gain a better understanding of what’s going on in teams. Individuals share what they have learned from working in teams, what has impacted the team’s safety and culture, and what has led to positive change.
Different game formats will be played in groups in parallel. Examples are an ice-breaker to get people talking about psychological safety, a constellation where people take positions about aspects of psychological safety in their team or organization, and collaborative card games where people work together to create an environment that fosters psychological safety.
This presentation by OECD, OECD Secretariat, was made during the discussion “Pro-competitive Industrial Policy” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/pcip.
This presentation by Yong Lim, Professor of Economic Law at Seoul National University School of Law, was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
Suzanne Lagerweij - Influence Without Power - Why Empathy is Your Best Friend...
This is a workshop about communication and collaboration. We will experience how we can analyze the reasons for resistance to change (exercise 1) and practice how to improve our conversation style and be more in control and effective in the way we communicate (exercise 2).
This session will use Dave Gray’s Empathy Mapping, Argyris’ Ladder of Inference and The Four Rs from Agile Conversations (Squirrel and Fredrick).
Abstract:
Let’s talk about powerful conversations! We all know how to lead a constructive conversation, right? Then why is it so difficult to have those conversations with people at work, especially those in powerful positions that show resistance to change?
Learning to control and direct conversations takes understanding and practice.
We can combine our innate empathy with our analytical skills to gain a deeper understanding of complex situations at work. Join this session to learn how to prepare for difficult conversations and how to improve our agile conversations in order to be more influential without power. We will use Dave Gray’s Empathy Mapping, Argyris’ Ladder of Inference and The Four Rs from Agile Conversations (Squirrel and Fredrick).
In the session you will experience how preparing and reflecting on your conversation can help you be more influential at work. You will learn how to communicate more effectively with the people needed to achieve positive change. You will leave with a self-revised version of a difficult conversation and a practical model to use when you get back to work.
Come learn more on how to become a real influencer!
This presentation by Professor Alex Robson, Deputy Chair of Australia’s Productivity Commission, was made during the discussion “Competition and Regulation in Professions and Occupations” held at the 77th meeting of the OECD Working Party No. 2 on Competition and Regulation on 10 June 2024. More papers and presentations on the topic can be found at oe.cd/crps.
1. Learning Commonalities in SPARQL
Sara El Hassad, François Goasdoué, Hélène Jaudoin
IRISA, Univ. Rennes 1, Lannion, France
ISWC 2017, 21-26 October 2017
2. Introduction
Least general generalization (lgg)
Machine learning in the early 70s by Gordon Plotkin
Knowledge representation in the early 90s
Recently in the semantic web
Applications of lgg
Query optimization: identify candidate views, or potential query sharing
Query approximation: approximate a set of queries by a single query
Social context: recommend users asking for sufficiently related things
Goal
To study the problem in the entire conjunctive fragment of SPARQL.
6. RDF graphs
Specification of RDF graphs with triples:
(s, p, o) ∈ (U ∪ B) × U × (U ∪ L ∪ B)
Built-in property URIs to state RDF statements:
Class assertion: (s, rdf:type, o)
Property assertion: (s, p, o) with p ≠ rdf:type
[Figure: an RDF graph in which a blank node b is typed (τ) ConfPaper, has the title (hasTitle) "LGG in RDF" and a contact author (hasContactAuthor) b1]
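As an aside (not part of the slides), the well-formedness condition on triples can be sketched in a few lines of Python. The term encodings are simplifying assumptions: URIs as http-prefixed strings, blank nodes as "_:"-prefixed strings, anything else treated as a literal.

```python
# A minimal sketch of RDF triples as Python tuples, with term sorts
# checked against (s, p, o) ∈ (U ∪ B) × U × (U ∪ L ∪ B).
# The string encodings below are illustrative assumptions.

def is_uri(t):     return isinstance(t, str) and t.startswith("http")
def is_blank(t):   return isinstance(t, str) and t.startswith("_:")
def is_literal(t): return not (is_uri(t) or is_blank(t))

def is_rdf_triple(s, p, o):
    """Subject must be a URI or blank node; predicate must be a URI;
    the object may be a URI, literal, or blank node (always true here)."""
    return (is_uri(s) or is_blank(s)) and is_uri(p)

RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

g = {
    ("_:b", RDF_TYPE, "http://ex.org/ConfPaper"),     # class assertion
    ("_:b", "http://ex.org/hasTitle", "LGG in RDF"),  # property assertion
    ("_:b", "http://ex.org/hasContactAuthor", "_:b1"),
}
assert all(is_rdf_triple(*t) for t in g)
```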
7. Adding ontological knowledge to RDF graphs
Built-in property URIs to state RDF Schema statements, i.e., ontological constraints:
Subclass: (s, sc, o)
Subproperty: (s, sp, o)
Domain typing: (s, ←d, o)
Range typing: (s, →r, o)
[Figure: the previous RDF graph extended with the constraints (ConfPaper, sc, Publication), (hasContactAuthor, sp, hasAuthor), (hasAuthor, ←d, Publication) and (hasAuthor, →r, Researcher)]
8. Deriving the implicit triples
[Figure: RDF graph G, comprising the explicit triples about b and b1 together with the ontological constraints above]
How to derive the implicit triples of an RDF graph?
10. Semantics of RDF graphs
[Figure: saturated RDF graph G∞, in which the implicit triples (b, τ, Publication), (b, hasAuthor, b1) and (b1, τ, Researcher) have been materialized]
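The materialization step above can be sketched as a naive forward-chaining fixpoint. This is a hedged illustration, not the authors' implementation: it covers only the four RDFS rules named on the previous slides (subclass, subproperty, domain, range) and uses short string tags instead of real URIs.

```python
def saturate(g):
    """Naive fixpoint: apply the four RDFS rules until no new triple appears.
    Tags: "type" = rdf:type, "sc" = subclass, "sp" = subproperty,
    "d" = domain, "r" = range (illustrative encodings)."""
    g = set(g)
    while True:
        derived = set()
        for (s, p, o) in g:
            if p == "type":
                for (c, k, d) in g:
                    if k == "sc" and c == o:
                        derived.add((s, "type", d))   # subclass rule
            for (p1, k, x) in g:
                if p1 != p:
                    continue
                if k == "sp":
                    derived.add((s, x, o))            # subproperty rule
                elif k == "d":
                    derived.add((s, "type", x))       # domain rule
                elif k == "r":
                    derived.add((o, "type", x))       # range rule
        if derived <= g:                              # fixpoint reached
            return g
        g |= derived

# The running example: G's explicit triples plus its constraints
G = {("b", "type", "ConfPaper"), ("b", "hasContactAuthor", "b1"),
     ("ConfPaper", "sc", "Publication"),
     ("hasContactAuthor", "sp", "hasAuthor"),
     ("hasAuthor", "d", "Publication"), ("hasAuthor", "r", "Researcher")}
G_inf = saturate(G)
# G_inf additionally contains (b, type, Publication), (b, hasAuthor, b1)
# and (b1, type, Researcher)
```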
11. Basic graph pattern queries (BGPQ)
BGPQ: the conjunctive fragment of SPARQL queries, the counterpart of select-project-join queries in databases
(s, p, o) ∈ (V ∪ U) × (V ∪ U) × (V ∪ U ∪ L)
[Figure: sample BGPQ q1(x1), whose body asks for an x1 typed (τ) ConfPaper with a contact author y1]
13. Entailing and answering queries
Query entailment
G |=R q ⇔ G∞ |= q
[Figure: the query q(x1, x2), whose body is the single triple (x1, τ, x2), is entailed by the saturated graph G∞, e.g. through the mapping x1 → b, x2 → Publication]
15. Entailing and answering queries
Query answering
[Figure: evaluating q(x1, x2) against G∞ yields the answers (b, ConfPaper), (b, Publication) and (b1, Researcher)]
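The evaluation step illustrated above amounts to a backtracking search for homomorphisms from the query body into the saturated graph. A hedged toy sketch (variables are "?"-prefixed strings; all names illustrative), reusing the running example:

```python
def is_var(t):
    return t.startswith("?")

def bind(term, value, env):
    """Bind a variable (or check an existing binding / a constant)."""
    if is_var(term):
        return env.setdefault(term, value) == value
    return term == value

def matches(patterns, graph, env):
    """Yield all extensions of env mapping the triple patterns into graph."""
    if not patterns:
        yield dict(env)
        return
    (s, p, o) = patterns[0]
    for triple in graph:
        e = dict(env)
        if all(bind(a, b, e) for a, b in zip((s, p, o), triple)):
            yield from matches(patterns[1:], graph, e)

def answers(head, body, graph):
    """Project each homomorphism onto the answer variables."""
    return {tuple(e[v] for v in head) for e in matches(body, graph, {})}

# Saturated running example, and the query q(x1, x2) with body (x1, type, x2)
G_inf = {("b", "type", "ConfPaper"), ("b", "type", "Publication"),
         ("b", "hasContactAuthor", "b1"), ("b", "hasAuthor", "b1"),
         ("b1", "type", "Researcher")}
ans = answers(["?x1", "?x2"], [("?x1", "type", "?x2")], G_inf)
# ans == {("b", "ConfPaper"), ("b", "Publication"), ("b1", "Researcher")}
```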
17. Entailment between BGPQs
q1 |=R q2 ⇔ q1∞ |= q2
[Figure: a saturated query q1∞(x1), asking for an x1 typed (τ) Publication with a title z1, an author and a contact author y1, entails the query q2(x2), which asks for an x2 typed Publication with an author y2]
20. Towards defining lgg in the SPARQL conjunctive fragment
A least general generalization (lgg) of n descriptions d1, . . . , dn is a most specific description d generalizing every di, 1 ≤ i ≤ n, for some generalization/specialization relation between descriptions (G. Plotkin).
lgg in our SPARQL setting
descriptions are BGPQs
the generalization/specialization relation is entailment between queries
21. Defining the lgg of queries
lgg of BGPQs
Let q1, . . . , qn be BGPQs with the same arity and R a set of RDF entailment rules.
A generalization of q1, . . . , qn is a BGPQ qg such that qi |=R qg for 1 ≤ i ≤ n.
A least general generalization of q1, . . . , qn is a generalization qlgg of q1, . . . , qn such that for any other generalization qg of q1, . . . , qn: qlgg |=R qg.
[Figure: q1(x1) asks for an x1 typed (τ) ConfPaper with a contact author y1, and q2(x2) for an x2 typed JourPaper with an author y2. Without background knowledge, their lgg qlgg(bx1x2) only asks for a bx1x2 of some type bCPJP; with background knowledge, qlggO(bx1x2) asks for a bx1x2 typed Publication with an author by1y2 typed Researcher]
24. Entailment relation between BGPQs w.r.t. background knowledge
Entailment between BGPQs w.r.t. R, O
Given a set R of RDF entailment rules, a set O of RDFS statements, and two BGPQs q1 and q2 with the same arity, q1 entails q2 w.r.t. O, denoted q1 |=R,O q2, iff q1∞O |= q2 holds.
Well-founded relation: whenever q1 |=R,O q2 holds,
Query entailment: if G |=R q1 holds then G |=R q2 holds,
Query answering: q1(G) ⊆ q2(G) holds.
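Per the definition above, checking q1 |=R,O q2 reduces to finding a homomorphism from body(q2) into the saturated body of q1 that aligns the answer variables position-wise. A hedged sketch (the saturation itself is assumed already done; all names are illustrative):

```python
def entails(q1_head, q1_body, q2_head, q2_body):
    """True iff some homomorphism maps body(q2) into body(q1),
    sending q2's i-th answer variable to q1's i-th answer variable.
    q1_body is assumed already saturated w.r.t. R and O."""
    def extend(env, pattern, target):
        env = dict(env)
        for a, b in zip(pattern, target):
            if a.startswith("?"):              # variable of q2
                if env.setdefault(a, b) != b:
                    return None
            elif a != b:                       # constants must agree
                return None
        return env

    def search(patterns, env):
        if not patterns:
            # the homomorphism must align the answer variables position-wise
            return all(env.get(v2) == v1
                       for v1, v2 in zip(q1_head, q2_head))
        first, rest = patterns[0], patterns[1:]
        for target in q1_body:
            e = extend(env, first, target)
            if e is not None and search(rest, e):
                return True
        return False

    return search(list(q2_body), {})

# q1(x1) saturated w.r.t. the example ontology, and q2(x2)
q1_body = {("?x1", "type", "ConfPaper"), ("?x1", "type", "Publication"),
           ("?x1", "hasContactAuthor", "?y1"), ("?x1", "hasAuthor", "?y1")}
q2_body = {("?x2", "type", "Publication"), ("?x2", "hasAuthor", "?y2")}
assert entails(["?x1"], q1_body, ["?x2"], q2_body)   # q1 |= q2 holds
```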
26. Defining the lgg of queries w.r.t. background knowledge
Definition (lgg of BGPQs w.r.t. RDFS constraints)
Let R be a set of RDF entailment rules, O a set of RDFS statements, and q1, . . . , qn n BGPQs with the same arity.
A generalization of q1, . . . , qn w.r.t. O is a BGPQ qg such that qi |=R,O qg for 1 ≤ i ≤ n.
A least general generalization of q1, . . . , qn w.r.t. O is a generalization qlgg of q1, . . . , qn w.r.t. O such that for any other generalization qg of q1, . . . , qn w.r.t. O: qlgg |=R,O qg.
Theorem
An lgg of BGPQs w.r.t. RDFS statements may not exist for some sets of RDF entailment rules; when it exists, it is unique up to entailment (|=R,O).
Result: lgg of n BGPQs vs lgg of two BGPQs
lgg3(q1, q2, q3) ≡R,O lgg2(lgg2(q1, q2), q3)
· · ·
lggn(q1, . . . , qn) ≡R,O lgg2(lggn−1(q1, . . . , qn−1), qn)
≡R,O lgg2(lgg2(· · · lgg2(lgg2(q1, q2), q3) · · · , qn−1), qn)
We focus on computing the lgg of two BGPQs
29. Defining the lgg of queries
[Figure: q1(x1) asks for an x1 typed (τ) ConfPaper with a contact author y1, and q2(x2) for an x2 typed JourPaper with an author y2; the ontology O states that ConfPaper and JourPaper are subclasses (sc) of Publication, that hasContactAuthor is a subproperty (sp) of hasAuthor, and that hasAuthor has domain (←d) Publication and range (→r) Researcher]
[Figure: their lgg w.r.t. O, qlggO(bx1x2), asks for a bx1x2 typed Publication with an author by1y2 typed Researcher]
How to compute this query?
32. The cover of SPARQL queries
Definition (Cover query)
Let q1, q2 be two BGPQs with the same arity n.
If there exists a BGPQ q such that
head(q1) = q1(x1^1, . . . , x1^n) and head(q2) = q2(x2^1, . . . , x2^n) iff head(q) = q(v_{x1^1 x2^1}, . . . , v_{x1^n x2^n})
(t1, t2, t3) ∈ body(q1) and (t4, t5, t6) ∈ body(q2) iff (t7, t8, t9) ∈ body(q) with, for 1 ≤ i ≤ 3, t_{i+6} = t_i if t_i = t_{i+3} and t_i ∈ U ∪ L, otherwise t_{i+6} is the variable v_{t_i t_{i+3}}
then q is the cover query of q1, q2.
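The definition above is essentially a product of the two bodies with position-wise anti-unification, which is easy to sketch in Python. This is a hedged toy version (variables are "?"-prefixed strings; names illustrative); per the cover-query theorem later in the deck, feeding in the saturated bodies q1∞O and q2∞O yields an lgg w.r.t. O.

```python
def anti_unify(a, b):
    """Keep a shared constant; otherwise introduce the variable v_{a,b}."""
    if a == b and not a.startswith("?"):
        return a
    return "?v_%s_%s" % (a.lstrip("?"), b.lstrip("?"))

def cover(head1, body1, head2, body2):
    """Cover query: anti-unify heads position-wise, and take the
    pairwise product of the two bodies."""
    head = [anti_unify(a, b) for a, b in zip(head1, head2)]
    body = {tuple(anti_unify(a, b) for a, b in zip(t1, t2))
            for t1 in body1 for t2 in body2}
    return head, body

# q1(x1): an x1 typed ConfPaper with a contact author
# q2(x2): an x2 typed JourPaper with an author
q1 = (["?x1"], {("?x1", "type", "ConfPaper"),
                ("?x1", "hasContactAuthor", "?y1")})
q2 = (["?x2"], {("?x2", "type", "JourPaper"),
                ("?x2", "hasAuthor", "?y2")})
head, body = cover(*q1, *q2)
# head == ["?v_x1_x2"]; among the 4 product triples is
# ("?v_x1_x2", "type", "?v_ConfPaper_JourPaper"), the type-generalizing one
```

Folding this binary cover over a list of (saturated) queries matches the slides' reduction of the n-ary lgg to repeated binary lggs.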
37. Cover query vs lgg
Theorem
Given a set R of RDF entailment rules, a set O of RDFS statements and two BGPQs q1, q2 with the same arity,
1. the cover query q of q1∞O, q2∞O exists iff an lgg of q1, q2 w.r.t. O exists;
2. the cover query q of q1∞O, q2∞O is an lgg of q1, q2 w.r.t. O.
Corollary
A cover query-based lgg of two BGPQs q1 and q2 is computed in O(|body(q1∞O)| × |body(q2∞O)|) time and its size is |body(q1∞O)| × |body(q2∞O)|.
41. Related work
Structural approaches
RDF (rooted graphs, ignoring RDF entailment):
- [Colucci et al., 2016].
SPARQL (tree queries):
- [Lehmann and Bühmann, 2011].
Description Logics:
- [Zarrieß and Turhan, 2013].
- [Baader et al., 1999].
Approaches independent of the structure
RDF:
- [Hassad et al., 2017].
- [Petrova et al., 2017].
Conceptual Graphs:
- [Chein and Mugnier, 2009].
First Order Clauses:
- [Nienhuys-Cheng and de Wolf, 1996].
- [Plotkin, 1970].
42. Conclusion
We revisited the problem of computing a least general generalization of general BGPQs w.r.t. background knowledge.
We defined a new entailment relation between BGPQs w.r.t. background knowledge.
We studied the added value of considering background knowledge when learning lggs.
Perspective:
Heuristics to compute lggs without redundant triples.
44. References I
[Baader et al., 1999] Baader, F., Küsters, R., and Molitor, R. (1999).
Computing least common subsumers in description logics with existential restrictions.
In IJCAI.
[Chein and Mugnier, 2009] Chein, M. and Mugnier, M. (2009).
Graph-based Knowledge Representation - Computational Foundations of Conceptual Graphs.
Springer.
[Colucci et al., 2016] Colucci, S., Donini, F., Giannini, S., and Sciascio, E. D. (2016).
Defining and computing least common subsumers in RDF.
J. Web Semantics, 39(0).
[Hassad et al., 2017] Hassad, S. E., Goasdoué, F., and Jaudoin, H. (2017).
Learning commonalities in RDF.
In The 14th Extended Semantic Web Conference, ESWC 2017, Portorož, Slovenia, May 28 - June 1, 2017,
Proceedings, Part I, pages 502–517.
[Lehmann and Bühmann, 2011] Lehmann, J. and Bühmann, L. (2011).
AutoSPARQL: Let users query your knowledge base.
In ESWC.
[Nienhuys-Cheng and de Wolf, 1996] Nienhuys-Cheng, S. and de Wolf, R. (1996).
Least generalizations and greatest specializations of sets of clauses.
J. Artif. Intell. Res.
[Petrova et al., 2017] Petrova, A., Sherkhonov, E., Grau, B. C., and Horrocks, I. (2017).
Entity comparison in RDF graphs.
In International Semantic Web Conference (ISWC). Springer.
[Plotkin, 1970] Plotkin, G. D. (1970).
A note on inductive generalization.
Machine Intelligence, 5.
[W3C-RDFS, 2014] W3C-RDFS (2014).
RDF 1.1 semantics.
https://www.w3.org/TR/rdf11-mt/.
45. References II
[Zarrieß and Turhan, 2013] Zarrieß, B. and Turhan, A. (2013).
Most specific generalizations w.r.t. general EL-TBoxes.
In IJCAI.