Presentation of the paper:
Szymon Klarman and Thomas Meyer. Querying Temporal Databases via OWL 2 QL (with appendix). In Proceedings of the 8th International Conference on Web Reasoning and Rule Systems (RR-14), 2014.
The document describes a method called the "Four Russians method" to speed up Bayesian Hidden Markov Model (HMM) classification by exploiting repetition in long observation sequences. The key ideas are to break the observation sequence into blocks of length k and compute the forward variables only at block boundaries, and to sample the hidden state sequence block-by-block from the backward-forward distribution rather than the full backward distribution. This reduces the computational complexity from O(TN^2) to O(TNk/k^2) = O(TN/k).
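The blocking idea can be sketched in Python. This is an illustration of the general technique, not the paper's exact algorithm: for each distinct length-k observation block we cache the N×N operator that advances the forward vector across the block, so a repeated block costs a single matrix-vector product. All function and variable names here are ours.

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def step_matrix(trans, emit, obs):
    # one forward step as a matrix: alpha'[i] = emit[i][obs] * sum_j trans[j][i] * alpha[j]
    n = len(trans)
    return [[emit[i][obs] * trans[j][i] for j in range(n)] for i in range(n)]

def blocked_forward(obs_seq, k, init, trans, emit):
    # init is the state distribution *before* the first transition; it is
    # advanced through the transition matrix before emitting (one common
    # convention for the forward recursion).
    n = len(init)
    cache = {}
    alpha = list(init)
    for start in range(0, len(obs_seq), k):
        block = tuple(obs_seq[start:start + k])
        if block not in cache:                      # build the block operator once
            M = [[float(i == j) for j in range(n)] for i in range(n)]
            for o in block:
                M = matmul(step_matrix(trans, emit, o), M)
            cache[block] = M
        alpha = matvec(cache[block], alpha)         # one product per block
    return alpha                                    # unnormalized forward vector
```

Because the block operators compose associatively, any block size gives the same forward vector; the saving comes from the cache hits on repeated blocks.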
The box-fitting least squares (BLS) algorithm is used to detect periodic transits in light curves. It assumes the light curve only has two discrete values, fits box-shaped functions to folded light curves at different trial periods, and calculates the signal residue to identify the period with the maximum signal. The algorithm iterates over trial periods and transit durations to find the best-fitting five parameters that describe the period, depths, duration, and epoch of any transits present.
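A stripped-down sketch of the folding-and-box-fitting step (our own simplification — a real BLS implementation also scans transit durations finely and reports depth and epoch; all names here are illustrative):

```python
def bls_power(times, flux, period, n_bins=20, max_width=5):
    # Fold the light curve at the trial period into phase bins.
    n = len(flux)
    total = sum(flux)
    bins = [[] for _ in range(n_bins)]
    for t, f in zip(times, flux):
        phase = (t % period) / period
        bins[min(int(phase * n_bins), n_bins - 1)].append(f)
    # Slide a box (a run of adjacent phase bins) around the folded curve and
    # score how much the in-box mean departs from the global mean.
    best = 0.0
    for start in range(n_bins):
        s, r = 0.0, 0
        for w in range(max_width):
            box = bins[(start + w) % n_bins]
            s += sum(box)
            r += len(box)
            if 0 < r < n:
                resid = (s - r * total / n) ** 2 * n / (r * (n - r))
                best = max(best, resid)
    return best
```

Scanning `bls_power` over a grid of trial periods and keeping the maximum recovers the candidate transit period.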
"Scalable Link Discovery for Modern Data-Driven Applications", as presented at the 15th International Semantic Web Conference (ISWC 2016) Doctoral Consortium, October 18th, 2016, Kobe, Japan.
This work was supported by grants from the EU H2020 Framework Programme provided for the project HOBBIT (GA no. 688227).
The document describes a discrete-time Kalman filter implemented in Matlab to estimate the position and velocity of an underwater target. It defines the state vector, system model, and measurement model. Process and measurement noise are added through the Q and R matrices. Simulation results show the position error converges initially and remains small by the end.
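A minimal Python sketch of such a discrete-time predict/update loop (the document's code is in Matlab; the names Q and R follow the summary, while the 1-D constant-velocity model and every other name are our illustrative assumptions):

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    F = np.array([[1.0, dt], [0.0, 1.0]])   # system model: state x = [pos, vel]
    H = np.array([[1.0, 0.0]])              # measurement model: position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.zeros(2)
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return estimates
```

With a target moving at constant velocity and bounded measurement noise, the estimated position and velocity settle near the true values, mirroring the converging position error reported in the summary.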
A Commutative Alternative to Fractional Calculus on k-Differentiable Functions (Matt Parker)
This document presents a method for creating a commutative operator that acts parallel to fractional calculus operators on continuous functions. It defines spaces Ck that contain images of continuous functions and combines these into a space Cdiff that contains a subset isomorphic to the space of continuous functions C(R). An operator Dk is defined on Cdiff that commutes with itself and acts equivalently to fractional derivatives on C(R) up to the differentiability of the function. This provides a commutative alternative to fractional calculus on continuous functions.
Representing and Querying Geospatial Information in the Semantic Web (Kostis Kyzirakos)
The document discusses representing and querying geospatial information in the semantic web. It introduces stRDF, an extension of RDF that adds spatial literals and valid time to triples. It also introduces stSPARQL, an extension of SPARQL with functions for querying spatial data based on Open Geospatial Consortium standards. The document describes the Strabon system, which uses stRDF and supports both stSPARQL and the OGC standard GeoSPARQL for querying geospatial data stored in RDF graphs.
This document summarizes the derivation of an evidence lower bound (ELBO) for latent LSTM allocation, a model that uses an LSTM to determine topic assignments in a topic modeling framework. It expresses the ELBO in terms of the variational posterior distributions over topics and topic proportions, the generative process of words given topics, and the LSTM's prediction of topic assignments. It also describes how to optimize the ELBO with respect to the variational and LSTM parameters through gradient ascent.
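For orientation, the generic shape such a bound takes (our notation, not the paper's — w for words, z for topic assignments, q for the variational posterior, and p_LSTM for the LSTM's autoregressive prior over assignments; the paper's exact decomposition also carries the topic and topic-proportion terms listed above):

```latex
\log p(w) \;\ge\;
  \mathbb{E}_{q(z)}\!\bigl[\log p(w \mid z)\bigr]
  \;+\; \mathbb{E}_{q(z)}\!\bigl[\log p_{\mathrm{LSTM}}(z)\bigr]
  \;-\; \mathbb{E}_{q(z)}\!\bigl[\log q(z)\bigr]
```

Gradient ascent on this bound updates the variational parameters through the first and third terms and the LSTM parameters through the middle term.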
Building Scalable Semantic Geospatial RDF Stores (Kostis Kyzirakos)
This document outlines a model called stRDF for representing geospatial and temporal data in RDF, along with a query language called stSPARQL. It also describes Strabon, a scalable geospatial RDF store for storing and querying stRDF data. Strabon extends the Semantic Web toolkit Sesame and uses PostGIS for geospatial indexing and functions. The document evaluates Strabon's performance against Sesame on geospatial linked data and synthetic datasets. Finally, it discusses other extensions like the RDFi framework for representing data with incomplete information.
Transceiver design for single-cell and multi-cell downlink multiuser MIMO sys... (T. E. BOGALE)
The document outlines a presentation on transceiver design for single-cell and multi-cell downlink multiuser MIMO systems. It discusses MSE uplink-downlink duality under imperfect CSI, showing that the sum MSE, user MSE, and symbol MSE are dual between the uplink and downlink channels. It demonstrates how to ensure the uplink and downlink MSE values are equal to each other by appropriately setting the transmit covariance matrices. The presentation also covers transceiver design algorithms for coordinated base station systems and generalized duality for multiuser MIMO systems.
Reproducing Kernel Hilbert Space of A Set Indexed Brownian Motion (IJMERJOURNAL)
ABSTRACT: This study researches a representation of the set indexed Brownian motion X = {X_A : A ∈ 𝒜} via an orthonormal basis, based on reproducing kernel Hilbert space (RKHS) theory. The RKHS associated with the set indexed Brownian motion X is a Hilbert space of real-valued functions on T that is naturally isometric to L²(𝒜). The isometry between these Hilbert spaces leads to useful spectral representations of the set indexed Brownian motion, notably the Karhunen-Loève (KL) representation X_A = Σ_n E[X_A e_n] e_n, where {e_n} is an orthonormal sequence of centered Gaussian variables. In addition, we present two special cases of this representation, when 𝒜 = 𝒜([0,1]^d) and when 𝒜 = 𝒜(L_s).
Run Or Walk In The Rain? (Orthogonal Projected Area of Ellipsoid) (iosrjce)
IOSR Journal of Applied Physics (IOSR-JAP) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of physics and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in applied physics. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This document provides an overview of algebraic aspects of quantum Lévy processes. It begins with background on algebraic terminology and stochastic processes like Lévy processes. It then defines quantum Lévy processes and describes some of their basic properties, including the correspondence between quantum Lévy processes and Schürmann triples. The document also discusses the Lévy-Khinchin decomposition property for quantum Lévy processes and provides examples and counterexamples. It concludes by mentioning some open questions and known results regarding classification of quantum Lévy processes over different algebraic structures.
Accelerating Collapsed Variational Bayesian Inference for Latent Dirichlet Al... (Tomonari Masada)
1. The document discusses accelerating collapsed variational Bayesian inference for latent Dirichlet allocation (CVB) using Nvidia CUDA compatible GPU devices.
2. It describes parallelizing CVB for LDA by assigning different topics to different GPU threads. This achieves near-linear speedup compared to a single-threaded CPU implementation.
3. Experiments on text and image datasets demonstrate that the GPU implementation provides faster inference over the CPU version, though data transfer latency and memory limits remain challenges for large-scale problems.
This document discusses different graph kernel methods including shortest path kernel, graphlet kernel, and Weisfeiler-Lehman kernel. It outlines the algorithms for each kernel and describes how they are used to compute similarity between graphs. An experiment is described that tests the performance of each kernel on different types of graph datasets using 10-fold SVM classification. The graphlet kernel achieved the highest accuracy while shortest path kernel had the lowest. Graphlet kernel also had the highest computational time complexity.
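A toy version of the graphlet kernel mentioned above, counting all 3-node graphlet types by brute force (practical graphlet kernels sample larger graphlets; all names here are our own illustration):

```python
from itertools import combinations

def graphlet3_counts(n, edges):
    # Count induced 3-node subgraphs by number of edges: 0, 1, 2 (path), 3 (triangle).
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    counts = [0, 0, 0, 0]
    for a, b, c in combinations(range(n), 3):
        k = (b in adj[a]) + (c in adj[a]) + (c in adj[b])
        counts[k] += 1
    return counts

def graphlet_kernel(g1, g2):
    # Similarity = dot product of the normalized graphlet-count vectors.
    c1, c2 = graphlet3_counts(*g1), graphlet3_counts(*g2)
    t1, t2 = sum(c1), sum(c2)
    return sum((x / t1) * (y / t2) for x, y in zip(c1, c2))
```

Feeding the resulting pairwise kernel matrix to an SVM is exactly the 10-fold classification setup the experiment describes.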
1. The document discusses sorting and searching algorithms and exercises related to analyzing their time complexities. It provides solutions for exercises involving sorting lists by different schemes, optimizing sorting algorithms by splitting arrays, and selecting subsets from sorted and unsorted arrays.
2. Key points made include that sorting then selecting items is faster than repeated searches for large lists, and that splitting an array into square root of n subsets each sorted and merged provides the optimal time complexity of O(n√n) for sorting algorithms like insertion sort.
3. Questions ask to determine complexities of sorting algorithms like insertion sort, merge sort, and selection from arrays, and to recommend optimal algorithms like quicksort or selection for given array sizes and problem constraints.
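The √n-splitting scheme from point 2 can be sketched as follows (our illustration: cut the array into chunks of about √n elements, insertion-sort each chunk in O(n) time per chunk, then k-way merge, for O(n√n) total work):

```python
import heapq
from math import isqrt

def insertion_sort(a):
    # Standard in-place insertion sort; O(m^2) on a chunk of size m.
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def sqrt_split_sort(a):
    n = len(a)
    if n <= 1:
        return list(a)
    size = max(1, isqrt(n))                  # about sqrt(n) elements per chunk
    chunks = [insertion_sort(list(a[i:i + size])) for i in range(0, n, size)]
    return list(heapq.merge(*chunks))        # merge the sorted chunks
```

Each of the ~√n chunks costs O((√n)²) = O(n) to insertion-sort, giving O(n√n) overall, versus O(n²) for insertion sort on the whole array.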
This document contains 6 practice problems about vectors in R² from a Calculus & Physics 101 course. The problems cover topics like finding the sum and scalar multiples of vectors, sketching triangles defined by vectors, calculating work done using dot products of force and displacement vectors, and using differentiation and integration to calculate work as a function of time or position when force is variable. The document provides space for showing work and includes teacher notes on vectors in R² and the dot product from a PreCalculus textbook.
Wang-Landau Monte Carlo simulation is a method for calculating the density of states function which can then be used to calculate thermodynamic properties like the mean value of variables. It improves on traditional Monte Carlo methods which struggle at low temperatures due to complicated energy landscapes with many local minima separated by large barriers. The Wang-Landau algorithm calculates the density of states function directly rather than relying on sampling configurations, allowing it to overcome barriers and fully explore the configuration space even at low temperatures.
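A toy sketch of the algorithm's core loop on a made-up two-level system (all names and parameters are our own; real applications use physical models like Ising lattices): the walker is accepted with probability min(1, g(E_old)/g(E_new)), ln g(E) is raised by ln f at every visit, and ln f is halved whenever the energy histogram is roughly flat.

```python
import math
import random

def wang_landau(energy, n_states, ln_f_final=1e-4, flatness=0.8, seed=1):
    rng = random.Random(seed)
    ln_g, hist = {}, {}              # running ln g(E) estimates and visit counts
    s, ln_f = 0, 1.0
    while ln_f > ln_f_final:
        for _ in range(1000):
            t = rng.randrange(n_states)              # propose a random state
            gain = ln_g.get(energy(s), 0.0) - ln_g.get(energy(t), 0.0)
            if math.log(rng.random() + 1e-300) < gain:
                s = t                                # accept with prob min(1, g_old/g_new)
            e = energy(s)
            ln_g[e] = ln_g.get(e, 0.0) + ln_f        # penalize the visited energy
            hist[e] = hist.get(e, 0) + 1
        mean = sum(hist.values()) / len(hist)
        if min(hist.values()) > flatness * mean:     # histogram roughly flat?
            ln_f /= 2.0                              # refine the modification factor
            hist = {}
    return ln_g
```

Because visits penalize already-explored energies, the walk is pushed over barriers toward rarely visited energies, which is what lets the method work at effectively all temperatures at once.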
D. Vulcanov, REM — the Shape of Potentials for f(R) Theories in Cosmology and... (SEENET-MTP)
This document summarizes a presentation given at the 2013 Balkan Workshop in Vrnjacka Banja, Serbia on using the "reverse engineering method" (REM) to model cosmology. The presentation reviewed REM and how it can be used to determine scalar field potentials from a given scale factor evolution. Computer programs for numerically and graphically processing REM with different cosmologies were discussed. Examples presented included regular and tachyonic potentials, and cosmology with non-minimally coupled scalar fields and f(R) gravity. Specific examples plotted potentials and scale factors for exponential and linear expansion universes. The presentation concluded with references for further reading on REM and its applications in cosmology.
Achieving Spatial Adaptivity while Searching for Approximate Nearest Neighbors (Don Sheehy)
This document presents a new data structure for approximate nearest neighbor search that achieves spatially adaptive query times. It uses multiple shifted Z-order curves to map high-dimensional points to 1-dimensional keys, allowing the use of finger search data structures. This results in O(d^(3/2)) approximation quality, O(d log δ(p,q)) query time, where δ(p,q) is the number of points between the previous and current queries, O(dn) space complexity, and O(d^2 n log n) preprocessing time.
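The Z-order (Morton) mapping underlying the structure can be sketched in a few lines (our own illustration — the actual data structure uses several randomly shifted copies of this curve; names are ours): interleave the bits of the d coordinates so that nearby points tend to receive nearby 1-D keys.

```python
def morton_key(point, bits=16):
    # Interleave the low `bits` bits of each nonnegative integer coordinate:
    # coordinate i contributes its b-th bit at key position b*d + i.
    key = 0
    d = len(point)
    for b in range(bits):
        for i, x in enumerate(point):
            key |= ((x >> b) & 1) << (b * d + i)
    return key
```

Sorting points by `morton_key` lays them out along the Z-order curve, after which 1-D finger search applies directly.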
Joint CSI Estimation, Beamforming and Scheduling Design for Wideband Massive ... (T. E. BOGALE)
The document presents a new design for joint channel estimation, beamforming, and scheduling for wideband massive MIMO systems. It proposes using non-orthogonal pilots for channel estimation and a two-phase scheduling approach. Simulation results show the proposed design achieves higher total rates than conventional OFDM and performs better in dense multipath environments, especially with larger bandwidths and antenna arrays. An open issue discussed is comparing the proposed non-orthogonal pilot scheme to non-orthogonal multiple access techniques.
Reducing Structural Bias in Technology Mapping (satrajit)
The document discusses techniques to reduce structural bias in technology mapping. It proposes using supergates, which combine multiple library gates, to allow matches at intermediate points not present in the original circuit. It also describes performing lossless synthesis to merge equivalent networks and add choice nodes. Experimental results show the combined approach of supergates and lossless synthesis improves delay and area over the baseline.
The document discusses different algorithms for solving the single-pair shortest path problem in graph theory. It describes the Dijkstra, Bellman-Ford, and Floyd-Warshall algorithms. The Floyd-Warshall algorithm finds the shortest paths between all pairs of vertices in a graph and can handle graphs with negative edge weights, though it cannot have negative cycles. Pseudocode is provided to illustrate how the algorithm works by iteratively updating a shortest path matrix.
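The iterative matrix update the pseudocode describes can be sketched compactly (illustrative names, negative edge weights allowed but no negative cycles):

```python
def floyd_warshall(n, edges):
    # edges: list of (u, v, weight) directed edges over vertices 0..n-1
    INF = float("inf")
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
    for k in range(n):              # allow vertex k as an intermediate hop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

The three nested loops give the O(n³) all-pairs cost; reading off one entry of `dist` answers a single-pair query.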
Hamilton-Jacobi equations and Lax-Hopf formulae for traffic flow modeling (Guillaume Costeseque)
The document discusses using Hamilton-Jacobi equations and Lax-Hopf formulas to model traffic flow. It introduces the Lighthill-Whitham-Richards traffic model in both Eulerian and Lagrangian coordinates. In the Eulerian framework, the cumulative vehicle count satisfies a Hamilton-Jacobi equation, and Lax-Hopf formulas provide representations involving minimizing cost along trajectories. Similarly in the Lagrangian framework, vehicle position satisfies a Hamilton-Jacobi equation, and Lax-Hopf formulas involve minimizing cost along characteristic curves. The document outlines applying variational principles and optimal control interpretations to these traffic models.
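In one common Eulerian convention (our notation — sign conventions vary across the literature): with N(t,x) the cumulative vehicle count, density ρ = -∂N/∂x, and flow ∂N/∂t = f(ρ) for the LWR fundamental diagram f, the model becomes the Hamilton-Jacobi equation

```latex
\partial_t N(t,x) - f\!\bigl(-\partial_x N(t,x)\bigr) = 0
```

whose Lax-Hopf formula expresses N(t,x) as an infimum over candidate trajectories of initial/boundary data plus a running cost, which is the variational principle the document applies.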
The document discusses scheduling algorithms for hardware synthesis. It describes different types of scheduling problems including unconstrained scheduling, scheduling with timing constraints, and scheduling with resource constraints. It provides examples of algorithms for each type, such as ASAP for unconstrained scheduling, and Bellman-Ford and Liao-Wong algorithms for scheduling under detailed timing constraints. The goal of scheduling is to assign start times to operations under given constraints to optimize area and latency.
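ASAP scheduling for the unconstrained case can be sketched as follows (our illustration, assuming unit operation delays; names are ours): every operation starts as soon as all of its predecessors have finished.

```python
def asap_schedule(ops, deps):
    # ops: list of operation names; deps: {op: [predecessor ops]}
    start = {}
    def t(op):
        # earliest start = max over predecessors of (their start + unit delay)
        if op not in start:
            start[op] = max((t(p) + 1 for p in deps.get(op, [])), default=0)
        return start[op]
    for op in ops:
        t(op)
    return start
```

The resource- and timing-constrained variants mentioned above start from this unconstrained solution and then shift operations to satisfy the extra constraints.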
The all-electron GW method based on WIEN2k: Implementation and applications. (ABDERRAHMANE REGGAD)
The all-electron GW method based on WIEN2k:
Implementation and applications.
Ricardo I. Gómez-Abal
Fritz-Haber-Institut of the Max-Planck-Society
Faradayweg 4-6, D-14195, Berlin, Germany
15th WIEN2k Workshop
March 29th, 2008
The document discusses temporal planning and modeling of actions over time. It introduces the concept of representing planning problems using a time-oriented view with timelines rather than a state-oriented view. A timeline consists of temporal assertions about state variables over time intervals along with constraints. Actions are modeled as triples containing a name, a set of temporal assertions describing the effects over time, and constraints. This representation supports overlapping actions and reasoning about how state variable values change over time.
Prediction and Explanation over DL-Lite Data Streams (Szymon Klarman)
Presentation for the paper:
Szymon Klarman and Thomas Meyer. Prediction and Explanation over DL-Lite Data Streams. In Proceedings of the 19th International Conference on Logic for Programming, Artificial Intelligence and Reasoning (LPAR-19), 2013.
Quantum Annealing for Dirichlet Process Mixture Models with Applications to N... (Shu Tanaka)
Our paper entitled “Quantum Annealing for Dirichlet Process Mixture Models with Applications to Network Clustering" was published in Neurocomputing. This work was done in collaboration with Dr. Issei Sato (Univ. of Tokyo), Dr. Kenichi Kurihara (Google), Prof. Seiji Miyashita (Univ. of Tokyo), and Prof. Hiroshi Nakagawa (Univ. of Tokyo).
http://www.sciencedirect.com/science/article/pii/S0925231213005535
The preprint version is available:
http://arxiv.org/abs/1305.4325
This document discusses canonical-Laplace transforms and various testing function spaces. It begins by defining the canonical-Laplace transform and establishes some testing function spaces using Gelfand-Shilov technique, including CLa,b,γ, CLab,β, CLγa,b,β, CLa,b,β,n, and CLγa,,m,β,n. It then presents results on countable unions of s-type spaces, proving that various spaces can be expressed as countable unions and discussing topological properties. The document concludes by stating that canonical-Laplace transforms are generalized in a distributional sense and results on countable unions of s-type spaces are discussed, along with the topological structure
Kinetic pathways to the isotropic-nematic phase transformation: a mean field ...Amit Bhattacharjee
Here we illustrate the classic Ginzburg-Landau-de Gennes theory of isotropic nematic phase transition and show how fluctuations as well as deterministic kinetics can lead to phase equilibria.
Job sequencing with deadlines(with example)Vrinda Sheela
The document describes an algorithm for job sequencing with deadlines. It takes as input the deadline array D and job array J of size n. It assigns jobs to time slots T while respecting the deadlines, with the goal of maximizing profit. The algorithm initializes the time slots to empty, then iterates through jobs to find the earliest slot k meeting the deadline and assigns the job if the slot is empty, else moves to the next slot. This produces a job sequence with maximum profit respecting all deadlines. An example is provided to illustrate the algorithm.
Discrete-time systems are systems that are digital or arise from sampling a continuous-time system. Signals in discrete-time systems are defined only for discrete time points like t=0, 1, 2, 3, etc. rather than continuously over time. The z-transform is an important tool for analyzing linear time-invariant discrete-time systems and relates the input and output signals similar to how the transfer function relates the input and output in continuous-time systems. The discrete transfer function G(z) describes the system and is equal to the z-transform of the impulse response sequence g[k].
This document contains problems involving vectors in three-dimensional space (R3) and their applications. It covers finding components of vectors, computing cross products, using cross products to find the area of triangles and polygons, relating cross products to torque, and applying angular momentum equations. The problems demonstrate key vector and multivariable calculus concepts taught in a Calculus & Physics 102 course on vectors in three-dimensional space and torque.
New data structures and algorithms for \\post-processing large data sets and ...Alexander Litvinenko
In this work, we describe advanced numerical tools for working with multivariate functions and for
the analysis of large data sets. These tools will drastically reduce the required computing time and the
storage cost, and, therefore, will allow us to consider much larger data sets or ner meshes. Covariance
matrices are crucial in spatio-temporal statistical tasks, but are often very expensive to compute and
store, especially in 3D. Therefore, we approximate covariance functions by cheap surrogates in a
low-rank tensor format. We apply the Tucker and canonical tensor decompositions to a family of
Matern- and Slater-type functions with varying parameters and demonstrate numerically that their
approximations exhibit exponentially fast convergence. We prove the exponential convergence of the
Tucker and canonical approximations in tensor rank parameters. Several statistical operations are
performed in this low-rank tensor format, including evaluating the conditional covariance matrix,
spatially averaged estimation variance, computing a quadratic form, determinant, trace, loglikelihood,
inverse, and Cholesky decomposition of a large covariance matrix. Low-rank tensor approximations
reduce the computing and storage costs essentially. For example, the storage cost is reduced from an
exponential O(nd) to a linear scaling O(drn), where d is the spatial dimension, n is the number of
mesh points in one direction, and r is the tensor rank. Prerequisites for applicability of the proposed
techniques are the assumptions that the data, locations, and measurements lie on a tensor (axesparallel)
grid and that the covariance function depends on a distance,...
This document contains the mid-term exam for the Communication Networks course EE 333. The exam consists of 3 questions covering topics related to communication networks including:
1) Analytic expressions for the time taken for transmission of N packet blocks in an ARQ scheme with propagation delay. Expressions are given for N=1,2,3.
2) Code words, circular shifts, and error detection for CRC codes. An example error pattern is shown to be undetectable.
3) A Markov chain model for a communication channel and derivation of the maximum success probability.
- The document discusses estimating structured vector autoregressive (VAR) models from time series data.
- A VAR model of order d is defined as xt = A1xt-1 + ... + Adxt-d + εt, where xt is a p-dimensional time series, Ak are parameter matrices, and εt is noise.
- The document proposes regularizing the VAR model estimation problem to promote structured sparsity in the parameter matrices Ak. This involves transforming the model into a linear regression form and applying group lasso or fused lasso regularization.
Vertical wind speed measurements from Doppler LIDAR were analyzed to characterize wind speed extremes. Gaussian process models were fit to capture the nonseparability of the spatial and temporal covariance structure. A spectral-in-time covariance function was developed that includes a frequency-dependent spatial coherence function. Fast fitting methods were used to approximate the likelihood for large datasets. Zero crossing statistics were also examined to analyze wind speed thresholds over time. Future work will include incorporating cyclostationary models and additional climate variables as covariates.
This document proposes a modular beamforming architecture for ultrasound imaging that uses FPGA DSP cells to overcome limitations of previous designs. It interleaves the interpolation and coherent summation processes, reducing hardware resources. This allows implementing a 128-channel beamformer in a single FPGA, achieving flexibility like FPGAs but with lower power consumption like ASICs. The design is scalable, allowing a tradeoff between number of channels, time resolution, and resource usage.
Computing the masses of hyperons and charmed baryons from Lattice QCDChristos Kallidonis
Poster presented at the Computational Sciences 2013 Conference (Winner of poster competition). We present results on the masses of all forty light, strange and charm baryons from Lattice QCD simulations, focusing particularly on the computational aspects and requirements of such calculations.
(1) The document discusses algorithms analysis using the divide and conquer paradigm and the master theorem. It analyzes the running times of binary search, merge sort, and quicksort using the master theorem.
(2) For quicksort, it shows that picking a random pivot element leads to an expected running time of O(n log n) since it balances the problem sizes on both sides of the pivot in each recursive call.
(3) It ultimately derives that the expected running time of quicksort is O(n log n).
This document describes using MATLAB to analyze a synthetic time series dataset representing climate data over 500,000 years. The time series contains periodic signals at 100ky, 41ky and 21ky. Random noise and a long term trend are added. Fourier analysis is used to identify the dominant periodic components in the frequency domain. A Hamming window and bandpass filter are applied to further analyze specific frequency bands like the 21ky signal. Autocorrelation is also examined to identify cyclic patterns in the time series.
This document discusses encoding data structures to answer range maximum queries (RMQs) in an optimal way. It describes how the shape of the Cartesian tree of an array A can be encoded in 2n bits to answer RMQ queries, returning the index of the maximum element rather than its value. It also discusses encodings for other problems like nearest larger values, range selection, and others. Many of these encodings use asymptotically optimal space of roughly n log k bits for an input of size n with parameter k.
This thesis examines the return interval distribution of extreme events in long memory time series that have two different scaling exponents. It first reviews long memory processes and their characterization using fractional autoregressive integrated moving average (ARFIMA) models. It then derives an analytical expression for the return interval distribution when the time series has two scaling exponents, as supported by numerical simulations. The thesis also considers a long memory probability process with two exponents and compares the return interval distribution to analytical results.
Queuing theory is a branch of mathematics that studies the behavior of waitin...Sonam704174
Queuing theory is a branch of mathematics that studies the behavior of waiting lines, or queues, in systems where entities such as customers, jobs, or data packets arrive at a service point and wait for service.
Response Surface in Tensor Train format for Uncertainty QuantificationAlexander Litvinenko
We apply low-rank Tensor Train format to solve PDEs with uncertain coefficients. First, we approximate uncertain permeability coefficient in TT format, then the operator and then apply iterations to solve stochastic Galerkin system.
Similar to Querying Temporal Databases via OWL 2 QL (20)
Formal Verification of Data Provenance RecordsSzymon Klarman
Szymon Klarman, Stefan Schlobach and Luciano Serafini. Formal Verification of Data Provenance Records. In Proceedings of the 11th International Semantic Web Conference (ISWC-12), 2012
Data driven approaches to empirical discoverySzymon Klarman
The document discusses several data-driven approaches to empirical discovery, including inductive machine systems. It describes the Function Induction System (FIS), BACON, FAHRENHEIT, and IDS systems. FIS used condition-action rules to detect patterns and recursively apply functions to residuals. BACON discovered laws relating independent and dependent terms using heuristics and defined new theoretical terms. FAHRENHEIT extended BACON by determining the scope of laws using separate numeric laws defining limits.
Presentation for paper:
Szymon Klarman, Ulle Endriss and Stefan Schlobach. ABox Abduction in the Description Logic ALC. In Journal of Automated Reasoning, 46(1), pp. 43-80, 2011.
Judgment Aggregation as Maximization of Epistemic and Social UtilitySzymon Klarman
Presentation for paper:
Szymon Klarman. Judgment Aggregation as Maximization of Epistemic and Social Utility. In Proceedings of the 2nd International Workshop on Computational Social Choice (COMSOC-08), 2008.
This document discusses representing knowledge with contexts in description logics. It proposes two-dimensional, two-sorted description logics of context that include:
1. Separate context and object languages to describe contexts and knowledge respectively.
2. Contexts represented as first-order objects that can be organized into relational structures and described in the context language.
3. A two-dimensional semantics where each possible world has its own description logic interpretation, allowing for alternative viewpoints.
Ontology learning from interpretations in lightweight description logicsSzymon Klarman
Presentation of the paper:
Szymon Klarman and Katarina Britz. Ontology Learning from Interpretations in Lightweight Description Logics. In Proceedings of the 25th International Conference on Inductive Logic Programming (ILP-15), 2015
What makes a linked data pattern interesting?Szymon Klarman
A short talk on the problem of mining linked data (RDF) patterns, introducing a few preliminary notions towards the definition of generic linked data mining algorithms.
SKOS: Building taxonomies with minimum ontological commitmentSzymon Klarman
A short introduction to Simple Knowledge Organisation System (SKOS) - a W3C standard for representing taxonomies, thesuari, and other classification systems. Presented at the Semantic Web London meetup (April, 2017)
Sexuality - Issues, Attitude and Behaviour - Applied Social Psychology - Psyc...PsychoTech Services
A proprietary approach developed by bringing together the best of learning theories from Psychology, design principles from the world of visualization, and pedagogical methods from over a decade of training experience, that enables you to: Learn better, faster!
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
Travis Hills of MN is Making Clean Water Accessible to All Through High Flux ...Travis Hills MN
By harnessing the power of High Flux Vacuum Membrane Distillation, Travis Hills from MN envisions a future where clean and safe drinking water is accessible to all, regardless of geographical location or economic status.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 104 M⊙ and a rich and dense population of massive stars, supermassive young star clusters
represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions
among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate
the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars.
The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically,
the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec.
Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within
and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation
were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a
photon flux threshold of approximately 2 × 10−8 photons cm−2
s
−1
. The X-ray sources exhibit a highly concentrated spatial distribution,
with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known
massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
JAMES WEBB STUDY THE MASSIVE BLACK HOLE SEEDSSérgio Sacani
The pathway(s) to seeding the massive black holes (MBHs) that exist at the heart of galaxies in the present and distant Universe remains an unsolved problem. Here we categorise, describe and quantitatively discuss the formation pathways of both light and heavy seeds. We emphasise that the most recent computational models suggest that rather than a bimodal-like mass spectrum between light and heavy seeds with light at one end and heavy at the other that instead a continuum exists. Light seeds being more ubiquitous and the heavier seeds becoming less and less abundant due the rarer environmental conditions required for their formation. We therefore examine the different mechanisms that give rise to different seed mass spectrums. We show how and why the mechanisms that produce the heaviest seeds are also among the rarest events in the Universe and are hence extremely unlikely to be the seeds for the vast majority of the MBH population. We quantify, within the limits of the current large uncertainties in the seeding processes, the expected number densities of the seed mass spectrum. We argue that light seeds must be at least 103 to 105 times more numerous than heavy seeds to explain the MBH population as a whole. Based on our current understanding of the seed population this makes heavy seeds (Mseed > 103 M⊙) a significantly more likely pathway given that heavy seeds have an abundance pattern than is close to and likely in excess of 10−4 compared to light seeds. Finally, we examine the current state-of-the-art in numerical calculations and recent observations and plot a path forward for near-future advances in both domains.
PPT on Alternate Wetting and Drying presented at the three-day 'Training and Validation Workshop on Modules of Climate Smart Agriculture (CSA) Technologies in South Asia' workshop on April 22, 2024.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field
equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational
field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin
spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling
concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect
light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is
mitigated, at least in part.
Microbial interaction
Microorganisms interacts with each other and can be physically associated with another organisms in a variety of ways.
One organism can be located on the surface of another organism as an ectobiont or located within another organism as endobiont.
Microbial interaction may be positive such as mutualism, proto-cooperation, commensalism or may be negative such as parasitism, predation or competition
Types of microbial interaction
Positive interaction: mutualism, proto-cooperation, commensalism
Negative interaction: Ammensalism (antagonism), parasitism, predation, competition
I. Mutualism:
It is defined as the relationship in which each organism in interaction gets benefits from association. It is an obligatory relationship in which mutualist and host are metabolically dependent on each other.
Mutualistic relationship is very specific where one member of association cannot be replaced by another species.
Mutualism require close physical contact between interacting organisms.
Relationship of mutualism allows organisms to exist in habitat that could not occupied by either species alone.
Mutualistic relationship between organisms allows them to act as a single organism.
Examples of mutualism:
i. Lichens:
Lichens are excellent example of mutualism.
They are the association of specific fungi and certain genus of algae. In lichen, fungal partner is called mycobiont and algal partner is called
II. Syntrophism:
It is an association in which the growth of one organism either depends on or improved by the substrate provided by another organism.
In syntrophism both organism in association gets benefits.
Compound A
Utilized by population 1
Compound B
Utilized by population 2
Compound C
utilized by both Population 1+2
Products
In this theoretical example of syntrophism, population 1 is able to utilize and metabolize compound A, forming compound B but cannot metabolize beyond compound B without co-operation of population 2. Population 2is unable to utilize compound A but it can metabolize compound B forming compound C. Then both population 1 and 2 are able to carry out metabolic reaction which leads to formation of end product that neither population could produce alone.
Examples of syntrophism:
i. Methanogenic ecosystem in sludge digester
Methane produced by methanogenic bacteria depends upon interspecies hydrogen transfer by other fermentative bacteria.
Anaerobic fermentative bacteria generate CO2 and H2 utilizing carbohydrates which is then utilized by methanogenic bacteria (Methanobacter) to produce methane.
ii. Lactobacillus arobinosus and Enterococcus faecalis:
In the minimal media, Lactobacillus arobinosus and Enterococcus faecalis are able to grow together but not alone.
The synergistic relationship between E. faecalis and L. arobinosus occurs in which E. faecalis require folic acid
CLASS 12th CHEMISTRY SOLID STATE ppt (Animated)eitps1506
Description:
Dive into the fascinating realm of solid-state physics with our meticulously crafted online PowerPoint presentation. This immersive educational resource offers a comprehensive exploration of the fundamental concepts, theories, and applications within the realm of solid-state physics.
From crystalline structures to semiconductor devices, this presentation delves into the intricate principles governing the behavior of solids, providing clear explanations and illustrative examples to enhance understanding. Whether you're a student delving into the subject for the first time or a seasoned researcher seeking to deepen your knowledge, our presentation offers valuable insights and in-depth analyses to cater to various levels of expertise.
Key topics covered include:
Crystal Structures: Unravel the mysteries of crystalline arrangements and their significance in determining material properties.
Band Theory: Explore the electronic band structure of solids and understand how it influences their conductive properties.
Semiconductor Physics: Delve into the behavior of semiconductors, including doping, carrier transport, and device applications.
Magnetic Properties: Investigate the magnetic behavior of solids, including ferromagnetism, antiferromagnetism, and ferrimagnetism.
Optical Properties: Examine the interaction of light with solids, including absorption, reflection, and transmission phenomena.
With visually engaging slides, informative content, and interactive elements, our online PowerPoint presentation serves as a valuable resource for students, educators, and enthusiasts alike, facilitating a deeper understanding of the captivating world of solid-state physics. Explore the intricacies of solid-state materials and unlock the secrets behind their remarkable properties with our comprehensive presentation.
Anti-Universe And Emergent Gravity and the Dark UniverseSérgio Sacani
Recent theoretical progress indicates that spacetime and gravity emerge together from the entanglement structure of an underlying microscopic theory. These ideas are best understood in Anti-de Sitter space, where they rely on the area law for entanglement entropy. The extension to de Sitter space requires taking into account the entropy and temperature associated with the cosmological horizon. Using insights from string theory, black hole physics and quantum information theory we argue that the positive dark energy leads to a thermal volume law contribution to the entropy that overtakes the area law precisely at the cosmological horizon. Due to the competition between area and volume law entanglement the microscopic de Sitter states do not thermalise at sub-Hubble scales: they exhibit memory effects in the form of an entropy displacement caused by matter. The emergent laws of gravity contain an additional ‘dark’ gravitational force describing the ‘elastic’ response due to the entropy displacement. We derive an estimate of the strength of this extra force in terms of the baryonic mass, Newton’s constant and the Hubble acceleration scale a0 = cH0, and provide evidence for the fact that this additional ‘dark gravity force’ explains the observed phenomena in galaxies and clusters currently attributed to dark matter.
1. Querying Temporal Databases via OWL 2 QL
Szymon Klarman and Thomas Meyer
Centre for Artificial Intelligence Research,
CSIR Meraka Institute & University of KwaZulu-Natal,
South Africa
September 16, 2014
RR-14, Athens
2. RR 2014 Querying Temporal Databases via OWL 2 QL
Problem and motivation
Ontology-based data access is a paradigm of querying relational data via an
ontological (semantic) layer (OWL 2 QL ∼ DL-Lite):
data + conjunctive query + ontology → SQL query + RDBMS.
Problem:
Can we use a similar approach to accessing temporal databases?
(SQL:2011 in IBM DB2 10.1, Oracle DB 11g Workspace Manager, etc.)
Approach:
We propose an interval-based temporal query language which:
• modularly combines CQs with temporal logic (FOMLO),
• enables reuse of rewriting techniques for CQs (in OWL 2 QL),
• is easily rewritable into SQL (AC0 data complexity).
S. Klarman and T. Meyer 1 / 16
3. DL-Lite
A family of Description Logics used for ontology-based data access:
• ABox A (data): Employee(john), worksAt(john, dep1)
• TBox T (ontology): Employee ⊑ Person, worksAt ⊑ isEmployedAt
• conjunctive query: ∃y.ϕ(x, y), where ϕ is a conjunction of atoms:
q(x) := ∃z.(Person(x) ∧ worksAt(x, z) ∧ basedIn(z, barcelona))
• CQ answering via FO rewriting, using existing RDBMSs:
T , A |= q iff db(A) |= qT
where db(A) is A viewed as a database (unique minimal model).
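The FO-rewriting step above can be sketched for the atomic case. This is a minimal illustration, not the full OWL 2 QL rewriting algorithm; the function names and the data layout (`tbox` as pairs of predicate names, `db` as sets of individuals) are assumptions for the example:

```python
def rewrite_atom(pred, tbox):
    """Close an atomic query predicate under the TBox inclusion axioms:
    every sub-predicate of `pred` can contribute certain answers.
    tbox: set of pairs (sub, sup) meaning sub ⊑ sup."""
    result, frontier = {pred}, [pred]
    while frontier:
        p = frontier.pop()
        for sub, sup in tbox:
            if sup == p and sub not in result:
                result.add(sub)
                frontier.append(sub)
    return result

def certain_answers(pred, tbox, db):
    """Answer the atomic query pred(x) over db(A) via the rewriting."""
    # db: dict mapping predicate name -> set of individuals
    return set().union(*(db.get(p, set()) for p in rewrite_atom(pred, tbox)))

tbox = {("Employee", "Person")}
db = {"Employee": {"john"}, "Person": {"mary"}}
```

Answering Person(x) then returns john as well, because Employee ⊑ Person makes the Employee facts contribute certain answers.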
4. Ontology-based access to temporal DBs
Temporal database:

Emp:
id   name   department   from   to
e1   john   d1           1998   2000
e1   john   d3           2000   2003
e2   mark   d2           1999   2002

Dep:
id   type        location    from   to
d1   financial   madrid      1998   1999
d1   financial   barcelona   1999   2003
d2   hr          barcelona   2000   2003
d3   hq          london      2000   2003
5. Ontology-based access to temporal DBs (cont.)
(Virtual) temporal ABox derived from the tables:
[1998, 2000] : Emp(e1)
[1998, 2000] : name(e1, john)
[1998, 2000] : department(e1, d1)
. . .
[1998, 1999] : Dep(d1)
[1998, 1999] : type(d1, financial)
[1998, 1999] : location(d1, madrid)
. . .
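The table-to-ABox step can be sketched as follows; a minimal sketch assuming a fixed mapping from the Emp columns to concept/role assertions (the function name and string encoding of assertions are illustrative):

```python
def emp_to_abox(rows):
    """Map Emp(id, name, department, from, to) rows to
    time-stamped ABox assertions of the form (interval, axiom)."""
    abox = []
    for eid, name, dep, frm, to in rows:
        abox += [
            ((frm, to), f"Emp({eid})"),
            ((frm, to), f"name({eid}, {name})"),
            ((frm, to), f"department({eid}, {dep})"),
        ]
    return abox

emp = [("e1", "john", "d1", 1998, 2000), ("e1", "john", "d3", 2000, 2003)]
```

Each row yields one assertion per non-key column, all stamped with the row's validity interval, exactly as in the listing above.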
6. Ontology-based access to temporal DBs (cont.)
TBox: {Emp ⊑ Person, department ⊑ worksAt, location ⊑ basedIn}
7. Ontology-based access to temporal DBs (cont.)
Query:
Find all persons X and times Y, such that X worked at a department based in
Barcelona during Y and in a department based in Madrid some time earlier.
Answers, e.g.:
X = e1 with Y = [1999, 2000].
8. Temporal data
Time:
A time domain is a pair (T, <) (linear, point-based). An interval
τ = [τ−, τ+], for τ− ≤ τ+ ∈ T, is the set {t ∈ T | τ− ≤ t ≤ τ+}.
Temporal ABoxes:
A concrete temporal ABox is a set A of time-stamped ABox axioms:
τ : α
where τ is an interval and α is an ABox axiom, e.g.:
[1, 2] : Employee(john), [2, 2] : worksAt(john, dep1)
Every concrete temporal ABox A corresponds to an abstract temporal ABox
⌊A⌋ = (At)t∈T via a mapping ⌊·⌋, where
At = {α | τ : α ∈ A and t ∈ τ}.
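The concrete-to-abstract mapping can be sketched directly from the definition, assuming a discrete, integer time domain (the function name is illustrative):

```python
def abstract_abox(concrete, time_domain):
    """At = {α | τ : α ∈ A and t ∈ τ}, computed for each time point t."""
    return {t: {alpha for (lo, hi), alpha in concrete if lo <= t <= hi}
            for t in time_domain}

# The example assertions from the slide above.
A = [((1, 2), "Employee(john)"), ((2, 2), "worksAt(john, dep1)")]
```

At t = 2 both axioms hold; at t = 1 only the Employee assertion does.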
9. Temporal query language: syntax
• an interval-based temporal language (FOMLO),
• epistemic interpretation of CQs in the temporal language.
TQL formulas:
ψ ::= [q](u) | u∗ < v∗ | ¬ψ | ψ1 ∧ ψ2 | ∃y.ψ
where:
• q is a CQ,
• u, v, y are temporal interval terms,
• ∗ ∈ {−, +}.
Example:
ψ(x, y) := [∃z.(Person(x) ∧ worksAt(x, z) ∧ basedIn(z, barcelona))](y) ∧
           ∃v.(v+ < y− ∧ [∃z.(worksAt(x, z) ∧ basedIn(z, madrid))](v))
10. Temporal query language: semantics
For a (temporal) substitution π:
T , A, π |= u∗ < v∗ iff π(u)∗ < π(v)∗,
T , A, π |= ¬ψ iff T , A, π ⊭ ψ,
T , A, π |= ψ1 ∧ ψ2 iff T , A, π |= ψ1 and T , A, π |= ψ2,
T , A, π |= ∃y.ψ iff there exists τ ∈ I, such that T , A, π[y → τ] |= ψ.
CQs are embedded in TQL using epistemic semantics.
T , A, π |= [q](u) iff T , At |= q, for every t ∈ π(u)
Therefore...
• read [q](τ) as: it is true that q is entailed at all time points in τ.
• ¬[q](τ) interpreted via negation-as-failure: it is not true that...
• CQ rewriting can be directly applied:
T , At |= q iff tdb(At) |= qT
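The epistemic reading of [q](u) can be sketched over an abstract ABox given as time-indexed fact sets. Entailment is simplified here to membership of the query's ground atoms (no TBox reasoning), so the helper is illustrative only:

```python
def holds(abstract_abox, q_atoms, interval):
    """[q](u): q must be entailed at *every* time point of the interval
    (epistemic reading); missing time points entail nothing."""
    lo, hi = interval
    return all(q_atoms <= abstract_abox.get(t, set()) for t in range(lo, hi + 1))

abox = {1: {"A(a)", "B(a)"}, 2: {"A(a)"}}
```

With this reading, ¬[q](τ) is negation-as-failure: it holds as soon as q fails at a single time point of τ.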
11. From DLs to 2FO
Two-sorted FO language:
ϕ ::= R(d1, . . . , dn, t1, t2) | ¬ϕ | ϕ1 ∧ ϕ2 | t1 < t2 | ∃x.ϕ | ∃y.ϕ
Temporal database:
The temporal database corresponding to A is the tuple
tdb(A) = (NI, T, <, ·D), where:
• NI is the data domain and (T, <) the time domain,
• ·D is the interpretation over Γ = {Rα | α ∈ NC ∪ NR}, where:
• RA^D = {(a, τ−, τ+) | τ : A(a) ∈ A}, for every A ∈ NC,
• Rr^D = {(a, b, τ−, τ+) | τ : r(a, b) ∈ A}, for every r ∈ NR.
12. Query answering
Temporal semantics is not effectively supported by SQL:2011 systems:
tdb(A) |= RA(a, t1, t2) iff (a, t1, t2) ∈ RA^D
E.g., for A = {[1, 2] : A(a), [2, 3] : A(a)} we get:
tdb(A) ⊭ RA(a, 1, 3)
Key problems:
• computing temporal joins, i.e., identifying maximal time intervals
over which conjunctions of atoms are satisfied,
• applying coalescing, i.e., merging overlapping and adjacent
intervals for the (intermediate) query results.
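Coalescing, the second key problem above, can be sketched as interval merging over a discrete time domain: overlapping or adjacent validity intervals collapse into one, so that [1, 2] and [2, 3] yield [1, 3] as in the example. A minimal sketch:

```python
def coalesce(intervals):
    """Merge overlapping and adjacent closed intervals [lo, hi]
    over a discrete time domain."""
    merged = []
    for lo, hi in sorted(intervals):
        if merged and lo <= merged[-1][1] + 1:   # overlaps or is adjacent
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged
```

Sorting first makes a single left-to-right pass sufficient, which is also why coalescing is cheaper to apply once at the data level than inside every query.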
21. Query rewriting
For [q(a)]([2, 9]), where q1(a) = B(a) ∧ C(a) and q2(a) = D(a) ∧ C(a):

⌊[q(a)]([2, 9])⌋2FO = ∃t1, t2.(Rcoal_qT(a, t1, t2) ∧ t1 ≤ 2 ∧ 9 ≤ t2)

Rcoal_qT(a, t1, t2) := ∃t3, t4.(RqT(a, t1, t3) ∧ RqT(a, t4, t2)) ∧
    ¬∃t5, t6.(RqT(a, t5, t6) ∧ t5 < t1 ∧ t1 ≤ t6) ∧
    ¬∃t5, t6.(RqT(a, t5, t6) ∧ t5 ≤ t2 ∧ t2 < t6) ∧
    ¬∃t5, t6.(RqT(a, t5, t6) ∧ t1 < t5 ∧ t6 ≤ t2 ∧
        ¬∃t7, t8.(RqT(a, t7, t8) ∧ t7 < t5 ∧ t5 ≤ t8))

RqT(a, u, v) := Rq1(a, u, v) ∨ Rq2(a, u, v)

Rq1(a, u, v) := ∃t1, . . . , t4.(RB(a, t1, t2) ∧ RC(a, t3, t4) ∧
    u = max(t1, t3) ∧ v = min(t2, t4) ∧ u ≤ v)

Rq2(a, u, v) := ∃t1, . . . , t4.(RD(a, t1, t2) ∧ RC(a, t3, t4) ∧
    u = max(t1, t3) ∧ v = min(t2, t4) ∧ u ≤ v)
SQL translation
• Translation from 2FO to SQL is straightforward using standard
techniques.
• In practice, the domain T must be represented as an explicit
relation RT in the database and used to guard temporal
quantifiers.
• Coalescing at the query level is known to be inefficient; it is
usually better to apply it at the data level.
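The SQL side of the translation can be sketched as follows (a hedged illustration via SQLite; table and column names are ours, not from the paper): the temporal join for q1(a) = B(a) ∧ C(a) becomes a plain join whose interval is computed with scalar MAX/MIN, guarded by u ≤ v.

```python
# Sketch: the temporal join for q1 = B(a) AND C(a) as plain SQL.
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE R_B (a TEXT, lo INT, hi INT);
CREATE TABLE R_C (a TEXT, lo INT, hi INT);
INSERT INTO R_B VALUES ('a', 1, 7);
INSERT INTO R_C VALUES ('a', 1, 3), ('a', 3, 10);
""")

# u = max(t1, t3), v = min(t2, t4), kept only when u <= v.
rows = con.execute("""
SELECT R_B.a,
       MAX(R_B.lo, R_C.lo) AS u,
       MIN(R_B.hi, R_C.hi) AS v
FROM R_B JOIN R_C ON R_B.a = R_C.a
WHERE MAX(R_B.lo, R_C.lo) <= MIN(R_B.hi, R_C.hi)
ORDER BY u
""").fetchall()

print(rows)  # [('a', 1, 3), ('a', 3, 7)]
```

Here SQLite's multi-argument scalar `MAX`/`MIN` play the role of max(t1, t3) and min(t2, t4) in the R_q1 rewriting; the coalescing step would be layered on top (or, as the slide suggests, done at the data level).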
Results: correctness
Theorem (Correctness of 2FO rewriting)
For every TBox T, temporal ABox A, TQL query ψ, and answer σ to ψ, it
holds that:
T, A ⊨ σ(ψ) iff tdb(A) ⊨ ⟨σ(ψ)⟩_2FO.
Corollary (TQL queries are generic)
Whenever A ≅ A′, then:
1. T, A ⊨ σ(ψ) iff T, A′ ⊨ σ(ψ),
2. tdb(A) ⊨ ⟨σ(ψ)⟩_2FO iff tdb(A′) ⊨ ⟨σ(ψ)⟩_2FO.
Results: data complexity
Theorem (Data complexity)
The data complexity of TQL query entailment over finite time domains is in AC⁰.
Note:
The finite domain restriction is necessary to ensure that queries can be
effectively evaluated on TDBs considered as finite FO structures.
Results: combined complexity
Size of TQL queries:
The size of ⟨ψ⟩_2FO is linear in the joint size of ψ and the FO rewritings
q^T_1, . . . , q^T_n of the CQs embedded in ψ.
Theorem (Combined complexity)
The combined complexity of TQL query entailment over finite time domains
is PSPACE-complete.
• The result transfers from the entailment of Boolean FO queries
over relational databases.
• A PSPACE procedure cannot directly rely on the rewriting...
Results: combined complexity
Instead:
• compute and coalesce intermediate answers to all embedded CQs,
• and evaluate the query without rewriting the CQs.
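The two steps above can be sketched on the interval data of the running example (a toy illustration, not the paper's PSPACE procedure itself): coalesce the intermediate CQ answers, then check the query interval [2, 9] against the result.

```python
# Sketch: coalesce intermediate CQ answers, then evaluate [q(a)]([2,9]).

def coalesce(intervals):
    """Merge overlapping and adjacent intervals into maximal ones."""
    out = []
    for lo, hi in sorted(intervals):
        if out and lo <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], hi))
        else:
            out.append((lo, hi))
    return out

# Intermediate answers to the embedded CQs q1, q2 for individual a.
answers_q = [(1, 3), (3, 7), (6, 10)]

coalesced = coalesce(answers_q)
print(coalesced)  # [(1, 10)]

# [q(a)]([2, 9]) holds iff some coalesced interval covers [2, 9].
print(any(lo <= 2 and 9 <= hi for lo, hi in coalesced))  # True
```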
[Figure: timeline (1–12) illustrating the procedure for [q(a)]([2, 9]): the base intervals R_B(a, 1, 7), R_C(a, 1, 3), R_C(a, 3, 10), R_D(a, 6, 12) yield the intermediate answers R_q1(a, 1, 3), R_q1(a, 3, 7), R_q2(a, 6, 10); the corresponding R_qT intervals (1, 3), (3, 7), (6, 10) coalesce into R^coal_qT(a, 1, 10), which covers [2, 9].]
A potentially better-behaved approach is based on materialized view
maintenance: it is incremental and more responsive to updates.
Conclusions
Temporal query language:
• on the CQ / DL interface the approach reuses only existing
techniques and tools,
• the expressive power of the temporal component subsumes LTL,
• it satisfies the FO rewriting properties.
Outlook:
• The SQL translation must be further optimized (materialized view
maintenance; temporal joins and coalescing in future versions of SQL).
• Examine the links to other approaches to temporalizing OBDA.
• Examine the links to other approaches to temporalizing OBDA.
S. Borgwardt, M. Lippmann, and V. Thost. Temporal query answering in the description logic DL-Lite. In Proc. of FroCoS-13, 2013.
A. Artale, R. Kontchakov, F. Wolter, and M. Zakharyaschev. Temporal description logic for ontology-based data access. In Proc. of IJCAI-13, 2013.