Linked Open Data (LOD) is about publishing and interlinking data of different origin and purpose on the web. The Resource Description Framework (RDF) is used to describe data on the LOD cloud. In contrast to relational databases, RDF does not provide a fixed, pre-defined schema. Rather, RDF allows for flexibly modeling the data schema by attaching RDF types and properties to the entities. Our schema-level index called SchemEX allows for searching in large-scale RDF graph data. The index can be efficiently computed with reasonable accuracy over large-scale data sets with billions of RDF triples, the smallest information unit on the LOD cloud. SchemEX is highly needed as the size of the LOD cloud quickly increases. Due to the evolution of the LOD cloud, one observes frequent changes of the data. We show that the data schema also changes, in terms of combinations of RDF types and properties. As change counts alone cannot capture the dynamics of the LOD cloud, current work includes temporal clustering and finding periodicities in entity dynamics over large-scale snapshots of the LOD cloud with about 100 million triples per week for more than three years.
Formalization and Preliminary Evaluation of a Pipeline for Text Extraction From Infographics - Ansgar Scherp
We propose a pipeline for text extraction from infographics that makes use of a novel combination of data mining and computer vision techniques. The pipeline defines a sequence of steps to identify characters, cluster them into text lines, determine their rotation angle, and apply state-of-the-art OCR to recognize the text. In this paper, we formally define the pipeline and present its current implementation. In addition, we have conducted preliminary evaluations over a data corpus of 121 manually annotated infographics from a broad range of illustration types such as bar charts, pie charts, line charts, maps, and others. We assess the results of our text extraction pipeline by comparing it with two baselines. Finally, we sketch an outline for future work and possibilities for improving the pipeline. - http://ceur-ws.org/Vol-1458/
A Comparison of Different Strategies for Automated Semantic Document Annotation - Ansgar Scherp
We introduce a framework for automated semantic document annotation that is composed of four processes, namely concept extraction, concept activation, annotation selection, and evaluation. The framework is used to implement and compare different annotation strategies motivated by the literature. For concept extraction, we apply entity detection with semantic hierarchical knowledge bases, trigrams, RAKE, and LDA. For concept activation, we compare a set of statistical, hierarchy-based, and graph-based methods. For selecting annotations, we compare top-k as well as kNN. In total, we define 43 different strategies, including novel combinations like using graph-based activation with kNN. We have evaluated the strategies using three different datasets of varying size from three scientific disciplines (economics, politics, and computer science) that contain 100,000 manually labeled documents in total. We obtain the best results on all three datasets with our novel combination of entity detection, graph-based activation (e.g., HITS and Degree), and kNN. For the economics and political science datasets, the best F-measures are .39 and .28, respectively. For the computer science dataset, a maximum F-measure of .33 can be reached. These are by far the largest experiments on scholarly content annotation, where prior work typically uses only up to a few hundred documents per dataset.
Gregor Große-Bölting, Chifumi Nishioka, and Ansgar Scherp. 2015. A Comparison of Different Strategies for Automated Semantic Document Annotation. In Proceedings of the 8th International Conference on Knowledge Capture (K-CAP 2015). ACM, New York, NY, USA, Article 8, 8 pages. DOI=http://dx.doi.org/10.1145/2815833.2815838
Mining and Managing Large-scale Linked Open Data - MOVING Project
Knowledge Discovery in Social Media and Scientific Digital Libraries - Ansgar Scherp
The talk presents selected results of our research in the area of text and data mining in social media and scientific literature. (1) First, we consider the area of classifying microblogging postings like tweets on Twitter. Typically, the classification results are evaluated against a gold standard, which is either the hashtags of the tweets’ authors or manual annotations. We claim that there are fundamental differences between these two kinds of gold standard classifications and conducted an experiment with 163 participants to manually classify tweets from ten topics. Our results show that the human annotators are more likely to classify tweets like other human annotators than like the tweets’ authors (i.e., the hashtags). This may influence the evaluation of classification methods like LDA, and we argue that researchers should reflect on the kind of gold standard used when interpreting their results. (2) Second, we present a framework for semantic document annotation that aims to compare different existing as well as new annotation strategies. For entity detection, we compare semantic taxonomies, trigrams, RAKE, and LDA. For concept activation, we cover a set of statistical, hierarchy-based, and graph-based methods. The strategies are evaluated over 100,000 manually labeled scientific documents from economics, politics, and computer science. (3) Finally, we present a processing pipeline for extracting text of varying size, rotation, color, and emphases from scholarly figures. The pipeline does not need training, nor does it make any assumptions about the characteristics of the scholarly figures. We conducted a preliminary evaluation with 121 figures from a broad range of illustration types.
URL: https://www.ukp.tu-darmstadt.de/ukp-home/news-singleview/artikel/guest-speaker-ansgar-scherp/
Big Data is a new term used in Business Analytics to identify datasets that we cannot manage with current methodologies or data mining software tools due to their large size and complexity. Big Data mining is the capability of extracting useful information from these large datasets or streams of data. New mining techniques are necessary due to the volume, variability, and velocity of such data.
In this talk, we will focus on advanced techniques for Big Data mining in real time using evolving data stream techniques: using a small amount of time and memory resources and being able to adapt to changes. We will discuss a social network application of data stream mining to compute user influence probabilities. Finally, we will present the MOA software framework with classification, regression, and frequent pattern methods, and the SAMOA distributed streaming software that runs on top of Storm, Samza, and S4.
An overview of streaming algorithms: what they are, what the general principles regarding them are, and how they fit into a big data architecture, along with four specific examples of streaming algorithms and use cases.
Mining Big Data Streams with APACHE SAMOA - Albert Bifet
In this talk, we present Apache SAMOA, an open-source platform for mining big data streams with Apache Flink, Storm, and Samza. Real-time analytics is becoming the fastest and most efficient way to obtain useful knowledge from what is happening now, allowing organizations to react quickly when problems appear or to detect new trends, helping to improve their performance. Apache SAMOA includes algorithms for the most common machine learning tasks such as classification and clustering. It provides a pluggable architecture that allows it to run not only on Apache Flink, but also on several other distributed stream processing engines such as Storm and Samza.
Streaming data presents new challenges for statistics and machine learning on extremely large data sets. Tools such as Apache Storm, a stream processing framework, can power a range of data analytics but lack advanced statistical capabilities. These slides are from the ApacheCon talk, which discussed developing streaming algorithms with the flexibility of both Storm and R, a statistical programming language.
At the talk I discussed why and how to use Storm and R to develop streaming algorithms; in particular I focused on:
• Streaming algorithms
• Online machine learning algorithms
• Use cases showing how to process hundreds of millions of events a day in (near) real time
See: https://apacheconna2015.sched.org/event/09f5a1cc372860b008bce09e15a034c4#.VUf7wxOUd5o
Big Data and the Internet of Things (IoT) have the potential to fundamentally shift the way we interact with our surroundings. The challenge of deriving insights from the Internet of Things (IoT) has been recognized as one of the most exciting and key opportunities for both academia and industry. Advanced analysis of big data streams from sensors and devices is bound to become a key area of data mining research as the number of applications requiring such processing increases. Dealing with the evolution over time of such data streams, i.e., with concepts that drift or change completely, is one of the core issues in stream mining. In this talk, I will present an overview of data stream mining, and I will introduce some popular open source tools for data stream mining.
Artificial intelligence and data stream mining - Albert Bifet
Big Data and Artificial Intelligence have the potential to fundamentally shift the way we interact with our surroundings. The challenge of deriving insights from data streams has been recognized as one of the most exciting and key opportunities for both academia and industry. Advanced analysis of big data streams from sensors and devices is bound to become a key area of artificial intelligence research as the number of applications requiring such processing increases. Dealing with the evolution over time of such data streams, i.e., with concepts that drift or change completely, is one of the core issues in stream mining. In this talk, I will present an overview of data stream mining, industrial applications, open source tools, and current challenges of data stream mining.
Streaming data analysis in real time is becoming the fastest and most efficient way to obtain useful knowledge from what is happening now, allowing organizations to react quickly when problems appear or to detect new trends helping to improve their performance. Evolving data streams are contributing to the growth of data created over the last few years. Every two days we create as much data as we created from the dawn of time up until 2003. Evolving data stream methods are becoming a low-cost, green methodology for real-time online prediction and analysis. We discuss the current and future trends of mining evolving data streams, and the challenges that the field will have to overcome during the next years.
Max-kernel search: How to search for just about anything?
Nearest neighbor search is a well studied and widely used task in computer science and is quite pervasive in everyday applications. While search is not synonymous with learning, search is a crucial tool for the most nonparametric form of learning. Nearest neighbor search can directly be used for all kinds of learning tasks — classification, regression, density estimation, outlier detection. Search is also the computational bottleneck in various other learning tasks such as clustering and dimensionality reduction. Key to nearest neighbor search is the notion of “near”-ness or similarity. Mercer kernels form a class of general nonlinear similarity functions and are widely used in machine learning. They can define a notion of similarity between pairs of objects of any arbitrary type and have been successfully applied to a wide variety of object types — fixed-length data, images, text, time series, graphs. I will present a technique to do nearest neighbor search with this class of similarity functions provably efficiently, hence facilitating faster learning for larger data.
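To make the task concrete, here is a brute-force sketch of max-kernel search with a Gaussian (RBF) kernel; the point set and kernel choice are illustrative only, and the talk's contribution is doing this search provably faster than such a linear scan:

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    """Mercer kernel: nonlinear similarity between two fixed-length points."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def max_kernel_search(query, data, kernel=rbf_kernel):
    """Brute force: return the point maximizing k(query, point)."""
    return max(data, key=lambda point: kernel(query, point))

points = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5)]
print(max_kernel_search((0.9, 1.1), points))  # -> (1.0, 1.0)
```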
Meetup MLDD: Machine Learning Dresden, 8th May 2018
Signals from outer space
How NASA Benefits from Graph-Powered NLP
Vlasta Kus talked about the advantages of graph-based natural language processing (NLP) using a public NASA dataset as example. From his abstract: "[...] we are building a platform (from large part open-source) that integrates Neo4j and NLP (such as Named Entity Recognition, sentiment analysis, word embeddings, LDA topic extraction), and we test and develop further related features and tools, lately, for example, integrating Neo4j and Tensorflow for employing deep learning techniques (such as deep auto-encoders for automatic text summarisation)."
Vlasta holds a Ph.D. in Physics from the Charles University in Prague and has worked for SecureOps, as a freelance Data Scientist, and since 2017 as a Data Scientist at GraphAware (https://graphaware.com/), a London-based company that builds solutions around Neo4j.
Presentation for the Softskills Seminar course @ Telecom ParisTech. Topic is the paper by Domingos and Hulten, "Mining High-Speed Data Streams". Presented by me on 30/11/2017.
While much of the recent literature in spatial statistics has evolved around addressing the big data issue, practical implementations of these methods on high performance computing systems for truly large data are still rare. We discuss our explorations in this area at the National Center for Atmospheric Research for a range of applications, which can benefit from large scale computing infrastructure. These applications include extreme value analysis, approximate spatial methods, spatial localization methods and statistically-based data compression and are implemented in different programming languages. We will focus on timing results and practical considerations, such as speed vs. memory trade-offs, limits of scaling and ease of use.
Efficient Online Evaluation of Big Data Stream Classifiers - Albert Bifet
The evaluation of classifiers in data streams is fundamental so that poorly-performing models can be identified, and either improved or replaced by better-performing models. This is an increasingly relevant and important task as stream data is generated from more sources, in real-time, in large quantities, and is now considered the largest source of big data. Both researchers and practitioners need to be able to effectively evaluate the performance of the methods they employ. However, there are major challenges for evaluation in a stream. Instances arriving in a data stream are usually time-dependent, and the underlying concept that they represent may evolve over time. Furthermore, the massive quantity of data also tends to exacerbate issues such as class imbalance. Current frameworks for evaluating streaming and online algorithms are able to give predictions in real-time, but as they use a prequential setting, they build only one model, and are thus not able to compute the statistical significance of results in real-time. In this paper we propose a new evaluation methodology for big data streams. This methodology addresses unbalanced data streams, data where change occurs on different time scales, and the question of how to split the data between training and testing, over multiple models.
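For contrast, here is a minimal sketch of the standard prequential (interleaved test-then-train) scheme the paper improves on; `model` stands for any incremental classifier with hypothetical `predict`/`learn` methods:

```python
def prequential_accuracy(stream, model):
    """Prequential evaluation: each instance is first tested on, then trained on."""
    correct = total = 0
    for x, y in stream:
        if model.predict(x) == y:  # test on the instance first ...
            correct += 1
        model.learn(x, y)          # ... then use it for training
        total += 1
    return correct / total if total else 0.0
```

As the abstract notes, this setting builds only a single model, which is why statistical significance cannot be assessed in real time.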
About Multimedia Presentation Generation and Multimedia Metadata: From Synthe... - Ansgar Scherp
ACM SIGMM Rising Stars Symposium
The ACM SIGMM Rising Stars Symposium, inaugurated in 2015, will highlight plenary presentations of six selected rising SIGMM members on their vision and research achievements, and dialogs with senior members about the future of multimedia research.
See: http://www.acmmm.org/2016/?page_id=706
Events in Multimedia - Theory, Model, Application - Ansgar Scherp
Talk by Ansgar Scherp.
Title: Events in Multimedia - Theory, Model, Application
Event: Workshop on Event-based Media Integration and Processing, ACM Multimedia, 2013
A Framework for Iterative Signing of Graph Data on the Web - Ansgar Scherp
Existing algorithms for signing graph data typically do not cover the whole signing process. In addition, they lack distinctive features such as signing graph data at different levels of granularity, iterative signing of graph data, and signing multiple graphs. In this paper, we introduce a novel framework for signing arbitrary graph data provided, e.g., as RDF(S), Named Graphs, or OWL. We conduct an extensive theoretical and empirical analysis of the runtime and space complexity of different framework configurations. The experiments are performed on synthetic and real-world graph data of different sizes and with different numbers of blank nodes. We investigate security issues, present a trust model, and discuss practical considerations for using our signing framework.
We released a Java-based open source implementation of our software framework for iterative signing of arbitrary graph data provided, e.g., as RDF(S), Named Graphs, or OWL. The software framework is based on a formalization of different graph signing functions and supports different configurations. It is available in source code as well as pre-compiled as a .jar file.
The graph signing framework exhibits the following unique features:
- Signing graphs on different levels of granularity
- Signing multiple graphs at once
- Iterative signing of graph data for provenance tracking
- Independence of the used language for encoding the graph (i.e., the signature does not break when changing the graph representation)
The documentation of the software framework and its source code is available from: http://icp.it-risk.iwvi.uni-koblenz.de/wiki/Software_Framework_for_Signing_Graph_Data
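For intuition, here is a toy Python sketch of one naive signing configuration (the released framework itself is Java, and this is not its algorithm): canonicalize by sorting N-Triples lines, then sign with RSA-PSS. A real canonicalization must also handle blank nodes, which plain sorting does not:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def canonicalize(ntriples: str) -> bytes:
    # Naive canonical form: sorted, stripped N-Triples lines.
    # Blank-node relabeling, which real graph signing handles, is omitted.
    lines = sorted(line.strip() for line in ntriples.splitlines() if line.strip())
    return "\n".join(lines).encode("utf-8")

def sign_graph(ntriples: str, private_key) -> bytes:
    return private_key.sign(
        canonicalize(ntriples),
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
graph = '<http://ex.org/a> <http://ex.org/p> "v" .\n<http://ex.org/a> <http://ex.org/q> "w" .'
signature = sign_graph(graph, key)
```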
Smart photo selection: interpret gaze as personal interest - Ansgar Scherp
Manually selecting subsets of photos from large collections in order to present them to friends or colleagues or to print them as photo books can be a tedious task. Today, fully automatic approaches are at hand for supporting users. They make use of pixel information extracted from the images, analyze contextual information such as capture time and focal aperture, or use both to determine a proper subset of photos. However, these approaches miss the most important factor in the photo selection process: the user. The goal of our approach is to consider individual interests. By recording and analyzing gaze information while users view photo collections, we obtain information on their interests and use this information in the creation of personal photo selections. In a controlled experiment with 33 participants, we show that the selections can be significantly improved over a baseline approach by up to 22% when taking individual viewing behavior into account. We also obtained significantly better results for photos taken at an event the participants were involved in, compared with photos from another event.
OSDC 2016 - Chronix - A fast and efficient time series storage based on Apache Solr - NETWAYS
How to store billions of time series points and access them within a few milliseconds? Chronix!
Chronix is a young but mature open source project that allows one, for example, to store about 15 GB (CSV) of time series in 238 MB with average query times of 21 ms. Chronix is built on top of Apache Solr, a bulletproof distributed NoSQL database with impressive search capabilities. In this code-intense session we show how Chronix achieves its efficiency in both respects by means of an ideal chunking, by selecting the best compression technique, by enhancing the stored data with (pre-computed) attributes, and by specialized query functions.
A Fast and Efficient Time Series Storage Based on Apache Solr - QAware GmbH
OSDC 2016, Berlin: Talk by Florian Lautenschlager (@flolaut, Senior Software Engineer at QAware)
Abstract: How to store billions of time series points and access them within a few milliseconds? Chronix! Chronix is a young but mature open source project that allows one, for example, to store about 15 GB (CSV) of time series in 238 MB with average query times of 21 ms. Chronix is built on top of Apache Solr, a bulletproof distributed NoSQL database with impressive search capabilities. In this code-intense session we show how Chronix achieves its efficiency in both respects by means of an ideal chunking, by selecting the best compression technique, by enhancing the stored data with (pre-computed) attributes, and by specialized query functions.
Real-Time Analytics with Apache Cassandra and Apache Spark, by Guido Schmutz, Technology Manager, Trivadis. Talk given at the Swiss Data Forum in Lausanne on November 24.
Real-Time Analytics with Apache Cassandra and Apache Spark - Guido Schmutz
Time series data is everywhere: IoT, sensor data, financial transactions. The industry has moved to databases like Cassandra to handle the high velocity and high volume of data that is now commonplace. However, data is pointless without being able to process it in near real time. That's where Spark combined with Cassandra comes in! What was once just your storage system (Cassandra) can be transformed into an analytics system, and it's really surprising how easy it is!
Time to Science/Time to Results: Transforming Research in the Cloud - Amazon Web Services
This session demonstrates how the cloud can accelerate breakthroughs in scientific research by providing on-demand access to powerful computing. You will gain insight into how scientific researchers are using the cloud to solve complex science, engineering, and business problems that require high-bandwidth, low-latency networking and very high compute capabilities. You will hear how leveraging the cloud reduces the costs and time to conduct large-scale, worldwide collaborative research. Researchers can access computational power, data storage, supercomputing resources, and data sharing capabilities in a cost-efficient manner without implementation delays. Disease research can be accomplished in a fraction of the time, and innovative researchers in small schools or distant corners of the world have access to the same computing power as those at major research institutions by leveraging Amazon EC2, Amazon S3, optimized C3 instances, and more to increase collaboration. This session will provide best practices and insight from the UC Berkeley AMP Lab on the services used to connect disparate sets of data to drive meaningful new insight and impact.
Using the Open Science Data Cloud for Data Science Research - Robert Grossman
The Open Science Data Cloud is a petabyte scale science cloud for managing, analyzing, and sharing large datasets. We give an overview of the Open Science Data Cloud and how it can be used for data science research.
Ehtsham Elahi, Senior Research Engineer, Personalization Science and Engineer... - MLconf
Spark and GraphX in the Netflix Recommender System: We at Netflix strive to deliver maximum enjoyment and entertainment to our millions of members across the world. We do so by having great content and by constantly innovating on our product. A key strategy to optimize both is to follow a data-driven method. Data allows us to find optimal approaches to applications such as content buying or our renowned personalization algorithms. But, in order to learn from this data, we need to be smart about the algorithms we use, how we apply them, and how we can scale them to our volume of data (over 50 million members and 5 billion hours streamed over three months). In this talk we describe how Spark and GraphX can be leveraged to address some of our scale challenges. In particular, we share insights and lessons learned on how to run large probabilistic clustering and graph diffusion algorithms on top of GraphX, making it possible to apply them at Netflix scale.
RISELab: Enabling Intelligent Real-Time Decisions, keynote by Ion Stoica - Spark Summit
A long-standing grand challenge in computing is to enable machines to act autonomously and intelligently: to rapidly and repeatedly take appropriate actions based on information in the world around them. To address this challenge, at UC Berkeley we are starting a new five year effort that focuses on the development of data-intensive systems that provide Real-Time Intelligence with Secure Execution (RISE). Following in the footsteps of AMPLab, RISELab is an interdisciplinary effort bringing together researchers across AI, robotics, security, and data systems. In this talk I’ll present our research vision and then discuss some of the applications that will be enabled by RISE technologies.
RISELab: Enabling Intelligent Real-Time Decisions - Jen Aman
Spark Summit East Keynote by Ion Stoica
Chronix Time Series Database - The New Time Series Kid on the Block - QAware GmbH
Apache Big Data Conference 2016, Vancouver BC: Talk by Florian Lautenschlager (@flolaut, Senior Software Engineer).
Abstract: There is a new open source time series database on the block that allows one to store billions of time series points and access them within a few milliseconds.
Chronix is a young but mature open source time series database that achieves a compression rate of 98% compared to data in CSV files, with an average query taking 21 milliseconds. Chronix is built on top of Apache Solr, a bulletproof NoSQL database with impressive search capabilities. Chronix relies on Solr plugins, and everyone who has a Solr instance running can create a new Chronix core within a few minutes.
In this presentation Florian shows how Chronix achieves its efficiency in both respects by means of an ideal chunking, by selecting the best compression technique, by enhancing the stored data with pre-computed attributes, and by specialized time series query functions.
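As a general illustration of the chunk-and-compress idea (my sketch; Chronix's actual storage format and compression choices differ): group consecutive points into a chunk, delta-encode the timestamps, compress the payload, and keep pre-computed attributes such as the covered time range so chunks can be filtered without decompression.

```python
import gzip
import json

def build_chunk(points):
    """points: list of (timestamp, value) pairs, sorted by timestamp."""
    timestamps = [t for t, _ in points]
    deltas = [timestamps[0]] + [b - a for a, b in zip(timestamps, timestamps[1:])]
    payload = json.dumps({"t": deltas, "v": [v for _, v in points]}).encode()
    return {
        "start": timestamps[0],  # pre-computed attributes allow filtering
        "end": timestamps[-1],   # chunks without decompressing them
        "data": gzip.compress(payload),
    }

points = [(1000 + 10 * i, 20.0 + 0.1 * i) for i in range(1000)]
chunk = build_chunk(points)
print(len(chunk["data"]), "bytes compressed")
```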
Leveraging the Power of Solr with Spark - QAware GmbH
Lucene Revolution 2016, Boston: Talk by Johannes Weigend (@JohannesWeigend, CTO at QAware).
Abstract: Solr is a distributed NoSQL database with impressive search capabilities. Spark is the new megastar in the distributed computing universe. In this code-intense session we show you how to combine both to solve real-time search and processing problems. We will show you how to set up a Solr/Spark combination from scratch and develop first jobs that run distributed on shared Solr data. We will also show you how to use this combination for your next-generation BI platform.
Analysis of GraphSum's Attention Weights to Improve the Explainability of Multi-Document Summarization - Ansgar Scherp
Slides of our presentation @iiWAS2021: The 23rd International Conference on Information Integration and Web Intelligence, Linz, Austria, 29 November 2021 - 1 December 2021. ACM 2021, ISBN 978-1-4503-9556-4
STEREO: A Pipeline for Extracting Experiment Statistics, Conditions, and Topics from Scientific Papers - Ansgar Scherp
Presentation for our paper @iiWAS2021: The 23rd International Conference on Information Integration and Web Intelligence, Linz, Austria, 29 November 2021 - 1 December 2021. ACM 2021, ISBN 978-1-4503-9556-4
Text Localization in Scientific Figures using Fully Convolutional Neural Networks - Ansgar Scherp
Text extraction from scientific figures has been addressed in the past by different unsupervised approaches due to the limited amount of training data. Motivated by the recent advances in Deep Learning, we propose a two-step neural-network-based pipeline to localize and extract text using Fully Convolutional Networks. We improve the localization of the text bounding boxes by applying a novel combination of a Residual Network with the Region Proposal Network based on Faster R-CNN. The predicted bounding boxes are further pre-processed and used as input to the off-the-shelf optical character recognition engine Tesseract 4.0. We evaluate our improved text localization method on five different datasets of scientific figures and compare it with the best unsupervised pipeline. Since only limited training data is available, we further experiment with different data augmentation techniques for increasing the size of the training datasets and demonstrate their positive impact. We use Average Precision and F1 measure to assess the text localization results. In addition, we apply Gestalt Pattern Matching and Levenshtein Distance for evaluating the quality of the recognized text. Our extensive experiments show that our new pipeline based on neural networks outperforms the best unsupervised approach by a large margin of 19-20%.
A Comparison of Approaches for Automated Text Extraction from Scholarly Figures - Ansgar Scherp
So far, there has not been a comparative evaluation of different approaches for text extraction from scholarly figures. In order to fill this gap, we have defined a generic pipeline for text extraction that abstracts from the existing approaches as documented in the literature. In this paper, we use this generic pipeline to systematically evaluate and compare 32 configurations for text extraction over four datasets of scholarly figures of different origin and characteristics. In total, our experiments have been run over more than 400 manually labeled figures. The experimental results show that the approach BS-4OS results in the best F-measure of 0.67 for the Text Location Detection and the best average Levenshtein Distance of 4.71 between the recognized text and the gold standard on all four datasets using the Ocropy OCR engine.
Can you see it? Annotating Image Regions based on Users' Gaze Information - Ansgar Scherp
Presentation on eye-tracking-based annotation of image regions that I gave in Vienna on Oct 19, 2012. Download the original PowerPoint file to enjoy all animations. For the papers, please refer to: http://www.ansgarscherp.net/publications
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
Adjusting primitives for graph: SHORT REPORT / NOTES - Subhajit Sahu
These notes concern adjusting primitives for graph algorithms such as PageRank. Compressed Sparse Row (CSR) is an adjacency-list-based graph representation (a minimal sketch follows below). The experiments cover:
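A minimal sketch of building a CSR structure from an edge list (an illustration of the representation only, not code from the report):

```python
def build_csr(num_vertices, edges):
    """edges: list of (u, v) pairs. Returns (offsets, neighbors) arrays."""
    degree = [0] * num_vertices
    for u, _ in edges:
        degree[u] += 1
    offsets = [0] * (num_vertices + 1)
    for u in range(num_vertices):
        offsets[u + 1] = offsets[u] + degree[u]
    neighbors = [0] * len(edges)
    cursor = offsets[:-1].copy()  # next free slot per source vertex
    for u, v in edges:
        neighbors[cursor[u]] = v
        cursor[u] += 1
    return offsets, neighbors

offsets, neighbors = build_csr(3, [(0, 1), (0, 2), (1, 2)])
print(neighbors[offsets[0]:offsets[1]])  # neighbors of vertex 0 -> [1, 2]
```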
Multiply with different modes (map)
1. Performance of sequential execution based vs OpenMP based vector multiply.
2. Comparing various launch configs for CUDA based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential execution based vs OpenMP based vector element sum.
2. Performance of memcpy vs in-place based CUDA based vector element sum.
3. Comparing various launch configs for CUDA based vector element sum (memcpy).
4. Comparing various launch configs for CUDA based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA based vector element sum (in-place).
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2... - pchutichetpong
M Capital Group ("MCG") expects demand to grow and supply to evolve, driven by institutional investment rotating out of offices and into work from home ("WFH") arrangements, and by the ever-expanding need for data storage as global internet usage expands, with experts predicting 5.3 billion users by 2023. These market factors will be underpinned by technological changes, such as progressing cloud services and edge sites, allowing the industry to see strong expected annual growth of 13% over the next 4 years.
Whilst competitive headwinds remain, represented through the recent second bankruptcy filing of Sungard, which blames “COVID-19 and other macroeconomic trends including delayed customer spending decisions, insourcing and reductions in IT spending, energy inflation and reduction in demand for certain services”, the industry has seen key adjustments, where MCG believes that engineering cost management and technological innovation will be paramount to success.
MCG reports that the more favorable market conditions expected over the next few years, helped by the winding down of pandemic restrictions and a hybrid working environment will be driving market momentum forward. The continuous injection of capital by alternative investment firms, as well as the growing infrastructural investment from cloud service providers and social media companies, whose revenues are expected to grow over 3.6x larger by value in 2026, will likely help propel center provision and innovation. These factors paint a promising picture for the industry players that offset rising input costs and adapt to new technologies.
According to M Capital Group: “Specifically, the long-term cost-saving opportunities available from the rise of remote managing will likely aid value growth for the industry. Through margin optimization and further availability of capital for reinvestment, strong players will maintain their competitive foothold, while weaker players exit the market to balance supply and demand.”
Explore our comprehensive data analysis project presentation on predicting product ad campaign performance. Learn how data-driven insights can optimize your marketing strategies and enhance campaign effectiveness. Perfect for professionals and students looking to understand the power of data analysis in advertising. For more details visit: https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23... - John Andrews
SlideShare Description for "Chatty Kathy - UNC Bootcamp Final Project Presentation"
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
Slide 1
Mining and Managing Large-scale Linked Open Data
Ansgar Scherp
GVDB, Nörten-Hardenberg, May 25, 2016
Thanks to: Chifumi Nishioka, Renata Dividino, Thomas Gottron, and many more …
Slide 2
Team Knowledge Discovery
Ansgar Scherp, Ahmed Saleh, Chifumi Nishioka, Falk Böschen, Mohammad Abdel-Qader, Till Blume, Anke Koslowski (Secretariat), Henrik Schmidt (Engineer), Lukas Galke, Florian Mai
Slide 3
Linked Open Data (LOD) Cloud
• Publishing and interlinking data on the web
• Different quality, purpose, and sources
• Using the Resource Description Framework (RDF)

World Wide Web     | LOD Cloud
Documents          | Data
Hyperlinks via <a> | Typed Links
HTML               | RDF
Addresses (URIs)   | Addresses (URIs)
Slide 5
1000+ Datasets, 50+ Billion Triples
[Diagrams: the LOD cloud in May ’07 vs. August ’14, with domains Media, Geographic, Publications, Web 2.0, eGovernment, Cross-Domain, Life Sciences, and Social Networking]
Source: http://lod-cloud.net
Slide 6
LOD on One Slide: Example Graph
[Diagram: the RDF triple biglynx:matt-briggs rdf:type foaf:Person, with subject, predicate, and object labeled]
Fully qualified URIs using vocabulary prefixes:
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix rdf: <http://w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix biglynx: <http://biglynx.co.uk/people/> .
Slide 7
LOD on One Slide: Example Graph
[Diagram: the graph extended with further triples, e.g., an entity of rdf:type biglynx:Director]
Fully qualified URIs using vocabulary prefixes:
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix rdf: <http://w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix biglynx: <http://biglynx.co.uk/people/> .
Slide 8
LOD on One Slide: Example Graph
[Diagram: the full example graph. biglynx:matt-briggs has rdf:type foaf:Person; biglynx:dave-smith has rdf:type biglynx:Director; the two are connected via foaf:knows; foaf:based_near links to dp:London; a location node (ex:loc / _1:point) carries wgs84:lat "51.509" and wgs84:long "-0.118". Labels mark an entity, its types, and its properties.]
Slide 9
Motivation for the SchemEX Index
• Single entry point to query the LOD cloud
• Search for data sources containing entities like
  – ‘Persons, who are Politicians and Actors’
  – ‘Research data sets’
  – ‘Scientific publications’
Query:
SELECT ?x
FROM …
WHERE {
  ?x rdf:type ex:Actor .
  ?x rdf:type ex:Politician . }
[Diagram: the query is answered by the index, which points to the matching data sources]
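To make the slide's query concrete, here is a small rdflib sketch (my illustration; the ex: namespace and entities are made up) that builds a toy graph and runs the Actor-and-Politician query:

```python
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")  # hypothetical namespace standing in for ex:

g = Graph()
# Only ex:alice carries both RDF types the query asks for.
g.add((EX.alice, RDF.type, EX.Actor))
g.add((EX.alice, RDF.type, EX.Politician))
g.add((EX.bob, RDF.type, EX.Actor))

query = """
SELECT ?x WHERE {
  ?x rdf:type ex:Actor .
  ?x rdf:type ex:Politician .
}
"""
for row in g.query(query, initNs={"rdf": RDF, "ex": EX}):
    print(row.x)  # -> http://example.org/alice
```

A schema-level index such as SchemEX answers the same kind of type pattern, but returns the data sources containing matching entities instead of scanning a single local graph.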
Slide 11
SchemEX Idea [KGS+12]
• Schema-level index SchemEX
  – Assign RDF entities to graph patterns
  – Map graph patterns to data sources (context)
  – Defined over entities, but store the context
• Construction of schema-level index
  – Stream-based for scalability
  – Stratified bi-simulation for detecting patterns
  – Little loss of accuracy
12. Slide 12
Building the Index from a Stream
• Stream of quads coming from a LD crawler:
… Q16, Q15, Q14, Q13, Q12, Q11, Q10, Q9, Q8, Q7, Q6, Q5, Q4, Q3, Q2, Q1
[Figure: quads enter a FiFo cache; schema elements C1–C3 are extracted and mapped to data sources.]
+ Reasonable accuracy at a cache size of 50k
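The following sketch illustrates the stream-based idea under simplifying assumptions (it derives only per-subject type/property sets, not full SchemEX patterns): subjects live in a bounded FiFo cache, and an evicted subject commits its schema element to the index.

```python
from collections import OrderedDict

RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

def build_index(quads, cache_size=50_000):
    """One pass over (subject, predicate, object, source) quads.

    Each subject's types, properties, and sources are collected in a
    bounded FiFo cache; on eviction, the subject's schema element
    (type set, property set) is mapped to its data sources.
    """
    cache = OrderedDict()   # subject -> (types, properties, sources)
    index = {}              # (type set, property set) -> data sources

    def commit(entry):
        types, props, sources = entry
        key = (frozenset(types), frozenset(props))
        index.setdefault(key, set()).update(sources)

    for s, p, o, src in quads:
        if s not in cache:
            if len(cache) >= cache_size:
                commit(cache.popitem(last=False)[1])  # evict oldest subject
            cache[s] = (set(), set(), set())
        types, props, sources = cache[s]
        if p == RDF_TYPE:
            types.add(o)
        else:
            props.add(p)
        sources.add(src)

    for entry in cache.values():  # flush what is still cached
        commit(entry)
    return index
```

A subject whose triples are spread farther apart in the stream than the cache size yields two (partial) schema elements — the source of the small accuracy loss noted above.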
13. Slide 13
Full BTC 2011 Data Set: 2.17 Bn Triples
Cache size: 50 k
[Figure: runtime plot; winner BTC'11.]
+ Linear runtime with respect to the number of triples
+ Memory consumption scales with the window size
14. Slide 14
[Figure: LODatio search interface (inspired by Google) — result list with examples, plus generalization and specialization suggestions for the query.] [GSK+13]
15. Slide 15
LODatio Under the Hood
[Figure: processing pipeline — query translation (SPARQL), retrieving and ranking data sources, counting, generalize/specialize query suggestions, and snippet generation via select operations.]
• Hybrid database with off-the-shelf components
16. Slide 16
LOD on One Slide: Recap
[Figure: the example graph from Slide 8, now annotated with the Type Set (TS) and Property Set (PS) of an entity.]
Information-theoretic analyses of LOD:
• How much information is encoded in TS and PS?
• How much information is encoded, once TS or PS is known?
• To which degree are TS and PS redundant?
• Example: 20% of PLDs do not need TS (6% for PS)
[GKS15]
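As a flavour of such an analysis, here is a minimal sketch of the conditional entropy H(PS | TS) over entities summarized as (type set, property set) pairs. It is illustrative only, not the exact computation of [GKS15].

```python
import math
from collections import Counter

def conditional_entropy(pairs):
    """H(PS | TS) = sum_ts p(ts) * H(PS | TS = ts) over (type_set,
    property_set) pairs; low values mean that knowing an entity's
    type set already pins down its property set."""
    joint = Counter(pairs)
    ts_counts = Counter(ts for ts, _ in pairs)
    n = len(pairs)
    h = 0.0
    for (ts, ps), c in joint.items():
        p_joint = c / n
        p_cond = c / ts_counts[ts]  # p(ps | ts)
        h -= p_joint * math.log2(p_cond)
    return h

pairs = [
    (frozenset({"foaf:Person"}), frozenset({"foaf:knows", "foaf:based_near"})),
    (frozenset({"foaf:Person"}), frozenset({"foaf:knows"})),
    (frozenset({"biglynx:Director"}), frozenset({"foaf:knows"})),
]
print(conditional_entropy(pairs))  # ~0.667 bits
```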
17. Slide 17
Käfer et al.'s Temporal Analysis of LOD
• 29 weekly LOD snapshots of ~100 million triples
• Still running since May 2012 (now 200+ weeks)
• Data on the cloud changes a lot
• But which changes? Vocabularies defining RDF types and properties are highly static, e.g., RDF, FOAF
[Figure: LOD cloud ~2012 vs. LOD cloud ~2014]
[Käfer et al., 2013] T. Käfer, A. Abdelrahman, J. Umbrich, P. O'Byrne, A. Hogan: Observing Linked Data Dynamics. ESWC 2013: 213-227
18. Slide 18
But: Do Changes Occur in PS and TS?
• Analysis: expected conditional entropy over time
• H(PS | TS = ts): entropy of PS given the TS is known (and vice versa for H(TS | PS = ps))
• Observation: types become less important
• Changes in the use of TS and PS?!
[Figure: expected conditional entropies H(PS | TS = ts) and H(TS | PS = ps) per weekly snapshot.]
19. Slide 19
Changes over Time
• Extended characteristic sets [Neumann and Moerkotte, 2011]: ECS = PS ∪ TS
[Figure: # of ECS per weekly snapshot; avg. 83,898 ECS per week.] [DSG+13]
• Avg. 73% of ECS re-occur the next week (orange)
• Avg. 35% of ECS remain unchanged (blue)
• Avg. 20% of entity sets of ECS change per week
[Neumann and Moerkotte, 2011] Thomas Neumann, Guido Moerkotte: Characteristic sets: Accurate cardinality estimation for RDF queries with multiple joins. ICDE 2011: 984-994
20. Slide 20
Temporal Dynamics of the Entities?
• Notion of entity motivated by ECS: an entity is the set of triples X sharing the same subject URI s
• Example (w.l.o.g.): 1 entity, 4 triples
• Useful to keep LOD caches up-to-date?
• Can we predict when LOD sources will change?
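A small sketch of this entity notion: grouping a snapshot's triples by subject URI. The triples are illustrative and mirror the slide's "1 entity, 4 triples" example.

```python
from collections import defaultdict

def entities(triples):
    """Group triples by subject URI: one entity = all triples of one subject."""
    by_subject = defaultdict(set)
    for s, p, o in triples:
        by_subject[s].add((s, p, o))
    return by_subject

snapshot = [
    ("ex:matt", "rdf:type", "foaf:Person"),
    ("ex:matt", "foaf:knows", "ex:dave"),
    ("ex:matt", "foaf:based_near", "dp:London"),
    ("ex:matt", "foaf:name", "Matt"),
]
print(len(entities(snapshot)))              # 1 entity
print(len(entities(snapshot)["ex:matt"]))   # 4 triples
```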
21. Slide 21
Dynamics Function Θ
• Definition of Θ over a non-negative change rate function c(X_t) (making Θ monotone):
  Θ(X) = Θ(X_{t_j}) − Θ(X_{t_i}) = ∫_{t_i}^{t_j} c(X_t) dt
• Approximation as a step function over changes:
  Θ(X) ≈ Σ_{k=i+1}^{j} δ(X_{t_{k−1}}, X_{t_k})
[Figure: change rate c of X and its integral Θ over the interval [t_i, t_j].] [DGS+14]
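The step-function approximation is a plain sum over consecutive snapshot distances. A minimal sketch, with the distance δ left as a parameter (a concrete choice follows on Slide 25):

```python
def dynamics(snapshots, delta):
    """Approximate Θ over snapshots X_{t_i}, …, X_{t_j}: the sum of
    pairwise distances δ between consecutive snapshots."""
    return sum(delta(prev, curr)
               for prev, curr in zip(snapshots, snapshots[1:]))
```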
22. Slide 22
Update Strategies for LOD Sources
• Apply strategies for keeping caches of WWW documents up-to-date to maintain LOD caches
• Assumptions:
– LOD is fetched from various sources
– Sources are scored and prioritized based on the strategy
– Data of a source is fetched only when the operation can be entirely executed
23. Slide 23
Scheduling Update Strategies
a) HTTP Header [Dividino et al., 2014a]
b) Age or Last Visited [Dasdan et al., 2009; Cho and Garcia-Molina, 2000]
c) PageRank [Page et al., 1999; Boldi et al., 2004; Baeza-Yates et al., 2005]
d) LOD Source Size
e) Change Ratio [Douglis et al., 1997; Cho et al., 2002; Tan et al., 2007]
f) Change Rate [Olston et al., 2002; Ntoulas et al., 2004; Dividino et al., 2013]
g) History Information: Dynamics [Dividino et al., 2014b]
We borrow strategies developed for the WWW and metrics for data change analysis in the LOD cloud.
24. Slide 24
e) Change Ratio
• Captures the change frequency of the data (freshness)
• Percentage of data items in the cache that are up-to-date
[Figure: ranking over time t_i → t_j — sources that changed (most) first, sources with fewer or no changes last.]
25. Slide 25
f) Change Rate
• Data from sources that are less similar to their previous update (snapshot) should be updated first
• Comparison of two RDF data sets:
– X: set of triple statements
– δ: numeric distance, δ ∈ [0, ∞)
• Example:
  δ_Jaccard(X_1, X_2) = 1 − |X_1 ∩ X_2| / |X_1 ∪ X_2|
[Figure: δ measured between snapshots at t_i and t_j.]
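A minimal sketch of the Jaccard distance over triple sets, plugged into the step-function approximation of Θ from Slide 21; the snapshots are illustrative.

```python
def jaccard_distance(x1, x2):
    """1 - |X1 ∩ X2| / |X1 ∪ X2| over two sets of triples."""
    if not x1 and not x2:
        return 0.0  # two empty snapshots are identical
    return 1.0 - len(x1 & x2) / len(x1 | x2)

# Three weekly snapshots of one source (sets of triples)
s1 = {("ex:a", "rdf:type", "ex:T"), ("ex:a", "ex:p", "ex:b")}
s2 = {("ex:a", "rdf:type", "ex:T")}                        # one triple removed
s3 = {("ex:a", "rdf:type", "ex:T"), ("ex:a", "ex:q", "ex:c")}

snaps = [s1, s2, s3]
theta = sum(jaccard_distance(a, b) for a, b in zip(snaps, snaps[1:]))
print(theta)  # 0.5 + 0.5 = 1.0
```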
26. Slide 26
g) History Information: Dynamics
• Data from sources that evolve most in a given period of time should be updated first
• Uses both history information and change rate:
  Θ(X_{t_j}) − Θ(X_{t_i}) = ∫_{t_i}^{t_j} c(X_t) dt ≈ Σ_{k=i+1}^{j} δ(X_{t_{k−1}}, X_{t_k})
[Figure: change rate c and Θ of X over the interval [t_i, t_j], as on Slide 21.]
27. Slide 27
Evaluation
• Idea: simulate the limitations of available computational resources (network bandwidth, computation time)
[Figure: timeline from t_i to t_{i+1} with 100% of the sources available for update.]
28. Slide 28
Evaluation: Single Step Update
[Figure: timeline from t_i to t_{i+1}; only a fraction of the sources (5%, 15%, 40%, 60%, 75%, 95%, or 100%) can be updated per step.]
30. Slide 30
Dataset
• Dynamic Linked Data Observatory
• Weekly snapshots, 14 M triples
• 154 snapshots (approx. 3 years)
• 590 data sources (PLD)

Top 10 largest data sources | Average size (triples)
dbpedia.org | 3,406,364.5
edgarwrap.ontologycentral.com | 982,631.0
dbtune.org | 864,107.6
dbtropes.org | 787,299.9
data.linkedct.org | 498,986.3
aims.fao.org | 416,708.9
www.legislation.gov.uk | 399,601.6
kent.zpr.fer.hr | 387,034.8
identi.ca | 278,316.2
webenemasuno.linkeddata.es | 250,557.9
31. Slide 31
Metrics: Precision & Recall
• Precision: portion of the cached data that is actually up-to-date
• Recall: portion of the data on the LOD cloud that is identical to the cached data
[Figure: overlap of the cached data and the actual data on the LOD cloud (w.r.t. the 590 sources considered).]
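Both metrics reduce to set overlap between the cached triples and the current snapshot. A minimal sketch with illustrative data:

```python
def precision_recall(cached, actual):
    """Precision: share of cached triples still present on the LOD cloud.
    Recall: share of the cloud's triples that the cache holds."""
    overlap = len(cached & actual)
    precision = overlap / len(cached) if cached else 1.0
    recall = overlap / len(actual) if actual else 1.0
    return precision, recall

cached = {("ex:a", "ex:p", "ex:b"), ("ex:a", "ex:p", "ex:c")}
actual = {("ex:a", "ex:p", "ex:b"), ("ex:a", "ex:p", "ex:d"),
          ("ex:x", "ex:q", "ex:y")}
print(precision_recall(cached, actual))  # (0.5, 0.333...)
```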
32. Slide 32
Results: Single Step Update
[Figure: precision/recall of the update strategies for relative bandwidths of 5%, 15%, 40%, 60%, 75%, 95%, and 100% between t_i and t_j.]
36. Slide 36
Results: Summary
• Best strategies: those that capture the change behaviour over time
• Especially for low relative bandwidth
37. Slide 37
Dynamics Function Θ: Revisited
• Can we predict when LOD sources will change?
• Notion of dynamics to compute periodicities!
• Dynamics as a vector of changes:
  ⟨δ(X_{t_1}, X_{t_2}), …, δ(X_{t_{N−1}}, X_{t_N})⟩
[Figure: change rate c of X over the interval [t_i, t_j].]
39. Slide 39
Periodicity of Entity Dynamics
• Examples: ⟨0, 3, 2, 0, 3, 2, 0⟩, ⟨1, 2, 1, 2, 1, 2⟩
• Convolution-based algorithm [Elfeky et al., 2005]
• Entities of legislation.gov.uk found in several clusters (C1, C3, C4, C5, C6)
• No changes (CS): 77.29%
• CS: entities from w3.org and ontologydesignpatterns.org

Cluster | # of entities | Most likely periodicity
C1 | 12,982 | 66
C2 | 168 | 23
C3 | 35 | 1
C4 | 12 | 1
C5 | 1 | 1
C6 | 1,541 | 56
C7 | 30 | 37
CS | 50,725 | –

[Elfeky et al., 2005] Mohamed G. Elfeky, Walid G. Aref, Ahmed K. Elmagarmid: Periodicity Detection in Time Series Databases. IEEE Trans. Knowl. Data Eng. 17(7): 875-887 (2005)
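The actual detection uses the convolution-based algorithm of Elfeky et al. [2005]; the sketch below is a simplified autocorrelation-style stand-in that scores candidate periods of a change vector, just to show the idea.

```python
def best_period(changes, max_period=None):
    """Score each candidate period p by how well the change vector
    repeats when shifted by p (a simplified stand-in for the
    convolution-based detection of Elfeky et al., 2005)."""
    n = len(changes)
    max_period = max_period or n // 2
    scores = {}
    for p in range(1, max_period + 1):
        pairs = [(changes[k], changes[k + p]) for k in range(n - p)]
        matches = sum(1 for a, b in pairs if a == b)
        scores[p] = matches / len(pairs)
    return max(scores, key=scores.get), scores

period, _ = best_period([0, 3, 2, 0, 3, 2, 0])  # example vector from the slide
print(period)  # 3
```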
40. Slide 40
Application Areas: More than One!
• Searching for LOD sources [GSK+13, KGS+12]
• Strategies for updating data caches [DGS15]
• Programming queries against LOD [SSS12]
• Recommending LOD vocabularies [SGS16]
Foundation for future data-driven applications
41. Slide 41
Summary: KDD in Social Media & DL
How to deal with the vast amount of content related to research and innovation?
• New: H2020 INSO-4 project (MOVING), duration: 04/2016-03/2019
• Data mining & visualization tools enabling information professionals to deal with large corpora
• Website: http://www.moving-project.eu/
43. Slide 43
References
[DGS15] R. Dividino, T. Gottron, A. Scherp: Strategies for Efficiently Keeping Local Linked Open Data Caches Up-To-Date. International Semantic Web Conference (2) 2015: 356-373
[DGS+14] R. Dividino, T. Gottron, A. Scherp, G. Gröner: From Changes to Dynamics: Dynamics Analysis of Linked Open Data Sources. PROFILES@ESWC 2014
[GKS15] T. Gottron, M. Knauf, A. Scherp: Analysis of schema structures in the Linked Open Data graph based on unique subject URIs, pay-level domains, and vocabulary usage. Distributed and Parallel Databases 33(4): 515-553 (2015)
[DSG+13] R. Dividino, A. Scherp, G. Gröner, T. Gottron: Change-a-LOD: Does the Schema on the Linked Data Cloud Change or Not? COLD 2013
[GSK+13] T. Gottron, A. Scherp, B. Krayer, A. Peters: LODatio: Using a schema-level index to support users in finding relevant sources of Linked Data. K-CAP 2013: 105-108
[KGS+12] M. Konrath, T. Gottron, S. Staab, A. Scherp: SchemEX - Efficient construction of a data catalogue by stream-based indexing of Linked Data. J. Web Sem. 16: 52-58 (2012)
[NS15] C. Nishioka, A. Scherp: Temporal Patterns and Periodicity of Entity Dynamics in the Linked Open Data Cloud. K-CAP 2015
[SGS16] J. Schaible, T. Gottron, A. Scherp: TermPicker: Enabling the Reuse of Vocabulary Terms by Exploiting Data from the Linked Open Data Cloud. ESWC 2016
[SSS12] S. Scheglmann, A. Scherp, S. Staab: Declarative Representation of Programming Access to Ontologies. ESWC 2012: 659-673
44. Slide 44
a) HTTP Header
• Data from sources that have changed since the last update should be updated first
HTTP Response:
HEADER
…
Last-Modified: Tue, 15 Nov 1994 12:45:26 GMT
CONTENT
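A minimal sketch of this strategy with Python's requests library: a HEAD request reads the Last-Modified header without downloading the content. The URL is a placeholder.

```python
from datetime import datetime

import requests

def last_modified(url):
    """Read the Last-Modified header via a HEAD request (no body download)."""
    resp = requests.head(url, timeout=10, allow_redirects=True)
    value = resp.headers.get("Last-Modified")
    if value is None:
        return None  # many LOD sources do not set this header
    return datetime.strptime(value, "%a, %d %b %Y %H:%M:%S %Z")

# Rank sources: most recently changed first; placeholder URL
sources = ["http://example.org/data.rdf"]
stamps = {url: last_modified(url) for url in sources}
ranked = sorted((u for u in sources if stamps[u] is not None),
                key=lambda u: stamps[u], reverse=True)
```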
45. Slide 45
b) Age or Last Visited
• Time elapsed since the last update (the difference between query time and last update time)
• Guarantees that every source is updated after some period
[Figure: ranking — sources whose last update lies longest in the past first, recently updated sources last.]
46. Slide 46
c) PageRank and d) Source Size
• PageRank captures the popularity/importance of a LOD source
• Data from sources with the highest PageRank is updated first
• LOD source size: data from the biggest/smallest LOD sources should be updated first
[Figure: ranking — sources with higher PageRank first, sources with lower PageRank last.]
47. Slide 47
Results: Single Step Update
[Figure: as on Slide 32 — precision/recall of the strategies for relative bandwidths of 5%, 15%, 40%, 60%, 75%, 95%, and 100% between t_i and t_j.]