Charting the Digital Library Evaluation Domain with a Semantically Enhanced Mining Methodology


  1. Charting the Digital Library Evaluation Domain with a Semantically Enhanced Mining Methodology
     Eleni Afiontzi (1), Giannis Kazadeis (1), Leonidas Papachristopoulos (2), Michalis Sfakakis (2), Giannis Tsakonas (2), Christos Papatheodorou (2)
     13th ACM/IEEE Joint Conference on Digital Libraries, July 22-26, Indianapolis, IN, USA
     (1) Department of Informatics, Athens University of Economics & Business
     (2) Database & Information Systems Group, Department of Archives & Library Science, Ionian University
  2-9. aim & scope of research
     • To propose a methodology for discovering patterns in the scientific literature.
     • Our case study is performed in the digital library evaluation domain and its conference literature.
     • We question:
       - how we select relevant studies,
       - how we annotate them,
       - how we discover these patterns,
       in an effective, machine-operated way, in order to have reusable and interpretable data?
  10-14. why
     • Abundance of scientific information
     • Limitations of existing tools, e.g. poor reusability
     • Lack of contextualized analytic tools
     • Supervised automated processes
  15-23. panorama
     1. Document classification to identify relevant papers
        - We use a corpus of 1,824 papers from the JCDL and ECDL (now TPDL) conferences, covering the period 2001-2011.
     2. Semantic annotation processes to mark up important concepts
        - We use a schema for semantic annotation, the Digital Library Evaluation Ontology (DiLEO), and a semantic annotation tool, GoNTogle.
     3. Clustering to form coherent groups (K=11)
     4. Interpretation with the assistance of the ontology schema
     • Throughout this process we perform benchmarking tests to qualify specific components, so that the exploration of the literature and the discovery of research patterns can be effectively automated.
  24. part 1: how we identify relevant studies
  25-35. training phase
     • The aim was to train a classifier to identify relevant papers.
     • Categorization
       - two researchers categorized, a third one supervised
       - descriptors: title, abstract & author keywords
       - raters' agreement: 82.96% for JCDL, 78% for ECDL
       - inter-rater agreement: moderate levels of Cohen's Kappa
       - 12% positive vs. 88% negative
     • Skewness of the data was addressed via resampling (see the sketch below):
       - under-sampling (Tomek Links)
       - over-sampling (random over-sampling)
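A minimal sketch of how this resampling step could look, assuming the imbalanced-learn library and a simple bag-of-words representation of each paper's descriptors; the feature extraction and parameters are illustrative assumptions, not the authors' original implementation.

```python
# Illustrative sketch: balancing a skewed corpus (12% positive / 88% negative)
# with Tomek Links under-sampling and random over-sampling (imbalanced-learn).
from sklearn.feature_extraction.text import CountVectorizer
from imblearn.under_sampling import TomekLinks
from imblearn.over_sampling import RandomOverSampler

def balance_corpus(texts, labels):
    """texts: title + abstract + keywords per paper; labels: 1 relevant / 0 not."""
    X = CountVectorizer(stop_words="english").fit_transform(texts)

    # 1. Remove majority-class points that form Tomek links with minority points.
    X_tl, y_tl = TomekLinks().fit_resample(X, labels)

    # 2. Randomly duplicate minority-class points until the classes are balanced.
    X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X_tl, y_tl)
    return X_bal, y_bal
```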
  36-44. corpus definition
     • Classification algorithm: Naïve Bayes
     • Two sub-sets: a development set (75%) and a test set (25%)
     • Ten-fold validation: the development set was randomly divided into 10 equal parts; 9/10 used as training set and 1/10 as test set (a sketch follows below).
     [ROC plot: true positive rate vs. false positive rate for the development and test sets]
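As an illustration of this step, a sketch of the 75/25 split, the ten-fold validation and the ROC points with a Naïve Bayes classifier, assuming scikit-learn and the balanced feature matrix from the previous sketch; names and parameters are hypothetical.

```python
# Illustrative sketch: 75/25 split, ten-fold cross-validation of a Naïve Bayes
# classifier, and ROC points (tp rate vs. fp rate), assuming scikit-learn.
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import roc_curve

def evaluate_classifier(X, y):
    # Development (75%) and test (25%) sub-sets.
    X_dev, X_test, y_dev, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0)

    clf = MultinomialNB()

    # Ten-fold validation on the development set: 9/10 train, 1/10 test per fold.
    fold_scores = cross_val_score(clf, X_dev, y_dev, cv=10)

    # ROC curve (fp rate vs. tp rate) on the held-out test set.
    clf.fit(X_dev, y_dev)
    scores = clf.predict_proba(X_test)[:, 1]
    fpr, tpr, _ = roc_curve(y_test, scores)
    return fold_scores, fpr, tpr
```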
  45. part 2: how we annotate
  46-50. the schema - DiLEO
     • DiLEO aims to conceptualize the DL evaluation domain by exploring its key entities, their attributes and their relationships.
     • A two-layered ontology:
       - Strategic layer: a set of classes related to the scope and aim of an evaluation.
       - Procedural layer: classes dealing with practical issues.
  51-56. the instrument - GoNTogle
     • We used GoNTogle to generate an RDFS knowledge base.
     • GoNTogle uses the weighted k-NN algorithm to support either manual or automated ontology-based annotation.
     • http://bit.ly/12nlryh
  57-61. the process - 1/3
     • GoNTogle estimates a score for each class/subclass, calculating its presence in the k nearest neighbors.
     • We set a score threshold above which a class is assigned to a new instance (optimal score: 0.18).
     • The user is presented with a ranked list of the suggested classes/subclasses and their scores, ranging from 0 to 1 (a sketch of the scoring follows below).
     • 2,672 annotations were manually generated.
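The weighted k-NN scoring with a threshold can be illustrated roughly as follows; this is a sketch of the general technique under assumed data structures (a list of annotated neighbor documents with similarity scores), not GoNTogle's actual code. Only the 0.18 threshold is taken from the slides.

```python
# Illustrative sketch of weighted k-NN class scoring with a threshold,
# in the spirit of GoNTogle's suggestions (not its actual implementation).
from collections import defaultdict

def suggest_classes(neighbors, threshold=0.18):
    """neighbors: list of (similarity, classes) pairs for the k most similar
    annotated documents; returns classes whose normalized score >= threshold."""
    raw = defaultdict(float)
    for similarity, classes in neighbors:
        for cls in classes:
            raw[cls] += similarity        # each neighbor votes with its similarity

    total = sum(sim for sim, _ in neighbors) or 1.0
    scores = {cls: value / total for cls, value in raw.items()}   # scores in [0, 1]

    # Ranked list of suggestions above the score threshold.
    return sorted(((c, s) for c, s in scores.items() if s >= threshold),
                  key=lambda item: item[1], reverse=True)

# Example: three neighbors annotated with DiLEO-like classes.
print(suggest_classes([(0.9, ["Dimensions"]), (0.7, ["Dimensions", "Means"]),
                       (0.4, ["Instruments"])]))
```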
  62-66. the process - 2/3
     • The RDFS statements were processed to construct a new data set (removal of stopwords and symbols, lowercasing, etc.).
     • Experiments with both un-stemmed (4,880 features) and stemmed (3,257 features) words.
     • Multi-label classification via the ML framework Meka (a sketch of the setup follows below).
     • Four methods: binary representation, label powersets, RAkEL, ML-kNN
     • Four algorithms: Naïve Bayes, Multinomial Naïve Bayes, k-Nearest Neighbors, Support Vector Machines
     • Four metrics: Hamming Loss, Accuracy, One-error, F1 macro
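As a rough analogue of one of the tested configurations (the binary-representation method with Multinomial Naïve Bayes), here is a sketch in Python with scikit-learn and NLTK rather than Meka; the preprocessing, function names and parameters are illustrative assumptions, not the authors' setup.

```python
# Illustrative sketch of the multi-label setup: stemmed bag-of-words features and
# binary relevance (one binary classifier per DiLEO class), as an analogue of the
# "binary representation" method; the authors used the Meka framework instead.
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.preprocessing import MultiLabelBinarizer

stemmer = PorterStemmer()

def stem_tokens(text):
    # Lowercase, drop non-alphabetic tokens, and stem; stopword and symbol
    # removal is assumed to have happened when the data set was constructed.
    return [stemmer.stem(tok) for tok in text.lower().split() if tok.isalpha()]

def train_multilabel(texts, label_sets):
    """texts: annotated passages; label_sets: DiLEO classes assigned to each."""
    X = TfidfVectorizer(tokenizer=stem_tokens).fit_transform(texts)
    Y = MultiLabelBinarizer().fit_transform(label_sets)
    # Binary relevance: an independent Multinomial Naïve Bayes model per label.
    return OneVsRestClassifier(MultinomialNB()).fit(X, Y)
```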
  67-71. the process - 3/3
     • Performance tests were repeated using GoNTogle.
     • GoNTogle's algorithm achieves good results in relation to the tested multi-label classification algorithms (the metrics are sketched below).
     [Bar chart comparing GoNTogle and Meka on Hamming Loss, Accuracy, One-error and F1 macro]
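For reference, a sketch of how the four reported metrics could be computed for a multi-label prediction, assuming numpy arrays and scikit-learn for Hamming loss and macro F1, with example-based accuracy and one-error hand-rolled; the exact definitions used in the paper may differ in detail.

```python
# Illustrative sketch of the four evaluation metrics for multi-label output:
# Hamming Loss, (example-based) Accuracy, One-error and macro F1.
import numpy as np
from sklearn.metrics import hamming_loss, f1_score

def multilabel_metrics(Y_true, Y_pred, Y_scores):
    """Y_true, Y_pred: binary indicator arrays (docs x labels); Y_scores: per-label scores."""
    # Example-based accuracy: |intersection| / |union| averaged over documents.
    inter = np.logical_and(Y_true, Y_pred).sum(axis=1)
    union = np.logical_or(Y_true, Y_pred).sum(axis=1)
    accuracy = np.mean(np.where(union > 0, inter / np.maximum(union, 1), 1.0))

    # One-error: fraction of documents whose top-scored label is not a true label.
    top = np.argmax(Y_scores, axis=1)
    one_error = np.mean(Y_true[np.arange(len(Y_true)), top] == 0)

    return {
        "hamming_loss": hamming_loss(Y_true, Y_pred),
        "accuracy": accuracy,
        "one_error": one_error,
        "f1_macro": f1_score(Y_true, Y_pred, average="macro", zero_division=0),
    }
```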
  72. part 3: how we discover
  73-78. clustering - 1/3
     • The final data set consists of 224 vectors of 53 features
       - it represents the annotations assigned from the DiLEO vocabulary to the document corpus.
     • We represent the annotated documents by 2 vector models (sketched below):
       - binary: fi has the value 1 if the subclass corresponding to fi is assigned to document m, otherwise 0.
       - tf-idf: the feature frequency ffi of fi is equal to 1 in all vectors when the respective subclass is annotated to the respective document m; idfi is the inverse document frequency of feature i in the document set M.
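A sketch of the two vector models, under the assumption that each document's annotations are given as a set of DiLEO subclass names; the idf definition follows the standard formula, which may differ in detail from the paper's.

```python
# Illustrative sketch: building the binary and tf-idf vector models from the
# per-document sets of annotated DiLEO subclasses (53 features, 224 documents).
import numpy as np

def annotation_vectors(doc_annotations, features):
    """doc_annotations: list of sets of subclass names; features: ordered list of
    the 53 DiLEO subclasses. Returns (binary, tfidf) matrices of shape (M, F)."""
    M, F = len(doc_annotations), len(features)
    index = {name: j for j, name in enumerate(features)}

    binary = np.zeros((M, F))
    for i, annots in enumerate(doc_annotations):
        for name in annots:
            binary[i, index[name]] = 1.0      # ff_i = 1 when the subclass is assigned

    # Standard idf; the tf part is the binary indicator, since each subclass is
    # either assigned to a document or not.
    df = binary.sum(axis=0)
    idf = np.log(M / np.maximum(df, 1))
    tfidf = binary * idf
    return binary, tfidf
```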
  79-82. clustering - 2/3
     • We cluster the vector representations of the annotations by applying 2 clustering algorithms (see the sketch below):
       - K-Means: partitions the M data points into K clusters. When the objective function (cost, or error) was plotted for various values of K, its rate of decrease peaked for K near 11.
       - Agglomerative Hierarchical Clustering: a hierarchy of clusters built 'bottom up'.
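A sketch of the elbow-style selection of K and of the two clustering runs, assuming scikit-learn on the annotation matrices from the previous sketch; the range of K values tried is an assumption.

```python
# Illustrative sketch: inspecting the K-Means objective (inertia) for several K
# and clustering the annotation vectors with K-Means and agglomerative clustering.
from sklearn.cluster import KMeans, AgglomerativeClustering

def cluster_annotations(X, k_range=range(2, 21), k_final=11):
    # Objective function (within-cluster sum of squares) for various values of K;
    # the "elbow" of this curve suggested K near 11.
    inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
                for k in k_range}

    kmeans_labels = KMeans(n_clusters=k_final, n_init=10, random_state=0).fit_predict(X)

    # Bottom-up hierarchy of clusters, cut at the same number of clusters.
    hier_labels = AgglomerativeClustering(n_clusters=k_final).fit_predict(X)
    return inertias, kmeans_labels, hier_labels
```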
  83-88. clustering - 3/3
     • We assess each feature of each cluster using the frequency increase metric.
       - it calculates the increase of the frequency of a feature fi in the cluster k (cfi,k) compared to its document frequency dfi in the entire data set.
     • We select the threshold a that maximizes the F1-measure, the harmonic mean of Coverage and Dissimilarity mean (see the sketch below).
       - Coverage: the proportion of features participating in the clusters to the total number of features.
       - Dissimilarity mean: the average distinctiveness of the clusters, defined in terms of the dissimilarity di,j between all possible pairs of clusters.
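The cluster-profiling step can be sketched as follows. The slides name the quantities but not their formulas, so the concrete definitions below (a frequency difference for the increase, and a Jaccard-style dissimilarity between cluster feature sets) are assumptions that only illustrate the idea.

```python
# Illustrative sketch of the cluster-profiling step: keep a feature in a cluster's
# profile when its in-cluster frequency rises enough over its corpus-wide frequency,
# then score the chosen threshold by the F1 of Coverage and Dissimilarity mean.
from itertools import combinations
import numpy as np

def cluster_profiles(binary, labels, threshold):
    """binary: (M, F) 0/1 annotation matrix; labels: cluster id per document (array)."""
    df = binary.mean(axis=0)                         # corpus-wide feature frequency
    profiles = []
    for k in np.unique(labels):
        cf = binary[labels == k].mean(axis=0)        # in-cluster feature frequency
        increase = cf - df                           # assumed "frequency increase"
        profiles.append(set(np.where(increase >= threshold)[0]))
    return profiles

def f1_of_threshold(binary, labels, threshold):
    profiles = cluster_profiles(binary, labels, threshold)
    # Coverage: share of all features that appear in at least one cluster profile.
    coverage = len(set().union(*profiles)) / binary.shape[1]
    # Dissimilarity mean: average pairwise distinctiveness of the cluster profiles.
    dissims = [1 - len(a & b) / len(a | b) if (a | b) else 1.0
               for a, b in combinations(profiles, 2)]
    dissimilarity = float(np.mean(dissims))
    return 2 * coverage * dissimilarity / (coverage + dissimilarity + 1e-12)
```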
  89-91. metrics - F1-measure
     [Plot of the F1-measure against the threshold a (0 to 1) for K-Means tf-idf, K-Means binary, and Hierarchical tf-idf]
  92. part 4: how (and what) we interpret
  93. patterns (K-Means, tf-idf)
     [Diagram: the discovered pattern mapped onto the DiLEO strategic and procedural layers, connecting classes such as Research Questions, Goals (design), Dimensions (effectiveness, technical excellence), Dimension Type (summative), Means (survey studies, laboratory studies), Instruments (software), Activity (report), Subjects (human agents), Objects, Characteristics (count, discipline), Criteria Categories, Criteria, Metrics, Factors and Findings through relationships such as isAimingAt, isSupporting/isSupportedBy, hasPerformed/isPerformedIn, isUsedIn/isUsing, hasConstituent/isConstituting and isParticipatingIn]
  94. patterns (Hierarchical)
     [Diagram: the corresponding pattern for hierarchical clustering across the strategic and procedural layers, linking Research Questions, Goal (describe), Dimensions (effectiveness), Means (survey studies, laboratory studies), Means Type (quantitative), Activity (record activity, compare), Level (interface), Instruments, Subjects (human agents), Objects, Characteristics, Criteria Categories, Criteria, Metrics, Factors and Findings]
  95. part 5: conclusions
  96-101. conclusions
     • The patterns reflect and, up to a point, confirm the anecdotally evident research practices of DL researchers.
     • Patterns have properties similar to a map.
       - They can provide the main and the alternative routes one can follow to reach a destination, taking into account several practical parameters that one might not know.
     • By exploring previous profiles, one can weigh all the available options.
     • This approach can extend other coding methodologies in terms of transparency, standardization and reusability.
  102. Thank you for your attention. Questions?
