Impact Analysis of OCR Quality on Research Tasks in Digital Archives

Humanities scholars increasingly rely on digital archives for their research, rather than on time-consuming visits to physical archives. This shift in research method carries a hidden cost of working with digitally processed historical documents: how much trust can a scholar place in noisy representations of source texts? In a series of interviews with historians about their use of digital archives, we found that scholars are aware that optical character recognition (OCR) errors may bias their results. They were, however, unable to quantify this bias or to indicate what information they would need to estimate it, even though such an estimate is needed to assess whether results are publishable. Based on the interviews and a literature study, we provide a classification of scholarly research tasks that accounts for their susceptibility to specific OCR-induced biases and for the data required for uncertainty estimations. We conducted a use case study on a national newspaper archive with example research tasks. From this we learned what data is typically available in digital archives and how it could be used to reduce and/or assess the uncertainty in result sets. We conclude that the current state of knowledge, on the users' side as well as on the tool makers' and data providers' side, is insufficient and needs to be improved.

  1. Impact Analysis of OCR Quality on Research Tasks in Digital Archives. Myriam C. Traub, Jacco van Ossenbruggen, Lynda Hardman. Centrum Wiskunde & Informatica, Amsterdam
  2. Context ✤ Research in collaboration with the National Library of the Netherlands (KB) ✤ Digital newspaper archive: 10 million pages covering 1618 to 1995, ca. 1,200 newspaper titles ✤ Available data: scanned image of the page, OCRed text, and metadata records
  3. Interviews ✤ Aims: find out what types of research tasks scholars perform on digital archives, and which quantitative / distant reading tasks are not (sufficiently) supported ✤ Participants: scholars with experience in performing historical research on digital archives
  4. Categorization of research tasks
     T1   find the first mention of a concept
     T2   find a subset with relevant documents
     T3   investigate quantitative results over time
     T3.a compare quantitative results for two terms
     T3.b compare quantitative results from two corpora
     T4   tasks using external tools on archive data
  5. “I mostly use digital archives for exploration of a topic, selecting material for close reading (T1, T2) or external processing (T4). OCR quality in digital archives / libraries is partly very bad. I cannot quantify its impact on my research tasks. I would not trust quantitative analyses (T3.a, T3.b) based on this data sufficiently to use it in publications.”
  6. Literature ✤ OCR quality is addressed from the perspective of the collection owner / OCR software developer ✤ Usability studies for digital libraries ✤ Robustness of search engines towards OCR errors ✤ Error removal in post-processing, either systematically or intellectually
  7. Two different perspectives on quality evaluation. Collection owner / developer: “We care about average performance on representative subsets for generic cases.” Scholar: “I care about actual performance on my non-representative subset for my specific query.”
  8. Use case ✤ Aims: study the impact on research tasks in detail; identify starting points for workarounds and/or further research ✤ Covers tasks T1 to T3
  9. T1: Finding the first mention ✤ Key requirement: recall ✤ 100% recall is unrealistic ✤ Aim: find out how a scholar can assess the reliability of results
  10. First mention of “Amsterdam” in the OCRed newspaper archive of the KB: 1642 (earliest document in the archive: 1618)
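A minimal sketch of what task T1 boils down to, in Python; the `records` iterable and its (date, text) shape are hypothetical stand-ins for the KB's actual search API. It also shows why literal matching is fragile: an earlier, OCR-mangled hit is silently missed.

```python
from datetime import date

def first_mention(records, term):
    """Return the date of the earliest record whose OCRed text contains `term`."""
    hits = (d for d, text in records if term.lower() in text.lower())
    return min(hits, default=None)

# Invented toy records; real ones would come from the archive's index.
records = [
    (date(1642, 1, 1), "Tijdinghen uyt Amsterdam ..."),
    (date(1624, 1, 1), "... tot Amfterdam ghedruckt ..."),  # OCR misread, not matched
]

print(first_mention(records, "Amsterdam"))  # 1642-01-01: the 1624 mention is missed
```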
  11. Understanding potential sources of bias and errors along the pipeline (scanning, pre-processing, OCR, post-processing, ingestion) ✤ many details are difficult to reconstruct ✤ yet essential to understand the overall impact
  12. First mention of “Amsterdam”, updated: the OCR variant “Amfterdam” already appears in 1624, earlier than the 1642 hit for the correct spelling (earliest document: 1618)
  13. Are OCR confidence values useful? ✤ Available for all items in the collection, at page, word and character level ✤ But only for the highest-ranked words / characters; alternative candidates are missing ✤ Exactly this missing information would be required to estimate recall
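A hedged sketch of what the available values do and do not buy: if word confidences were calibrated probabilities, they would support a precision estimate for a result set, but recall stays out of reach because rejected candidates are not stored. The `hits` list below is invented for illustration.

```python
# Invented hits; real confidences would come from the word-level values
# stored with the OCRed text.
hits = [
    {"word": "Amsterdam", "confidence": 0.94},
    {"word": "Amsterdam", "confidence": 0.41},  # probably a misrecognition
    {"word": "Amsterdam", "confidence": 0.88},
]

# Expected share of returned hits that are correct, *if* the confidence
# values are calibrated probabilities (a strong assumption in practice).
estimated_precision = sum(h["confidence"] for h in hits) / len(hits)
print(f"estimated precision: {estimated_precision:.2f}")  # 0.74

# No analogous estimate exists for recall: documents where the OCR engine
# rejected "Amsterdam" as a candidate never show up in the hit list at all.
```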
  14. Confusion table ✤ Applied frequent OCR confusions to the query (correct → confused: s→f, n→u, e→c, n→a, t→l, t→c, h→b, l→i, e→o, e→t) ✤ 23 alternative spellings, but none of them yielded an earlier mention ✤ Problem: the long tail of rarer confusions
      Earliest hit per variant:
      Amstcrdam 16-01-1743    Amfterdam (already found)
      Amstordam 01-08-1772    Amftcrdam 17-08-1644
      Amsttrdam 04-08-1705    Amftordam 31-01-1749
      Amslerdam 12-12-1673    Amfttrdam 26-11-1675
      Amslcrdam 20-06-1797    Amflerdam 03-03-1629
      Amslordam 29-06-1813    Amflcrdam 01-03-1663
      Amsltrdam 13-04-1810    Amflordam 05-03-1723
      Amscerdam 17-10-1753    Amfltrdam 01-09-1672
      Amsccrdam 16-02-1816    Amfcerdam 22-04-1700
      Amscordam 01-11-1813    Amfccrdam 27-11-1742
      Amsctrdam 16-06-1823    Amfcordam (no hit)
                              Amfctrdam 09-10-1880
      Full table available online: http://persistent-identifier.org/?identifier=urn:nbn:nl:ui:18-23429
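A sketch of this expansion in Python; the confusion pairs are restricted to the three letters of “Amsterdam” that occur in the legend above (s→f, t→l/c, e→c/o/t), which is exactly what produces the 23 alternatives.

```python
from itertools import product

# Frequent OCR confusions relevant to the letters of the query term;
# a subset of the correct→confused pairs listed on the slide.
CONFUSIONS = {"s": ["f"], "t": ["l", "c"], "e": ["c", "o", "t"]}

def variants(term):
    """Yield every spelling obtained by replacing letters with their confusions."""
    options = [[ch] + CONFUSIONS.get(ch, []) for ch in term]
    for combo in product(*options):
        candidate = "".join(combo)
        if candidate != term:
            yield candidate

alts = sorted(set(variants("Amsterdam")))
print(len(alts))   # 23
print(alts[:3])    # ['Amfccrdam', 'Amfcerdam', 'Amfcordam']
```

The long tail is visible here already: each additional confusion pair multiplies the number of variants, so exhaustively querying all plausible misspellings quickly becomes impractical.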
  15. First mention of “Amsterdam”, updated again: the variant “Amsterstam” appears in 1619, just one year after the earliest document in the archive (1618)
  16. Update! Corrections for 17th-century newspapers were crowdsourced: the correct spelling “Amsterdam” is now found as early as 1620 (earliest document: 1618)
  17. … but why not 1619?
  18. Per task, what each data source could contribute:

      | Task | Confusion matrix (available: sample only)          | OCR confidence values (available: full corpus)                 | Alternative confidence values (not available) |
      |------|----------------------------------------------------|----------------------------------------------------------------|-----------------------------------------------|
      | T1   | find all queries for x: impractical                | estimated precision: not helpful                               | improve recall                                |
      | T2   | as above                                           | estimated precision: requires improved UI                      | improve recall                                |
      | T3   | pattern summarized over set of alternative queries | estimates of corrected precision                               | estimates of corrected recall                 |
      | T3.a | warn for different susceptibility to errors        | as above; warn for different distribution of confidence values | as above                                      |
      | T3.b | as above                                           | as above                                                       | as above                                      |
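To make the T3 cells concrete, a sketch with invented numbers: if a per-period recall for the query could be estimated (e.g. from a period-specific confusion matrix), raw frequency counts could be rescaled before a trend is interpreted. The recall figures below are hypothetical; no such estimates are currently provided by the archive.

```python
raw_counts = {1620: 14, 1700: 95, 1800: 210}             # raw hits per period
estimated_recall = {1620: 0.55, 1700: 0.78, 1800: 0.92}  # hypothetical estimates

# Corrected count = raw count / estimated recall for that period.
corrected = {y: n / estimated_recall[y] for y, n in raw_counts.items()}
for year, n in sorted(corrected.items()):
    print(year, round(n))  # 1620 25, 1700 122, 1800 228
```

Note how the correction changes the shape of the trend: early periods, where OCR quality is worst, are boosted the most, which is exactly the kind of systematic bias that tasks T3.a and T3.b are vulnerable to.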
  19. No silver bullet ✤ We propose novel strategies that each solve part of the problem: ✤ a critical attitude (awareness and better support) ✤ transparency (provenance, open source, documentation, …) ✤ alternative quality metrics (taking the research context into account)
  20. Conclusions
      Problems:
      ✤ Scholars see OCR quality as a serious problem, but cannot assess its impact
      ✤ OCR technology is unlikely to ever be perfect
      ✤ OCR errors are reported in terms of averages measured over representative samples
      ✤ The impact on a specific research task cannot be assessed based on average error metrics
      Start of solutions:
      ✤ The impact of OCR differs between research tasks, so these tasks need to be made explicit
      ✤ OCR errors are often assumed to be random, but are in fact often partly systematic
      ✤ Tool pipelines and their limitations need to be transparent and better documented
  21. Translate the established tradition of source criticism to the digital world and create a new tradition of tool criticism to systematically identify and explain technology-induced bias. #toolcrit
