High throughput mining of the scholarly literature


Talk given to statisticians in Tilburg, with emphasis on scholarly comms for detecting unusual features. Includes demo of Amanuens.is and image mining

    1. 1. High throughput mining of the scholarly literature: a new research tool Peter Murray-Rust, Dept of Chemistry and TheContentMine MTO, Tilburg, NL, 2016-06-07 contentmine.org is supported by a grant to PMR as a
    2. 2. The scholarly literature now produces 10,000 articles per day and it is essential to use machines to understand, filter and analyse this stream. The full text of these articles is much more valuable than the abstract, and in addition many have supplemental files such as tables, images and computer code. Machines can filter this and extract information on a huge and useful scale. Europe wishes to see this developed as a strategic area, but there is much resistance from “rights-owners”. The information in articles is in semi-structured form - a narrative with embedded data, even for some “data files”. There is a huge amount of factual information in this material and many disciplines have journals whose primary role is the reporting of facts - experimental protocols, formal observations (increasingly through instruments or computation), and analysis of results using domain-specific and general protocols. ContentMine, funded by the Shuttleworth Foundation, has the vision of making these facts semantic and opening them to the whole world. The two main activities of document analysis are Information Retrieval (IR) and Information Extraction (IE). IR, filtering and classification, can be tackled by machine learning (ML) or human-generated heuristics. ML is widely used; the drawbacks are the need for an annotated corpus (boring, expensive in time, and difficult to update) and the suspicion of “black-box” methods. Heuristics have the advantage that their methodology is usually self-evident and can be crowd-sourced; however, they are often more limited in which fields are tractable. IE is often domain-specific (e.g. chemistry, phylogenetics) but there are general outputs which cover many disciplines. The most tractable and common are typed numeric quantities in running text: “Thermal expansion and land glacier melting contribute 0.15–0.23 meters to sea level rise by 2050, and 0.30 to 0.48 meters by 2100.” This is factual information (it may or may not be “true”). 
Natural Language Processing (NLP) can extract the numeric quantities into processable form. The terms (entities) “Thermal expansion” and “land glacier melting” are likely to form a de facto vocabulary. IE can also extract facts from tables, lists, and diagrams (graphs, plots, etc.). This is at an early stage, but with probably 10-100 million numeric diagrams published per year the amount of data is potentially huge. The major problems in exploiting this are sociopolitical. The major “closed” journals are concerned that this will lead to “stealing” of content and have therefore made it very difficult, technically and legally, to mine scholarly journals. The UK government passed an exception to copyright in 2014 which allows mining for non-commercial research, and ContentMine.org has been tooling up to support this. PM-R and colleagues have legal access to a very wide range of scholarly publications and are interested in exploring mutually beneficial research activities. by Peter Murray-Rust, ContentMine.org and University of Cambridge: ‘High throughput mining of the scholarly literature: a new research tool’
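As a toy sketch of how such typed numeric quantities can be pulled from running text (a simple regex for illustration, not ContentMine's actual NLP stack; the unit list is deliberately tiny):

```python
import re

# Toy sketch (not ContentMine's actual extractor): pull typed numeric
# ranges with units, such as "0.15-0.23 meters", out of running text.
QUANTITY = re.compile(
    r"(\d+(?:\.\d+)?)\s*(?:-|–|to)\s*(\d+(?:\.\d+)?)\s+(meters|metres)\b"
)

def extract_quantities(text):
    """Return (low, high, unit) tuples for every numeric range found."""
    return [(float(lo), float(hi), unit) for lo, hi, unit in QUANTITY.findall(text)]

sentence = ("Thermal expansion and land glacier melting contribute "
            "0.15-0.23 meters to sea level rise by 2050, and "
            "0.30 to 0.48 meters by 2100.")
print(extract_quantities(sentence))
# [(0.15, 0.23, 'meters'), (0.3, 0.48, 'meters')]
```

A production extractor would also normalize units and attach the surrounding entities, but even this much turns a sentence into a processable fact.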
    3. 3. Overview • Scholarly literature • Automation of downloading, normalization • Discipline-dependent semantics/ontology • Classification • Extraction • Annotation • Mining diagrams • Politics of mining
    4. 4. The Right to Read is the Right to Mine** Peter Murray-Rust, 2011 http://contentmine.org
    5. 5. (2x digital music industry!)
    6. 6. Output of scholarly publishing: 586,364 Crossref DOIs/month (2015-07) [1]; ~8,000 papers/day; 2.5-3 million (papers + supplemental data)/year; at 3 mm each that is a stack 4,500 m high per year - the height of Mont Blanc [2]. * Most is not publicly readable. [1] http://www.crossref.org/01company/crossref_indicators.html [2] https://en.wikipedia.org/wiki/Mont_Blanc#/media/File:Mont_Blanc_depuis_Valmorel.jpg
    7. 7. What is “Content”? http://www.plosone.org/article/fetchObject.action?uri=info:doi/10.1371/journal.pone.0111303&representation=PDF CC-BY SECTIONS MAPS TABLES CHEMISTRY TEXT MATH contentmine.org tackles these
    8. 8. http://www.nytimes.com/2015/04/08/opinion/yes-we-were-warned-about- ebola.html We were stunned recently when we stumbled across an article by European researchers in Annals of Virology [1982]: “The results seem to indicate that Liberia has to be included in the Ebola virus endemic zone.” In the future, the authors asserted, “medical personnel in Liberian health centers should be aware of the possibility that they may come across active cases and thus be prepared to avoid nosocomial epidemics,” referring to hospital-acquired infection. Adage in public health: “The road to inaction is paved with research papers.” Bernice Dahn (chief medical officer of Liberia’s Ministry of Health) Vera Mussah (director of county health services) Cameron Nutt (Ebola response adviser to Partners in Health) A System Failure of Scholarly Publishing
    10. 10. Mining in action
    11. 11. A recipe! https://upload.wikimedia.org/wikipedia/commons/0/0b/Wikibooks_hamburger_recipe.png
    12. 12. http://chemicaltagger.ch.cam.ac.uk/ • Typical chemical synthesis
    13. 13. Automatic semantic markup of chemistry Could be used for analytical, crystallization, etc.
    14. 14. AMI https://bitbucket.org/petermr/xhtml2stm/wiki/Home Example reaction scheme, taken from MDPI Metabolites 2012, 2, 100-133; page 8, CC-BY: AMI reads the complete diagram, recognizes the paths and generates the molecules. Then she creates a stop-frame animation showing how the 12 reactions lead into each other. CLICK HERE FOR ANIMATION (may be browser dependent)
    15. 15. Tools and resources
    16. 16. Europe PubMedCentral
    17. 17. Dictionaries!
    18. 18. Dengue Mosquito
    19. 19. MINING with sections and dictionaries [W3C Annotation / https://hypothes.is/ ]: article sections (abstract, methods, references, captioned figures such as Fig. 1, HTML tables) are searched with dictionaries (Dict A, Dict B), applied to text, image captions and table captions.
    20. 20. How does Rat find knowledge?
    21. 21. Disease Dictionary (ICD-10) <dictionary title="disease"> <entry term="1p36 deletion syndrome"/> <entry term="1q21.1 deletion syndrome"/> <entry term="1q21.1 duplication syndrome"/> <entry term="3-methylglutaconic aciduria"/> <entry term="3mc syndrome"/> <entry term="corpus luteum cyst"/> <entry term="cortical blindness"/> SELECT DISTINCT ?thingLabel WHERE { ?thing wdt:P494 ?wd . ?thing wdt:P279 wd:Q12136 . SERVICE wikibase:label { bd:serviceParam wikibase:language "en" } } wdt:P494 = ICD-10 (P494) identifier; wd:Q12136 = disease (Q12136), an abnormal condition that affects the body of an organism. Wikidata ontology for disease
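A term list such as the disease labels returned by a SPARQL query like the one above can be serialized into a ContentMine-style dictionary. A minimal sketch (`build_dictionary` is a hypothetical helper, not part of the ContentMine toolchain, and the terms shown are just two from the slide):

```python
import xml.etree.ElementTree as ET

# Sketch: serialize a term list (e.g. disease labels fetched from Wikidata's
# SPARQL endpoint) as a ContentMine-style <dictionary> of <entry> elements.
# build_dictionary is a hypothetical helper; the terms are illustrative.
def build_dictionary(title, terms):
    root = ET.Element("dictionary", title=title)
    for term in sorted(terms):
        ET.SubElement(root, "entry", term=term)
    return ET.tostring(root, encoding="unicode")

xml = build_dictionary("disease", ["cortical blindness", "1p36 deletion syndrome"])
print(xml)
```

Using an XML library rather than string concatenation keeps the output well-formed even when terms contain quotes or ampersands.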
    22. 22. Example statistics dictionary <dictionary title="statistics2"> <entry term="ANCOVA" name="ANCOVA"/> <entry term="ANOVA" name="ANOVA"/> <entry term="CFA" name="CFA"/> <entry term="EFA" name="EFA"/> <entry term="Likert" name="Likert"/> <entry term="Mann-Whitney" name="Mann-Whitney"/> <entry term="MANOVA" name="MANOVA"/> <entry term="McNemar" name="McNemar"/> <entry term="PCA" name="PCA"/> <entry term="Pearson" name="Pearson"/> <entry term="Spearman" name="Spearman"/> <entry term="t-test" name="t-test"/> <entry term="Wilcoxon" name="Wilcoxon"/> </dictionary> “Mann-Whitney” links to the Wikipedia entry and the Wikidata (Q1424533) entry
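A dictionary like this drives a simple lookup tagger. A minimal sketch (illustrative only, not ContentMine's `ami` implementation), using a subset of the terms above with boundary checks so "PCA" will not match inside a longer word:

```python
import re

# Minimal dictionary-tagger sketch: look up the slide's statistics terms in
# running text; (?<!\w)/(?!\w) boundaries stop matches inside longer words.
TERMS = ["ANOVA", "Mann-Whitney", "t-test", "Wilcoxon", "Pearson", "PCA"]

def tag(text, terms=TERMS):
    hits = []
    for term in terms:
        for m in re.finditer(r"(?<!\w)" + re.escape(term) + r"(?!\w)", text):
            hits.append((term, m.start()))
    return sorted(hits, key=lambda hit: hit[1])

text = "Groups were compared with a Mann-Whitney U test and one-way ANOVA."
print(tag(text))
```

Each hit records the term and its character offset, which is exactly what an annotation (entity in context) needs.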
    23. 23. CONTENT MINING pipeline (latest 2015-09-08): a daily crawl of EuPMC, arXiv, CORE, HAL, university repos, ToC services and publisher sites (PLoS ONE, BMC, PeerJ… Nature, IEEE, Elsevier…) yields URLs/DOIs => getpapers (queries) and quickscrape (scrapers) fetch PDF, HTML, DOC, ePUB, TeX, XML, PNG, EPS, CSV, XLS at ~100,000 pages/day => norma (Normalizer, Structurer, Semantic Tagger) produces Semantic ScholarlyHTML (W3C community group) with abstract, methods, references, captioned figures (Fig. 1), HTML tables => ami (with COMMUNITY plugins: Chem, Phylo, Trials, Crystal, Plants; dictionary lookup) extracts Text, Data, Figures => Facts => Visualization and Analysis
    24. 24. Amanuens.is demo These slides represent a snapshot of an interactive demo… Subject: Flavour
    25. 25. What plants produce Carvone? https://en.wikipedia.org/wiki/Carvone
    26. 26. https://en.wikipedia.org/wiki/Carvone WIKIDATA
    27. 27. Carvone in Wikidata Also SPARQL endpoint
    28. 28. Search for carvone
    29. 29. Mining for phytochemicals • getpapers -q carvone -o carvone -x -k 100 (search “carvone”, output to carvone/, format XML, limit 100 hits) • cmine carvone (normalize papers; search locally for species, sequences, diseases, drugs; results in dataTables.html and results/…/results.xml, which includes W3C annotation) • python cmhypy.py carvone/ -u petermr <key> (send IUCN Red List plant annotations -> hypothes.is)
    30. 30. Annotation (entity in context): prefix, surface, label, location, suffix
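The slide's prefix/surface/suffix correspond to the W3C Web Annotation model's TextQuoteSelector (prefix/exact/suffix). A sketch of one such annotation as JSON (the URI, snippet and tag values here are made up for illustration):

```python
import json

# Sketch of a W3C Web Annotation for one extracted entity. The slide's
# prefix/surface/suffix map onto the model's TextQuoteSelector
# (prefix/exact/suffix); the URI and tag values are illustrative.
def make_annotation(uri, prefix, exact, suffix, tag):
    return {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "type": "Annotation",
        "body": [{"type": "TextualBody", "purpose": "tagging", "value": tag}],
        "target": {
            "source": uri,
            "selector": {
                "type": "TextQuoteSelector",
                "prefix": prefix,
                "exact": exact,
                "suffix": suffix,
            },
        },
    }

anno = make_annotation("https://example.org/article/123", "extracts of ",
                       "carvone", " were tested", "phytochemical")
print(json.dumps(anno, indent=2))
```

The prefix/suffix context lets a client re-anchor the annotation even if character offsets shift between document versions.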
    31. 31. ARTICLES and FACETS: gene, disease, drug, phytochemical, species, genus, words
    32. 32. Remote & local papers: disease (ICD-10), phytochemicals, species, commonest words
    33. 33. Mining for phytochemicals • getpapers -q carvone -o carvone -x -k 100 (search “carvone”, output to carvone/, format XML, limit 100 hits) • cmine carvone (normalize papers; search locally for species, sequences, diseases, drugs; results in dataTables.html and results/…/results.xml, which includes W3C annotation) • python cmhypy.py carvone/ -u petermr <key> (send annotations -> hypothes.is)
    34. 34. Annotation (entity in context): prefix, surface, label, location, suffix
    35. 35. Annotation sent to hypothes.is: prefix, suffix, source, user, text, uri - maybe 100+ annotations per paper
    36. 36. Annotation with Hypothes.is
    37. 37. Amanuens.is Hypothes.is link Hypothes.is markup of article
    38. 38. Annotation with Hypothes.is Original publication “on publisher’s site” Annotation “on Hypothes.is site”
    39. 39. Systematic Reviews Can we: • eliminate true negatives automatically? • extract data from formulaic language? • mine diagrams? • annotate existing sources? • forward-reference clinical trials?
    40. 40. Polly has 20 seconds to read this paper… …and 10,000 more
    41. 41. ContentMine software can do this in a few minutes Polly: “there were 10,000 abstracts and due to time pressures, we split this between 6 researchers. It took about 2-3 days of work (working only on this) to get through ~1,600 papers each. So, at a minimum this equates to 12 days of full-time work (and would normally be done over several weeks under normal time pressures).”
    42. 42. 400,000 Clinical Trials in 10 government registries. Mapping trials => papers http://www.trialsjournal.com/content/16/1/80 2009 => 2015: what has happened in the last 6 years? Search the whole scientific literature for “2009-0100068-41”
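Searching the literature for a registry identifier is a pattern-matching job. A sketch (the identifier shape - year, serial, two check digits - is assumed from the slide's single example, so the regex is illustrative rather than a registry specification):

```python
import re

# Sketch: scan full text for trial-registry identifiers shaped like the
# slide's "2009-0100068-41" (year, serial, two trailing digits). The exact
# format is assumed from that single example.
TRIAL_ID = re.compile(r"\b(?:19|20)\d{2}-\d{6,7}-\d{2}\b")

def find_trial_ids(text):
    return TRIAL_ID.findall(text)

print(find_trial_ids("Registered as 2009-0100068-41; see also 2012-004567-22."))
# ['2009-0100068-41', '2012-004567-22']
```

Run over every paper in a corpus, this yields the trial => paper mapping the slide describes.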
    43. 43. Mining diagrams
    44. 44. Examples of plots
    45. 45. Posterisation - each curve can be extracted since it has a unique posterised colour
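Posterisation just quantizes each colour channel to a few levels, so anti-aliased shades of one curve collapse to a single flat colour. A pure-Python sketch on a toy pixel list (illustrative, not the AMI image-analysis code):

```python
# Posterisation sketch: quantize each RGB channel to a few levels so every
# curve in a plot collapses to one flat colour that can then be traced.
# Pure Python on a toy pixel list; a real pipeline does this per pixel.
def posterize(pixels, levels=4):
    step = 256 // levels
    return [tuple((channel // step) * step for channel in px) for px in pixels]

# Two slightly different reds collapse to the same posterised colour:
print(posterize([(250, 12, 7), (248, 9, 3), (10, 200, 30)]))
# [(192, 0, 0), (192, 0, 0), (0, 192, 0)]
```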
    46. 46. Bitmap Image and Tesseract OCR - recovered axis text: “Ln Bacterial load per fly” (ticks 6.0-11.5) against “Days post-infection” (ticks 0-5)
    47. 47. “Root”
    48. 48. OCR (Tesseract) Norma (imageanalysis) (((((Pyramidobacter_piscolens:195,Jonquetella_anthropi:135):86,Synergistes_jonesii:301):131,Thermotoga_maritime:357):12,(Mycobacterium_tuberculosis:223,Bifidobacterium_longum:333):158):10,((Optiutus_terrae:441,(((Borrelia_burgdorferi:…202):91):22):32,(Proprinogenum_modestus:124,Fusobacterium_nucleatum:167):217):11):9); Semantic, re-usable/computable output (ca 4 secs/image)
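The OCR output above is a Newick tree string, which is directly computable. A small sketch of one sanity check - listing the named leaf taxa (name:branch-length pairs) - on a fragment of that tree:

```python
import re

# Sketch: the OCR pipeline emits phylogenetic trees as Newick strings; a
# quick sanity check is to list the named leaf taxa (name:length pairs).
# Internal nodes like "):86" have no name and are skipped by the regex.
def newick_taxa(newick):
    return re.findall(r"([A-Za-z_]\w*):[\d.]+", newick)

tree = ("((Pyramidobacter_piscolens:195,Jonquetella_anthropi:135):86,"
        "Synergistes_jonesii:301);")
print(newick_taxa(tree))
# ['Pyramidobacter_piscolens', 'Jonquetella_anthropi', 'Synergistes_jonesii']
```

A real workflow would hand the string to a phylogenetics library, but even this check catches many OCR failures cheaply.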
    49. 49. Supertree created from 4300 papers
    50. 50. We can’t turn a hamburger into a cow, but we can now turn PDFs into Science: Pixel => Path => Shape => Char => Word => Para => Document => SCIENCE
    52. 52. Automatic extraction: dumb PDF => CSV => semantic spectrum (smoothing with a Gaussian filter, then 2nd derivative)
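The smoothing and second-derivative steps can be sketched on a toy spectrum (pure Python, illustrative only - not AMI's implementation). Peaks show up as minima of the second derivative of the smoothed trace:

```python
import math

# Sketch of the slide's pipeline on a toy spectrum: Gaussian smoothing
# followed by a central-difference second derivative; the peak appears
# where the second derivative is most negative.
def gaussian_kernel(sigma, radius):
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(y, sigma=1.0, radius=2):
    k = gaussian_kernel(sigma, radius)
    n = len(y)
    # Convolve with edge clamping so the output has the same length as y.
    return [sum(k[j + radius] * y[min(max(i + j, 0), n - 1)]
                for j in range(-radius, radius + 1)) for i in range(n)]

def second_derivative(y):
    # Central difference: f''(i) ~ f(i-1) - 2 f(i) + f(i+1)
    return [y[i - 1] - 2 * y[i] + y[i + 1] for i in range(1, len(y) - 1)]

spectrum = [0, 0, 1, 4, 9, 4, 1, 0, 0]   # one peak at index 4
d2 = second_derivative(smooth(spectrum))
print(min(range(len(d2)), key=lambda i: d2[i]) + 1)  # index of sharpest curvature
# 4
```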
    53. 53. C) What’s the problem with this spectrum? Org. Lett., 2011, 13 (15), pp 4084–4087 Original thanks to ChemBark
    54. 54. After AMI2 processing… AMI2 has detected a square
    55. 55. Politics
    56. 56. http://www.lisboncouncil.net/publication/publication/134-text-and-data-mining-for-research-and-innovation-.html Asian and U.S. scholars continue to show a huge interest in text and data mining as measured by academic research on the topic. And Europe’s position is falling relative to the rest of the world. Legal clarity also matters. Some countries apply the “fair-use” doctrine, which allows “exceptions” to existing copyright law, including for text and data mining. Israel, the Republic of Korea, Singapore, Taiwan and the U.S. are in this group. Others have created a new copyright “exception” for text and data mining - Japan, for instance, which adopted a blanket text-and-data-mining exception in 2009, and more recently the United Kingdom, where text and data mining was declared fully legal for non-commercial research purposes in 2014. Some researchers worry that the UK exception does not go far enough; others report that British researchers are now at an advantage over their continental counterparts. The Middle East is now the world’s fourth-largest region for research on text and data mining, led by Iran and Turkey.
    57. 57. @Senficon (Julia Reda): Text & Data mining in times of #copyright maximalism: "Elsevier stopped me doing my research" http://onsnetwork.org/chartgerink/2015/11/16/elsevier-stopped-me-doing-my-research/ … #opencon #TDM Elsevier stopped me doing my research - Chris Hartgerink
    58. 58. I am a statistician interested in detecting potentially problematic research such as data fabrication, which results in unreliable findings and can harm policy-making, confound funding decisions, and hamper research progress. To this end, I am content mining results reported in the psychology literature. Content mining the literature is a valuable avenue of investigating research questions with innovative methods. For example, our research group has written an automated program to mine research papers for errors in the reported results and found that 1 in 8 papers (of 30,000) contains at least one result that could directly influence the substantive conclusion [1]. In new research, I am trying to extract test results, figures, tables, and other information reported in papers throughout the majority of the psychology literature. As such, I need the research papers published in psychology that I can mine for these data. To this end, I started ‘bulk’ downloading research papers from, for instance, Sciencedirect. I was doing this for scholarly purposes and took into account potential server load by limiting the number of papers I downloaded per minute to 9. I had no intention to redistribute the downloaded materials, had legal access to them because my university pays a subscription, and I only wanted to extract facts from these papers. Full disclosure: I downloaded approximately 30 GB of data from Sciencedirect in approximately 10 days. This boils down to a server load of 0.0021 GB/min, 0.125 GB/h, 3 GB/day. Approximately two weeks after I started downloading psychology research papers, Elsevier notified my university that this was a violation of the access contract, that this could be considered stealing of content, and that they wanted it to stop. My librarian explicitly instructed me to stop downloading (which I did immediately), otherwise Elsevier would cut all access to Sciencedirect for my university. 
I am now not able to mine a substantial part of the literature, and because of this Elsevier is directly hampering me in my research. [1] Nuijten, M. B., Hartgerink, C. H. J., van Assen, M. A. L. M., Epskamp, S., & Wicherts, J. M. (2015). The prevalence of statistical reporting errors in psychology (1985–2013). Behavior Research Methods, 1–22. doi: 10.3758/s13428-015-0664-2 Chris Hartgerink’s blog post
    59. 59. WILEY … “new security feature… to prevent systematic download of content”; “[limit of] 100 papers per day”; “essential security feature … to protect both parties (sic)”. CAPTCHA: user has to type words
    60. 60. http://onsnetwork.org/chartgerink/2016/02/23/wiley-also-stopped-my-doing-my-research/ Wiley also stopped me (Chris Hartgerink) doing my research In November, I wrote about how Elsevier wanted me to stop downloading scientific articles for my research. Today, Wiley also ordered me to stop downloading. As a quick recapitulation: I am a statistician doing research into detecting potentially problematic research such as data fabrication and estimating how often it occurs. For this, I need to download many scientific articles, because my research applies content mining methods that extract facts from them (e.g., test statistics). These facts serve as my data to answer my research questions. If I cannot download these research articles, I cannot collect the data I need to do my research. I was downloading psychology research articles from the Wiley library, with a maximum of 5 per minute. I did this using the tool quickscrape, developed by the ContentMine organization. With this, I have downloaded approximately 18,680 research articles from the Wiley library, which I was downloading solely for research purposes. Wiley noticed my downloading and notified my university library that they detected a compromised proxy, which they had immediately restricted. They called it “illegally downloading copyrighted content licensed by your institution”. However, at no point was there any investigation into whether my user credentials were actually compromised (they were not). Whether I had legitimate reasons to download these articles was never discussed. The original email from Wiley is available here. As a result of Wiley denying me the ability to download these research articles, I cannot collect data from another one of the big publishers, alongside Elsevier. Wiley is more strict than Elsevier in immediately condemning the downloading as illegal, whereas Elsevier offers an (inadequate) API with additional terms of use (while legitimate access has already been obtained). 
I am really confused about what the publishers’ stance on content mining is, because Sage and Springer seemingly allow it; I have downloaded 150,210 research articles from Springer and 12,971 from Sage and they have never complained about it.
    61. 61. Julia Reda, Pirate MEP, running ContentMine software to liberate science 2016-04-16
    62. 62. The Right to Read is the Right to Mine** Peter Murray-Rust, 2011 http://contentmine.org