Questions at the end. I'll talk a little about what CrossRef is, then move on to our text and data mining service.
First, just a few words about CrossRef for anyone who isn't a member or might not be familiar with us as an organisation. CrossRef is a not-for-profit membership organisation of international scholarly publishers. We have 4,000 member publishers representing all disciplines - not just STM - comprising commercial publishers, academic societies, open access publishers and university presses. We also have 83 affiliate members and 2,000 library affiliates - these libraries and other organisations make use of the CrossRef database to look up DOIs and metadata. We are the largest DOI registration agency and have assigned nearly 63 million DOIs to date.
CrossRef was founded 15 years ago to solve the problem of broken links. The web is all about links, but links break. This is annoying if you're browsing the web and want to follow an interesting link, but in the context of scholarly publishing it becomes more than annoying - if you can't follow a citation from one paper to another, you're being hampered in your research. Citation linking is one of the greatest benefits of online publishing, but it really does need to be reliable.
Publishers were finding that websites changed, content moved, and links they had put into their articles stopped working. So they started a multi-publisher initiative to solve this problem of broken links. This is done using the DOI - the Digital Object Identifier - which I'm sure many of you are familiar with. A CrossRef DOI is simply a unique identifier for a piece of content. Once assigned, it doesn't change. It is to all intents and purposes a meaningless number, but it allows that piece of content to be located on the web.
And it works like this: publishers use CrossRef DOIs to link to content, usually from the references at the end of articles. Users click on those DOI-based links and are referred via the CrossRef database to the cited article at its correct location on the web. If content moves, the publisher only has to update the CrossRef database once, and all of the publishers linking to that content using CrossRef DOIs will be redirected to its new location.
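In code terms the indirection is simple - every DOI maps to a stable URL on the doi.org resolver, and the resolver answers with an HTTP redirect to wherever the publisher last registered the content. A minimal Python sketch (10.1000/182 is the DOI of the DOI Handbook itself):

```python
def doi_to_url(doi: str) -> str:
    """Build the resolvable HTTP URL for a DOI.

    The resolver at https://doi.org/ answers with an HTTP redirect
    to wherever the publisher last registered the content, so links
    built this way never go stale even when the content moves.
    """
    return "https://doi.org/" + doi

print(doi_to_url("10.1000/182"))  # → https://doi.org/10.1000/182
```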
Every month there are around 100 million clicks on CrossRef DOI links, so 100 million citations resolved to content.
The issue of text and data mining has become very important, and we feel that CrossRef is in a unique position to expand its current infrastructure (a registry of unique identifiers and metadata for scholarly content, and thousands of members) to make TDM easier for researchers, their institutions and publishers. This is a technical solution - we aren't addressing the issue of licensing.
Looking at the positives: finding treatments for diseases that may not have been found before.
But urge caution – Google Flu!
Why did CrossRef develop this service? Applies to OA content too. Let’s just illustrate these issues.
A researcher to illustrate that, plus some of the publishers we represent. TDM is about scale.
Bilateral agreements aspect - in the past, researchers who wished to text and data mine published literature had no common or simple way of accessing the full text for the content they wished to mine. This is true of subscription-based content as well as of open access content. Consequently, TDM users access the content in one of two ways: negotiating with publishers to have the content delivered to them, either via physical media or bulk data transfer (e.g. FTP), or "screen-scraping" the publisher's website. The first option doesn't scale well across multiple publishers and researchers. It also presents synchronisation problems if the researchers want an ongoing feed of refreshed content. The issue with the second option is that screen scraping is an inefficient, fragile and error-prone mechanism for identifying and downloading full text. Screen scrapers put a large performance burden on websites and, at the same time, any slight change to the website can break the tool that is doing the scraping. CrossRef Text and Data Mining provides a common solution which works across open access and subscription-based publishers and is free for anyone to use.
Processing the same document on multiple sites could easily skew text and data mining results, and traditional techniques for eliminating duplicates (e.g. hashes) will not work reliably if the document in question exists in several representations (e.g. PDF, HTML, ePub) and/or versions (e.g. accepted manuscript, version of record). Using the DOI as a key will allow researchers to retrieve and verify the provenance of the items in the TDM corpus many years into the future, when traditional HTTP URLs will have already broken.
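A toy Python illustration of the point (the DOI and byte strings are invented for the example): byte-level hashes of two representations of the same article never match, while keying the corpus by DOI collapses them into one entry.

```python
import hashlib

# The same article in two representations; the bytes differ,
# so content hashes cannot detect the duplicate.
html_copy = b"<html><body>Genetic diversity in Pinus pumila ...</body></html>"
pdf_copy = b"%PDF-1.7 ... Genetic diversity in Pinus pumila ..."

print(hashlib.sha256(html_copy).hexdigest()
      == hashlib.sha256(pdf_copy).hexdigest())  # False

# Keyed by DOI, both representations collapse to one corpus entry.
corpus = {}
for doi, blob in [("10.5555/12345678", html_copy),
                  ("10.5555/12345678", pdf_copy)]:
    corpus.setdefault(doi, []).append(blob)

print(len(corpus))  # 1 unique article (with 2 representations)
```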
Wide range of papers from a wide range of publishers – spread of business models and geographical locations.
Explain API = basically an interface that software uses to interact with other software.
I should be able to show a text extraction tool or a clip of an extraction tool working to convert PDF to XML for the purposes of mining.
The CrossRef Common API is the main aspect of this service and is designed to allow researchers to easily harvest full text documents from all participating publishers, regardless of their business model (e.g. open access, subscription). It makes use of CrossRef DOI content negotiation to provide researchers with links to the full text of content located on the publisher's site. The publisher remains responsible for actually delivering the full text of the content requested. Thus, open access publishers can simply deliver the requested content, while subscription-based publishers continue to support subscriptions using their existing access control systems.
The API works with content negotiation - what is content negotiation?
Content negotiation allows a user to request a particular representation of a web resource. DOI resolvers use content negotiation to provide different representations of metadata associated with DOIs. A content-negotiated request to a DOI resolver is much like a standard HTTP request, except that server-driven negotiation takes place based on the list of acceptable content types the client provides. Here, they're asking for text.
Here they're asking for XML - and they can also request PDF, as we know a lot of publishers may only have back content in PDF, and that's fine.
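Concretely, a content-negotiated request is an ordinary HTTP GET whose Accept header names the wanted representation. A Python sketch using only the standard library (the DOI is Crossref's test DOI, and the XML media type shown is Crossref's unixref type; the exact types a given publisher supports may vary):

```python
from urllib import request


def negotiated_request(doi: str, media_type: str) -> request.Request:
    """Build a content-negotiated request for a DOI.

    The Accept header tells the resolver which representation the
    client wants; the server picks the best match it can deliver.
    """
    return request.Request("https://doi.org/" + doi,
                           headers={"Accept": media_type})


# Asking for plain text, Crossref XML, or PDF full text:
for mt in ("text/plain",
           "application/vnd.crossref.unixref+xml",
           "application/pdf"):
    req = negotiated_request("10.5555/12345678", mt)
    print(req.full_url, "Accept:", req.get_header("Accept"))
```

(The request objects are only built here, not sent, so the sketch runs without network access.)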
A set of standard HTTP headers that can be used by servers to convey rate-limiting information to automated TDM tools. Well-behaved TDM tools can simply look for these headers when they query publisher sites in order to understand how best to adjust their behaviour so as not to affect the performance of the site. The headers allow a publisher to define a "rate limit window" - which is basically a time span (e.g. a minute, an hour, a day).
In order for researchers to use the CrossRef API, Publishers need to add new metadata to their CrossRef DOI deposits.
One or more URIs pointing at licenses that govern how the full text content can be used.
This needs to be added to the publisher XML - license information at the article level. Examples on our support site.
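For illustration, a deposit fragment of the kind described. The element names (ai:license_ref inside the AccessIndicators program, and a collection with property="text-mining" for the full text links) follow Crossref's public schema documentation, but treat the exact names, DOI and URLs here as examples to check against the support site rather than as a template:

```xml
<journal_article>
  <!-- ... usual bibliographic metadata ... -->

  <!-- License URI governing how the full text may be used -->
  <ai:program xmlns:ai="http://www.crossref.org/AccessIndicators.xsd"
              name="AccessIndicators">
    <ai:license_ref>http://creativecommons.org/licenses/by/4.0/</ai:license_ref>
  </ai:program>

  <doi_data>
    <doi>10.5555/12345678</doi>
    <resource>http://www.example-publisher.org/article/12345678</resource>
    <!-- Full-text URI(s) for text and data mining -->
    <collection property="text-mining">
      <item>
        <resource mime_type="application/pdf">
          http://www.example-publisher.org/fulltext/12345678.pdf
        </resource>
      </item>
    </collection>
  </doi_data>
</journal_article>
```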
Publishers who require researchers to agree to a specific set of Terms and Conditions (T&Cs) before they are allowed to text and data mine content that they otherwise have access to (e.g. through an existing subscription) will need to make use of the click-through service.
So to put it all together…
If you are an open access publisher or if your existing subscription licenses already allow TDM of subscribed full text, then the registration of the above metadata deposit is the ONLY thing you need to do in order to enable TDM of your content via the CrossRef Metadata API. Rate limiting.
Rate limiting too
Support site with info. Info on rate limiting on there too.
The working group, which will migrate to a full CrossRef Committee when the service is officially launched, has seen over 100,000 deposits of full text links and license information - mainly from Hindawi, but some from AIP and IEEE as well.
Eric Lease Morgan
Publishers and researchers in pilot. Launch in May
Introduction to CrossRef Text and Data Mining Webinar
Crossref for Text & Data Mining
Product Manager, CrossRef
Not-for-profit association of scholarly publishers
All subjects, all business models
5,000+ organizations from all over the world
83 non-publisher affiliates, 2000 library affiliates
76 million content items
User clicks on a Crossref DOI reference link in Journal A:
Tani, N., N. Tomaru, M. Araki, and K. Ohba. 1996. Genetic diversity and differentiation in populations of Japanese stone pine (Pinus pumila) in Japan. Canadian Journal of Forest Research 26: 1454–1462. [CrossRef]
→ Crossref DOI directory → User accesses cited article in Journal B
What is text and data mining?
Text mining is an interdisciplinary field combining techniques from linguistics, computer science and statistics to build tools that can efficiently retrieve and extract information from digital text.
It uses powerful computers to find links between drugs and side effects, or genes and diseases, that are hidden within the vast scientific literature. These are discoveries that a person scouring through papers one by one may never notice.
Marc Weeber and colleagues used automated text mining tools to infer that the drug thalidomide could treat several diseases it had not been associated with before. Thalidomide was taken off the market 40 years ago, but is still the subject of research because it seems to benefit leprosy patients via their immune systems. Weeber and Grietje Molema, an immunologist, used text mining tools to search the literature for papers on thalidomide and then pick out those containing concepts related to immunology. One concept, concerning thalidomide's ability to inhibit interleukin-12 (IL-12), a chemical involved in the launch of an immune response, struck Molema as particularly interesting. A second automated search, for diseases that improve when the action of IL-12 is blocked, revealed several not previously linked with thalidomide, including chronic hepatitis, myasthenia gravis and a type of gastritis.
"Type in thalidomide and you get 2-3,000 hits. Type in disease and you get 40,000 hits. With automated text mining tools we only had to read 100-200 abstracts and 20 or 30 full papers. We've created hypotheses for others to follow up," says Weeber.
Weeber et al., J Am Med Inform Assoc. 2003;10:252-259.
• Researchers find it impractical to negotiate multiple bilateral agreements with hundreds of subscription-based publishers in order to authorize TDM of subscribed content.
• Subscription-based publishers find it impractical to negotiate multiple bilateral agreements with thousands of researchers and institutions in order to authorize TDM of subscribed content.
• All parties would benefit from support of standard APIs and data representations in order to enable TDM across both open access and subscription-based content.
Botanical Publishing Board * Fisheries Sciences.Com * Florida Entomological Society * Fondazione Annali Di Matematica Pura Ed Applicata * Fondazione Eni Enrico Mattei (FEEM) * Fondazione Pro Herbario Mediterraneo * Food and Agriculture Organization of the United Nations (FAO) * Food Safety Commission, Cabinet Office * Foot and Ankle Online Journal * Fordham University Press * Forest Products Society * Forschungsinstitut Freie Berufe * Forum: Carbohydrates Coming of Age * Foundation Compositio Mathematica * Foundation for Cellular and Molecular Medicine * Foundation for Sickle Cell Disease Research * Foundation of Computer Science * Franco Angeli * Fraunhofer-Institut für Materialfluss und Logistik * French Chemistry Society * French Physical Society * French-Vietnamese Association of Pulmonology
Using the DOI as the basis for a common text and data mining API provides several
benefits. For example, the DOI provides:
•An easy way to de-duplicate documents that may be found on several sites.
•Persistent provenance information.
•An easy way to document, share and compare corpora without having to exchange
the actual documents
•A mechanism to ensure the reproducibility of TDM results using the source
•A mechanism to track the impact of updates, corrections retractions and
withdrawals on corpora.
Why use the DOI?
Step 1: A researcher identifies the articles they are interested in.
The search engines they use bring back results from lots of different publishers; they can also use Crossref to search. The searches they run bring back results showing publications from a range of publishers, in different locations and using different business models. The challenge is to harvest all these articles in order to be able to mine them, without engaging in individual transactions with each publisher. How to do that?
Each of those articles has a DOI, or Digital Object Identifier. Each DOI is unique and identifies the paper. Researchers are familiar with DOIs and are used to working with them.
Step 2: The researcher takes the DOIs that correspond to the articles they are interested in.
Search engines will allow them to download these as a list; the researcher does not need to go to each paper to extract the DOI from it.
Step 3: The researcher gives this list to the Crossref REST API.
And that tells them: where the full text is located, and what they are allowed to do with it.
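The response for each DOI is JSON metadata in which (in the current REST API) the link array carries the full text URLs and the license array carries the license URIs. A sketch that parses a trimmed, invented sample of such a response offline - the field names follow the public REST API, but the URLs and values are made up:

```python
# Trimmed sample of the JSON "message" returned by
# https://api.crossref.org/works/{doi} (illustrative values only).
sample_message = {
    "DOI": "10.5555/12345678",
    "link": [
        {"URL": "http://www.example.org/fulltext.pdf",
         "content-type": "application/pdf",
         "intended-application": "text-mining"},
        {"URL": "http://www.example.org/similarity.pdf",
         "content-type": "application/pdf",
         "intended-application": "similarity-checking"},
    ],
    "license": [
        {"URL": "http://creativecommons.org/licenses/by/4.0/",
         "content-version": "vor"},
    ],
}


def tdm_links(message: dict) -> list:
    """Full-text URLs a TDM tool should fetch (where the text is)."""
    return [l["URL"] for l in message.get("link", [])
            if l.get("intended-application") == "text-mining"]


def license_uris(message: dict) -> list:
    """License URIs (what the researcher may do with the text)."""
    return [l["URL"] for l in message.get("license", [])]


print(tdm_links(sample_message))     # where the full text is located
print(license_uris(sample_message))  # what they are allowed to do with it
```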
What are they allowed to do with it?
This is communicated by license information that publishers give to Crossref.
Some publishers ask researchers to agree to an additional license to be able to use their content for mining. Crossref TDM allows researchers to log in with their ORCID iD and view and accept publisher licenses all in one place.
Again, this saves multiple transactions on the part of the researcher.
The publishers do not charge researchers for this, and Crossref does not charge researchers
for the service.
Step 4: The researcher uses that information to go directly to each publisher via Crossref. It is a central channel that lets them visit thousands of publishers via one request or transaction.
There they will be identified in a number of ways:
• No identification (open access content)
• IP recognition/login credentials
• IP recognition/login credentials + Crossref token (API key) from the TDM service
Step 5: The full text is then returned to the researcher, and they can use their tools to mine it.
Crossref TDM HTTP Headers
• (the rate limit ceiling per window on requests)
• (number of requests left for the current window)
• (the remaining time in UTC epoch seconds before the rate limit resets and a new window is started)
*this is a technique used by many APIs, including Twitter’s
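A well-behaved TDM tool just reads these headers off each response and waits when the window is exhausted. A minimal Python sketch - note the CR-TDM-* header names here are placeholders reconstructed from the descriptions above, not confirmed names, so check them against the support site:

```python
import time


def polite_delay(headers: dict) -> float:
    """Seconds a TDM tool should wait before its next request,
    based on the rate-limiting headers a publisher site returns."""
    remaining = int(headers.get("CR-TDM-Rate-Limit-Remaining", 1))
    reset_at = int(headers.get("CR-TDM-Rate-Limit-Reset", 0))
    if remaining > 0:
        return 0.0  # still inside our allowance for this window
    # Window exhausted: wait until the reset time (UTC epoch seconds).
    return max(0.0, reset_at - time.time())


# e.g. a limit of 100 requests per window, none left, resets in ~30 s
hdrs = {"CR-TDM-Rate-Limit": "100",
        "CR-TDM-Rate-Limit-Remaining": "0",
        "CR-TDM-Rate-Limit-Reset": str(int(time.time()) + 30)}
print(polite_delay(hdrs))  # ≈ 30 seconds
```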
Common API Summary
• Content Negotiation (Required)
• New Metadata (Required)
• Full text URIs
• License URIs
• Rate Limiting Headers (optional)
Researcher queries DOI using CN + API token
Publisher verifies API token
If token verified AND access control allows, publisher returns full text
(frequency at publisher discretion)
• Streamlines researcher access to distributed full text for TDM
• Enables machine-to-machine, automated access for recognized TDM (i.e. researchers won't be locked out of publisher sites)
• Enables article-level licensing info and an easy mechanism for supplemental T&Cs for text and data mining (publishers discussing model license via STM)
There are two additional metadata elements that publishers will need to deposit to support TDM via CrossRef. These are:
• Full Text URIs: one or more URIs that point to full text representations of the content identified by your CrossRef DOIs.
• License URIs: one or more URIs pointing at licenses that govern how the full text content can be used.
• A .csv upload option is available to populate backfiles.
• OPTIONAL: Add publisher TDM terms and conditions to the click-through service
• Modify TDM tools to make use of the API token
• Modify TDM tools to look for <lic_ref> elements
• Register with the click-through service and accept/decline licenses (if applicable)
Articles with full-text links and license information deposited: 15 million from over 200 DOI prefixes.
Cost? Free to researchers and the public
No cost for publishers for 2015
Register interest at: http://www.crossref.org/tdm/contact_form.html
Usable as is: