Prompts (not script!): Introduction. I'm a marine biologist, but I chose a project supervised by Dr Alan Boyd, which is why I'm presenting to this audience.
Quick introduction to what I’ll be covering
The project aimed to provide a useful way for someone with one journal article of interest to find related or similar articles. The usage scenario envisioned: a literature review, or researching for an essay. You could take a 'classic paper' or a piece of recommended reading from VITAL and use my app to quickly find a list of related papers. I wanted to make it as easy as possible to use, so it's accessed through the browser, no downloads, and all the user needs to enter is a title or a DOI. DOI, in case you're not familiar, is an acronym for Digital Object Identifier, a unique ID code given to journal articles. In the example on the right, the DOI is the highlighted code, usually found at the top or bottom of the first page of a journal article. And for people who aren't comfortable with these initially odd-looking strings of text, the user can also just paste in a title and the app will convert it to a DOI.
All the work was done in a language called Python. It was used to download and process the data from PubMed that made up the application's corpus, to display the results of a search to the user, and to analyse the results returned by my project for my report. Python has numerous strengths. It's easy to read and has a gentle learning curve, which means it's easy to get started with; I'd never used it before my project but now I feel comfortable using it. But the main advantages are BioPython, a collection of open source code that anyone can benefit from, and compatibility with GAE. GAE is Google App Engine, a platform for developing web applications. 'Web application' is a bit of a buzzword at the moment, but described in more useful terms, it's a way of running programs online rather than through a downloaded file. So when you check your Hotmail, Gmail or uni webmail using your browser rather than a downloaded email application like Outlook, you're using a web application.
By now, you're probably thinking that presenting related content is not unique; most if not all scientific literature aggregators do this. The advantage of my application, I'd like to think, is that it's standalone: it can be used with content from any website or from a PDF, so it's not tied to one publisher's website or to material available online. There's also a bookmarklet, a bit of code that lives inside a user's browser; it automatically recognises DOIs within a page, so with one click the user can forward these to my application.
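The DOI-recognition step the bookmarklet performs can be sketched in Python with a simple regular expression. This is an illustrative approximation, not the project's actual code: real DOIs permit a wider character set than this pattern, and a DOI at the end of a sentence may pick up trailing punctuation.

```python
import re

# Approximate pattern for DOIs in free text: a "10." prefix, a 4-9
# digit registrant code, a slash, then a run of non-space characters.
DOI_PATTERN = re.compile(r'10\.\d{4,9}/[^\s"<>]+')

def find_dois(text):
    """Return all DOI-like strings found in a chunk of page text."""
    return DOI_PATTERN.findall(text)

page = "See doi:10.1016/j.marpolbul.2009.04.004 for details"
print(find_dois(page))  # ['10.1016/j.marpolbul.2009.04.004']
```

The example DOI above is hypothetical; in the browser, a bookmarklet would run the equivalent JavaScript over the page text and send any matches to the application.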
So our application aimed to look for relationships between the text content of articles, and to find those we needed a corpus, or collection of scientific literature. Ideally the corpus would have encompassed a very wide range of journals, but that would be slow and beyond the realms of knowledge and processing power available for an honours project. We selected a marine biology journal by ranking all the marine journals available on PubMed by their impact factor and then removing journals that were too specific or published too infrequently. Impact factor is a criterion for establishing the rank or importance of a journal by observing how often other papers cite the papers within it. There's a bit of controversy around it, but it's generally accepted as a measure of a journal's significance and standing. The result was a choice of the journal Marine Pollution Bulletin.
The corpus was created by downloading data through PubMed's API, Entrez. Working with the PubMed data to assemble our corpus was probably the hardest part of this project, as it was the first big chunk that involved areas of programming I'd never worked in before. First of all, three years' worth of data from Marine Pollution Bulletin was downloaded. This gave us a huge dump of data from PubMed. It had everything in it: three years' worth of author names, places of publishing, dates of publishing, dates of being added to PubMed, PubMed IDs, DOIs, and all sorts of other information. It was too much information, and a lot of it would have served no use for our project. So, the next step was to process it. All this extra information was discarded, and we kept only the titles, abstracts and DOIs. Then, the titles and abstracts were separated word by word, and a big list of every word in the corpus was compiled. Then, for each word, the number of times it appears in each article was calculated, giving us an idea of the content and main themes of each article. This information was stored in a matrix, a big table that we'll have a look at in the next slide. Finally, the matrix had to be shrunk somehow; it was just too big a file for Google App Engine to be able to read.
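The tokenise-and-count step described above can be sketched in a few lines of Python. This is a minimal illustration with made-up article text, assuming simple lowercase alphabetic tokenisation; the project's actual processing may have differed.

```python
import re
from collections import Counter

def token_counts(title, abstract):
    """Split a title and abstract into lowercase word tokens and
    count how often each token appears in the article."""
    text = (title + " " + abstract).lower()
    tokens = re.findall(r"[a-z]+", text)  # keep alphabetic runs only
    return Counter(tokens)

# Hypothetical article text for illustration.
counts = token_counts(
    "Metal pollution in coastal sediments",
    "Sediments near the harbour showed elevated metal levels.")
print(counts["sediments"])  # 2
print(counts["metal"])      # 2
```

One such `Counter` per article, keyed by token, corresponds to one row of the matrix shown on the next slide.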
...this is a screenshot of just a really small portion of the application's dataset, opened in Excel. It might not be too easy to read, but hopefully you can get some idea of the structure and scale. Down the rows we have one article per row, with the first two columns of each row holding an article's title and DOI. The rest of the 10,821 columns contain a count for each token (word). The matrix contained details of every single word in 859 articles, so we ended up with over 9 million of these counts. You might also be able to see in this screenshot that most of these counts are 0; in fact, 99% of the values in the matrix were zero. This meant that these values could be stripped out, leaving a file that was 95% smaller and much easier to deal with. The second step was to work out the relationships between all these word counts, and for that we used something called cosine similarity. This is just a method that worked through each row and used existing code to determine which rows share the most in common, and can be deemed related.
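Both ideas on this slide, storing only the non-zero counts and comparing rows with cosine similarity, can be sketched together. Here each article is a dict of token to count with the zeros simply omitted (the "95% smaller" sparse form), and the article vectors are hypothetical toy data, not real corpus rows.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two sparse word-count vectors,
    stored as dicts of token -> count with zero entries omitted.
    Returns a score from 0 (nothing in common) to 1 (exact copy)."""
    dot = sum(count * b.get(token, 0) for token, count in a.items())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Toy articles: shared vocabulary raises the score.
art1 = {"metal": 3, "pollution": 2, "sediment": 1}
art2 = {"metal": 1, "pollution": 1, "plankton": 4}
art3 = {"plankton": 5, "bloom": 2}

print(round(cosine_similarity(art1, art2), 2))  # 0.31
print(cosine_similarity(art1, art3))            # 0.0 (no shared words)
```

Note that `art1` compared with itself scores (up to floating-point error) 1.0, matching the "exact copy" end of the scale described later in the talk.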
So, to put it all in context, here’s a quick example. This is the main page, which can be accessed by anyone right now at honourspw.appspot.com.
All the user has to do is paste in the title or DOI of a Marine Pollution Bulletin article from the last three years...Click submit...
And they get a list of sixteen related articles dealing with metal pollution at sea. For each of these results that the application returns, there's also a similarity score available. This gives a score ranging from 0 to 1, where 0 represents nothing in common and 1 represents an exact copy. We wondered what the level of match, or quality, of our results was.
So, I wrote some code that records the score of the best match for all 859 articles. We found our mean best match was 0.33. This, to me, sounded quite low. We obviously weren't expecting all our values to be in the 0.8/0.9 or above range, because that would require a very homogeneous set of data. But this score got us thinking about the quality of our results.
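The mean-best-match analysis is a short computation: for each article, take the highest similarity score against any other article, then average those maxima. A sketch on toy data (the scores below are invented, not the project's real figures):

```python
def mean_best_match(similarity_rows):
    """Given, for each article, its similarity scores against every
    other article, record the best match per article and return the
    mean of those best scores."""
    best_scores = [max(scores) for scores in similarity_rows]
    return sum(best_scores) / len(best_scores)

# Toy scores for three articles; each row excludes the article itself.
rows = [
    [0.10, 0.45, 0.20],
    [0.45, 0.05, 0.30],
    [0.20, 0.30, 0.25],
]
print(round(mean_best_match(rows), 2))  # 0.4
```

Run over all 859 articles in the real corpus, this is the calculation that produced the 0.33 figure.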
So, to get a benchmark, I compared the results our application was recommending for an article to the results that PubMed was recommending for the same article. We found a 46% similarity between the results, and for almost 20% of the titles in the corpus, 70% or more of what PubMed recommended we did too. Also, PubMed failed to return recommendations for some papers, whereas our application returned results of a quality not too far from our overall mean. I deem that to be a comparable level of quality.
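One way to express that benchmark is the percentage of PubMed's recommended articles that our application also recommended. A minimal sketch, using invented placeholder DOI strings rather than real recommendations:

```python
def overlap_percentage(ours, pubmeds):
    """Percentage of PubMed's recommended DOIs that our
    application's recommendation list also contains."""
    if not pubmeds:
        return 0.0
    shared = set(ours) & set(pubmeds)
    return 100.0 * len(shared) / len(pubmeds)

# Hypothetical recommendation lists for one article.
ours = ["10.1/a", "10.1/b", "10.1/c", "10.1/d"]
pubmed = ["10.1/b", "10.1/c", "10.1/e", "10.1/f", "10.1/g"]
print(overlap_percentage(ours, pubmed))  # 40.0
```

Averaging this figure across the corpus is the kind of calculation behind the 46% similarity reported above (the exact comparison method used in the project may have differed).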
Developing a web application for research: programming to find related PubMed articles.
1<br />Developing a web application for research: programming to find related PubMed articles.<br />Philip Wolstenholme<br />
Introduction to talk<br />Aims of my project<br />What work was done?<br />What was found?<br />How could the project be developed in the future?<br />Summary<br />Questions from the audience<br />2<br />
The Project<br />A research tool to find similar articles<br />Help explore scientific literature<br />Simple<br />Accessed online<br />Only requires article title or DOI<br />DOI: Digital Object Identifier<br />3<br />
Python<br />The programming language used for this project<br />Used to retrieve data, display results and analyse findings<br />Why Python:<br />BioPython library of code<br />Compatible with GAE<br />GAE – Google App Engine<br />Run programs online as ‘web applications’<br />4<br />
Advantages over alternatives<br />Presenting related content not unique<br />Used on PubMed, ScienceDirect, Web of Science etc<br />My application standalone<br />Works with content from any site, or from a PDF<br />Bookmarklet automatically detects DOIs from webpages<br />5<br />
Choice of corpus journal<br />Searching every journal for related items ideal, but slow<br />Selected Marine Pollution Bulletin<br />Based on high impact factor<br />Availability of articles on PubMed<br />6<br />
Working with PubMed data<br />Download<br />Three years&#8217; worth of Mar Pol Bul downloaded<br />Downloaded data opened, only useful data kept<br />Large table made of words (tokens) and their frequencies <br />Matrix turned into an easy and quick format for Python to read<br />Process<br />Shrink<br />Matrix<br />7<br />
Finding similarity<br />10,821 columns<br />859 rows of articles<br />Title and DOI<br />Token frequencies<br />8<br />
Were my results of good quality?<br />Benchmarked against PubMed<br />46% similarity between results <br />For 19% of articles similarity ≥ 70%<br />For some articles PubMed returned zero related results<br />Our app returned results scored at 0.25<br />Results of comparable quality<br />13<br />
Future work<br />14<br />Application good proof of concept<br />Limited dataset<br />One journal<br />Three years<br />Opportunities to adapt the application<br />E.g. subscription service, mobile version<br />
Summary<br />Aimed to create simple, easy to use, functional application<br />Completed application and carried out analysis of results<br />Results of a good quality<br />Aims of project achieved<br />15<br />