The Role of Community-Driven Data Curation for Enterprises



With increased utilization of data within their operational and strategic processes, enterprises need to ensure data quality and accuracy. Data curation is a process that can ensure the quality of data and its fitness for use. Traditional approaches to curation are struggling with increased data volumes and near real-time demands for curated data. In response, curation teams have turned to community crowd-sourcing and semi-automated metadata tools for assistance. This chapter provides an overview of data curation, discusses the business motivations for curating data, and investigates the role of community-based data curation, focusing on internal communities and pre-competitive data collaborations. The chapter is supported by case studies from Wikipedia, The New York Times, Thomson Reuters, the Protein Data Bank, and ChemSpider, from which best practices for both the social and technical aspects of community-driven data curation are derived.

E. Curry, A. Freitas, and S. O’Riáin, “The Role of Community-Driven Data Curation for Enterprises,” in Linking Enterprise Data, D. Wood, Ed. Boston, MA: Springer US, 2010, pp. 25-47.



  1. 1. The Role of Community-Driven Data Curation for Enterprises<br />Edward Curry, Andre Freitas, Seán O'Riain <br /><br /><br /><br />
  2. 2. Speaker Profile<br />Research Scientist at the Digital Enterprise Research Institute (DERI)<br />Leading international web science research organization<br />Researching how the web of data is changing the way businesses work and interact with information<br />Projects include studies of enterprise linked data, community-based data curation, semantic data analytics, and semantic search<br />Investigate utilization within the pharmaceutical, oil & gas, financial, advertising, media, manufacturing, health care, ICT, and automotive industries<br />Invited speaker at the 2010 MIT Sloan CIO Symposium to an audience of more than 600 CIOs<br />
  3. 3. Web of Data <br />
  4. 4. Acknowledgements<br />Collaborators Andre Freitas & SeánO'Riain<br />Insight from Thought Leaders<br />Evan Sandhaus (Semantic Technologist), Rob Larson (Vice President Product Development and Management), and Gregg Fenton (Director Emerging Platforms) from the New York Times<br />Krista Thomas (Vice President, Marketing & Communications), Tom Tague (OpenCalais initiative Lead) from Thomson Reuters<br />Antony Williams (VP of Strategic Development ) from ChemSpider<br />Helen Berman (Director), John Westbrook (Product Development) from the Protein Data Bank <br />Nick Lynch (Architect with AstraZeneca) from the Pistoia Alliance. <br />The work presented has been funded by Science Foundation Ireland under Grant No. SFI/08/CE/I1380 (Lion-2).<br />
  5. 5. Further Information<br /> The Role of Community-Driven<br /> Data Curation for Enterprises<br />Edward Curry, Andre Freitas, & Seán O'Riain<br />In David Wood (ed.), <br />Linking Enterprise Data Springer, 2010.<br />Available Free at: <br /><br />
  6. 6. Overview<br />Curation Background<br />The Business Need for Curated Data<br />What is Data Curation?<br />Data Quality and Curation<br />How to Curate Data<br />Curation Communities and Enterprise Data<br />Case Studies<br />Wikipedia, The New York Times, Thomson Reuters, ChemSpider, Protein Data Bank<br />Best Practices from Case Study Learning <br />
  7. 7. The Business Need<br /><ul><li>Knowledge workers need:
  8. 8. Access to the right information
  9. 9. Confidence in that information</li></ul>Working with incomplete, inaccurate, or wrong information can have disastrous consequences <br />
  10. 10. The Problems with Data<br />Flawed Data<br />Affects 25% of critical data in world’s top companies (Gartner)<br />Data Quality<br />Recent banking crisis (Economist Dec’09)<br />Inaccurate figures made it difficult to manage operations (investments exposure and risk)<br />“asset are defined differently in different programs”<br />“numbers did not always add up”<br />“departments do not trust each other’s figures”<br />“figures … not worth the pixels they were made of”<br />
  11. 11. What is Data Curation?<br />DigitalCuration <br />Selection, preservation, maintenance, collection, and archiving of digital assets<br />DataCuration<br />Active management of data over its life-cycle<br />Data Curators<br />Ensure data is trustworthy, discoverable, accessible, reusable, and fit for use<br />Museum cataloguers of the Internet age<br />
  12. 12. What is Data Curation?<br />Data Governance<br />Convergence of data quality, data management, business process management, and risk management<br />Data Curation is a complementary activity<br />Part of overall data governance strategy for organization <br />Data Curator = Data Steward ??<br />Overlapping terms between communities<br />
  13. 13. Data Quality and Curation<br />What is Data Quality?<br />Desirable characteristics for information resource <br />Described as a series of quality dimensions<br />Discoverability, Accessibility, Timeliness, Completeness, Interpretation, Accuracy, Consistency, Provenance & Reputation<br />Data curation can be used to improve these quality dimensions<br />
  14. 14. Data Quality and Curation<br />Discoverability & Accessibility<br />Curate to streamline search by storing and classifying in appropriate and consistent manner<br />Accuracy<br />Curate to ensure data correctly represents the “real-world” values it models<br />Consistency<br />Curate to ensure data created and maintained using standardized definitions, calculations, terms, and identifiers<br />
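The Accuracy and Consistency dimensions above lend themselves to simple automated checks. A minimal sketch in Python, assuming a hypothetical record layout (`country`, `employees`) and an illustrative controlled list of identifiers; none of these names come from the chapter:

```python
# Sketch of automating two quality dimensions: Consistency (terms must
# come from standardized identifiers) and Accuracy (values must be
# plausible for the real-world quantity they model). Field names and
# thresholds are invented for illustration.

VALID_COUNTRY_CODES = {"IE", "US", "GB", "DE"}  # standardized identifiers

def check_record(record):
    """Return a list of quality issues found in a single record."""
    issues = []
    # Consistency: terms must come from the controlled list
    if record.get("country") not in VALID_COUNTRY_CODES:
        issues.append("country: not a standardized identifier")
    # Accuracy: value must be plausible for what it models
    if not (0 <= record.get("employees", -1) <= 10_000_000):
        issues.append("employees: outside plausible range")
    return issues

def curate(records):
    """Split records into clean ones and ones flagged for a human curator."""
    clean, flagged = [], []
    for r in records:
        (clean if not check_record(r) else flagged).append(r)
    return clean, flagged
```

Flagged records would then be routed to a human curator rather than silently dropped, keeping quality control with people.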
  15. 15. Data Quality and Curation<br />Provenance & Reputation<br />Curate to track source of data and determine reputation<br />Curate to include the objectivity of the source/producer<br />Is the information unbiased, unprejudiced, and impartial?<br />Or does it come from a reputable but partisan source?<br />Other dimensions discussed in chapter<br />
  16. 16. How to Curate Data<br />Data Curation is a large field with sophisticated techniques and processes<br />Section provides a high-level overview of:<br />Should you curate data?<br />Types of Curation<br />Setting up a curation process<br />Additional detail and references available in book chapter<br />
  17. 17. Should You Curate Data?<br />Curation can have multiple motivations<br />Improving accessibility, quality, consistency,…<br />Will the data benefit from curation?<br />Identify business case<br />Determine if the potential return supports the investment<br />Not all enterprise data should be curated<br />Suits knowledge-centric data rather than transactional operations data<br />
  18. 18. Types of Data Curation<br />Multiple approaches to curate data, no single correct way<br />Who?<br />Individual Curators<br />Curation Departments<br />Community-based Curation<br />How?<br />Manual Curation<br />(Semi-)Automated<br />Sheer Curation<br />
  19. 19. Types of Data Curation – Who?<br />Individual Data Curators<br />Suitable for infrequently changing small quantity of data<br /> (<1,000 records)<br />Minimal curation effort (minutes per record)<br />
  20. 20. Types of Data Curation – Who?<br />Curation Departments<br />Curation experts working with subject matter experts to curate data within formal process<br />Can deal with large curation effort (000’s of records)<br />Limitations<br />Scalability: Can struggle with large quantities of dynamic data (>million records) <br />Availability: Post-hoc nature creates delay in curated data availability<br />
  21. 21. Types of Data Curation - Who?<br />Community-Based Data Curation<br />Decentralized approach to data curation<br />Crowd-sourcing the curation process<br />Leverages community of users to curate data <br />Wisdom of the community (crowd)<br />Can scale to millions of records<br />
  22. 22. Types of Data Curation – How?<br />Manual Curation<br />Curators directly manipulate data<br />Can tie users up with low-value add activities<br />(Semi-)Automated Curation<br />Algorithms can (semi-)automate curation activities such as data cleansing, record de-duplication and classification<br />Can be supervised or approved by human curators<br />
  23. 23. Types of Data Curation – How?<br />Sheer curation, or Curation at Source<br />Curation activities integrated in normal workflow of those creating and managing data<br />Can be as simple as vetting or “rating” the results of a curation algorithm<br />Results can be available immediately<br />Blended Approaches: Best of Both <br />Sheer curation + post-hoc curation department<br />Allows immediate access to curated data <br />Ensures quality control with expert curation<br />
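The blended approach can be sketched in a few lines of Python: an algorithm proposes candidate duplicates (semi-automated curation), and a human curator merely vets each suggestion, which is the "as simple as vetting" form of sheer curation. The similarity threshold and the `approve` callback are illustrative assumptions, not part of the chapter:

```python
# Semi-automated step: an algorithm flags likely duplicate record names;
# sheer-curation step: a human approves or rejects each suggestion.
from difflib import SequenceMatcher

def propose_duplicates(names, threshold=0.85):
    """Flag pairs of names that look like duplicates of one record."""
    candidates = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            score = SequenceMatcher(None, names[i].lower(),
                                    names[j].lower()).ratio()
            if score >= threshold:
                candidates.append((names[i], names[j], round(score, 2)))
    return candidates

def vet(candidates, approve):
    """Human vetting: keep only the suggestions the curator approves."""
    return [c for c in candidates if approve(c)]
```

In a blended deployment, vetted merges would still be reviewed post-hoc by an expert curation department.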
  24. 24. Setting up a Curation Process<br />5 steps to set up a curation process:<br />1 - Identify what data you need to curate<br />2 - Identify who will curate the data<br />3 - Define the curation workflow<br />4 - Identify appropriate data-in & data-out formats<br />5 - Identify the artifacts, tools, and processes needed to support the curation process<br />
  25. 25. Setting up a Curation Process<br />Step 1: Identify what data you need to curate<br />Newly created data and/or legacy data? <br />How is new data created? <br />Do users create the data, or is it imported from an external source? <br />How frequently is new data created/updated? <br />What quantity of data is created?<br />How much legacy data exists?<br />Is it stored within a single source, or scattered across multiple sources?<br />
  26. 26. Setting up a Curation Process <br />Step 2: Identify who will curate the data<br />Individuals, depts, groups, institutions, community<br />Step 3: Define the curation workflow<br />What curation activities are required?<br />How will curation activities be carried out?<br />Step 4: Identify suitable data-in & -out formats<br />What is the best format for the data?<br />Right format for receiving and publishing data is critical<br />Support multiple formats to maximize participation<br />
  27. 27. Setting up a Curation Process<br />Step 5: Identify the artifacts, tools, and processes needed to support curation<br />Workflow support/Community collaboration platforms<br />Algorithms can (semi-)automate curation activities<br />Major factors that influence approach:<br />Quantity of data to be curated (new and legacy data)<br />Amount of effort required to curate the data<br />Frequency of data change / data dynamics<br />Availability of experts<br />
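As a toy illustration only, the factors above can be combined with the rough record-count thresholds quoted on the earlier "Types of Data Curation" slides (<1,000 records per individual curator, thousands for a department, millions for a community) into a simple routing helper. The exact cut-offs and the two-factor signature are assumptions:

```python
# Illustrative router from the chapter's major factors (data quantity,
# availability of experts) to the "who curates" models described earlier.
# Real decisions would also weigh curation effort and data dynamics.

def suggest_curation_model(record_count, experts_available):
    """Suggest a curation model from rough record-count thresholds."""
    if record_count < 1_000 and experts_available:
        return "individual curator"
    if record_count < 1_000_000 and experts_available:
        return "curation department"
    # Very large or expert-starved datasets: crowd-source the effort.
    return "community-based curation"
```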
  28. 28. Overview<br />Curation Background<br />The Business Need for Curated Data<br />What is Data Curation?<br />Data Quality and Curation<br />How to Curate Data<br />Curation Communities and Enterprise Data<br />Case Studies<br />Wikipedia, The New York Times, Thomson Reuters, ChemSpider, Protein Data Bank<br />Best Practices from Case Study Learning <br />
  29. 29. Community–based Curation<br />Two community approaches:<br />Internal corporate communities<br />External pre-competitive communities<br />To determine the right model consider:<br />What is the purpose of the community? <br />Will the resulting curated dataset be publicly available? Or restricted?<br />
  30. 30. Community–based Curation<br />Internal Communities<br />Taps potential of workforce to assist data curation<br />Curate competitive enterprise data that will remain internal to the company<br />May not always be the case e.g. product technical support and marketing data <br />Can work in conjunction with curation dept.<br />Community governance typically follows the organization’s internal governance model<br />
  31. 31. Pre-competitive Communities<br />Pre-competitive collaboration<br />Well-established technique for open innovation <br />Notable examples<br />
  32. 32. What is Pre-Competitive Data?<br />Two Types of Enterprise Data<br />Proprietary data for competitive advantage<br />Common data with no competitive advantage<br />What is pre-competitive data?<br />Has little potential for differentiation<br />Can be shared without conferring commercial advantage to competitor<br />Common non-competitive data<br />Needs to be maintained and curated<br />Companies duplicate effort in-house incurring the full cost<br />
  33. 33. Pre-competitive Communities<br />External pre-competitive communities<br />Share costs, risks, and technical challenges<br />Common curation tasks carried out once in the public domain rather than multiple times in each company<br />Reduces cost required to provide and maintain data<br />Can increase the quantity, quality, and access<br />Focus turns to value-add competitive activity<br />Move “competitive onus” from novel data to novel algorithms, shifting emphasis from “proprietary data” to a “proprietary understanding of data”<br />e.g. Protein Data Bank and Pistoia Alliance in Pharma<br />
  34. 34. External Pre-competitive Communities<br />Two popular community models are<br />Organization consortium<br />Open community<br />Organization consortium<br />Operates like a private democratic club<br />Usually closed community, members invited based on skill-set to contribute<br />Output data - public or limited to members<br />Consortiums follow a democratic process<br />Member voting rights may reflect level of investment<br />Larger players may be leaders of the consortium<br />
  35. 35. External Pre-competitive Communities<br />Open community<br />Everyone can participate<br />“Founder(s)” defines desired curation activity<br />Seek public support to contribute to curation activities<br />Wikipedia, Linux, and Apache are good examples of large open communities<br />
  36. 36. Overview<br />Curation Background<br />The Business Need for Curated Data<br />What is Data Curation?<br />Data Quality and Curation<br />How to Curate Data<br />Curation Communities and Enterprise Data<br />Case Studies<br />Wikipedia, The New York Times, Thomson Reuters, ChemSpider, Protein Data Bank<br />Best Practices from Case Study Learning <br />
  37. 37. Wikipedia<br />The World’s Largest Open Digital Curation Community<br />
  38. 38. Wikipedia<br />Open-source encyclopedia<br />Collaboratively built by large community<br />Challenges existing models of content creation<br />More than 19,000,000 articles<br />270+ languages, 3,200,000+ articles in English<br />More than 157,000 active contributors<br />Studies show accuracy and stylistic formality are equivalent to resources developed in expert-based closed communities<br />i.e. Columbia and Britannica encyclopedias <br />
  39. 39. Wikipedia<br />MediaWiki <br />Wiki platform behind Wikipedia<br />Widespread and popular technology<br />Wikis can also support data curation<br />Lowers entry barriers for collaborative data curation<br />Widely used inside organizations<br />Intellipedia covering 16 U.S. Intelligence agencies<br />Wiki Proteins, curated protein data for knowledge discovery and annotation<br />
  40. 40. Wikipedia<br />Decentralized environment supports creation of high quality information with:<br />Social organization<br />Artifacts, tools & processes for cooperative work coordination<br />Wikipedia collaboration dynamics highlight good practices<br />
  41. 41. Wikipedia – Social Organization<br />Any user can edit its contents<br />Without prior registration<br />Does not lead to a chaotic scenario<br />In practice highly scalable approach for high quality content creation on the Web<br />Relies on simple but highly effective way to coordinate its curation process<br />Curation is activity of Wikipedia admins<br />Responsibility for information quality standards<br />
  42. 42. Wikipedia – Social Organization<br />Four main types of accounts:<br />Anonymous users<br />Identified by their associated IP address<br />Registered users<br />Users with an account in the Wikipedia website<br />Administrators/Editors<br />Registered users with additional permissions in the system<br />Access to curation tools<br />Bots <br />Programs that perform repetitive tasks<br />
  43. 43. Wikipedia – Social Organization<br />
  44. 44. Wikipedia – Social Organization<br />Incentives<br />Improvement of one’s reputation<br />Sense of efficacy<br />Contributing effectively to a meaningful project <br />Over time focus of editors typically changes<br />From curators of a few articles in specific topics <br />To more global curation perspective<br />Enforcing quality assessment of Wikipedia as a whole<br />
  45. 45. Wikipedia – Artifacts, Tools & Processes <br />Wiki Article Editor (Tool)<br />WYSIWYG or markup text editor<br />Talk Pages (Tool)<br />Public arena for discussions around Wikipedia resources<br />Watchlists (Tool)<br />Helps curators to actively monitor the integrity and quality of resources they contribute<br />Permission Mechanisms (Tool)<br />Users with administrator status can perform critical actions such as remove pages and grant administrative permissions to new users<br />
  46. 46. Wikipedia – Artifacts, Tools & Processes <br />Automated Editing (Tool)<br />Bots are automated or semi-automated tools that perform repetitive tasks over content<br />Page History and Restore (Tool)<br />Historical trail of changes to a Wikipedia Resource<br />Guidelines, Policies & Templates (Artifact)<br />Defines curation guidelines for editors to assess article quality <br />Dispute Resolution (Process)<br />Dispute mechanism between editors over the article contents<br />Article Editing, Deletion, Merging, Redirection, Transwiking, Archival (Process)<br />Describe the curation actions over Wikipedia resources<br />
  47. 47. Wikipedia - DBPedia<br />DBPedia Knowledge base<br />Inherits massive volume of curated Wikipedia data<br />Built using infobox properties<br />Indirectly uses wiki as data curation platform<br />DBPedia provides direct access to data<br />3.4 million entities and 1 billion RDF triples<br />Comprehensive data infrastructure<br /> Concept URIs, definitions, and basic types<br />
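DBPedia exposes this curated data through a public SPARQL endpoint. The sketch below pairs an illustrative query (the `dbo:` prefix and resource URI follow DBpedia's usual conventions, but the query itself is my example, not from the slides) with a small helper for the standard SPARQL JSON results format; to stay self-contained it parses a canned response rather than calling the live endpoint:

```python
import json

# Illustrative SPARQL query for the English abstract of one resource.
QUERY = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/Data_curation> dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
}
"""

def extract_bindings(response_text, var):
    """Pull one variable's values out of a SPARQL JSON result set."""
    data = json.loads(response_text)
    return [b[var]["value"] for b in data["results"]["bindings"]]

# Canned response in the SPARQL 1.1 JSON results format, standing in
# for what the live endpoint would return.
sample = json.dumps({"results": {"bindings": [
    {"abstract": {"value": "Active management of data over its life-cycle."}}
]}})
```

Against the real endpoint, `response_text` would simply be the HTTP response body for `QUERY` with an `application/sparql-results+json` accept header.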
  49. 49. Wikipedia - DBPedia<br />
  50. 50. The New York Times<br />100 Years of Expert Data Curation<br />
  51. 51. The New York Times<br />Largest metropolitan and third largest newspaper in the United States<br /><ul><li>
  52. 52. Most popular newspaper website in US
  53. 53. 100-year-old curated repository defining its participation in the emerging Web of Data</li></ul>The New York Times<br />Data curation dates back to 1913 <br />Publisher/owner Adolph S. Ochs decided to provide a set of additions to the newspaper<br />New York Times Index<br />Organized catalog of article titles and summaries <br />Containing issue, date and column of article<br />Categorized by subject and names<br />Introduced on a quarterly, then annual, basis <br />Transitory content of newspaper became important source of searchable historical data<br />Often used to settle historical debates<br />
  54. 54. The New York Times<br /> Index Department was created in 1913<br />Curation and cataloguing of NYT resources <br />Since 1851 NYT had low quality index for internal use<br />Developed a comprehensive catalog using a controlled vocabulary<br />Covering subjects, personal names, organizations, geographic locations and titles of creative works (books, movies, etc), linked to articles and their summaries<br />Current Index Dept. has ~15 people<br />
  55. 55. The New York Times<br />Challenges with consistently and accurately classifying news articles over time<br />Keywords expressing subjects may show some variance due to cultural or legal constraints<br />Identities of some entities, such as organizations and places, changed over time<br />Controlled vocabulary grew to hundreds of thousands of categories<br />Adding complexity to classification process<br />
  56. 56. The New York Times<br />Increased importance of Web drove need to improve categorization of online content<br />Curation carried out by Index Department<br />Library-time (days to weeks)<br />Print edition can handle next-day index <br />Not suitable for real-time online publishing <br /> needed a same-day index<br />
  57. 57. The New York Times<br />Introduced two-stage curation process<br />Editorial staff performed best-effort semi-automated sheer curation at point of online pub.<br />Several hundred journalists<br />Index Department follow up with long-term accurate classification and archiving<br />Benefits:<br />Non-expert journalist curators provide instant accessibility to online users<br />Index Department provides long-term high-quality curation in a “trust but verify” approach<br />
  58. 58. NYT Curation Workflow <br />Curation starts with article getting out of the newsroom<br />
  59. 59. NYT Curation Workflow <br />Member of editorial staff submits article to web-based, rule-based information extraction system (SAS Teragram) <br />
  60. 60. NYT Curation Workflow <br />Teragram uses linguistic extraction rules based on subset of Index Dept’s controlled vocab.<br />
  61. 61. NYT Curation Workflow <br />Teragram suggests tags based on the Index vocabulary that can potentially describe the content of article<br />
  62. 62. NYT Curation Workflow <br />Editorial staff member selects terms that best describe the contents and inserts new tags if necessary <br />
  63. 63. NYT Curation Workflow <br />Reviewed by the taxonomy managers with feedback to editorial staff on classification process<br />
  64. 64. NYT Curation Workflow <br />Article is published online at<br />
  65. 65. NYT Curation Workflow <br />At later stage article receives second-level curation by the Index Dept., adding additional Index tags and a summary<br />
  66. 66. NYT Curation Workflow <br />Article is submitted to NYT Index<br />
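The first, sheer-curation stage of this workflow can be sketched as follows. The keyword matcher merely stands in for SAS Teragram, whose actual linguistic extraction rules are proprietary, and the controlled vocabulary and cue words are invented for illustration:

```python
# Stage 1a: a rule-based extractor suggests tags from a (toy) subset of
# the Index Department's controlled vocabulary.
# Stage 1b: an editorial staff member keeps the fitting suggestions and
# may insert new tags, giving online readers immediate access.

CONTROLLED_VOCAB = {
    "finance": ["bank", "market", "investment"],
    "politics": ["election", "senate", "congress"],
}

def suggest_tags(article_text):
    """Suggest controlled-vocabulary tags that may describe the article."""
    text = article_text.lower()
    return sorted(tag for tag, cues in CONTROLLED_VOCAB.items()
                  if any(cue in text for cue in cues))

def editorial_review(suggested, selected, new_tags=()):
    """Editor keeps only fitting suggestions and may add new tags."""
    return sorted(set(t for t in selected if t in suggested) | set(new_tags))
```

The Index Department's second-level curation would later revisit these tags in the "trust but verify" manner described above.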
  67. 67. The New York Times<br />Early adopter of Linked Open Data (June ‘09)<br />
  68. 68. The New York Times<br />Linked Open Data @<br />Subset of 10,000 tags from index vocabulary<br />Dataset of people, organizations & locations<br />Complemented by search services to consume data about articles, movies, best sellers, Congress votes, real estate,…<br />Benefits<br />Improves traffic by third-party data usage<br />Lowers development cost of new applications for different verticals inside the website<br />E.g. movies, travel, sports, books<br />
  69. 69. Thomson Reuters<br />Data Curation: A Core Business Competency<br />
  70. 70. Thomson Reuters<br />Thomson Reuters is an information provider<br />Created by acquisition of Reuters by Thomson<br />Over 50,000 employees<br />Commercial presence in 100+ countries<br />Provides specialist curated information and information-based services<br />Selects most relevant information for customers<br />Classifying, enriching and distributing it in a way that can be readily consumed<br />
  71. 71. Thomson Reuters<br />Curation process<br />Working over approximately 1,000 data sources<br />Automatic tools provide first level triage and classification<br />Refined by intervention of human curators<br />Curator is a domain specialist<br />Employs thousands of curators<br />
  72. 72. Thomson Reuters<br />OneCalais platform<br />Reduces workload for classification of content<br />Natural Language Processing on unstructured text<br />Automatically derives tags for analyzed content<br />Enrichment with machine readable structured data<br />Provides description of specific entities (places, people, events, facts) present in the text<br />Open Calais (free version of OneCalais) <br />20,000+ users, >4 million transactions per day<br />CNET, CBS Interactive, The Huffington Post, The Powerhouse Museum of Science and Design,…<br />
  73. 73. ChemSpider<br />Structure-centric chemical community <br />Over 300 data sources with 25 million records<br />Provided by chemical vendors, government databases, private laboratories and individuals<br />Pharma realizing benefits of open data<br />Heavily leveraged by pharmaceutical companies as pre-competitive resources for experimental and clinical trial investigation <br />GlaxoSmithKline made its proprietary malaria dataset of 13,500 compounds available<br />
  74. 74. Protein Data Bank<br />Dedicated to improving understanding of the function of biological systems through the 3-D structure of macromolecules <br />Started in 1971 with 3 core members<br />Originally offered 7 crystal structures <br />Grown to 63,000 structures<br />Over 300 million dataset downloads<br />Expanded beyond curated data download service to include complex molecular visualization, search, and analysis capabilities<br />
  75. 75. Overview<br />Curation Background<br />The Business Need for Curated Data<br />What is Data Curation?<br />Data Quality and Curation<br />How to Curate Data<br />Curation Communities and Enterprise Data<br />Case Studies<br />Wikipedia, The New York Times, Thomson Reuters, ChemSpider, Protein Data Bank<br />Best Practices from Case Study Learning <br />
  76. 76. Best Practices from Case Study Learning<br />Social Best Practices<br />Participation<br />Engagement<br />Incentives<br />Community Governance Models<br />Technical Best Practices<br />Data Representation<br />Human- and Automated Curation<br />Track Provenance<br />
  77. 77. Social Best Practices<br />Participation<br />Stakeholder involvement for data producers and consumers must occur early in the project<br />Provides insight into basic questions of what they want to do, for whom, and what it will provide<br />White papers are effective means to present these ideas, and solicit opinion from community<br />Can be used to establish informal ‘social contract’ for community<br />
  78. 78. Social Best Practices<br />Engagement<br />Outreach activities essential for promotion and feedback<br />Typical consumers-to-contributors ratios of less than 5%<br />Social communication and networking forums are useful<br />Majority of community may not communicate using these media<br />Communication by email still remains important<br />
  79. 79. Social Best Practices<br />Incentives<br />Sheer curation needs line of sight from the data curation activity to tangible exploitation benefits<br />Lack of awareness of value proposition will slow emergence of collaborative contributions<br />Recognizing contributing curators through a formal feedback mechanism<br />Reinforces contribution culture<br />Directly increases output quality<br />
  80. 80. Social Best Practices<br />Community Governance Models<br />Effective governance structure is vital to ensure success of community <br />Internal communities and consortium perform well when they leverage traditional corporate and democratic governance models <br />Open communities need to engage the community within the governance process<br />Follow less orthodox approaches using meritocratic and autocratic principles<br />
  81. 81. Technical Best Practices<br />Data Representation<br />Must be robust and standardized to encourage community usage and tools development<br />Support for legacy data formats and ability to translate data forward to support new technology and standards<br />Human & Automated Curation<br />Balancing the two will improve data quality<br />Automated curation should always defer to, and never override, human curation edits<br />Automate validation of data deposition and entry<br />Target community at focused curation tasks<br />
  82. 82. Technical Best Practices<br />Track Provenance<br />All curation activities should be recorded and maintained as part of the data provenance effort<br />Especially where human curators are involved <br />Users can have different perspectives of provenance <br />A scientist may need to evaluate the fine-grained experiment description behind the data<br />For a business analyst the ’brand’ of data provider can be sufficient for determining quality<br />
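A minimal sketch of such a provenance trail, with field names invented for illustration: every curation action is appended to the record's history before the data itself changes, so the fine-grained trail (for the scientist) and the provider "brand" (for the business analyst) both remain available:

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceEntry:
    actor: str    # human curator, or the name of an automated tool
    action: str   # e.g. "edit", "merge", "classification"
    detail: str

@dataclass
class CuratedRecord:
    value: str
    source: str                      # the "brand" of the data provider
    history: list = field(default_factory=list)

    def apply(self, actor, action, detail, new_value=None):
        """Record the curation action before changing the data."""
        self.history.append(ProvenanceEntry(actor, action, detail))
        if new_value is not None:
            self.value = new_value
```

Because automated curation should defer to humans, a fuller version would also mark whether each entry came from a bot, so human edits can take precedence.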
  83. 83. Conclusions<br />Data curation can ensure the quality of data and its fitness for use<br />Pre-competitive data can be shared without conferring a commercial advantage<br />Pre-competitive data communities<br />Common curation tasks carried out once in public domain<br />Reduces cost, increases quantity and quality<br />