10% Wrong, 90% Done

2011 Evergreen International Conference presentation on our MARC de-duplication project.



  1. Rogan Hamby, South Carolina State Library, rhamby@statelibrary.sc.gov
     Shasta Brewer, York County Library, Shasta.brewer@yclibrary.net
     10% Wrong, 90% Done
     A practical approach to bibliographic de-duplication.
  2. Made Up Words
     When I say ‘deduping’ I mean ‘MARC record de-duplication.’
  3. The Melting Pot
     We were ten library systems with no standard source of MARC records.
     We came from five ILSes.
     Each had its own needs and workflow.
     The MARC records reflected that.
  4. Over 2,000,000 Records
     Ten library systems joined in three waves.
  5. Early Effort
     During each wave we ran a deduping script.
     The script functioned as designed; however, its matches were too few for our needs.
  6. 100% Accurate
     It had a very high standard for creating matches.
     No bad merges were created.
  7. Service Issue
     When a patron searched the catalog, it was messy.
  8. This caused problems with searching and placing holds.
  9. It’s All About the TCNs
     Why was this happening?
     Because identical items were divided among multiple similar bib records with distinct fingerprints, since the records came from multiple sources.
  10. Time for the Cleaning Gloves
      In March 2009 we began discussing the issue with ESI. The low merge rate was due to the very precise and conservative fingerprinting of the deduping process.
      In true open source spirit, we decided to roll our own solution and start cleaning up the database.
  11. Fingerprinting
      Fingerprinting is identifying a unique MARC record by its properties.
  12. Because fingerprinting identifies unique records, it was of limited use: our records came from many sources.
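To make the limitation concrete, here is a minimal sketch of strict fingerprinting. This is not Evergreen's actual algorithm (which examines more of the record); the function and field choices are ours, purely for illustration of why records for the same book from different sources rarely share a fingerprint.

```python
import hashlib
import re

def fingerprint(title: str, author: str, pub_year: str) -> str:
    """Hash a tightly normalized set of key fields into a fingerprint.

    Illustrative only: it shows why strict fingerprints keep
    near-duplicate records from different sources apart.
    """
    def squash(s: str) -> str:
        # Lowercase and drop everything except letters and digits.
        return re.sub(r"[^a-z0-9]", "", s.lower())

    key = "|".join(squash(part) for part in (title, author, pub_year))
    return hashlib.sha1(key.encode("utf-8")).hexdigest()

# Identical content fingerprints identically...
same = fingerprint("The Hobbit", "Tolkien, J. R. R.", "1937")
# ...but one source's added subtitle yields a different fingerprint,
# so the two bibs for the same book never merge.
other = fingerprint("The Hobbit, or, There and Back Again",
                    "Tolkien, J. R. R.", "1937")
```

Punctuation and case differences are absorbed, but any substantive difference in a fingerprinted field splits the records for good.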
  13. A Disclaimer
      The initial deduping, as designed, was very accurate. It emphasized avoiding imprecise matches.
      We decided that we had different priorities and were willing to make compromises.
  14. MARC Crimes Unit
      We decided to go past fingerprinting and build profiles based on broad MARC attributes.
  15. Project Goals
      Improve searching
      Faster hold filling
  16. The Team
      Shasta Brewer – York County
      Lynn Floyd – Anderson County
      Rogan Hamby – Florence County / State Library
  17. The Mess
      2,048,936 bib records
  18. On Changes
      A lot changed during development, from early discussion to implementation.
      We weighed decisions heavily on the side of needing a significant and practical impact on the catalog.
      "I watch the ripples change their size / But never leave the stream" – David Bowie, "Changes"

      Modeling the Data
      Determining match points determines the scope of the record set you create mergers from.
      Because the records lacked uniformity, matching became extremely important. Adding a single extra limiting match point caused large percentage drops in possible matches, reducing the effectiveness of the project.
  19. Tilting at Windmills
      We refused to believe that the highest priority for deduping should be avoiding bad matches.
      The highest priority is creating the maximum positive impact on the catalog.
      Many said we were a bit mad. Fortunately, we took it as a compliment.
  20. We ran extensive reports to model the bib data.
      A risky and unconventional model was proposed.
      Although we kept trying other models, the large number of matches from the risky model made it too compelling to discard.
  21. Why not just title and ISBN?
      We did socialize this idea. And everyone did think we were nuts.
  22. Method to the Madness
      Title and ISBN are the most commonly populated fields for identifying unique items.
      Records with ISBNs and titles accounted for over 60% of the bib records in the system. The remainder included SUDOCs, ISSNs, pre-ISBN items and some that were just plain garbage.
  23. Geronimo
      We decided to do it!
  24. What Was Left Behind
      Records without a valid ISBN.
      Records without any ISBN (serials, etc.).
      Pre-cats, stub records, etc.
      Pure junk records.
      And other things whose matching would be so extraordinarily convoluted that the risk exceeded even our pain threshold for a first run.
  25. Based on modeling, we estimated a conservative ~300,000 merges, or about 25% of our ISBN records.
  26. The Wisdom of Crowds
      Conventional wisdom said that MARC could not be generalized because of unique information in the records.
      We were taking risks and were very aware of it, but the need to make a large impact on our database drove us to disregard friendly warnings.
  27. An Imperfect World
      We knew that we would miss things that could potentially be merged.
      We knew that we would create some bad merges.
      10% wrong to get it 90% done.
  28. Next Step … Normalization
      With matching decided, we needed to normalize the data. This was done to copies of the production MARC records, which were used to build the match lists.
      Normalization is needed because of variability in how data was entered. It lets us get the most possible matches from the data.
  29. Normalization Details
      We normalized case, punctuation, numbers, non-Roman characters, trailing and leading spaces, some GMDs entered as parts of titles, redacted fields, 10-digit ISBNs to 13-digit, and lots, lots more.
      This was not done to the permanent records but to copies used to build the lists.
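Two of the normalizations listed above can be sketched concretely: stripping case, punctuation, and bracketed GMDs from titles, and converting ISBN-10 to ISBN-13. The ISBN-13 check-digit rule (prefix 978, alternating weights 1 and 3) is standard; the function names and exact title rules here are illustrative, not the project's code.

```python
import re

def isbn10_to_13(isbn10: str) -> str:
    """Convert a 10-digit ISBN to its 13-digit form (978 prefix)."""
    digits = re.sub(r"[^0-9Xx]", "", isbn10)
    core = "978" + digits[:9]            # drop the old check digit
    # ISBN-13 check digit: alternating weights of 1 and 3.
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(core))
    return core + str((10 - total % 10) % 10)

def normalize_title(title: str) -> str:
    """Normalize case, punctuation, whitespace, and bracketed GMDs."""
    t = title.lower()
    t = re.sub(r"\[[^\]]*\]", "", t)     # drop GMDs like "[sound recording]"
    t = re.sub(r"[^a-z0-9 ]", " ", t)
    return re.sub(r"\s+", " ", t).strip()
```

Run against the working copies, these collapse variants such as `The Hobbit [sound recording] /` and `THE HOBBIT` onto one string, and put 10- and 13-digit ISBNs for the same edition onto one number.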
  30. Weighting
      Finally, we had to weight the matched records to determine which one to keep.
      To do this, each bib record was given a score profiling its quality.
  31. The Weighting Criteria
      We looked at the presence, length, and number of entries in the 003, 02X, 24X, 300, 260$b, 100, 010, 500s, 440, 490, 830s, 7XX, 9XX and 59X fields to manipulate, add to, subtract from, bludgeon, poke and eventually determine a 24-digit number that would profile the quality of a bib record.
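The idea of a fixed-width numeric profile can be sketched as follows. This toy version scores only tag counts over a short, assumed tag list; the real criteria also weighed field presence and length across enough fields to fill 24 digits.

```python
# A few of the tags the project weighted; the real profile covered
# many more (003, 02X, 24X, 300, 260$b, 100, 010, 5XX, 7XX, 9XX...).
PROFILE_TAGS = ["245", "100", "260", "300", "500", "650", "700", "856"]

def quality_score(record: dict) -> str:
    """Profile a bib as a fixed-width digit string, one digit per tag.

    `record` maps a MARC tag to its list of field values.  Each digit
    is the count of that tag capped at 9, most significant tag first,
    so equal-length scores compare correctly as plain strings.
    """
    return "".join(str(min(len(record.get(tag, [])), 9))
                   for tag in PROFILE_TAGS)
```

A fuller record thus outscores a stub: `{"245": [...], "100": [...]}` beats `{"245": [...]}` on the second digit.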
  32. The Merging
      Once the weighting is done, the highest-scored record in each group is made the master record, the copies and holds from the others are moved to it, and those bibs are marked deleted.
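The merge step above can be sketched in miniature: group by match key, elect the highest-scored record as master, transplant the losers' holdings, and mark the losers deleted. The actual batch job was SQL against the Evergreen database; the record fields (`id`, `key`, `score`, `copies`) here are made up for illustration.

```python
from collections import defaultdict

def merge_groups(records: list[dict]) -> tuple[list[dict], list[int]]:
    """Group bibs by match key and merge each group onto one master.

    Returns (masters, deleted_ids).  Holds would move with the copies
    in the real job; this sketch only moves copies.
    """
    groups = defaultdict(list)
    for rec in records:
        groups[rec["key"]].append(rec)

    masters, deleted_ids = [], []
    for group in groups.values():
        group.sort(key=lambda r: r["score"], reverse=True)
        master, losers = group[0], group[1:]
        for loser in losers:
            master["copies"].extend(loser["copies"])   # transplant holdings
            loser["copies"] = []
            deleted_ids.append(loser["id"])            # bib marked deleted
        masters.append(master)
    return masters, deleted_ids
```

Singleton groups pass through untouched, so only genuine duplicates are collapsed.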
  33. Checking the Weight
      We ran a report of items that would group under our criteria and had staff do sample manual checks to see if they could live with the dominant record.
      We collectively checked ~1,000 merges.
  34. 90% of the time we felt the highest quality record was selected as the dominant. More than 9% of the time an acceptable record was selected.
      In a very few instances human errors in the record led the system to create a bad profile, but never an actually bad dominant record.
  35. The Coding
      We contracted with Equinox to develop the code and run it against our test environment (and eventually production).
      Galen Charlton was our primary contact. In addition to coding the algorithm, he provided input on additional criteria to include in the weighting and normalization.
  36. Test Server
      Once the run finished on the test server, we took our new batches of records, broke them into 50,000-record chunks, gave those chunks to member libraries, and had them do random samples for five days.
  37. Fixed As We Went
      Non-standard cataloging (ongoing).
      13-digit ISBNs normalizing as 10-digit ISBNs.
      Identified many parts of item sets as issues.
      Shared-title publications with different formats.
      The order of the ISBNs.
      Kits.
  38. In Conclusion
      We don’t know how many bad matches were formed.
      The total discovered after a year is fewer than 200.
      We were able to purge 326,098 bib records, or about 27% of our ISBN-based collection.
  39. Evaluation
      The catalog is visibly cleaner.
      The cost per bib record was 1.5 cents.
      Absolutely successful!
  40. Future
      We want to continue to refine it (e.g., 020 subfield z).
      There are still problems to clean up in the catalog – some manually and some by automation.
      Raising standards.
  41. New libraries that have joined SC LENDS use our deduping algorithm, not the old one.
      It has continued to be successful.
  42. Open Sourcing the Solution
      We are releasing the algorithm under the Creative Commons Attribution Non-Commercial license.
      We are releasing the SQL code under the GPL.
  43. Questions?
