Lessons Learned Archiving the National Web of New Zealand
Kris Carpenter Negulescu
Gordon Paynter

Transcript

  • 1. Lessons Learned Archiving the National Web of New Zealand
    Kris Carpenter Negulescu, The Internet Archive
    Gordon Paynter, The National Library of New Zealand
    Future Perfect 2012, 27 March 2012, Wellington, New Zealand
  • 2. Why collect the web?
  • 3. Legal deposit
    • The National Library of New Zealand Act (2003)
    • "Legal deposit" now includes "Internet documents"
    • Available from http://legislation.govt.nz/
  • 4. Two web archiving programmes
    • Selective: harvesting of specific websites or parts of websites
    • Domain: harvesting of the entire "New Zealand Internet"
    http://topics.breitbart.com/fishing+pole/ http://www.trimarinegroup.com/operations/fleet.php
  • 5. Selective Web Archiving
  • 6. Selective web archiving
  • 7. Selective web archiving
  • 8. Selective web archiving
  • 9. Selective web archiving: [systems architecture diagram showing submission tools (Web Curator Tool), collection management systems (Voyager, Tapuhi, IAMS), the NDHA (Rosetta) preservation system, digitisation and sound, and access tools (National Library Beta, Timeframes, Papers Past, and the Rosetta access modules including ArcViewer) on shared technology infrastructure]
  • 10. Selective web archiving
  • 11. Selective web archiving
  • 12. Selective web archiving
    From January 2007: 14,182 harvests
    • 83% Endorsed and Archived
    • 17% Rejected or Aborted
    • Using the Web Curator Tool
    From 2000-2006: 441 harvests
    • Some of multiple websites
    • Using a desktop website capture tool
  • 13. New Zealand Web Harvests: October 2008 and April 2010
  • 14. New Zealand Web Harvests
    • Scope
    • Seeds
    • Robots policy
    • Notification and communications
    • How are we going to accomplish this?
    • When are we going to stop?
  • 15. New Zealand Web Harvests
    2008: 17 days in October; 106,184,620 URLs; 4.6 terabytes; 397,000 hosts; seeds are known hosts
    2010: 24 days in April-May; 131,770,485 URLs; 6.9 terabytes; 559,000 hosts; seeds include .nz, .com, .org and .net zone files
  • 16. New Zealand Web Harvests
    • Harvest analysis:
      – What exactly do we have?
      – What's a good harvest frequency?
    • Preservation analysis:
      – ARC or WARC format?
      – Should they be stored in the National Digital Heritage Archive?
    • Public access analysis:
      – Ethical issues
      – Privacy issues
      – Legal and evidentiary value
      – Copyright
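One practical input to the "what exactly do we have?" and "ARC or WARC?" questions is a pass over the harvest files themselves. A minimal sketch, assuming the open-source warcio library and a hypothetical filename, that tallies response records by MIME type (warcio reads both ARC and WARC containers):

```python
# Sketch: tally HTTP response records by MIME type in a harvest file.
# Assumes the warcio library; the filename below is hypothetical.
from collections import Counter
from warcio.archiveiterator import ArchiveIterator

def summarise(path):
    counts = Counter()
    with open(path, 'rb') as stream:
        # arc2warc=True lets older ARC files be read as if they were WARC records
        for record in ArchiveIterator(stream, arc2warc=True):
            if record.rec_type != 'response':
                continue
            ctype = 'unknown'
            if record.http_headers:
                ctype = (record.http_headers.get_header('Content-Type') or 'unknown').split(';')[0]
            counts[ctype] += 1
    return counts

if __name__ == '__main__':
    for mime, n in summarise('nz-web-harvest-2010.warc.gz').most_common(10):
        print(f'{n:10d}  {mime}')
```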
  • 17. Challenges and Lessons
  • 18. Scope of a National Domain
    • How is a national web domain defined?
      – Hosts in the top-level domain, or domains operated by registrars in the country?
      – Hosts known to be hosted on IP addresses within geographic boundaries?
      – Content and advertising embedded in websites published to the above?
      – Curator-selected websites, destinations, or services considered to be within the bounds of a country's legislative or cultural heritage?
  • 19. Scope of a National Domain
    • New Zealand Web Harvest scope:
      – Hosts in the .nz top-level domain
      – Hosts from .com, .org and .net that are physically in New Zealand
      – A list of hosts known to be within the scope of the legislation
      – Images, video clips, and other files that are embedded in web pages on the hosts above
    • New Zealand Web Harvest seeds:
      – 2008: Gathered from the Library's and the Internet Archive's past crawls
      – 2010: Zone files for .nz, .com, .org and .net (plus 2008 hosts)
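A minimal sketch of the scope rules listed above, not the crawlers' actual configuration: .nz hosts are in scope, .com/.org/.net hosts only when a lookup places them in New Zealand, plus a curated list of hosts known to fall under the legislation. The function names and the example list entry are hypothetical.

```python
# Sketch of the New Zealand Web Harvest scope rules described on this slide.
# CURATED_HOSTS and hosted_in_nz() are hypothetical placeholders.
from urllib.parse import urlsplit

CURATED_HOSTS = {'example-heritage-site.com'}   # hosts known to be within the legislation

def in_scope(url, hosted_in_nz):
    """hosted_in_nz: callable(host) -> bool, e.g. backed by a geo-IP database."""
    host = (urlsplit(url).hostname or '').lower()
    if host.endswith('.nz'):
        return True                       # all of the .nz top-level domain
    if host in CURATED_HOSTS:
        return True                       # curated allow-list
    if host.endswith(('.com', '.org', '.net')):
        return hosted_in_nz(host)         # physically hosted in New Zealand
    return False
```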
  • 20. Shape of harvest
    • How broad or deep should the harvest be?
      – Usually as broad as possible (survey of all resources at the highest levels)
      – Usually deep enough to collect primary resources of interest and minimize unwanted, unrelated junk prevalent in any top-level domain
  • 21. Shape of harvest
    • New Zealand Web Harvest:
      – Up to 10,000 URLs from every host
      – But up to 50,000 for .govt.nz and .ac.nz
    • On average, about 250 URLs (12 megabytes) per host
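In Heritrix these limits would normally be expressed as per-host quotas in the crawl configuration; the sketch below only illustrates the budgets quoted above (10,000 URLs per host, 50,000 for .govt.nz and .ac.nz).

```python
# Illustration of per-host URL budgets, not the production crawler's settings.
DEFAULT_BUDGET = 10_000
DEEP_BUDGET = 50_000                     # .govt.nz and .ac.nz get deeper crawls

def host_budget(host):
    return DEEP_BUDGET if host.endswith(('.govt.nz', '.ac.nz')) else DEFAULT_BUDGET

class HostQuota:
    """Count URLs fetched per host and stop once its budget is spent."""
    def __init__(self):
        self.counts = {}

    def allow(self, host):
        n = self.counts.get(host, 0)
        if n >= host_budget(host):
            return False
        self.counts[host] = n + 1
        return True
```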
  • 22. Harvest Policies & Practices
    • Robots policy
      – Respect robots.txt
      – Ignore it for embeds and inline content on unrestricted pages
    • Notification
      – Notifications may be sent to site owners/publishers prior to harvest
    • Politeness settings
      – Usually limit load to that of a visitor navigating the site in a browser
    • Trade-off of harvest duration vs scale of resources
      – Need to keep the data capture period brief
  • 23. Harvest Policies & Practices
    • New Zealand Web Harvest robots policy:
      – Selective: Ignore robots.txt (usually)
      – 2008: Ignore robots.txt (unless asked otherwise)
      – 2010: Mostly honour robots.txt (following consultation)
    • Four to six weeks of notification through many channels
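A minimal sketch, using only the Python standard library, of a "mostly honour robots.txt" policy with the embed exception described on the previous slide; the user-agent token is hypothetical, and this is an illustration rather than the production crawlers' behaviour.

```python
# Sketch of a "mostly honour robots.txt" policy; USER_AGENT is hypothetical.
from urllib import robotparser
from urllib.parse import urlsplit

USER_AGENT = 'NLNZWebHarvest'            # hypothetical crawler token
_parsers = {}                            # one robots.txt parser per site

def allowed(url, is_embed=False):
    parts = urlsplit(url)
    site = f'{parts.scheme}://{parts.netloc}'
    if site not in _parsers:
        rp = robotparser.RobotFileParser(site + '/robots.txt')
        rp.read()                        # fetch and parse the site's robots.txt
        _parsers[site] = rp
    if is_embed:
        # Images, CSS and other inline content needed to render an otherwise
        # unrestricted page are fetched regardless of robots.txt.
        return True
    return _parsers[site].can_fetch(USER_AGENT, url)
```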
  • 24. Harvest Infrastructure
    • Dedicated crawlers to capture data
      – Service nodes for reporting and access; shared infrastructure for automated QA, data mining and analysis
    • Hardware:
      – Quad-core processors (2.6 GHz)
      – 4-8 GB RAM per core
      – 8+ terabytes of local disk (four 2-terabyte SATA drives)
    • Software:
      – Ubuntu Linux
      – Java(TM) SE Runtime Environment (latest build)
      – Heritrix 3 or v1.14.x
    • Network:
      – Bandwidth is limited to ~300 Mbit/sec per project
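A back-of-the-envelope check using the figures from these slides: at the ~300 Mbit/s cap, raw transfer of the 6.9-terabyte 2010 harvest would take roughly two days, so the 24-day crawl duration is driven mainly by per-host politeness rather than bandwidth. Illustrative arithmetic only:

```python
# Rough transfer-time estimate for the 2010 harvest at the stated bandwidth cap.
harvest_bytes = 6.9e12        # 6.9 terabytes (2010 New Zealand Web Harvest)
cap_bits_per_s = 300e6        # ~300 Mbit/s per project

seconds = harvest_bytes * 8 / cap_bits_per_s
print(f'{seconds / 86400:.1f} days of continuous transfer at the cap')  # ~2.1 days
```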
  • 25. Harvest Infrastructure
    The New Zealand Web Harvests were commissioned from the Internet Archive.
    In-house:
    • Possibly cheaper
    • Large staff requirement
    • Hardware requirements
    • Network requirements
    • Risks: what don't we know?
    Commissioned:
    • Higher outright cost
    • Contractor provides expertise: Heritrix, crawler traps, scope, etc.
    • Contractor provides staff, computers, bandwidth
    Unexpected issue: international bandwidth
  • 26. Challenges of All Web Archiving
    • Not all data can be crawled
    • Can publishers "opt in" or "opt out"?
    • Data may be lost no matter how carefully it is managed
    • Harvested data is hard to make accessible
      – Intuitive interfaces for discovering and navigating resources
      – With robust APIs
      – All done in a compelling and sustainable way
    • Research and experimentation are essential to keep pace with publisher innovation
  • 27. Challenges of Domain Archiving
    • Harvests are at best samples
      – Time & expense: can't get everything
      – Rate of change: don't get every version
      – Rate of collection: issues of 'time skew'
    • Choice of user agents/protocols
      – If you crawl as the Mozilla agent your content may not redisplay in IE
      – Which mobile agents should you crawl as, if any?
    • Site structure & publishing models
      – Some parts of sites are not "archive-friendly" (JavaScript, AJAX, Flash, etc.)
      – Sites change both their technical structure and policy quickly and often (YouTube, Facebook, etc.)
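The user-agent point can be seen directly: many servers vary their response on the User-Agent header, so the archived copy reflects whichever agent the crawler presented. A small illustration with a hypothetical URL and example agent strings:

```python
# Fetch the same page with different User-Agent strings and compare sizes.
# The URL and agent strings are illustrative only.
import urllib.request

def fetch_length(url, user_agent):
    req = urllib.request.Request(url, headers={'User-Agent': user_agent})
    with urllib.request.urlopen(req) as resp:
        return len(resp.read())

url = 'http://www.example.co.nz/'        # hypothetical
for ua in ('Mozilla/5.0', 'Mozilla/4.0 (compatible; MSIE 8.0)', 'Mobile Safari'):
    print(ua, fetch_length(url, ua))
```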
  • 28. Challenges of Domain Archiving
    • 70+% of the world's digital content is now generated by individuals – not all of it can be crawled (UK Telegraph, IDC annual survey, released May 2010)
    • Social networks and collaborative/semi-private spaces
    • Immersive worlds
  • 29. Challenges of Domain Archiving
    • Manageable costs/sustainable approaches
      – Access to power & other critical operational resources
      – Sufficient processing capacity for collection, analysis, discovery, & dissemination of resources
      – Bandwidth
    • Recruitment and retention of staff/engineering expertise; effective ongoing training
  • 30. Challenges of Domain Archiving
    When do you stop crawling?
    • The internet is infinitely large!
    • Indicators that suggest diminishing returns have set in:
      – A relatively small number of remaining hosts have a lot of depth
      – More HTML than images appearing in the crawl log
      – Higher incidence of crawler traps and content farms
    • At this point we expect:
      – We will capture proportionally more junk
      – Website owners will complain that we're over-crawling
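One of the diminishing-returns indicators above (more HTML than images in the crawl log) can be monitored mechanically. A sketch that assumes the standard Heritrix crawl.log layout, where the MIME type is the seventh whitespace-separated field; treat it as an illustration rather than an operational tool.

```python
# Track the HTML-to-image ratio over the most recent crawl.log entries.
# Assumes Heritrix's crawl.log format (MIME type in the 7th field).
from collections import deque

def html_to_image_ratio(log_path, window=100_000):
    recent = deque(maxlen=window)        # sliding window of recent media types
    with open(log_path, errors='replace') as log:
        for line in log:
            fields = line.split()
            if len(fields) < 7:
                continue
            mime = fields[6].lower()
            if 'html' in mime:
                recent.append('html')
            elif mime.startswith('image/'):
                recent.append('image')
    images = recent.count('image') or 1  # avoid division by zero
    return recent.count('html') / images
```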
  • 31. Challenges of Domain Archiving
    How do you assess the quality of a harvest?
    • Quantitative measures of quality, breadth and depth
    • Qualitative measures, including characterization of resources and how they fit with other collections
    • Usually harvest for weeks, depending upon the desired scope, and then run a "patch crawl"
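Two of the quantitative measures above, breadth and depth, can also be read off the crawl log. A sketch under the same crawl.log assumption as the previous example, with a caller-supplied seed host list; purely illustrative.

```python
# Breadth: fraction of seed hosts with at least one successful fetch.
# Depth: median URLs captured per host. Assumes Heritrix crawl.log layout.
from collections import Counter
from statistics import median
from urllib.parse import urlsplit

def breadth_and_depth(log_path, seed_hosts):
    per_host = Counter()
    with open(log_path, errors='replace') as log:
        for line in log:
            fields = line.split()
            # fields[1] is the fetch status; skip errors and Heritrix's negative codes
            if len(fields) < 4 or not fields[1].isdigit() or int(fields[1]) >= 400:
                continue
            host = urlsplit(fields[3]).hostname
            if host:
                per_host[host] += 1
    breadth = sum(1 for h in seed_hosts if per_host[h] > 0) / len(seed_hosts)
    depth = median(per_host.values())
    return breadth, depth
```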
  • 32. Challenges of Domain Archiving
    • Being responsive during a crawl
    • New Zealand Web Harvest 2008:
      – 37 individual contacts during harvest
      – 2 major mailing list discussions
      – Blogs & Twitter
      – Newspapers ("Library harvest costs website dear") and radio
    • A communications strategy and plan are essential
      – The biggest difficulty is responding promptly outside working hours
  • 33. Final thoughts? What have we learned that is particularly relevant to New Zealand?
  • 34. Final thoughts
    • New Zealand faces the same challenges as our peers overseas
    • Most of the world favours dedicated web archives
      – But we're preserving web material alongside other formats
    • When will it be economical to harvest from New Zealand?
  • 35. Final thoughts: how should national domain crawls work?
    • Institutions crawl within their national domains from their own national infrastructure
    • Institutions share tools, metadata, knowledge and best practices
      – And, to the extent possible, data!
      – Collaboration will always achieve greater results than acting alone!
    • Over the long term, shared goals and resources can help mitigate economic and other barriers to the collection, mining, and access of New Zealand's national digital heritage
