Big Tools for Big Data: Analytics and Management at Web Scale. IIPC General Assembly, Singapore, May 2010. Lewis Crawford, Web Archiving Programme Technical Lead, British Library.
Big Data: “the Petabyte age”. The Internet Archive stores about 2 petabytes of data and grows at 20 TB a month; the Large Hadron Collider generates 15 PB a year. At the BL, the Selective Web Archive is growing at 200 GB a month, and a conservative estimate for a Domain Crawl is 100 TB.
The problem of big data: we can process data very quickly, but we can read and write it only very slowly. In 1990, a 1 GB disk read at 4.4 MB/s, so the whole disk could be read in about 5 minutes; in 2010, a 1 TB disk reads at 100 MB/s, so reading the whole disk takes about 2.5 hours.
The solution: parallel reads. 1 HDD = 100 MB/s; 1,000 HDDs = 100 GB/s.
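A quick back-of-the-envelope check of those read-time figures, written as a small Python sketch (illustrative only, not part of the original deck):

```python
# Rough check of the sequential vs. parallel read times quoted above.
GB = 1024 ** 3
TB = 1024 ** 4

def read_time_seconds(size_bytes, throughput_mb_per_s, disks=1):
    """Time to read `size_bytes`, split evenly across `disks` drives."""
    bytes_per_second = throughput_mb_per_s * 1024 ** 2 * disks
    return size_bytes / bytes_per_second

print(read_time_seconds(1 * GB, 4.4) / 60)          # 1990: ~3.9 minutes for a 1 GB disk
print(read_time_seconds(1 * TB, 100) / 3600)        # 2010: ~2.9 hours for a 1 TB disk
print(read_time_seconds(1 * TB, 100, disks=1000))   # ~10 seconds across 1,000 disks in parallel
```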
Hadoop timeline: 2002, the Nutch crawler (Doug Cutting); 2003, the Google File System paper (http://labs.google.com/papers/gfs.html); 2004, the MapReduce paper (http://labs.google.com/papers/mapreduce.html); 2005, Nutch moves to the MapReduce model with NDFS; 2006, NDFS and the MapReduce implementation are split out of Nutch to become Hadoop, under the Lucene project; 2008, Hadoop becomes a top-level project at Apache; 2009, 17 clusters with 24,000 nodes at Yahoo!, which sorted 1 TB in 62 seconds and 100 TB in 173 minutes.
Hadoop users. Yahoo!: more than 100,000 CPUs in over 25,000 computers running Hadoop; the biggest cluster has 4,000 nodes (2×4-core boxes with 4×1 TB disks and 16 GB RAM), used to support research for ad systems and web search, and for scaling tests to support development of Hadoop on larger clusters. Baidu, the leading Chinese-language search engine: Hadoop is used to analyse search logs and to do mining work on the web page database, handling about 3,000 TB per week on clusters of 10 to 500 nodes. Facebook: Hadoop stores copies of internal log and dimension data sources and serves as a source for reporting, analytics and machine learning, on two major clusters, an 1,100-machine cluster with 8,800 cores and about 12 PB of raw storage, and a 300-machine cluster with 2,400 cores and about 3 PB of raw storage, each (commodity) node having 8 cores and 12 TB of storage. Source: http://wiki.apache.org/hadoop/PoweredBy
NutchWAX!
[email_address]
IBM Digital Democracy for the BBC
BigSheets!
BigSheets and the open source stack: Hadoop (top-level Apache project; distributed processing and file system), Pig (Yahoo!-contributed open source; SQL-‘like’ programming language), and BigSheets (IBM Research licence; Insight Engine; spreadsheet paradigm).
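Underneath Pig and BigSheets, Hadoop's programming model is MapReduce. A minimal word-count mapper and reducer in the Hadoop Streaming style, sketched in Python purely for illustration (not code from the BL project):

```python
#!/usr/bin/env python
"""Minimal word count in the Hadoop Streaming style: the mapper emits
"word<TAB>1" pairs, Hadoop sorts them by key, and the reducer sums the
counts for each word. In a real job these would be two separate scripts
passed to the streaming jar via -mapper and -reducer."""
import sys

def mapper():
    for line in sys.stdin:
        for word in line.strip().lower().split():
            print(f"{word}\t1")

def reducer():
    current, count = None, 0
    for line in sys.stdin:
        word, _, n = line.rstrip("\n").partition("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```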
Analytics, the meta tag example: extract metadata tags from all HTML files in the 2005 General Election Collection; extract ‘keywords’ from the meta tags; record all HTML pages into three separate ‘bags’ where the keywords contained (1) Tory or Conservative, (2) Labour, (3) Liberal, Lib Dem or Liberal Democrat; analyse single words and pairs of words in each of those ‘bags’ of data; and generate tag clouds from the 50 most common words.
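A sketch of the bagging step, as a streaming-style mapper that pulls the ‘keywords’ meta tag out of a page and routes its URL into one or more party bags; the input layout, the regex-based parsing and the bag labels are assumptions for illustration, not the actual BigSheets/Pig job:

```python
import re
import sys

# Party "bags" keyed by the keyword fragments listed on the slide.
BAGS = {
    "conservative": ["tory", "conservative"],
    "labour": ["labour"],
    "libdem": ["liberal", "lib dem", "liberal democrat"],
}

# Naive regex for <meta name="keywords" content="..."> (a real job would use an HTML parser).
META_KEYWORDS = re.compile(
    r'<meta[^>]+name=["\']keywords["\'][^>]+content=["\']([^"\']*)["\']',
    re.IGNORECASE,
)

def bags_for(html):
    """Return the set of party bags whose terms appear in the page's keywords meta tag."""
    hits = set()
    for content in META_KEYWORDS.findall(html):
        keywords = content.lower()
        for bag, terms in BAGS.items():
            if any(term in keywords for term in terms):
                hits.add(bag)
    return hits

if __name__ == "__main__":
    # Assumed input: one record per line, "<url>\t<html>".
    for line in sys.stdin:
        url, _, html = line.rstrip("\n").partition("\t")
        for bag in bags_for(html):
            print(f"{bag}\t{url}")
```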
Data management
robots.txt example
Robots.txt continued…
Data management: a high-level management tool with a spreadsheet paradigm, a clean user interface, and a straightforward programming model (UDFs). Use cases: ARC to WARC migration; information package (SIP) generation; CDX indexes and Lucene indexes; JHOVE object validation and verification; object format migration.
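To illustrate the ‘CDX indexes’ use case, a minimal sketch that walks a WARC file and emits one simplified CDX-style line per response record. It uses the modern warcio library for brevity; that library choice and the reduced field set are assumptions, not the tooling described in the talk:

```python
import sys
from warcio.archiveiterator import ArchiveIterator  # assumed library, for illustration only

def cdx_lines(path):
    """Yield simplified CDX-style lines: url, timestamp, mime type, status, source file."""
    with open(path, "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != "response":
                continue
            url = record.rec_headers.get_header("WARC-Target-URI")
            date = record.rec_headers.get_header("WARC-Date")
            mime = record.http_headers.get_header("Content-Type", "-")
            status = record.http_headers.get_statuscode()
            yield f"{url} {date} {mime} {status} {path}"

if __name__ == "__main__":
    for line in cdx_lines(sys.argv[1]):
        print(line)
```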
Slash page crawl, election sites extraction: take the slash page (home page) of known UK domains, with the data discarded after processing; generate a list of election terms (political parties, MORI election tags); extract text from the HTML pages using an HTML tag-density algorithm; identify all web pages that contain these terms; and identify sites that contain two or more of the terms.
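A crude approximation of the tag-density idea and the term matching, sketched in Python; the threshold, the term list and the line-based density heuristic are all illustrative assumptions, not the algorithm actually used at the BL:

```python
import re

# Illustrative subset of election terms; the real list came from party names and MORI election tags.
ELECTION_TERMS = ["labour", "conservative", "liberal democrat", "general election"]

TAG = re.compile(r"<[^>]+>")

def dense_text(html, threshold=0.7):
    """Keep lines of the page whose text-to-markup ratio exceeds the threshold."""
    kept = []
    for line in html.splitlines():
        stripped = TAG.sub("", line).strip()
        if not stripped:
            continue
        ratio = len(stripped) / max(len(line.strip()), 1)
        if ratio >= threshold:
            kept.append(stripped)
    return " ".join(kept).lower()

def term_hits(html, terms=ELECTION_TERMS):
    """Return the election terms that occur in the tag-dense text of a page."""
    text = dense_text(html)
    return {term for term in terms if term in text}

def is_candidate(html):
    """A slash page is a candidate election site if it matches two or more terms."""
    return len(term_hits(html)) >= 2
```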
Slash Page Data
Text Extracted Using Tag Density Algorithm
Election Key Terms
Results
Pie Chart Visualization
Seeds With 2 Or More Terms
Manual Verification
Other potential digital material: digital books, datasets, 19th-century newspapers.
Back to analytics and the next-generation access tools: automatic classification (WebDewey, LOC Subject Headings); machine learning; faceted Lucene indexes for advanced search functionality; engaging directly with the Higher Education community; an access tool with a researcher focus? BL 3-year Research Behaviour Study.
Thank you! [email_address] http://uk.linkedin.com/in/lewiscrawford LinkedIn's entry from the Hadoop PoweredBy list: 3×30 Nehalem-based node grids, with 2×4 cores, 16 GB RAM and 8×1 TB storage using ZFS in a JBOD configuration, running Hadoop and Pig for discovering People You May Know and other fun facts.


Editor's Notes

  • #2 Introduction; the problem of big data; Hadoop, MapReduce, HDFS; BigSheets! Pig; the open source stack; analytics, the meta tag example; data management (ARC to WARC, JHOVE, format migration, FLV to MPEG-4?); simple examples, Iraq Inquiry video link extraction; slash page crawl, election sites extraction; newspapers; back to analytics, the next-generation access tool, targeted at researchers (Cooliris, network/swirl, spreadsheet, Seadragon).
  • #3 Straw poll of how much archive material there is in the room: 3 petabytes.
  • #4 Add diagram?
  • #6 Add PIG
  • #11 IBM insight engine
  • #18 New York Times example.
  • #28 Seadragon notes: review the current access tool; search by title, URL, or full text; browse by subject or special collection; more websites, with search results already in the millions; provide tools to mine the data (a renewable resource?).