Big Data




Steve Watt
Technology Strategy @ HP | swatt@hp.com

1
Agenda

Hardware / Software / Data

• Situational Applications
• Big Data

2
Situational Applications




      3
– eaghra (Flickr)
Web 2.0 Era Topic Map

[Diagram] Produce/Process: Data Explosion, Inexpensive Storage, LAMP, Social Platforms,
Publishing Platforms, Web 2.0, SOA (Enterprise), Mashups → Situational Applications

4
5
Big Data




      6
– blmiers2 (Flickr)
The data just keeps growing…

1,024 GIGABYTES = 1 TERABYTE
1,024 TERABYTES = 1 PETABYTE
1,024 PETABYTES = 1 EXABYTE

1 PETABYTE = 13.3 years of HD video
20 PETABYTES = amount of data processed by Google daily
5 EXABYTES = all words ever spoken by humanity
Mobile
App Economy for Devices: an app for this, an app for that
Set Top Boxes, Tablets, etc. Multiple sensors in your pocket

Sensor Web
An instrumented and monitored world, producing real-time data

The Fractured Web
Facebook, Twitter, LinkedIn, Google, Netflix, New York Times, eBay, Pandora, PayPal

Service Economy
A service for this, a service for that

Opportunity
The Web 2.0 data exhaust of historical and real-time data, on an API foundation

Web as a Platform
Web 2.0 - Connecting People (API Foundation)
Web 1.0 - Connecting Machines (Infrastructure)

8
Data Deluge! But filter patterns can help…

9
Kakadu (Flickr)
Filtering with Search




 10
Filtering Socially
 11
Filtering Visually




 12
But filter patterns force you down a pre-processed path
M.V. Jantzen (Flickr)
What if you could ask your own questions?

14
– wowwzers (Flickr)
And go from discovering Something about Everything…

– MrB-MMX (Flickr)
To discovering Everything about Something ?

16
How do we do this?
Let's examine a few techniques for
Gathering,
Storing,
Processing &
Delivering Data @ Scale

17
Gathering Data

Data Marketplaces




 18
19
20
Gathering Data

Apache Nutch
(Web Crawler)




 21
Storing, Reading and Processing - Apache Hadoop

• Cluster technology with a single master that scales out across multiple slaves
• It consists of two runtimes:
  • The Hadoop Distributed File System (HDFS)
  • Map/Reduce
• As data is copied onto HDFS, it is split into blocks and replicated to other machines to provide redundancy
• A self-contained job (workload) is written in Map/Reduce and submitted to the Hadoop master, which in turn distributes its tasks across the slaves in the cluster
• Tasks run against data on the local disks of the machines they are sent to, ensuring data locality
• Node (slave) failures are handled automatically: Hadoop may execute or re-execute a task on any node in the cluster

Want to know more?
“Hadoop – The Definitive Guide (2nd Edition)”

22
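The slide stops at the architecture, so here is a minimal, generic word-count job sketched against the Hadoop 2.x+ Java MapReduce API; it is not code from the talk, just the smallest illustration of the two runtimes working together. The mapper emits (word, 1) pairs from whichever HDFS blocks happen to be local to it, and the reducer sums the counts per word while the master schedules and, on failure, re-runs tasks.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      // Emit (word, 1) for every token in this line of the local input split.
      for (String token : value.toString().split("\\s+")) {
        if (!token.isEmpty()) {
          word.set(token);
          context.write(word, ONE);
        }
      }
    }
  }

  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get(); // add up the 1s emitted by the mappers for this word
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenMapper.class);
    job.setCombinerClass(SumReducer.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. an HDFS directory of text
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Packaged into a jar, a job like this is submitted from the command line, for example: hadoop jar wordcount.jar WordCount /input /output.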
Delivering Data @ Scale

•    Structured Data
•    Low Latency & Random Access
•    Column Stores (Apache HBase or Apache Cassandra)
     •   faster seeks
     •   better compression
     •   simpler scale out
     •   De-normalized – Data is written as it is intended to be queried




Want to know more?
“HBase – The Definitive Guide” & “Cassandra High Performance”

23
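To make the de-normalized point concrete, here is a small sketch against the HBase Java client API (the Connection/Table interfaces available in HBase 1.x and later). The table name, column family and row-key scheme are invented for this example; the idea is that the row key already encodes how the data will be queried, so a single random-access Get answers the question with low latency.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class InvestmentStore {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // picks up hbase-site.xml from the classpath
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("investments"))) {

      // De-normalized: the row key is written the way it will be queried (sector, year, company).
      byte[] rowKey = Bytes.toBytes("consumer-web|2010|ExampleCo");

      Put put = new Put(rowKey);
      put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("city"), Bytes.toBytes("San Francisco"));
      put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("amount"), Bytes.toBytes("12000000"));
      table.put(put);

      // Low-latency random access: one Get by row key, no join or scan required.
      Result result = table.get(new Get(rowKey));
      System.out.println(Bytes.toString(
          result.getValue(Bytes.toBytes("d"), Bytes.toBytes("city"))));
    }
  }
}

The "investments" table and its "d" column family would have to exist already, created via the HBase shell or the Admin API.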
Storing, Processing & Delivering : Hadoop + NoSQL

              Gather            Read/Transfor                  Low-
                                m                              latency       Application
        Web Data
                        Nutch                                                Query
                        Crawl
                                                                                     Serve
                      Copy

                                        Apache
                                        Hadoop
 Log Files
                   Flume
                   Connector              HDFS                                 NoSQL
                                                                              Repository
                                                               NoSQL
                   SQOOP                                       Connector/A
                   Connector                                   PI

 Relational
 Data
                                -Clean and Filter Data
 (JDBC)
                                - Transform and Enrich Data
               MySQL
                                - Often multiple Hadoop jobs
   24
Some things to keep in mind…



     25
– Kanaka Menehune (Flickr)
Some things to keep in mind…

• Processing arbitrary types of data (unstructured, semi-structured, structured) requires normalizing data with many different kinds of readers.
  Hadoop is really great at this!
• However, readers won’t really help you process truly unstructured data such as prose. For that you’re going to have to get handy with Natural Language Processing. But this is really hard.
  Consider using parsing services & APIs like Open Calais (see the sketch below).

Want to know more?
“Programming Pig” (O’REILLY)

26
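If the content is prose, the practical shortcut is to hand it to an entity-extraction service and work with the structured response. The sketch below shows only the shape of such a call using Java's built-in HttpClient (Java 11+); the endpoint URL, header names and API key are placeholders rather than the real Open Calais contract, so check the service's documentation for the actual request format.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EntityExtractionSketch {
  public static void main(String[] args) throws Exception {
    String prose = "Steve Watt spoke about big data opportunities at Tech4Africa.";

    // Placeholder endpoint and headers - NOT the real Open Calais API.
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://entity-extraction.example.com/extract"))
        .header("Content-Type", "text/plain")
        .header("X-Api-Key", "YOUR_API_KEY")
        .POST(HttpRequest.BodyPublishers.ofString(prose))
        .build();

    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());

    // A service of this kind typically returns JSON listing the people, companies and
    // places it found, which can then be joined back onto the structured data.
    System.out.println(response.body());
  }
}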
Open Calais (Gnosis)




27
Statistical real-time decision making

• Capture historical information
• Use Machine Learning to build decision-making models (such as Classification, Clustering & Recommendation)
• Mesh real-time events (such as sensor data) against the models to make automated decisions

Want to know more?
“Mahout in Action”

28
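“Mahout in Action” covers the real algorithms; the toy sketch below only illustrates the pattern the slide describes, using invented sensor readings. The offline step builds a model from captured historical information (here just a per-label mean, a stand-in for whatever classification, clustering or recommendation model you would actually train), and the online step scores each incoming real-time event against that model to drive an automated decision.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RealTimeScoringSketch {

  // Offline step: build a trivial "model" - the mean reading seen for each historical label.
  static Map<String, Double> train(List<Double> readings, List<String> labels) {
    Map<String, double[]> sums = new HashMap<>(); // label -> {sum, count}
    for (int i = 0; i < readings.size(); i++) {
      double[] acc = sums.computeIfAbsent(labels.get(i), k -> new double[2]);
      acc[0] += readings.get(i);
      acc[1] += 1;
    }
    Map<String, Double> centroids = new HashMap<>();
    sums.forEach((label, acc) -> centroids.put(label, acc[0] / acc[1]));
    return centroids;
  }

  // Online step: score a real-time event against the model (closest centroid wins).
  static String classify(Map<String, Double> model, double reading) {
    String best = null;
    double bestDistance = Double.MAX_VALUE;
    for (Map.Entry<String, Double> entry : model.entrySet()) {
      double distance = Math.abs(reading - entry.getValue());
      if (distance < bestDistance) {
        bestDistance = distance;
        best = entry.getKey();
      }
    }
    return best;
  }

  public static void main(String[] args) {
    // Invented historical sensor readings and the outcomes observed for them.
    Map<String, Double> model = train(
        List.of(18.0, 21.0, 38.0, 41.0),
        List.of("normal", "normal", "overheating", "overheating"));

    // An incoming real-time event is meshed against the model to make an automated decision.
    double liveReading = 39.5;
    System.out.println("decision for " + liveReading + ": " + classify(model, liveReading));
  }
}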
29
Pascal Terjan (Flickr)
30
31
32
33
Making the data STRUCTURED

• Retrieving HTML
• Prelim filtering on URL
• Populate a Company POJO, then write it out tab-delimited (\t)

34
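A minimal sketch of that flow in plain Java. The URL, the filter and the Company field values are invented for illustration; the actual extraction in the talk ran over Nutch crawl output and parsed real markup rather than hard-coding values.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class CompanyExtractor {

  // Simple POJO holding the fields to keep from each page.
  static class Company {
    String name;
    String city;
    String state;
    String toTabDelimited() { return name + "\t" + city + "\t" + state; }
  }

  public static void main(String[] args) throws Exception {
    String pageUrl = args.length > 0 ? args[0] : "http://example.com/companies/acme"; // placeholder

    // Prelim filtering on URL: only process pages under a /companies/ path.
    if (!pageUrl.contains("/companies/")) {
      return;
    }

    // Retrieving HTML for the page.
    StringBuilder html = new StringBuilder();
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(new URL(pageUrl).openStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        html.append(line).append('\n');
      }
    }

    // Populate the Company POJO; real code would parse the markup in html
    // (for example with an HTML parser) instead of hard-coding values.
    Company company = new Company();
    company.name = "Acme";
    company.city = "Austin";
    company.state = "TX";

    // Write the record out tab-delimited so downstream Pig/Hadoop jobs can load it.
    System.out.println(company.toTabDelimited());
  }
}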
Aargh! My viz tool requires zip codes to plot geospatially!

35
Apache Pig Script to Join on City to get Zip Code and Write the Results to Vertica

ZipCodes = LOAD 'demo/zipcodes.txt' USING PigStorage('\t')
    AS (State:chararray, City:chararray, ZipCode:int);

CrunchBase = LOAD 'demo/crunchbase.txt' USING PigStorage('\t')
    AS (Company:chararray, City:chararray, State:chararray, Sector:chararray, Round:chararray,
        Month:int, Year:int, Investor:chararray, Amount:int);

Joined = JOIN CrunchBase BY (City, State), ZipCodes BY (City, State);

CrunchBaseZip = FOREACH Joined GENERATE CrunchBase::Company AS Company, CrunchBase::City AS City,
    CrunchBase::State AS State, Sector, Round, Month, Year, Investor, Amount, ZipCode;

STORE CrunchBaseZip INTO '{CrunchBaseZip(Company varchar(40), City varchar(40), State varchar(40), Sector varchar(40), Round varchar(40), Month int, Year int, Investor varchar(40), Amount int, ZipCode int)}' USING com.vertica.pig.VerticaStorer('VerticaServer','OSCON','5433','dbadmin','');
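Assuming the Vertica Pig connector jar is available on the Pig classpath (registered with REGISTER or via PIG_CLASSPATH), a script like this runs with the standard Pig client, for example: pig -x mapreduce crunchbase_zip.pig (the script file name here is made up).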
Total Tech Investments By Year
Investment Funding By Sector
Total Investments By Zip Code for all Sectors

• $7.3 Billion in San Francisco
• $2.9 Billion in Mountain View
• $1.2 Billion in Boston
• $1.7 Billion in Austin

39
Total Investments By Zip Code for Consumer Web

• $1.7 Billion in San Francisco
• $600 Million in Seattle
• $1.2 Billion in Chicago

40
Total Investments By Zip Code for BioTech

• $1.3 Billion in Cambridge
• $528 Million in Dallas
• $1.1 Billion in San Diego

41
Questions?


42

Editor's Notes

  • #2 What is Big Data? -- “The challenges, solutions and opportunities around the storage, processing and delivery of data at scale.” Tag cloud created from a week of Tech4Africa tweets – an example of trend analysis, which is a popular Big Data analytics pattern. Goals are to explain the importance and the opportunity and to tell you how to do it. A Hadoop/NoSQL deep dive is not covered.
  • #3 As hardware becomes increasingly commoditized, the margin & differentiation moved to software; as software becomes increasingly commoditized, the margin & differentiation is moving to data. 2000 - Cloud is an IT sourcing alternative (virtualization extends into cloud). Explosion of unstructured data. Mobile. “Let’s create a context in which to think…” Focused on 3 major tipping points in the evolution of the technology. Mention that this is a very web-centric view, contrasted with Barry Devlin’s enterprise view. Assumes networking falls under Hardware and Cloud is at the intersection of Software and Data. Why should you care? Tipping Point 1: Situational Applications. Tipping Point 2: Big Data. Tipping Point 3: Reasoning.
  • #5 Web 2.0 (information explosion; now many channels - turning consumers into producers (Shirky); tipping point: web standards allow rapid application development; advent of situational applications; folksonomies; social). SOA (functionality exposed through open interfaces and open standards; great strides in modularity and re-use whilst reducing complexities around system integration; still need to be a developer to create applications using these service interfaces (WSDL, SOAP, way too complex!). Enter mashups…). Mashups (place a façade on the service and you have the final step in the evolution of services and service-based applications; now anyone can build applications, i.e. non-programmers. We’ve taken the entire SOA library and exposed it to non-programmers. What do I mean? Check out this YouTunes app…). First example where we saw arbitrary data/content re-purposed in ways the original authors never intended – e.g. Craigslist/Gumtree homes for sale scraped and placed on a Google map, mashed up with crime statistics. Whole greater than the sum of its parts -> new kinds of information! BUT limitations around how much arbitrary data is being scraped and turned into info – usually no pre-processing and just what can be rendered on a single page. Demo.
  • #6 http://www.housingmaps.com/
  • #7 “Every 2 days we create as much data as we did from the dawn of humanity until 2003.” We’ve hit the Petabyte & Exabyte age. What does that mean? Let's look (next slide).
  • #8 Mention enterprise growth over time, mobile/sensor data, Web 2.0 data exhaust, social networks. Advances in analytics – keep your data around for deeper business insights and to avoid enterprise amnesia.
  • #9 How about we summarize a few of the key trends in the Web as we know it today… This diagram shows some of the main trends of what Web 3.0 is about… Netflix accounts for 29.7% of US traffic. Mention Web 2.0 Summit “Points of Control”. Having more data leads to better context, which leads to deeper understanding/insight or new discoveries. Refer to Reid Hoffman’s views on what Web 3.0 is.
  • #11 Pre-processed though, not flexible, you can’t ask specific questions that have not been pre-processed
  • #12 Mention folksonomies in Web 2.0 with searching Delicious Bookmarks. Mention Chilean Earthquake Crisis Video using Twitter to do Crisis Mapping.
  • #13 Talk about Visualizations and InfoGraphics – manual and a lot of work
  • #14 They are only part of the solution & don’t allow you to ask your own questions
  • #16 This is the real promise of Big Data
  • #17 These are not all the problems around Big Data. These are the bigger problems around deriving new information out of web data. There are other issues as well, like inconsistency, skew, etc.
  • #18 Give a Nutch example
  • #19 Specifically call out the color coding reasoning for Map/Reduce and HDFS as a single distributed service
  • #20 Give examples of how one might use Open Calais or Entity Extraction libraries