Tour of Big Data
Raymond Yu
Socal Code Camp 2013
About myself
•Sr. Database Architect @ BridgePoint Education
•Blog www.yutechnet.com
•LinkedIn www.linkedin.com/in/raymondyu1
•@yutechnet
About this talk…
•Inspired by “Introduction to Data Science”
on Coursera (Bill Howe, UW)
•Guided tour of topics in data science
– MapReduce, Pig
– noSQL
– Machine Learning
– Information Visualization
•Goal: become familiar with the big picture and terminology of data science, and speak intelligently about the field
Big Data
•Volume
– Size of data
•Velocity
– The latency of data processing relative to the growing
demand for interactivity
•Variety
– The diversity of sources, formats, quality, and structures
Big Data is any data that is expensive to manage and hard to
extract value from. -Michael Franklin
Where does big data come from?
•“Data exhaust” from customers
•New sensor technologies
•Individually contributed data at massive scale
•Cheap to keep data
Data Science
•Data Preparation (at scale)
•Analytics
•Communication
The ability to take data, understand it, process it,
extract value from it, visualize it, and communicate it
- Hal Varian, Google's Chief Economist
Context…
src. Introduction to Data Science course
Relational Databases
•SQL as Declarative Language
•Indexes
– Extract small result from big dataset
– Built easily; used automatically when appropriate (see the sketch below)
•Data consistency
•“Old-style” scalability
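A minimal sketch of the index point using Python’s built-in sqlite3 (my example, not from the talk): the query declares WHAT to fetch, and the engine decides HOW, using the index automatically.

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
  conn.executemany("INSERT INTO users (email) VALUES (?)",
                   [("user%d@example.com" % i,) for i in range(100000)])

  # Built easily...
  conn.execute("CREATE INDEX idx_users_email ON users(email)")

  # ...and used automatically when appropriate: the declarative query never
  # mentions the index; the optimizer picks an index seek over a full scan.
  plan = conn.execute(
      "EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = ?",
      ("user500@example.com",)).fetchall()
  print(plan)  # the plan shows a SEARCH using idx_users_email, not a full SCAN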
MapReduce
•Google paper 2004
•Hadoop 2008
•High-level programming model for large-scale parallel data processing
•Divide-and-conquer
•Mapper + Reducer
“Hello World” of MapReduce
Count word frequency in millions of documents
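A toy single-machine Python sketch of this job (my illustration; a real Hadoop job distributes the same three phases across a cluster):

  from collections import defaultdict

  def mapper(doc):
      # Map: emit (word, 1) for every word in one document
      for word in doc.lower().split():
          yield (word, 1)

  def reducer(word, counts):
      # Reduce: sum the partial counts for a single word
      return (word, sum(counts))

  def word_count(docs):
      # Shuffle: group the mappers' output by key (word)
      groups = defaultdict(list)
      for doc in docs:
          for word, n in mapper(doc):
              groups[word].append(n)
      # Each group is reduced independently (in parallel on a real cluster)
      return dict(reducer(w, ns) for w, ns in groups.items())

  docs = ["the quick brown fox", "the lazy dog", "the fox"]
  print(word_count(docs))  # {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, ...}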
MapReduce Programming Model
src. Course slide
Show me the MapReduce…
•www.jsmapreduce.com
MapReduce in Hadoop
Pig
• An engine to execute programs on top of
Hadoop
• Language layer: Pig Latin
• An Apache open source project
(http://pig.apache.org)
•Yahoo! 2009
Why use Pig?
In MapReduce…
In Pig Latin
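A minimal Pig Latin sketch of the same word count (file paths are made up for illustration); per the notes, LOAD is schema-on-read and nothing runs until STORE:

  -- Schema-on-read: the schema is declared at LOAD time, not stored with the data
  docs   = LOAD 'input/docs' AS (line:chararray);
  words  = FOREACH docs GENERATE FLATTEN(TOKENIZE(line)) AS word;
  grpd   = GROUP words BY word;
  counts = FOREACH grpd GENERATE group AS word, COUNT(words) AS freq;
  -- Lazy evaluation: no MapReduce job is launched until STORE (or DUMP)
  STORE counts INTO 'output/wordcount';

The equivalent hand-written MapReduce job runs to dozens of lines of Java – that is the “Why use Pig?” point.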
Pig System Overview
Context…
src. Introduction to Data Science course
noSQL definitions
•A term to designate databases which
differ from classic relational databases
– Transactional model
– Data model
•Not much to do with SQL
•“not only SQL”
Concepts
• CAP Theorem
– Consistency
– Availability
– Partition Tolerance
• Eventual consistency (toy sketch below)
Src: blog.beany.co.kr
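A toy Python sketch of eventual consistency (my illustration, last-write-wins; real systems such as Dynamo are far more involved): a read from a lagging replica returns stale data until replication catches up.

  import time

  class Replica:
      def __init__(self):
          self.data = {}  # key -> (timestamp, value)

      def write(self, key, value):
          self.data[key] = (time.time(), value)

      def read(self, key):
          entry = self.data.get(key)
          return entry[1] if entry else None

      def sync_from(self, other):
          # Last write wins: keep the newer version of each key
          for key, (ts, value) in other.data.items():
              if key not in self.data or self.data[key][0] < ts:
                  self.data[key] = (ts, value)

  a, b = Replica(), Replica()
  a.write("x", 1)
  print(b.read("x"))  # None: b is stale, the write has not replicated yet
  b.sync_from(a)
  print(b.read("x"))  # 1: the replicas have converged (eventual consistency)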
noSQL One-page Overview
Let’s walk through a few
•Column definitions
•RDBMS
•Memcache
•Dynamo
•CouchDB
•BigTable (HBase)
noSQL Common Features
• The ability to replicate and partition data
over many servers (scale)
• Horizontally scale simple operation
throughput over many servers
• A simple API - no query language (no SQL); toy sketch below
• Weaker concurrency model than ACID
transactions (no transaction)
• The ability to dynamically add new attributes
to data records (no schema)
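A toy Python sketch of the “simple API” and “no schema” points (not any particular product’s interface): the whole contract is put/get/delete by key, and values carry whatever attributes they like.

  class KVStore:
      def __init__(self):
          self._data = {}

      def put(self, key, value):
          self._data[key] = value

      def get(self, key):
          return self._data.get(key)

      def delete(self, key):
          self._data.pop(key, None)

  store = KVStore()
  store.put("user:42", {"name": "Ada", "city": "San Diego"})
  store.put("user:43", {"name": "Lin", "signup": "2013"})  # new attribute, no schema change
  print(store.get("user:42"))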
Machine Learning
• Systems that automatically learn programs
from data
• Prediction
– Given examples of inputs and outputs
– Learn the relationship between them
– Apply the relationship to a larger set (see the sketch after this slide)
• Different from statistical models
– A large data set with a simple model trumps a small data set with a sophisticated model
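A minimal prediction sketch in Python with scikit-learn (the library choice is mine, not the talk’s): learn the input-to-output relationship from examples, then apply it to inputs the model has never seen.

  import numpy as np
  from sklearn.linear_model import LinearRegression

  # Examples of inputs and outputs (toy data: y is roughly 2x + 1)
  rng = np.random.default_rng(0)
  X = rng.uniform(0, 10, size=(200, 1))
  y = 2 * X[:, 0] + 1 + rng.normal(0, 0.5, size=200)

  # Learn the relationship between inputs and outputs
  model = LinearRegression().fit(X, y)

  # Apply the relationship to a larger (unseen) set of inputs
  print(model.predict(np.array([[4.0], [7.5]])))  # approx. [9.0, 16.0]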
Bertin’s Visual Attributes
Data Encoding Exercise
Information Visualization
src. http://www.tableausoftware.com/public
Closing example
Src. http://commons.wikimedia.org/wiki/File:ElectoralCollege2012.svg
Nate Silver
fivethirtyeight.com
Obama’s Data-Driven Campaign
• Massive voter db
• Hadoop as ETL
• Vertica db for slice-and-dice
Questions?


Editor's Notes

  • #3 Whenever you see “yutechnet”, it is me. Next, ask the audience: Developer? DBA? DBE? Worked on any databases beyond relational databases? Use Hadoop and other noSQL on a daily basis?
  • #4 A dumbed-down version of the course. It was hard to pick topics to share; this covers the major areas of data science, with a focus on big data and noSQL. Goal: become familiar with the big picture and terminology of data science, speak intelligently about the field, and use this as a springboard into specific areas you are further interested in.
  • #5 Franklin’s key idea: “Big” is relative; it depends on what you are trying to do.
  • #7 Analytics: statistical models, machine learning, slice-and-dice.
  • #9 Call out a few great features of relational databases to set the context of how we got here, so we don’t get lost in the big data and noSQL context, where relational databases get a bad name as the old guard. Declarative – specify what you want; no need to worry about logical or physical operations or optimization.
  • #11 Map, shuffle, and reduce.
  • #14 Touch on the HDFS layer: fault tolerance, job tracker, task tracker, etc.
  • #18 Comments in lieu of demo: schema-on-read with LOAD; relational JOIN operation; optimization via relational algebra; lazy evaluation – no work is done until STORE.
  • #19 Pig performance: initially not as good as MR, but it caught up quickly and is now almost the same as MR. Hive is not covered here, but 2011 data showed that >90% of MR jobs are executed via Hive. A clear win for a declarative language – don’t feel bad if you know SQL.
  • #22 About EC: Databases say “Everyone MUST see the same thing, either old or new, no matter how long it takes.” NoSQL says “For large applications, we can’t afford to wait that long, and maybe it doesn’t matter anyway.”
  • #24 Memcache: load everything into memory and scale across hundreds of machines; consistent hashing. BigTable: Google 2006, complementary to MapReduce; adds an index (zoom-in) for fast key-based lookup.
  • #26 Statistics emphasizes the accuracy of the model, while ML cares less about the nature of the model. Think of the example of building a super-accurate gun.