Crunch Big Data in the Cloud with IBM BigInsights and Hadoop
IBD-3475
Leons Petrazickis, IBM Canada
@leonsp
© 2013 IBM Corporation

Please note
IBM's statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM's sole discretion. Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision. The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code or functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any future features or functionality described for our products remains at our sole discretion. Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.

First step
• Request a lab environment: http://bit.ly/requestLab
• BigDataUniversity.com

Hadoop Architecture

Agenda
• Terminology review
• Hadoop architecture
– HDFS
– Blocks
– MapReduce
– Types of nodes
– Topology awareness
– Writing a file to HDFS
Terminology review
• A Hadoop cluster is made up of racks (Rack 1 … Rack n), and each rack contains nodes (Node 1 … Node n). (diagram)
Hadoop architecture
• Two main components:
– Hadoop Distributed File System (HDFS)
– MapReduce Engine

Hadoop distributed file system (HDFS)
• Hadoop file system that runs on top of the existing file system
• Designed to handle very large files with streaming data access patterns
• Uses blocks to store a file or parts of a file
HDFS – Blocks
• File blocks
– 64MB (default), 128MB (recommended) – compare to 4KB in UNIX
– Behind the scenes, 1 HDFS block is supported by multiple operating system (OS) blocks
• Advantages of blocks:
– Fixed size – easy to calculate how many fit on a disk
– A file can be larger than any single disk in the network
– If a file or a chunk of the file is smaller than the block size, only the needed space is used. E.g. a 420MB file is split as three 128MB blocks plus one 36MB block
– Fits well with replication to provide fault tolerance and availability
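The block arithmetic above can be sketched in a few lines. This is a minimal, hypothetical helper (plain JavaScript, not part of Hadoop) that splits a file size into HDFS-sized blocks:

    // Split a file of a given size into HDFS-style blocks.
    // The last block only takes the space that is actually needed.
    function splitIntoBlocks(fileSizeMB, blockSizeMB) {
      const blocks = [];
      let remaining = fileSizeMB;
      while (remaining > 0) {
        blocks.push(Math.min(blockSizeMB, remaining));
        remaining -= blockSizeMB;
      }
      return blocks;
    }

    // The 420MB example from the slide, with 128MB blocks:
    // three full 128MB blocks plus one 36MB block.
    console.log(splitIntoBlocks(420, 128)); // [128, 128, 128, 36]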
HDFS – Replication
• Blocks with data are replicated to multiple nodes
• Allows for node failure without data loss (diagram: a block copied across Node 1, Node 2, Node 3)
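Replica placement can be sketched as follows. This assumes HDFS's documented default policy (first replica on the writer's node, second on a different rack, third on the same rack as the second); the node names and cluster layout are illustrative:

    // Sketch of HDFS-style replica placement. Nodes are {name, rack} objects.
    function placeReplicas(writer, nodes) {
      const replicas = [writer];                       // 1st: the writer's node
      const offRack = nodes.find(n => n.rack !== writer.rack);
      if (offRack) replicas.push(offRack);             // 2nd: a different rack
      const sameAsSecond = nodes.find(
        n => offRack && n.rack === offRack.rack && n.name !== offRack.name);
      if (sameAsSecond) replicas.push(sameAsSecond);   // 3rd: same rack as 2nd
      return replicas;
    }

    const cluster = [
      { name: 'node2', rack: 'rack1' },
      { name: 'node3', rack: 'rack2' },
      { name: 'node4', rack: 'rack2' },
    ];
    console.log(placeReplicas({ name: 'node1', rack: 'rack1' }, cluster)
      .map(n => n.name)); // ['node1', 'node3', 'node4']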
MapReduce engine
• Technology from Google
• A MapReduce program consists of map and reduce functions
• A MapReduce job is broken into tasks that run in parallel

Types of nodes – Overview
• HDFS nodes
– NameNode
– DataNode
• MapReduce nodes
– JobTracker
– TaskTracker
• There are other nodes not discussed in this course

Types of nodes – NameNode
• Only one per Hadoop cluster
• Manages the filesystem namespace and metadata
• Single point of failure, but mitigated by writing state to multiple filesystems
• Don't use inexpensive commodity hardware for this node; large memory requirements

Types of nodes – DataNode
• Many per Hadoop cluster
• Manages blocks with data and serves them to clients
• Periodically reports the list of blocks it stores to the NameNode
• Use inexpensive commodity hardware for this node

Types of nodes – JobTracker
• One per Hadoop cluster
• Receives job requests submitted by the client
• Schedules and monitors MapReduce jobs on TaskTrackers

Types of nodes – TaskTracker
• Many per Hadoop cluster
• Executes MapReduce operations
• Reads blocks from DataNodes
Topology awareness
Bandwidth becomes progressively smaller in the following scenarios:
1. Process on the same node
2. Different nodes on the same rack
3. Nodes on different racks in the same data center
4. Nodes in different data centers
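The four levels above can be expressed as a distance measure. This sketch follows the Hadoop convention of writing a location as a `/datacenter/rack/node` path and counting hops to the closest common ancestor (the path strings here are illustrative):

    // Same node = 0, same rack = 2, same data center = 4, different DCs = 6.
    function networkDistance(a, b) {
      const pa = a.split('/').filter(Boolean);
      const pb = b.split('/').filter(Boolean);
      let common = 0;
      while (common < pa.length && common < pb.length && pa[common] === pb[common])
        common++;
      // Hops up from each node to the shared ancestor.
      return (pa.length - common) + (pb.length - common);
    }

    console.log(networkDistance('/dc1/rack1/node1', '/dc1/rack1/node1')); // 0
    console.log(networkDistance('/dc1/rack1/node1', '/dc1/rack1/node2')); // 2
    console.log(networkDistance('/dc1/rack1/node1', '/dc1/rack2/node3')); // 4
    console.log(networkDistance('/dc1/rack1/node1', '/dc2/rack9/node4')); // 6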
Writing a file to HDFS
(diagram sequence; no text content)
Thank You
What is Hadoop?

Agenda
• What is Hadoop?
• What is Big Data?
• Hadoop-related open source projects
• Examples of Hadoop in action
• Big Data solutions and the Cloud
What is Hadoop?
(diagram sequence: data volumes grow from 1GB to 10GB, 100GB, 1TB, 10TB, and 100TB – beyond what a relational database comfortably handles – fed by new sources such as Facebook, Twitter, RFIDs, and sensors)
What is Hadoop?
• Open source project
• Written in Java
• Optimized to handle
– Massive amounts of data through parallelism
– A variety of data (structured, unstructured, semi-structured)
– Using inexpensive commodity hardware
• Great performance
• Reliability provided through replication
• Not for OLTP, not for OLAP/DSS; good for Big Data
• Current version: 0.20.2
What is Big Data?
• RFID readers
• 2 billion internet users
• 4.6 billion mobile phones
• 7TB of data processed by Twitter every day
• 10TB of data processed by Facebook every day
• About 80% of this data is unstructured
Hadoop-related open source projects
• Jaql
• Pig
• ZooKeeper
Examples of Hadoop in action
• IBM Watson
• In the telecommunication industry
• In the media
• In the technology industry
Hadoop is not for all types of work
• Not to process transactions (random access)
• Not good when work cannot be parallelized
• Not good for low-latency data access
• Not good for processing lots of small files
• Not good for intensive calculations with little data
Big Data solutions and the Cloud
• Big Data solutions are more than just Hadoop
– Add business intelligence/analytics functionality
– Derive information from data in motion
• Big Data solutions and the Cloud are a perfect fit
– The Cloud allows you to set up a cluster of systems in minutes, and it's relatively inexpensive
Thank You
HDFS – Command Line

Agenda
• HDFS Command Line Interface
• Examples
HDFS Command line interface
• File System Shell (fs)
• Invoked as follows:

hadoop fs <args>

• Example: listing the current directory in HDFS

hadoop fs -ls .

HDFS Command line interface
• FS shell commands take path URIs as arguments
• URI format: scheme://authority/path
• Scheme:
– For the local filesystem, the scheme is file
– For HDFS, the scheme is hdfs

hadoop fs -copyFromLocal file://myfile.txt hdfs://localhost/user/keith/myfile.txt

• Scheme and authority are optional
– Defaults are taken from the configuration file core-site.xml

HDFS Command line interface
• Many POSIX-like commands
– cat, chgrp, chmod, chown, cp, du, ls, mkdir, mv, rm, stat, tail
• Some HDFS-specific commands
– copyFromLocal, copyToLocal, get, getmerge, put, setrep
HDFS – Specific commands
• copyFromLocal / put
– Copy files from the local file system into fs

hadoop fs -copyFromLocal <localsrc> .. <dst>
or
hadoop fs -put <localsrc> .. <dst>

• copyToLocal / get
– Copy files from fs into the local file system

hadoop fs -copyToLocal [-ignorecrc] [-crc] <src> <localdst>
or
hadoop fs -get [-ignorecrc] [-crc] <src> <localdst>

• getmerge
– Get all the files in the directories that match the source file pattern
– Merge and sort them into a single file on the local fs
– <src> is kept

hadoop fs -getmerge <src> <localdst>

• setrep
– Set the replication level of a file
– The -R flag requests a recursive change of replication level for an entire tree
– If -w is specified, waits until the new replication level is achieved

hadoop fs -setrep [-R] [-w] <rep> <path/file>
Thank You
Hadoop MapReduce

Agenda
• Map operations
• Reduce operations
• Submitting a MapReduce job
• Distributed Mergesort Engine
• Two fundamental data types
• Fault tolerance
• Scheduling
• Task execution
What is a Map operation?
• Doing something to every element in an array is a common operation:

var a = [1,2,3];
for (i = 0; i < a.length; i++)
    a[i] = a[i] * 2;

• The new value of a would be [2,4,6]
• The loop body can be written as a[i] = fn(a[i]), where fn is a function defined as: function fn(x) { return x*2; }
• All of this can be converted into a "map" function, where fn is passed as an argument:

function map(fn, a) {
    for (i = 0; i < a.length; i++)
        a[i] = fn(a[i]);
}

• You can invoke this map function like this, with the definition of fn included in the call:

map(function(x){return x*2;}, a);

• In summary, you can rewrite:

for (i = 0; i < a.length; i++)
    a[i] = a[i] * 2;

as a map operation:

map(function(x){return x*2;}, a);
What is a Reduce operation?
• Another common operation on arrays is to combine all their values:

function sum(a) {
    var s = 0;
    for (i = 0; i < a.length; i++)
        s += a[i];
    return s;
}

• The accumulation step can be written as s = fn(s, a[i]), where fn is defined so it adds its arguments: function fn(a,b){ return a+b; }
• The whole function can be rewritten so that fn is passed as an argument. The name changes to reduce, and it now takes three arguments: a function, an array, and an initial value:

function reduce(fn, a, init) {
    var s = init;
    for (i = 0; i < a.length; i++)
        s = fn(s, a[i]);
    return s;
}

• Now sum(a) can be written as a reduce operation:

reduce(function(a,b){return a+b;}, a, 0);
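The two helpers compose naturally. A minimal runnable sketch, using the map and reduce functions exactly as defined above: double every element, then sum the results.

    function map(fn, a) {
      for (i = 0; i < a.length; i++)
        a[i] = fn(a[i]);
    }
    function reduce(fn, a, init) {
      var s = init;
      for (i = 0; i < a.length; i++)
        s = fn(s, a[i]);
      return s;
    }

    var a = [1, 2, 3];
    map(function (x) { return x * 2; }, a);            // a is now [2, 4, 6]
    var total = reduce(function (x, y) { return x + y; }, a, 0);
    console.log(total); // 12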
Submitting a MapReduce job
(diagram sequence; no text content)
MapReduce – Distributed Mergesort Engine
(diagram sequence; no text content)
Two fundamental data types
• Key/value pairs
• Lists
• Data flow: the map function takes the Input as <k1, v1> and produces list(<k2, v2>); the reduce function takes <k2, list(v2)> and produces the Output list(<k3, v3>)
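A word count makes the flow concrete. This sketch is plain JavaScript, not the Hadoop API: the mapper turns each <k1, v1> (line number, line of text) into a list of <k2, v2> (word, 1) pairs, a shuffle step groups values by key, and the reducer turns each <k2, list(v2)> into an output pair.

    // <k1, v1> -> list(<k2, v2>)
    function mapper(lineNo, line) {
      return line.split(/\s+/).filter(Boolean).map(word => [word, 1]);
    }

    // <k2, list(v2)> -> <k3, v3>
    function reducer(word, counts) {
      return [word, counts.reduce((a, b) => a + b, 0)];
    }

    // The shuffle step: group all intermediate values by key, then reduce.
    function wordCount(lines) {
      const groups = new Map();
      lines.forEach((line, n) => {
        for (const [word, one] of mapper(n, line)) {
          if (!groups.has(word)) groups.set(word, []);
          groups.get(word).push(one);
        }
      });
      return [...groups].map(([w, counts]) => reducer(w, counts));
    }

    console.log(wordCount(['big data', 'big insights']));
    // [['big', 2], ['data', 1], ['insights', 1]]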
Simple data flow example
(diagram sequence; no text content)
Fault tolerance
• Task failure
– If a child task fails, the child JVM reports to the TaskTracker before it exits. The attempt is marked failed, freeing up the slot for another task.
– If the child task hangs, it is killed. The JobTracker reschedules the task on another machine.
– If the task continues to fail, the job is failed.
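The retry policy above can be sketched as a loop. The cap of 4 attempts mirrors Hadoop's default `mapred.map.max.attempts`, but treat it as an assumption here since it is configurable; the flaky task is purely illustrative:

    // An attempt that fails is rescheduled; after the maximum number of
    // attempts the whole job is marked failed.
    function runWithRetries(task, maxAttempts) {
      for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
          return { status: 'SUCCEEDED', result: task(attempt), attempt };
        } catch (e) {
          // Attempt marked failed, freeing the slot; reschedule.
        }
      }
      return { status: 'FAILED', attempt: maxAttempts };
    }

    // A flaky task that only succeeds on its third attempt.
    const flaky = attempt => {
      if (attempt < 3) throw new Error('task error');
      return 'ok';
    };
    console.log(runWithRetries(flaky, 4));
    // { status: 'SUCCEEDED', result: 'ok', attempt: 3 }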
Fault tolerance
• TaskTracker failure
– The JobTracker receives no heartbeat
– It removes the TaskTracker from the pool of TaskTrackers to schedule tasks on
• JobTracker failure
– Single point of failure: the job fails
Scheduling
• FIFO scheduler (with priorities)
– Each job uses the whole cluster, so jobs wait their turn.
• Fair scheduler …
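The FIFO-with-priorities policy can be sketched as a selection rule: jobs are picked in priority order, and within a priority, in arrival order. The priority names follow Hadoop's JobPriority values; the jobs themselves are illustrative:

    const PRIORITY = { VERY_HIGH: 0, HIGH: 1, NORMAL: 2, LOW: 3, VERY_LOW: 4 };

    // queue is in arrival order; a strictly higher priority wins, so the
    // earliest job at the highest priority is picked.
    function nextJob(queue) {
      let best = null;
      for (const job of queue) {
        if (!best || PRIORITY[job.priority] < PRIORITY[best.priority])
          best = job;
      }
      return best;
    }

    const queue = [
      { name: 'jobA', priority: 'NORMAL' },
      { name: 'jobB', priority: 'HIGH' },
      { name: 'jobC', priority: 'HIGH' },
    ];
    console.log(nextJob(queue).name); // 'jobB' – highest priority, earliest arrival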
Task execution
• Speculative execution
– Job execution is time-sensitive to slow-running tasks. Hadoop detects slow-running …
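One way to picture speculative execution is picking out stragglers whose progress lags well behind the average, so a duplicate attempt can be launched elsewhere. The 0.2 threshold below is an illustrative assumption, not Hadoop's exact heuristic:

    // Return the ids of tasks whose progress lags the average by more
    // than the threshold – candidates for a speculative duplicate.
    function speculativeCandidates(tasks, threshold = 0.2) {
      const avg = tasks.reduce((s, t) => s + t.progress, 0) / tasks.length;
      return tasks.filter(t => avg - t.progress > threshold).map(t => t.id);
    }

    const tasks = [
      { id: 't1', progress: 0.9 },
      { id: 't2', progress: 0.85 },
      { id: 't3', progress: 0.3 },  // straggler
    ];
    console.log(speculativeCandidates(tasks)); // ['t3']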
Thank You
Pig, Hive, and Jaql

Agenda
• Overview
• Pig
• Hive
• Jaql
Similarities of Pig, Hive and Jaql
• All translate their respective high-level languages to MapReduce job…
Comparing Pig, Hive, and Jaql

                 Pig        Hive       Jaql
Developed by     Yahoo!     Facebook   IBM
Language name    Pig Latin  HiveQL     Jaql…
Pig components
• Two components
– Language (called Pig Latin)
– Compiler
• Two execution environ…
Running Pig
• Script
pig scriptfile.pig
• Grunt (command line)
pig (to launch the command line tool)
• Embedded
Call in t…
Pig Latin sample code

grunt> records = LOAD 'econ_assist.csv'
           USING PigStorage(',')
           AS (country:chararray, sum:long)…
Pig Latin – Statements, operations & commands
• A Pig Latin program is a sequence of statements; both an operation and a command can serve as a statement (diagram)
Pig Latin statements
• UDF statements
• Commands
– Hadoop filesystem (cat, ls, etc.)
– Hadoop MapReduce…
Pig Latin – Relational operators
• Loading and storing
– E.g. LOAD (into a program), STORE (to disk), DUMP (to the screen)
…
Pig Latin – Relations and schema
• The result of a relational operator is a relation
• A relation is a set of tuples
• Relati…
• The structure of a relation is a schema
• Use the DESCRIBE operator to see the schema. E.g.: …
Pig Latin expressions
• Statements that contain relational operators may also contain expressions
• Kinds of expression…
Pig Latin – Data types
• Simple types: int, long, float, double, chararray, bytearray
• Complex types: Tuple, Bag, Map
Pig Latin – Function types
• Eval
– Input: one or more expressions; output: an expression. Example: MAX
• Filter
– Input: bag…
• Load
– Input: data from external storage; output: a relation. Example: PigStorage
• Store
– Inpu…
Pig Latin – User-defined functions
• Written in Java
• Packaged in a JAR file
• Register the JAR file using the REGISTER state…
Hive architecture
(diagram: DDL and queries arrive through JDBC/ODBC, the CLI, and a web interface; a metastore – a relational database for metadata – sits alongside Hadoop)
Running Hive
• Hive shell
– Interactive: hive
– Script: hive -f myscript
– Inline: hive -e 'SELECT * FROM mytable'
Hive services

hive --service servicename

where servicename can be:
• hiveserver – server for Thrift, JDBC, ODBC clients
• …
Hive – Metastore
• Stores Hive metadata
• Configurations:
– Embedded: in-process metastore, in-process database
– Local: in-pr…
Hive – Schema-on-read
• Faster loads into the database (simply copy or move)
• Slower queries
• Flexibility – …
Hive – Configuration
• Three ways to configure Hive:
– hive-site.xml
• fs.default.name
• mapred.job.tracker
• Metastore confi…
Hive Query Language (HiveQL)
• SQL dialect
• Does not support the full SQL-92 specification
• No support for:
– HAVING clause in…
Sample code (HiveQL)

hive> CREATE TABLE foreign_aid
          (country STRING, sum BIGINT)
      ROW FORMAT DELIMITED
      FIELDS TERMINATED BY ','…
Hive Query Language (HiveQL)
• Extensions
– MySQL-like extensions
– MapReduce extensions
• Multi-table insert, MAP, REDUCE…
Hive Query Language (HiveQL)
• Built-in functions
– SHOW FUNCTIONS
– DESCRIBE FUNCTION
Hive – User-defined functions
• Written in Java
• Three UDF types:
– UDF: input: single row, output: sing…
Jaql architecture
(diagram: an interactive shell / applications and scripts feed a compiler / parser / rewriter, which sits on an I/O layer over a storage layer and file sys…)
Jaql data model: JSON
• JSON = JavaScript Object Notation
• Flexible (schema is optional)
• Powerful modelin…
JSON example

[
  {ACCT_NUM:18, AUTH_DATE:"2011-01-29",
   AUTH_AMT:"111.11", ZIP:98765, MERCH_NAME:"Acme"},
  {ACCT_NUM:19, AUTH_DAT…
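Since JSON is JavaScript Object Notation, records like these can be processed directly in JavaScript. In this sketch the first record's fields come from the slide; the second record's remaining fields are illustrative assumptions, since the slide is truncated. Note that strict JSON requires quoted field names, unlike the Jaql shorthand above:

    const records = JSON.parse(`[
      {"ACCT_NUM": 18, "AUTH_DATE": "2011-01-29",
       "AUTH_AMT": "111.11", "ZIP": 98765, "MERCH_NAME": "Acme"},
      {"ACCT_NUM": 19, "AUTH_DATE": "2011-01-30",
       "AUTH_AMT": "22.22", "ZIP": 98765, "MERCH_NAME": "Acme"}
    ]`);

    // Schema is optional: each record is just an object, and fields can
    // be read (or be absent) freely.
    const total = records.reduce((s, r) => s + parseFloat(r.AUTH_AMT), 0);
    console.log(total.toFixed(2)); // '133.33'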
Running Jaql
• Jaql shell
– Interactive
– Batch: -b myscript.jaql
– Inline: -e jaqlstatem…
• Cluster
• Minicluster
Jaql query language
• Data flows from a source through operators to a sink:

source → operator → … → operator → sink

• Sources and sinks
– E.g. copy data from a local file to a new file…
Jaql query language
• Variables
• Pipes, streams, and consumers
– The equal operator (=) binds source outp…
Jaql query language
• Categories of built-in functions:
system, core, hadoop, io, array, index, schema, xml, regex, binar…
Jaql – Data storage
• Data store examples: Amazon S3, HTTP, HBase, local FS
• Data format examples: JSON, DB…
Jaql sample code

$ jaqlshell -c
jaql> $foreignaid =
          read(del("econ_assist.csv",
              {schema: schema {country: string, sum: long…
Hadoop core lab – Part 3
BigDataUniversity.com
Acknowledgements and Disclaimers
Availability. References in this presentation to IBM products, programs, or services do n...
Communities
• On-line communities, user groups, technical forums, blogs, social networks, and more
– Find the community th…
Thank You

IOD 2013 - Crunch Big Data in the Cloud with IBM BigInsights and Hadoop

  1. 1. Crunch Big Data in the Cloud with IBM BigInsights and Hadoop IBD-3475 Leons Petrazickis, IBM Canada @leonsp © 2013 IBM Corporation
  2. 2. Please note IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion. Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision. The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code or functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any future features or functionality described for our products remains at our sole discretion. Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multiprogramming in the user’s job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.
  3. 3. First step  Request a lab environment  http://bit.ly/requestLab
  4. 4. BigDataUniversity.com
  5. 5. Hadoop Architecture
  6. 6. Agenda • Terminology review • Hadoop architecture – HDFS – Blocks – MapReduce – Type of nodes – Topology awareness – Writing a file to HDFS 6
  7. 7. Terminology review Hadoop cluster Rack 1 Rack n Rack 2 Node 1 Node 1 Node 1 Node 2 Node 2 Node 2 … … Node n 7 Node n … … Node n
  8. 8. Hadoop architecture • Two main components: – Hadoop Distributed File System (HDFS) – MapReduce Engine 8
  9. 9. Hadoop distributed file system (HDFS) • Hadoop file system that runs on top of existing file system • Designed to handle very large files with streaming data access patterns • Uses blocks to store a file or parts of a file 9
  10. 10. HDFS - Blocks • File Blocks – 64MB (default), 128MB (recommended) – compare to 4KB in UNIX – Behind the scenes, 1 HDFS block is supported by multiple operating system (OS) blocks HDFS Block 128 MB OS Blocks • Advantages of blocks: – Fixed size – easy to calculate how many fit on a disk – A file can be larger than any single disk in the network – If a file or a chunk of the file is smaller than the block size, only needed space is used. Eg: 420MB file is split as: 128MB • 10 128MB 128MB 36MB Fits well with replication to provide fault tolerance and availability
  11. 11. HDFS - Replication • Blocks with data are replicated to multiple nodes • Allows for node failure without data loss Node 3 Node 1 Node 2 11
  12. 12. MapReduce engine • Technology from Google • A MapReduce program consists of map and reduce functions • A MapReduce job is broken into tasks that run in parallel 12
  13. 13. Types of nodes - Overview • HDFS nodes – NameNode – DataNode • MapReduce nodes – JobTracker – TaskTracker • There are other nodes not discussed in this course 13
  14. 14. Types of nodes - Overview 14
  15. 15. Types of nodes - NameNode • NameNode – Only one per Hadoop cluster – Manages the filesystem namespace and metadata – Single point of failure, but mitigated by writing state to multiple filesystems – Single point of failure: Don’t use inexpensive commodity hardware for this node, large memory requirements 15
  16. 16. Types of nodes - DataNode • DataNode – Many per Hadoop cluster – Manages blocks with data and serves them to clients – Periodically reports to name node the list of blocks it stores – Use inexpensive commodity hardware for this node 16
  17. 17. Types of nodes - JobTracker • JobTracker node – One per Hadoop cluster – Receives job requests submitted by client – Schedules and monitors MapReduce jobs on task trackers 17
  18. 18. Types of nodes - TaskTracker • TaskTracker node – Many per Hadoop cluster – Executes MapReduce operations – Reads blocks from DataNodes 18
  19. 19. …lesson continued in the next video> 19
  20. 20. Topology awareness Bandwidth becomes progressively smaller in the following scenarios: 20
  21. 21. Topology awareness Bandwidth becomes progressively smaller in the following scenarios: 1. Process on the same node. 21
  22. 22. Topology awareness Bandwidth becomes progressively smaller in the following scenarios: 1. Process on the same node 2. Different nodes on the same rack 22
  23. 23. Topology awareness Bandwidth becomes progressively smaller in the following scenarios: 1. Process on the same node 2. Different nodes on the same rack 3. Nodes on different racks in the same data center 23
  24. 24. Topology awareness Bandwidth becomes progressively smaller in the following scenarios: 1. 2. 3. 4. 24 Process on the same node Different nodes on the same rack Nodes on different racks in the same data center Nodes in different data centers
  25. 25. Writing a file to HDFS 25
  26. 26. Writing a file to HDFS 26
  27. 27. Writing a file to HDFS 27
  28. 28. Writing a file to HDFS 28
  29. 29. Writing a file to HDFS 29
  30. 30. Writing a file to HDFS 30
  31. 31. Writing a file to HDFS 31
  32. 32. Writing a file to HDFS 32
  33. 33. Writing a file to HDFS 33
  34. 34. Writing a file to HDFS 34
  35. 35. Writing a file to HDFS 35
  36. 36. Thank You
  37. 37. What is Hadoop?
  38. 38. Agenda • • • • • 38 What is Hadoop? What is Big Data? Hadoop-related open source projects Examples of Hadoop in action Big Data solutions and the Cloud
  39. 39. What is Hadoop? 1G B Relational Database 39
  40. 40. What is Hadoop? 10GB 1G B Relational Database 40
  41. 41. What is Hadoop? 100GB 10GB 1G B Relational Database 41
  42. 42. What is Hadoop? 100GB 10GB 1G B Relational Database 42
  43. 43. What is Hadoop? 1TB Relational Database 43
  44. 44. What is Hadoop? 10TB 100TB 1TB Relational Database 44
  45. 45. What is Hadoop? 10TB 100TB 1TB Relational Database 45
  46. 46. What is Hadoop? Facebook 10TB 100TB RFIDs 1TB Relational Database Sensors Twitter 46
  47. 47. What is Hadoop? • Open source project • Written in Java • Optimized to handle • Massive amounts of data through parallelism • A variety of data (structured, unstructured, semi-structured) • Using inexpensive commodity hardware • Great performance • Reliability provided through replication • Not for OLTP, not for OLAP/DSS, good for Big Data • Current version: 0.20.2 47
  48. 48. What is Big Data? RFID Readers 48
  49. 49. What is Big Data? 2 Billion internet users 49
  50. 50. What is Big Data? 4.6 Billion mobile phones 50
  51. 51. What is Big Data? 7TB of data processed by Twitter every day 7TB a day 51
  52. 52. What is Big Data? 10TB of data processed by Facebook every day 10TB a day 52
  53. 53. What is Big Data? About 80% of this data is unstructured 53
  54. 54. Hadoop-related open source projects jaql PIG ZooKeeper 54
  55. 55. Examples of Hadoop in action – IBM Watson 55
  56. 56. Examples of Hadoop in action • In the telecommunication industry • In the media • In the technology industry 56
  57. 57. Hadoop is not for all types of work • • • • • 57 Not to process transactions (random access) Not good when work cannot be parallelized Not good for low latency data access Not good for processing lots of small files Not good for intensive calculations with little data
  58. 58. Big Data solutions and the Cloud • Big Data solutions are more than just Hadoop – Add business intelligence/analytics functionality – Derive information of data in motion • Big Data solutions and the Cloud are a perfect fit. – The Cloud allows you to set up a cluster of systems in minutes and it’s relatively inexpensive. 58
  59. 59. Thank You
  60. 60. HDFS – Command Line
  61. 61. Agenda • HDFS Command Line Interface • Examples 61
  62. 62. HDFS Command line interface • File System Shell (fs) • Invoked as follows: hadoop fs <args> • Example: Listing the current directory in hdfs hadoop fs –ls . 62
  63. 63. HDFS Command line interface • FS shell commands take paths URIs as argument • URI format: scheme://authority/path • Scheme: • For the local filesystem, the scheme is file • For HDFS, the scheme is hdfs hadoop fs –copyFromLocal file://myfile.txt hdfs://localhost/user/keith/myfile.txt • Scheme and authority are optional • Defaults are taken from configuration file core-site.xml 63
  64. 64. HDFS Command line interface • Many POSIX-like commands • cat, chgrp, chmod, chown, cp, du, ls, mkdir, mv, rm, stat, tail • Some HDFS-specific commands • copyFromLocal, copyToLocal, get, getmerge, put, setrep 64
  65. 65. HDFS – Specific commands • copyFromLocal / put • Copy files from the local file system into fs hadoop fs -copyFromLocal <localsrc> .. <dst> Or hadoop fs -put <localsrc> .. <dst> 65
  66. 66. HDFS – Specific commands • copyToLocal / get • Copy files from fs into the local file system hadoop fs -copyToLocal [-ignorecrc] [-crc] <src> <localdst> Or hadoop fs -get [-ignorecrc] [-crc] <src> <localdst> 66
  67. 67. HDFS – Specific commands • getMerge • Get all the files in the directories that match the source file pattern • Merge and sort them to only one file on local fs • <src> is kept hadoop fs -getmerge <src> <localdst> 67
  68. 68. HDFS – Specific commands • setRep • Set the replication level of a file. • The -R flag requests a recursive change of replication level for an entire tree. • If -w is specified, waits until new replication level is achieved. hadoop fs -setrep [-R] [-w] <rep> <path/file> 68
  69. 69. Thank You
  70. 70. Hadoop MapReduce
  71. 71. Agenda • • • • • • • • 71 Map operations Reduce operations Submitting a MapReduce job Distributed Mergesort Engine Two fundamental data types Fault tolerance Scheduling Task execution
  72. 72. What is a Map operation? • Doing something to every element in an array is a common operation: var a = [1,2,3]; for (i = 0; i < a.length; i++) a[i] = a[i] * 2; 72
  73. 73. What is a Map operation? • Doing something to every element in an array is a common operation: var a = [1,2,3]; for (i = 0; i < a.length; i++) a[i] = a[i] * 2; • New value for variable a would be: var a = [2,4,6]; 73
  74. 74. What is a Map operation? • Doing something to every element in an array is a common operation: var a = [1,2,3]; for (i = 0; i < a.length; i++) a[i] = a[i] * 2; • New value for variable a would be: var a = [2,4,6]; 74 This can be written as a function
  75. 75. What is a Map operation? • Doing something to every element in an array is a common operation: var a = [1,2,3]; for (i = 0; i < a.length; i++) a[i] * 2; a[i] = fn(a[i]); • New value for variable a would be: var a = [2,4,6]; 75 Like this, where fn is a function defined as: function fn(x) {return x*2;}
  76. 76. What is a Map operation? • Doing something to every element in an array is a common operation: var a = [1,2,3]; for (i = 0; i < a.length; i++) a[i] = fn(a[i]); Now, all of this can also be converted into a “map” function 76
  77. 77. What is a Map operation? • …like this, where fn is a function passed as an argument: function map(fn, a) { for (i = 0; i < a.length; i++) a[i] = fn(a[i]); } 77
  78. 78. What is a Map operation? • …like this, where fn is a function passed as an argument: function map(fn, a) { for (i = 0; i < a.length; i++) a[i] = fn(a[i]); } • You can invoke this map function like this: map(function(x){return x*2;}, a); 78
  79. 79. What is a Map operation? • …like this, where fn is a function passed as an argument: function map(fn, a) { for (i = 0; i < a.length; i++) a[i] = fn(a[i]); } • You can invoke this map function like this: map(function(x){return x*2;}, a); This is function fn whose definition is included in the call 79
  80. 80. What is a Map operation? • In summary, now you can rewrite: for (i = 0; i < a.length; i++) a[i] = a[i] * 2; } as a map operation: map(function(x){return x*2;}, a); 80
  81. 81. What is a Reduce operation? • Another common operation on arrays is to combine all their values: function sum(a) { var s = 0; for (i = 0; i < a.length; i++) s += a[i]; return s; } 81
  82. 82. What is a Reduce operation? • Another common operation on arrays is to combine all their values: function sum(a) { var s = 0; for (i = 0; i < a.length; i++) s += a[i]; return s; } 82 This can be written as a function
  83. 83. What is a Reduce operation? • Another common operation on arrays is to combine all their values: function sum(a) { var s = 0; for (i = 0; i < a.length; i++) s = fn(s,a[i]); return s; } 83 Like this, where function fn is defined so it adds its arguments: function fn(a,b){ return a+b; }
  84. 84. What is a Reduce operation? • Another common operation on arrays is to combine all their values: function sum(a) { var s = 0; for (i = 0; i < a.length; i++) s = fn(s, a[i]); return s; } The whole function sum can also be rewritten so that fn is passed as an argument 84
  85. 85. What is a Reduce operation? • Another common operation on arrays is to combine all their values: function reduce(fn, a, init) { var s = init; for (i = 0; i < a.length; i++) s = fn(s, a[i]); return s; } Like this… The function name was changed to reduce, and now it takes three arguments, a function, an array, and an initial value 85
  86. 86. What is a Reduce operation? • Another common operation on arrays is to combine all their values: function sum(a) { var s = 0; for (i = 0; i < a.length; i++) s += a[i]; return s; } as a reduce operation: reduce(function(a,b){return a+b;},a,0); 86
  87. 87. …lesson continued in the next video> 87
  88. 88. Submitting a MapReduce job 88
  89. 89. Submitting a MapReduce job 89
  90. 90. Submitting a MapReduce job 90
  91. 91. Submitting a MapReduce job 91
  92. 92. Submitting a MapReduce job 92
  93. 93. Submitting a MapReduce job 93
  94. 94. Submitting a MapReduce job 94
  95. 95. Submitting a MapReduce job 95
  96. 96. Submitting a MapReduce job 96
  97. 97. Submitting a MapReduce job 97
  98. 98. …lesson continued in the next video> 98
  99. 99. MapReduce – Distributed Mergesort Engine 99
  100. 100. MapReduce – Distributed Mergesort Engine 100
  101. 101. MapReduce – Distributed Mergesort Engine 101
  102. 102. MapReduce – Distributed Mergesort Engine 102
  103. 103. MapReduce – Distributed Mergesort Engine 103
  104. 104. MapReduce – Distributed Mergesort Engine 104
  105. 105. MapReduce – Distributed Mergesort Engine 105
  106. 106. MapReduce – Distributed Mergesort Engine 106
  107. 107. MapReduce – Distributed Mergesort Engine 107
  108. 108. MapReduce – Distributed Mergesort Engine 108
  109. 109. MapReduce – Distributed Mergesort Engine 109
  110. 110. …lesson continued in the next video> 110
  111. 111. Two Fundamental data types • Key/value pairs • Lists Input map reduce 111 Output
  112. 112. Two Fundamental data types • Key/value pairs • Lists Input map reduce 112 <k1, v1> Output
  113. 113. Two Fundamental data types • Key/value pairs • Lists Input map reduce 113 Output <k1, v1> list(<k2, v2>)
  114. 114. Two Fundamental data types • Key/value pairs • Lists Input map <k1, v1> list(<k2, v2>) reduce 114 Output <k2, list(v2)>
  115. 115. Two Fundamental data types • Key/value pairs • Lists Input map <k1, v1> list(<k2, v2>) reduce 115 Output <k2, list(v2)> list(<k3, v3>)
Simple data flow example (diagram sequence)
…lesson continued in the next video>
Fault tolerance
• Task failure
  – If a child task fails, the child JVM reports to the TaskTracker before it exits. The attempt is marked failed, freeing up the slot for another task.
  – If the child task hangs, it is killed. The JobTracker reschedules the task on another machine.
  – If the task continues to fail, the job is failed.
• TaskTracker failure
  – The JobTracker receives no heartbeat and removes the TaskTracker from the pool of TaskTrackers to schedule tasks on.
• JobTracker failure
  – Single point of failure: the job fails.
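The retry policy for failed tasks can be sketched as a simplified simulation. This is illustrative, not Hadoop code; the limit of 4 attempts mirrors the classic default of mapred.map.max.attempts, which you should verify against your distribution:

```python
MAX_ATTEMPTS = 4  # assumed default of mapred.map.max.attempts

def run_job(task_runner):
    """Retry a task up to MAX_ATTEMPTS times (nominally on different
    machines); fail the whole job if the task keeps failing."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if task_runner(attempt):
            return "SUCCEEDED"
        # attempt marked failed; the JobTracker reschedules elsewhere
    return "FAILED"

# A task that fails twice, then succeeds on the third machine
flaky = lambda attempt: attempt >= 3
print(run_job(flaky))            # SUCCEEDED
print(run_job(lambda a: False))  # FAILED
```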
…lesson continued in the next video>
Scheduling
• FIFO scheduler (with priorities)
  – Each job uses the whole cluster, so jobs wait their turn.
• Fair scheduler
  – Jobs are placed in pools. A user who submits more jobs than another user does not, on average, get more cluster resources than the other user. Custom pools with guaranteed minimum capacity can be defined.
• Capacity scheduler
  – Allows Hadoop to simulate, for each user, a separate MapReduce cluster with FIFO scheduling.
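On classic (1.x) Hadoop, switching the JobTracker to the Fair Scheduler is a configuration change along these lines. This is a sketch: verify the property names against your Hadoop version, the allocation-file path is illustrative, and the Fair Scheduler jar must be on the JobTracker's classpath:

```xml
<!-- mapred-site.xml -->
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.FairScheduler</value>
</property>
<property>
  <!-- pools with guaranteed minimum capacity are defined in this file -->
  <name>mapred.fairscheduler.allocation.file</name>
  <value>/etc/hadoop/conf/fair-scheduler.xml</value>
</property>
```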
Task execution
• Speculative execution
  – Job execution time is sensitive to slow-running tasks. Hadoop detects slow-running tasks and launches an equivalent backup task; the output from whichever task finishes first is used.
• Task JVM reuse
  – Tasks run in their own JVMs for isolation. Jobs that have a large number of short-lived tasks, or tasks with lengthy initialization, can benefit from sequential JVM reuse through configuration.
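Both behaviors are controlled through job configuration on classic (1.x) Hadoop. A sketch using the classic property names, which you should verify against your distribution:

```xml
<!-- mapred-site.xml, or set per job -->
<property>
  <!-- launch backup copies of slow-running map tasks -->
  <name>mapred.map.tasks.speculative.execution</name>
  <value>true</value>
</property>
<property>
  <name>mapred.reduce.tasks.speculative.execution</name>
  <value>true</value>
</property>
<property>
  <!-- -1 = reuse the JVM for an unlimited number of tasks per job -->
  <name>mapred.job.reuse.jvm.num.tasks</name>
  <value>-1</value>
</property>
```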
Thank You
Pig, Hive, and JAQL
Agenda
• Overview
• Pig
• Hive
• Jaql
Similarities of Pig, Hive, and Jaql
• All translate their respective high-level languages to MapReduce jobs
• All offer significant reductions in program size over Java
• All provide points of extension to cover gaps in functionality
• All provide interoperability with other languages
• None support random reads/writes or low-latency queries
Comparing Pig, Hive, and Jaql
• Developed by: Pig – Yahoo!; Hive – Facebook; Jaql – IBM
• Language name: Pig Latin; HiveQL; Jaql
• Type of language: Pig – data flow; Hive – declarative (SQL dialect); Jaql – data flow
• Data structures it operates on: Pig – complex; Hive – geared towards structured data; Jaql – loosely structured data, JSON
• Schema optional? Pig – yes; Hive – no, but data can have many schemas; Jaql – yes
• Turing complete? Pig – yes, when extended with Java UDFs; Hive – yes, when extended with Java UDFs; Jaql – yes
Agenda
• Overview
• Pig
• Hive
• Jaql
Pig components
• Two components
  – Language (called Pig Latin)
  – Compiler
• Two execution environments
  – Local (single JVM): pig -x local
  – Distributed (Hadoop cluster): pig -x mapreduce, or simply pig
Running Pig
• Script: pig scriptfile.pig
• Grunt (command line): pig (to launch the command-line tool)
• Embedded: call into Pig from Java
Pig Latin sample code

  #pig
  grunt> records = LOAD 'econ_assist.csv' USING PigStorage(',')
         AS (country:chararray, sum:long);
  grunt> grouped = GROUP records BY country;
  grunt> thesum = FOREACH grouped GENERATE group, SUM(records.sum);
  grunt> DUMP thesum;
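What the Pig script computes, a total per country, can be sketched in plain Python for comparison. The inline data is an illustrative stand-in for the econ_assist.csv file:

```python
import csv
from collections import defaultdict
from io import StringIO

# Illustrative stand-in for econ_assist.csv (country, sum)
data = "Canada,100\nKenya,250\nCanada,50\n"

totals = defaultdict(int)
for country, amount in csv.reader(StringIO(data)):
    totals[country] += int(amount)   # GROUP BY country, SUM(sum)

print(sorted(totals.items()))  # [('Canada', 150), ('Kenya', 250)]
```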
Pig Latin – Statements, operations, and commands
• A Pig Latin program is a series of statements: operations (e.g. LOAD 'input.txt'; … DUMP …) and commands (e.g. ls *.txt)
• Statements build a logical plan, which is compiled into a physical plan and executed
Pig Latin statements
• UDF statements: REGISTER, DEFINE
• Commands
  – Hadoop filesystem (cat, ls, etc.)
  – Hadoop MapReduce (kill)
  – Utility (exec, help, quit, run, set)
• Operators
  – Diagnostic: DESCRIBE, EXPLAIN, ILLUSTRATE
  – Relational: LOAD, STORE, DUMP, FILTER, etc.
Pig Latin – Relational operators
• Loading and storing, e.g. LOAD (into a program), STORE (to disk), DUMP (to the screen)
• Filtering, e.g. FILTER, DISTINCT, FOREACH...GENERATE, STREAM, SAMPLE
• Grouping and joining, e.g. JOIN, COGROUP, GROUP, CROSS
• Sorting, e.g. ORDER, LIMIT
• Combining and splitting, e.g. UNION, SPLIT
Pig Latin – Relations and schema
• The result of a relational operator is a relation
• A relation is a set of tuples
• Relations can be named using an alias (e.g. "x"):

  x = LOAD 'sample.txt' AS (id:int, year:int);
  DUMP x;

• Output is a tuple, e.g. (1,1987)
Pig Latin – Relations and schema
• The structure of a relation is a schema
• Use the DESCRIBE operator to see the schema:

  DESCRIBE x;

• The output is the schema: x: {id: int, year: int}
Pig Latin expressions
• Statements that contain relational operators may also contain expressions
• Kinds of expressions: constant, field, projection, map lookup, cast, arithmetic, conditional, boolean, comparison, functional, flatten
Pig Latin – Data types
• Simple types: int, long, float, double, chararray, bytearray
• Complex types:
  – Tuple – sequence of fields of any type
  – Bag – unordered collection of tuples
  – Map – set of key-value pairs; keys must be chararray
Pig Latin – Function types
• Eval – input: one or more expressions; output: an expression. Example: MAX
• Filter – input: bag or map; output: boolean. Example: IsEmpty
Pig Latin – Function types
• Load – input: data from external storage; output: a relation. Example: PigStorage
• Store – input: a relation; output: data to external storage. Example: PigStorage
Pig Latin – User-defined functions
• Written in Java
• Packaged in a JAR file
• Register the JAR file using the REGISTER statement
• Optionally, alias it with the DEFINE statement
Agenda
• Overview
• Pig
• Hive
• Jaql
Hive architecture
• Clients: CLI, web interface, JDBC/ODBC
• Driver: parser, planner, optimizer – compiles DDL and queries
• Metastore: relational database for metadata
• Execution on Hadoop
Running Hive
• Interactive shell: hive
• Script: hive -f myscript
• Inline: hive -e 'SELECT * FROM mytable'
Hive services
hive --service servicename, where servicename can be:
• hiveserver – server for Thrift, JDBC, ODBC clients
• hwi – web interface
• jar – hadoop jar with the Hive jars on the classpath
• metastore – out-of-process metastore
Hive – Metastore
• Stores Hive metadata
• Configurations:
  – Embedded: in-process metastore, in-process database
  – Local: in-process metastore, out-of-process database
  – Remote: out-of-process metastore, out-of-process database
Hive – Schema-on-read
• Faster loads into the database (simply copy or move)
• Slower queries
• Flexibility – multiple schemas for the same data
Hive – Configuration
Three ways to configure Hive:
• hive-site.xml – fs.default.name, mapred.job.tracker, metastore configuration settings
• hive --hiveconf
• SET command in the Hive shell
Hive Query Language (HiveQL)
• SQL dialect
• Does not support the full SQL-92 specification
• No support for:
  – HAVING clause in SELECT
  – Correlated subqueries
  – Subqueries outside FROM clauses
  – Updatable or materialized views
  – Stored procedures
Sample code

  #hive
  hive> CREATE TABLE foreign_aid (country STRING, sum BIGINT)
        ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
        STORED AS TEXTFILE;
  hive> SHOW TABLES;
  hive> DESCRIBE foreign_aid;
  hive> LOAD DATA INPATH 'econ_assist.csv' OVERWRITE INTO TABLE foreign_aid;
  hive> SELECT * FROM foreign_aid LIMIT 10;
  hive> SELECT country, SUM(sum) FROM foreign_aid GROUP BY country;
Hive Query Language (HiveQL)
• Extensions
  – MySQL-like extensions
  – MapReduce extensions: multi-table insert; MAP, REDUCE, TRANSFORM clauses
• Data types
  – Simple: TINYINT, SMALLINT, INT, BIGINT, FLOAT, DOUBLE, BOOLEAN, STRING
  – Complex: ARRAY, MAP, STRUCT
Hive Query Language (HiveQL)
• Built-in functions
  – SHOW FUNCTIONS
  – DESCRIBE FUNCTION
Hive – User-defined functions
• Written in Java
• Three UDF types:
  – UDF – input: single row; output: single row
  – UDAF – input: multiple rows; output: single row
  – UDTF – input: single row; output: multiple rows
• Register a UDF using ADD JAR
• Create an alias using CREATE TEMPORARY FUNCTION
Agenda
• Overview
• Pig
• Hive
• Jaql
Jaql architecture
• Interactive shell / applications
• Script → compiler / parser / rewriter
• I/O layer over a storage layer:
  – File systems (HDFS, GPFS, local)
  – Databases (DBMS, HBase)
  – Streams (web, pipes)
Jaql data model: JSON
• JSON = JavaScript Object Notation
• Flexible (schema is optional)
• Powerful modeling for semi-structured data
• Popular exchange format
JSON example

  [
    {ACCT_NUM:18, AUTH_DATE:"2011-01-29", AUTH_AMT:"111.11",
     ZIP:98765, MERCH_NAME:"Acme"},
    {ACCT_NUM:19, AUTH_DATE:"2011-01-29", AUTH_AMT:"222.22",
     ZIP:98765, MERCH_NAME:"Exxme", NICKNAME:"Xyz"},
    {ACCT_NUM:20, AUTH_DATE:"2011-01-30", AUTH_AMT:"3.33",
     ZIP:12345, MERCH_NAME:"Acme",
     ROUTE:["68.86.85.188","64.215.26.111"]},
    …
  ]
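Because the records are JSON, they can be processed directly with any JSON library. A sketch in Python using two of the illustrative records above (keys are quoted here for strict JSON parsing; the values come from the slide, not real data):

```python
import json

records = json.loads("""[
  {"ACCT_NUM": 18, "AUTH_DATE": "2011-01-29", "AUTH_AMT": "111.11",
   "ZIP": 98765, "MERCH_NAME": "Acme"},
  {"ACCT_NUM": 20, "AUTH_DATE": "2011-01-30", "AUTH_AMT": "3.33",
   "ZIP": 12345, "MERCH_NAME": "Acme",
   "ROUTE": ["68.86.85.188", "64.215.26.111"]}
]""")

# Total authorization amount for merchant "Acme"
total = sum(float(r["AUTH_AMT"]) for r in records
            if r["MERCH_NAME"] == "Acme")
print(round(total, 2))  # 114.44
```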
Running Jaql
• Jaql shell
  – Interactive, e.g. jaqlshell
  – Batch, e.g. jaqlshell -b myscript.jaql
  – Inline, e.g. jaqlshell -e jaqlstatement
• Modes
  – Cluster, e.g. jaqlshell -c
  – Minicluster, e.g. jaqlshell
Jaql query language
• A query is a pipeline: source -> operator -> operator -> sink
• Sources and sinks, e.g. copy data from a local file to a new file on HDFS:

  read(file("input.json")) -> write(hdfs("output"))

• Core operators: Filter, Transform, Expand, Group, Join, Union, Tee, Sort, Top
Jaql query language
• Variables
  – The equals operator (=) binds source output to a variable,
    e.g. $tweets = read(hdfs("twitterfeed"))
• Pipes, streams, and consumers
  – The pipe operator (->) streams data to a consumer; the pipe expects an array as input,
    e.g. $tweets -> filter $.from_src == 'tweetdeck';
  – $ is an implicit variable referencing the current array value
Jaql query language
• Categories of built-in functions: system, core, hadoop, io, array, index, schema, xml, regex, binary, date, nil, agg, number, string, function, random, record
Jaql – Data storage
• Data store examples: Amazon S3, HBase, DB2 (JDBC), local FS, HDFS, HTTP
• Data format examples: JSON, Avro, CSV, XML
Jaql sample code

  #jaqlshell -c
  jaql> $foreignaid = read(del("econ_assist.csv",
          {schema: schema {country: string, sum: long}}));
  jaql> $foreignaid -> group by $country = ($.country)
          into {$country.country, sum($[*].sum)};
Hadoop core lab – Part 3
BigDataUniversity.com
Acknowledgements and Disclaimers

Availability. References in this presentation to IBM products, programs, or services do not imply that they will be available in all countries in which IBM operates.

The workshops, sessions and materials have been prepared by IBM or the session speakers and reflect their own views. They are provided for informational purposes only, and are neither intended to, nor shall have the effect of being, legal or other guidance or advice to any participant. While efforts were made to verify the completeness and accuracy of the information contained in this presentation, it is provided AS-IS without warranty of any kind, express or implied. IBM shall not be responsible for any damages arising out of the use of, or otherwise related to, this presentation or any other materials. Nothing contained in this presentation is intended to, nor shall have the effect of, creating any warranties or representations from IBM or its suppliers or licensors, or altering the terms and conditions of the applicable license agreement governing the use of IBM software.

All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer. Nothing contained in these materials is intended to, nor shall have the effect of, stating or implying that any activities undertaken by you will result in any specific sales, revenue growth or other results.

© Copyright IBM Corporation 2013. All rights reserved.
• U.S. Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
• IBM, the IBM logo, ibm.com, InfoSphere and BigInsights, Streams, and DB2 are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml
• Other company, product, or service names may be trademarks or service marks of others.
Communities
• On-line communities, user groups, technical forums, blogs, social networks, and more. Find the community that interests you:
  – Information Management: bit.ly/InfoMgmtCommunity
  – Business Analytics: bit.ly/AnalyticsCommunity
  – Enterprise Content Management: bit.ly/ECMCommunity
• IBM Champions
  – Recognizing individuals who have made the most outstanding contributions to the Information Management, Business Analytics, and Enterprise Content Management communities
  – ibm.com/champion
Thank You

Your feedback is important!
• Access the Conference Agenda Builder to complete your session surveys:
  – Any web or mobile browser at http://iod13surveys.com/surveys.html
  – Any Agenda Builder kiosk onsite