Hadoop
Mayuri Agarwal
Data Management
Big Data: What does it mean?
Velocity:
Often time-sensitive, big data must be used as it streams into the enterprise in order to maximize its value to the business.
Batch, near-time, real-time, streams
Volume:
Big data comes in one size: large. Enterprises are awash with data, easily amassing terabytes and even petabytes of information.
TB, records, transactions, tables, files
Variety:
Big data extends beyond structured data to include semi-structured and unstructured data of all varieties: text, audio, video, clickstreams, log files and more.
Structured, unstructured, semi-structured
Veracity:
The quality and provenance of received data.
Good, undefined, bad; inconsistency, incompleteness, ambiguity
Value
Big Data
[Chart: roughly 90% of the worldwide data in existence was created in the last two years; only 10% dates from all of history before that.]
What is Hadoop?
A software project that enables the distributed processing of large data sets across clusters of commodity servers.
Works with structured and unstructured data.
Open source software + commodity hardware = IT cost reduction.
It is designed to scale up from a single server to thousands of machines.
Very high degree of fault tolerance: the software detects and handles failures at the application layer.
The origin of the name Hadoop…
The name Hadoop is not an acronym; it’s a
made-up name. The project’s creator, Doug
Cutting, explains how the name came about:
The name my kid gave a stuffed yellow
elephant. Short, relatively easy to spell and
pronounce, meaningless, and not used
elsewhere: those are my naming criteria.
Kids are good at generating such. Googol is
a kid’s term.
Hadoop Sub-projects
 HDFS
 MapReduce
HDFS: Hadoop Distributed File System
 Distributed, scalable, and portable file system.
 A Hadoop cluster typically has a single NameNode; a cluster of DataNodes forms the HDFS cluster.
 Asynchronous replication.
 Data is divided into 64 MB (default) or 128 MB blocks, and each block is replicated 3 times (default).
 The NameNode holds the file system metadata.
 Files are broken up into blocks and spread over the DataNodes.
HDFS: Read & Write
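From the client side, the write and read paths look like the following minimal sketch against the standard HDFS Java API (org.apache.hadoop.fs). The NameNode address hdfs://namenode:8020 and the /user/demo path are hypothetical examples; in a real deployment the address normally comes from core-site.xml.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadWrite {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical NameNode address; usually supplied by core-site.xml.
        conf.set("fs.defaultFS", "hdfs://namenode:8020");
        FileSystem fs = FileSystem.get(conf);

        // Write: the client asks the NameNode for target DataNodes,
        // then streams the block data to them directly.
        Path file = new Path("/user/demo/hello.txt");
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.writeUTF("hello hdfs");
        }

        // Read: the NameNode returns block locations; the client
        // reads each block from the nearest DataNode replica.
        try (FSDataInputStream in = fs.open(file)) {
            System.out.println(in.readUTF());
        }
        fs.close();
    }
}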
MapReduce
 Software framework for distributed computation.
 Pipeline: Input | map() | copy/sort | reduce() | Output (see the word count sketch below).
 The JobTracker schedules and manages jobs.
 The TaskTracker executes the individual map() and reduce() tasks on each cluster node.
Example: MapReduce
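The classic word count job is a concrete instance of this pipeline: map() emits (word, 1) pairs, the framework copies and sorts them by key, and reduce() sums the counts. Below is a minimal sketch using the standard Hadoop MapReduce Java API; the input and output HDFS paths are hypothetical command-line arguments.

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // Map: emit (word, 1) for every word in the input split.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        public void map(Object key, Text value, Context ctx)
                throws IOException, InterruptedException {
            StringTokenizer it = new StringTokenizer(value.toString());
            while (it.hasMoreTokens()) {
                word.set(it.nextToken());
                ctx.write(word, ONE);
            }
        }
    }

    // Reduce: the framework has already copied and sorted by key,
    // so each call sees one word together with all of its counts.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            ctx.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);  // local pre-aggregation
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // input dir
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output dir
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}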
Master-Slave Model
Hadoop Ecosystem
HBase
 HBase is an open-source, non-relational, distributed database.
 A key-value store: each value is identified by its key.
 Both keys and values are byte arrays.
 Values are stored in key order, so access by key is very fast.
 Users create tables in HBase, but an HBase table has no fixed schema.
 Very good for sparse data.
 Takes a lot of disk space.
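A minimal sketch of this key-value access pattern with the standard HBase Java client: keys and values travel as byte arrays, and a get by key is a direct lookup. The ZooKeeper quorum host, the users table, and its info column family are hypothetical, and the table is assumed to exist already.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseKeyValueDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Hypothetical ZooKeeper quorum; ZooKeeper locates the cluster.
        conf.set("hbase.zookeeper.quorum", "zk-host");

        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("users"))) {

            // Write: both key and value are byte arrays.
            Put put = new Put(Bytes.toBytes("row-42"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"),
                          Bytes.toBytes("Mayuri"));
            table.put(put);

            // Read: lookup by key is a fast, random access.
            Result result = table.get(new Get(Bytes.toBytes("row-42")));
            byte[] name = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
            System.out.println(Bytes.toString(name));
        }
    }
}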
HBase Architecture
 Master: responsible for coordinating the region servers.
 Region server: serves data for reads and writes.
 ZooKeeper: manages the HBase cluster.
 Provides low latency and random access to data.
Hive
 A system for managing and querying structured data, built on Hadoop.
 SQL-like query language called HQL.
 Main purpose is analysis and ad hoc querying.
 Database/table/partition DDL operations.
 Not for: small data sets, low-latency queries, OLTP.
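Hive is commonly reached through HiveServer2 with the standard Hive JDBC driver (org.apache.hive.jdbc.HiveDriver on the classpath). The sketch below assumes a hypothetical server address and a hypothetical page_views table; it issues one DDL statement and one ad hoc HQL aggregation, which Hive compiles into MapReduce jobs behind the scenes.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQueryDemo {
    public static void main(String[] args) throws Exception {
        // Older Hive JDBC drivers need explicit registration.
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // Hypothetical HiveServer2 address; no password in this sketch.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hive-server:10000/default", "user", "");
             Statement stmt = conn.createStatement()) {

            // DDL: define a partitioned table over delimited files in HDFS.
            stmt.execute("CREATE TABLE IF NOT EXISTS page_views "
                    + "(user_id STRING, url STRING) "
                    + "PARTITIONED BY (dt STRING) "
                    + "ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t'");

            // HQL: an ad hoc aggregation, run as MapReduce under the hood.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT url, COUNT(*) AS hits FROM page_views "
                    + "WHERE dt = '2014-01-01' GROUP BY url")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
                }
            }
        }
    }
}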
Hadoop-Hive Architecture
HBase-Hive configuration
 HBase as ETL data sink
 HBase as data source
 Low-latency warehouse
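One common way to realize these configurations is a Hive external table mapped onto an HBase table through Hive's HBase storage handler, so that HQL can write into HBase (ETL sink) or read from it (data source). This sketch reuses the JDBC setup from the previous example; the hbase_users/users table names and the info:name column mapping are hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveOverHBase {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hive-server:10000/default", "user", "");
             Statement stmt = conn.createStatement()) {
            // Map the hypothetical HBase table "users" (column family "info")
            // into Hive; the row key becomes column "key", info:name becomes "name".
            stmt.execute("CREATE EXTERNAL TABLE IF NOT EXISTS hbase_users "
                    + "(key STRING, name STRING) "
                    + "STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' "
                    + "WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,info:name') "
                    + "TBLPROPERTIES ('hbase.table.name' = 'users')");
        }
    }
}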
Hive and MySQL Database Structure
Hadoop Limitations
 Not a high-speed SQL database.
 It is not a particularly simple technology.
 Hadoop is not easy to connect to legacy systems.
 Hadoop is not a replacement for traditional data warehouses; it is an adjunct to them.
 Ordinary DBAs will need to learn new skills before they can adopt Hadoop tools.
 The architecture around the data (the way you store, de-normalize, ingest, and extract it) is different in Hadoop.
 Linux and Java skills are critical for making a Hadoop environment a reality.
Hadoop’s Capability
 Hadoop is a super-powerful environment that can transform your
understanding of data.
 Hadoop can store vast amounts of data.
 Hadoop can run queries on huge data sets.
 You can archive data on Hadoop and still query it.
 Hadoop allows you to ingest data at incredible speeds and analyze it and
report on it in near real-time.
 Hadoop massively reduces the latency of data.
Hadoop: A hot skill to acquire on the IT job circuit
 The market for data technologies, such as databases, is a multi-billion dollar industry.
 Many start-ups are working on technology extensions to Hadoop to make it both analytical and transactional. That would be big.
 Major companies have a big data strategy and want to build their businesses on top of it.
 Google, whose MapReduce and GFS papers inspired Hadoop, has already moved on, suggesting that within a decade either the Hadoop framework will have to be developed beyond all recognition or something newer will supplant it.
 Every major internet company, be it Google, Twitter, LinkedIn or Facebook, uses some form of Hadoop.
mayuri.enggheads@gmail.com