Hadoop Technology
Presented by:
Atul Kushwaha
B.Tech, 3rd Year (IT)
1109113033
Introduction
Big Data:
• Big data is a term used to describe the voluminous amounts of unstructured and semi-structured data a company creates.
• It is data that would take too much time and cost too much money to load into a relational database for analysis.
• Big data does not refer to any specific quantity; the term is typically used when speaking about petabytes and exabytes of data.
Facts
• The New York Stock Exchange generates about one terabyte of new trade data per day.
• Facebook hosts approximately 10 billion photos, taking up one petabyte of storage.
• The Internet Archive stores around 2 petabytes of data and is growing at a rate of 20 terabytes per month.
• The Large Hadron Collider near Geneva, Switzerland, produces about 15 petabytes of data per year.
So What Is The Problem?
 Disk transfer speed is around 100 MB/s.
 A standard disk holds 1 terabyte.
 Time to read the entire disk = 1,000,000 MB ÷ 100 MB/s = 10,000 seconds, or roughly 3 hours!
 Faster processors do not help much, because
• network bandwidth is now more of a limiting factor, and
• the physical limits of processor chips have been reached.
So What do We Do?
• The obvious solution is to use multiple processors to solve the same problem by fragmenting it into pieces.
• Imagine we had 100 drives, each holding one hundredth of the data. Working in parallel, we could read all of it in under two minutes, as the sketch below illustrates.
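A back-of-the-envelope check of those two claims, written as a small stand-alone Java program. The figures (100 MB/s, 1 TB, 100 drives) are the ones quoted on these slides; nothing here is Hadoop-specific:

    public class ReadTime {
        public static void main(String[] args) {
            double diskMB = 1_000_000;          // a 1 TB disk, expressed in MB
            double mbPerSec = 100;              // sequential transfer speed from the slide
            double oneDisk = diskMB / mbPerSec; // 10,000 s, about 3 hours
            double hundred = oneDisk / 100;     // same data striped across 100 drives
            System.out.printf("1 disk:     %.0f s (~%.1f h)%n", oneDisk, oneDisk / 3600);
            System.out.printf("100 drives: %.0f s (~%.1f min)%n", hundred, hundred / 60);
        }
    }

The 100-drive figure comes out to 100 seconds, matching the "under two minutes" claim.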
Distributed Computing vs. Parallelization
 Parallelization: multiple processors or CPUs in a single machine.
 Distributed computing: multiple computers connected via a network.
Problems In Distributed Computing
• Hardware failure: as soon as we start using many pieces of hardware, the chance that one will fail is fairly high.
• Combining the data after analysis: most analysis tasks need to combine the data in some way; data read from one disk may need to be combined with data from any of the other 99 disks.
To The Rescue!
Apache Hadoop is a framework for running applications on large clusters built of commodity hardware.
A common way of avoiding data loss is replication: the system keeps redundant copies of the data so that in the event of a failure another copy is available. The Hadoop Distributed Filesystem (HDFS) takes care of this problem.
The second problem is solved by a simple programming model: MapReduce. Hadoop is the popular open-source implementation of MapReduce, a powerful tool designed for deep analysis and transformation of very large data sets.
What is Hadoop?
• It is an open-source project by the Apache Foundation for large-scale data processing.
• It was inspired by Google's MapReduce and Google File System (GFS) papers of 2003 and 2004.
• It was originally conceived by Doug Cutting in 2005 and first used by Yahoo! in 2006.
• Incidentally, it is named after his son's toy elephant.
• At its core it is a distributed filesystem plus a processing framework, written in Java.
Hadoop Approach to Distributed Computing
 A theoretical 1000-CPU machine would cost a very large amount of money, far more than 1,000 single-CPU machines.
 Hadoop instead ties these smaller, more reasonably priced machines together into a single cost-effective compute cluster.
 Hadoop provides a simplified programming model that lets the user quickly write and test distributed programs, and it distributes data and work across the machines efficiently and automatically, exploiting the underlying parallelism of the CPU cores.
Hadoop Components
 HDFS (storage): self-healing, high-bandwidth clustered storage.
 MapReduce (processing): fault-tolerant distributed processing.
Hadoop MapReduce
 MapReduce is a programming model and an associated implementation for processing and generating large data sets.
 Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines.
The Programming Model Of MapReduce
 Map, written by the user, takes an input pair and produces a set of intermediate key/value pairs. The MapReduce library groups together all intermediate values associated with the same intermediate key I and passes them to the Reduce function.
 The Reduce function, also written by the user, accepts an intermediate key I and the set of values for that key. It merges these values together to form a possibly smaller set of values. The classic word-count job sketched below is the standard example.
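As a concrete illustration, here is a minimal sketch of the canonical word-count job written against Hadoop's Java MapReduce API. Class and file names are illustrative; the input and output HDFS directories are passed on the command line:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Map: for each input line, emit the intermediate pair (word, 1)
        // for every word the line contains.
        public static class TokenizerMapper
                extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);   // intermediate key/value pair
                }
            }
        }

        // Reduce: the framework has grouped all values for one word;
        // sum them to get the word's total count.
        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);     // final (word, count) pair
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input dir
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output dir
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Such a job would typically be packaged as a JAR and launched with something like "hadoop jar wordcount.jar WordCount /input /output"; the framework handles splitting the input, scheduling map and reduce tasks across the cluster, and re-running any task that fails.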
Hadoop Distributed Filesystem (HDFS)
 Filesystems that manage storage across a network of machines are called distributed filesystems.
 Hadoop comes with a distributed filesystem called HDFS, the Hadoop Distributed Filesystem.
 HDFS is a distributed, scalable, and portable filesystem, written in Java for the Hadoop framework and designed to hold very large amounts of data (terabytes or even petabytes) while providing high-throughput access to that information.
HDFS
 It manages storage on the cluster by breaking incoming files into pieces, called blocks.
 It stores each block redundantly across the pool of servers.
 By default it keeps three complete copies of each file, by writing every block to three different servers, as in the sketch below.
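From client code, all of this is hidden behind an ordinary filesystem API. A minimal sketch using Hadoop's Java FileSystem API, assuming a configured Hadoop client and a hypothetical HDFS path; the splitting into blocks and the replication happen transparently on write:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWrite {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration(); // reads core-site.xml / hdfs-site.xml
            FileSystem fs = FileSystem.get(conf);     // handle to the cluster filesystem
            Path file = new Path("/demo/hello.txt");  // hypothetical HDFS path
            try (FSDataOutputStream out = fs.create(file)) {
                out.writeUTF("hello, HDFS");          // HDFS splits the stream into blocks
            }                                         // and replicates each block (3x by default)
        }
    }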
Namenodes and Datanodes
 An HDFS cluster has two types of node operating in a master-slave pattern: a namenode (the master) and a number of datanodes (the slaves).
 The namenode manages the filesystem namespace. It maintains the filesystem tree and the metadata for all the files and directories in the tree.
 Datanodes are the workhorses of the filesystem. They manage the storage attached to the nodes they run on.
 HDFS exposes a filesystem namespace and allows user data to be stored in files.
 Internally, a file is split into one or more blocks, and these blocks are stored on a set of datanodes; the sketch below shows how a client can see that mapping.
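The block-to-datanode mapping that the namenode maintains can be inspected from client code. A small sketch, again assuming a configured client, with the file to inspect passed on the command line:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockReport {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path file = new Path(args[0]);               // file to inspect
            FileStatus status = fs.getFileStatus(file);  // metadata held by the namenode
            BlockLocation[] blocks =
                    fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation b : blocks) {
                // each block lists the datanodes holding one of its replicas
                System.out.println(b.getOffset() + " -> "
                        + String.join(",", b.getHosts()));
            }
        }
    }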
Who Uses Hadoop?
Advantages Over RDBMS
• Scalable: it can reliably store and process petabytes.
• Economical: it distributes the data and processing across clusters of commonly available computers (in the thousands).
• Efficient: by distributing the data, it can process it in parallel on the nodes where the data is located.
• Reliable: it automatically maintains multiple copies of data and automatically redeploys computing tasks after failures.
Conclusion
 Major companies such as Facebook, Amazon, and Yahoo! are adopting Hadoop, and in future there will be many more names on the list.
 This technology has a bright future: the need to handle data grows day by day, and security issues are also a major concern.
 Hadoop is therefore a well-suited approach for handling large data in a smart way.
Questions???
‘Thank You’
Speaker Notes
  • Good morning. I think everybody has heard of Hadoop, because a few days ago an institutional lecture on it was held in our college, though nobody attended it. So I will give you a brief introduction to Hadoop: what it is, why it is needed, and its features and applications. Basically, Hadoop deals with big data.
  • Apache Hadoop is an open-source software framework for storage and large-scale processing of data sets on clusters of commodity hardware. Hadoop is an Apache top-level project being built and used by a global community of contributors and users. It is licensed under the Apache License 2.0.
  • Hadoop is designed to efficiently process large volumes of information by connecting many commodity computers together to work in parallel. A 1000-CPU single machine (i.e., a supercomputer with vast memory and storage) would cost a lot. Hadoop therefore parallelizes the computation by tying smaller and more reasonably priced machines together into a single cost-effective compute cluster.
  • HDFS: self-healing clustered storage. MapReduce: distributed processing.
  • MapReduce is a programming model for processing large data sets with a parallel, distributed algorithm on a cluster.
  • First, explain what a distributed file system is: a system that manages storage across a network of machines.
  • Replication in computing involves sharing information so as to ensure consistency between redundant resources, such as software or hardware components, to improve reliability, fault-tolerance, or accessibility.
  • Rackspace for log processing. Netflix for recommendations. LinkedIn for social graph. SU for page recommendations.