Hadoop is an open-source project of the Apache Software Foundation. It is an efficient way to store and process very large datasets across clusters of commodity machines.

  • Hadoop is only one project under the Apache Foundation. According to IDC, the amount of digital information produced in 2012 will be ten times that produced in 2006: 1,800 exabytes. The majority of this data will be "unstructured" – complex data poorly suited to management by structured storage systems like relational databases.
  • 1 Petabyte [where most SME corporations are?]; 1 Exabyte [where most large corporations are?]; 1 Zettabyte [where leaders like Facebook and Google are]
  • Flexible: data from multiple sources can be joined and aggregated in arbitrary ways, enabling deeper analyses than any one system can provide. 80% of the world's data is unstructured, and most businesses don't even attempt to use this data to their advantage. Imagine if you had a way to analyze that data.
  • HDFS assumes nodes will fail, so it achieves reliability by replicating data across multiple nodes. MapReduce refers to two separate and distinct tasks that Hadoop programs perform. The first is the map job, which takes a set of data and converts it into another set of data, where individual elements are broken down into tuples (key/value pairs). The reduce job takes the output from a map as input and combines those data tuples into a smaller set of tuples. As the sequence of the name MapReduce implies, the reduce job is always performed after the map job. MapReduce was first presented to the world in a 2004 white paper by Google that laid out its salient insights; Yahoo re-implemented the technique and open-sourced it via the Apache Foundation. As an analogy, you can think of map and reduce tasks as the way a census was conducted in Roman times: the census bureau would dispatch its people to each city in the empire, each census taker would count the number of people in their city, and the results would be returned to the capital, where they would be reduced to a single count (the sum over all cities) to determine the overall population of the empire. This mapping of people to cities in parallel, then combining (reducing) the results, is much more efficient than sending a single person to count every person in the empire serially. Large volumes of complex data can hide important insights. Are there buying patterns in point-of-sale data that can forecast demand for products at particular stores? Do user logs from a website, or calling records in a mobile network, contain information about relationships among individual customers? Companies that can extract facts like these from huge volumes of data can better control processes and costs, better predict demand, and build better products.
  • HDFS: Hadoop Distributed File System. MapReduce: parallel data-processing framework. Hadoop Common: a set of utilities that support the Hadoop subprojects. HBase: Hadoop database for random read/write access. Hive: SQL-like queries and tables on large datasets. Pig: data flow language and compiler. Oozie: workflow for interdependent Hadoop jobs. Sqoop: integration of databases and data warehouses with Hadoop. Flume: configurable streaming data collection. ZooKeeper: coordination service for distributed applications. Hue: user interface framework and SDK for visual Hadoop applications.
  • In the very simple example shown, any two servers can fail, and the entire file will still be available. HDFS notices when a block or a node is lost, and creates a new copy of missing data from the replicas it manages. Because the cluster stores several copies of every block, more clients can read them at the same time without creating bottlenecks.
  • Each server runs the analysis on its own block of the file; results are collated and digested into a single result after each piece has been analyzed. Running the analysis on the nodes that actually store the data delivers much better performance than reading the data over the network from a single centralized server. The framework monitors jobs during execution and will restart work lost to node failure if necessary; in fact, if a particular node is running very slowly, it will restart that node's work on another server holding a copy of the data.
  • All of the above companies use Hadoop for a variety of tasks, such as marketing, advertising, and sentiment and risk analysis. IBM used the software as the engine for its Watson computer, which competed with the champions of the TV game show Jeopardy!.
  • Foursquare is aimed at letting your friends in almost every country know where you are, and at figuring out where they are. As a platform, it is now aware of 25+ million venues worldwide, each of which can be described by unique signals about who is coming to these places, when, and for how long. To reward and incentivize users, Foursquare allows frequent users to collect points, prize "badges," and eventually coupons for check-ins.
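The map and reduce phases described in the notes above can be sketched in plain Python. This is a minimal, single-machine illustration of the word-count idea, not Hadoop's actual API; the function names and sample documents are our own.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(document):
    # Map: break the input into (key, value) tuples -- here, (word, 1)
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(pairs):
    # Shuffle/sort: group the tuples by key, as the framework does between phases
    grouped = groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0))
    # Reduce: combine each group into a smaller set of tuples
    for word, group in grouped:
        yield (word, sum(count for _, count in group))

# Each "document" would normally be a block processed on a different node
docs = ["the quick brown fox", "the lazy dog", "the fox"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = dict(reduce_phase(pairs))
print(counts["the"])  # 3
```

In a real cluster the map calls run in parallel on the nodes holding each block, and only the intermediate tuples travel over the network to the reducers.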
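The replication guarantee mentioned above ("any two servers can fail, and the entire file will still be available") can be checked with a small simulation. A hypothetical sketch assuming the 3x replication factor described in the slides; the server and block names are invented, and this is not HDFS code.

```python
import itertools

REPLICATION = 3
servers = ["s1", "s2", "s3", "s4", "s5"]
blocks = ["blk_1", "blk_2", "blk_3"]  # one file, split into three blocks

# Place each block on REPLICATION distinct servers, round-robin style
rotation = itertools.cycle(servers)
placement = {block: [next(rotation) for _ in range(REPLICATION)]
             for block in blocks}

def file_available(failed):
    # The file survives as long as every block has at least one live replica
    return all(any(s not in failed for s in replicas)
               for replicas in placement.values())

# Any two servers can fail and the whole file is still readable
assert all(file_available(set(pair))
           for pair in itertools.combinations(servers, 2))
```

The real NameNode also notices under-replicated blocks after a failure and schedules new copies, restoring the replication factor in the background.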
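The restart-on-failure behaviour described in the notes can be sketched as simple scheduler retry logic: run each task on a node holding the data, and fall back to another replica holder if that node dies. A hypothetical illustration; the names and data are invented.

```python
replica_nodes = {"blk_1": ["s1", "s2", "s3"]}  # block -> nodes holding a copy
failed_nodes = {"s1"}                          # nodes that die mid-job

def run_task(block):
    # Prefer data-local execution: try each node that already stores the block
    for node in replica_nodes[block]:
        if node in failed_nodes:
            continue  # work on this node is lost; restart on the next replica
        return f"{block} analyzed on {node}"
    raise RuntimeError(f"no live replica for {block}")

print(run_task("blk_1"))  # blk_1 analyzed on s2
```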
  • Hadoop

    1. Structured, Unstructured and Complex Data Management. Amit Chaudhary (11MCA03), Karthik Iyer (11MCA05)
    2. Hadoop: What is it? How is it structured? Is this unknown thing right for me? Where is it used?
    3. Any idea? (Idea SIM card)
    4. What is Hadoop? It is an open-source project by the Apache Foundation to handle large-scale data processing. It was inspired by Google's MapReduce and Google File System (GFS) papers. It was originally conceived by Doug Cutting, who named it after his son's toy elephant.
    5. Large Data Means? 1000 Kilobytes = 1 Megabyte; 1000 Megabytes = 1 Gigabyte; 1000 Gigabytes = 1 Terabyte; 1000 Terabytes = 1 Petabyte; 1000 Petabytes = 1 Exabyte; 1000 Exabytes = 1 Zettabyte; 1000 Zettabytes = 1 Yottabyte; 1000 Yottabytes = 1 Brontobyte; 1000 Brontobytes = 1 Geopbyte
    6. So what's the big deal? Scalable: new nodes can be added as needed, without changing data formats. Flexible: it is schema-less and can absorb any type of data, structured or not, from any number of sources. Fault tolerant: the system redirects work to another location if a node fails.
    7. Hadoop = HDFS + MapReduce. HDFS: for storing massive datasets on low-cost storage. MapReduce: the algorithm on which Google built its empire.
    8. HDFS is a fault-tolerant storage system able to store huge amounts of information. It creates clusters of machines and coordinates work among them; if one machine fails, it continues to operate the cluster without losing data or interrupting work, by shifting work to the remaining machines in the cluster.
    9. HDFS manages storage on the cluster by breaking incoming files into pieces, called blocks, and storing each block redundantly across the pool of servers. It stores three complete copies of each file by copying each piece to three different servers.
    10. How does this work?
    11. How does this work?
    12. Which companies are using Hadoop? LinkedIn, Walt Disney, Wal-Mart, General Electric, Nokia, Bank of America, Foursquare
    13. Hadoop at Foursquare. Foursquare: Mobile + Location + Social Networking
    14. Is this unknown thing right for me?
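The decimal ladder of storage units on slide 5 can be generated programmatically. A quick sketch using the standard SI-style factor of 1000 per step (the informal terms brontobyte and geopbyte are omitted since they are not standardized):

```python
units = ["Kilobyte", "Megabyte", "Gigabyte", "Terabyte",
         "Petabyte", "Exabyte", "Zettabyte", "Yottabyte"]

# Each step up the ladder multiplies by 1000 (decimal, SI-style prefixes)
for power, unit in enumerate(units, start=1):
    print(f"1 {unit} = 10^{3 * power} bytes = {1000 ** power:,} bytes")

assert 1000 ** 5 == 10 ** 15  # 1 Petabyte, where the slides place most SMEs
```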