Hadoop World 2011: Hadoop Network and Compute Architecture Considerations - Jacob Rapp, Cisco

Hadoop is a popular framework for Web 2.0 and enterprise businesses that are challenged to store, process, and analyze large amounts of data as part of their business requirements. Hadoop's framework brings a new set of challenges related to the compute infrastructure and underlying network architectures. This session reviews the state of Hadoop enterprise environments, discusses fundamental and advanced Hadoop concepts, and reviews benchmarking analysis and projections for big data growth as related to data center and cluster designs. The session also discusses network architecture tradeoffs and the advantages of close integration between compute and networking.

1. Cisco and Big Data – Hadoop World 2011
2. Unified Fabric optimized for Big Data infrastructures, with seamless integration with current data models.
   (Diagram labels: Traditional Database (RDBMS); Storage (SAN/NAS); "Big Data" Store and Analyze; "Big Data" Real-Time Capture, Read & Update (NoSQL); Application (Virtualized, Bare-Metal, Cloud); data sources: Sensor Data, Logs, Social Media, Click Streams, Mobility Trends, Event Data; Cisco Unified Fabric.)
4. The Setup: lab environment overview
   - 128 nodes of UCS C200 M2, 1RU rack-mount servers
     - 4x2TB drives, dual Xeon 5670 @ 2.93GHz, 96GB RAM
   - 16 nodes of UCS C210 M2, 2RU rack-mount servers
     - 16xSFF drives, dual Xeon 5670 @ 2.93GHz, 96GB RAM
5. Unified Fabric L2 and/or L3 for SAN/NAS, RDBMS, UCS, and Big Data; L2/L3 Top of Rack infrastructure.
   Note: Two topologies were tested to examine the benefits of providing an integrated solution that can support multiple technologies, such as traditional RDBMS, SAN/NAS, virtualization, etc.
   (Topology diagrams: Nexus 7000 (N7K) aggregation with Nexus 3000 (N3k) Top of Rack switches, and Nexus 7000 aggregation with Nexus 5000 (N5k) and Nexus 2000 (N2k) Fabric Extenders connecting UCS servers.)
6. Factors considered:
   - Cluster size: number of data nodes
   - Data model: MapReduce functions
   - Input data size: total starting dataset
   - Characteristics of the data node: I/O, CPU, memory, etc.
   - Data locality in HDFS: the ability to process data where it is already located
   - Background activity: number of jobs running, type of jobs, importing, exporting
   - Networking characteristics: availability, buffering, 10GE vs. 1GE, oversubscription, latency
7. A general characteristic of an optimally configured cluster is the ability to decrease job completion times by scaling out the nodes. Test results from an ETL-like workload (Yahoo Terasort) using a 1TB data set.
8. The complexity of the functions used in Map and/or Reduce has a large impact on the type of job and its network traffic.
   Note:
   - Yahoo Terasort has more balanced Map and Reduce functions and the same size of intermediate and final data (1TB input, shuffle, and output).
   - Shakespeare WordCount does most of its processing in the Map functions, with smaller intermediate data and even smaller final data (1TB input, 10MB shuffle, 1MB output).
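To make the Map-heavy vs. Reduce-heavy distinction concrete, below is a minimal sketch along the lines of the stock Hadoop WordCount example (not the code used in these tests): almost all of the work happens in the mappers, and only small (word, count) pairs reach the shuffle and reduce phases, which is why a WordCount-style job moves far less data across the network than a Terasort.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: tokenize each line and emit (word, 1). This is where the bulk
  // of the CPU work sits for a WordCount-style (BI-like) job.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce phase: sum the partial counts. The data reaching this stage is a
  // small fraction of the input, so shuffle traffic on the network is light.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    // Using the reducer as a combiner collapses counts on the map side,
    // shrinking shuffle traffic even further.
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```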
9. Network graph of all traffic received on a single node (80-node run). Annotations mark Maps Start, Reducers Start, Maps Finish, and Job Complete; the red line is the total amount of traffic received by hpc064, and the other symbols represent individual nodes sending traffic to HPC064.
   Note: Shortly after the reducers start, map tasks are finishing and data is being shuffled to the reducers. Once the maps completely finish, the network is no longer used, because the reducers have all the data they need to finish the job.
10. Network graph of all traffic received on a single node (80-node run), with output data replication enabled.
   - Replication of 3 enabled (1 copy stored locally, 2 stored remotely)
   - Each reduce output is now replicated instead of just stored locally
   Note: If output replication is enabled, additional copies must be stored at the end of the Terasort. For a 1TB sort, 2TB will need to be replicated across the network.
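As a hedged illustration of the setting involved (a sketch, not the deck's actual job code; the class name is made up): in Hadoop, the HDFS replication applied to files a job writes, including reduce output, is taken from the dfs.replication property in the job's configuration, which is why a 1TB reduce output with replication 3 generates roughly 2TB of additional cross-network copy traffic.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Hypothetical class name; the dfs.replication property is the point.
public class OutputReplicationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Files written by this job (including reduce output) are created with
    // 3 replicas: 1 stored locally on the writing DataNode, 2 on remote
    // nodes, so the extra copies traverse the network at the end of the job.
    conf.setInt("dfs.replication", 3);
    Job job = new Job(conf, "terasort-like-job");
    // ... mapper/reducer/input/output setup as usual, then
    // job.waitForCompletion(true) would submit the job ...
  }
}
```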
11. Network graph of all traffic received on a single node (80-node run). Annotations mark Maps Start, Reducers Start, Maps Finish, and Job Complete; the red line is the total amount of traffic received by hpc064, and the other symbols represent nodes sending traffic to HPC064.
   Note: Due to the combination of the length of the Map phase and the reduced data set being shuffled, the network is utilized throughout the job, but only by a limited amount.
12. Given the same MapReduce job, the larger the input dataset, the longer the job will take. Note: It is important to note that as dataset sizes increase, completion times may not scale linearly, since many jobs hit the ceiling of I/O and/or compute capacity. Test results from an ETL-like workload (Yahoo Terasort) using varying data set sizes.
13. The I/O capacity, CPU, and memory of the data node have a direct impact on the performance of a cluster. Note: A 2RU server with 16 disks gives the node more storage but trades off CPU per RU; a 1RU server, on the other hand, gives more CPU per rack.
14. Data locality: the ability to process data where it is locally stored. Map tasks show an initial spike for non-local data, since sometimes a task is scheduled on a node that does not have the data available locally.
   Note: During the Map phase, the JobTracker attempts to use data locality to schedule map tasks where the data is locally stored. This is not perfect and depends on which data nodes hold the data. This is a consideration when choosing the replication factor: more replicas tend to create a higher probability of data locality.
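A small, hedged sketch of how block placement can be inspected (illustrative only; the class name is made up, but the FileSystem/BlockLocation calls are standard Hadoop API): the hosts reported for each block are exactly the candidates the JobTracker has for scheduling a node-local map task, so a higher replication factor means more scheduling choices.

```java
import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical utility: prints which DataNodes hold each block of a file.
public class BlockLocalitySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path file = new Path(args[0]);
    FileStatus status = fs.getFileStatus(file);

    // One BlockLocation per HDFS block; getHosts() lists the DataNodes
    // holding a replica. A map task scheduled on any of these hosts reads
    // the block locally instead of over the network.
    BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation block : blocks) {
      System.out.println("offset=" + block.getOffset()
          + " length=" + block.getLength()
          + " hosts=" + Arrays.toString(block.getHosts()));
    }
  }
}
```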
15. Hadoop clusters are generally multi-use, and the effect of background use can affect any single job's completion. Note: A given cluster is generally running many different types of jobs, importing into HDFS, etc. Example view of 24-hour cluster use: a large ETL job overlaps with medium and small ETL jobs and many small BI jobs (blue lines are ETL jobs, purple lines are BI jobs), alongside importing data into HDFS.
16. Impact of network characteristics on job completion times: the relative impact of various network characteristics on Hadoop clusters.
17. The failure of a networking device can affect multiple data nodes of a Hadoop cluster, with a range of effects.
   Note: The tasks on affected nodes need to be rescheduled, and maintenance activities such as data rebalancing are scheduled, increasing load on the cluster.
   - It is important to evaluate the overall availability of the system. Hadoop was designed with failure in mind, so any one node failure does not represent a huge issue, but a network failure can span many nodes in the system, causing rebalancing and decreased overall resources.
   - Redundant paths and load-sharing schemes: general redundancy mechanisms can also increase bandwidth.
   - Ease of management and a consistent operating system: main sources of outages include human error, so ease of management and consistency are general best practices.
18. Several HDFS operations and phases of MapReduce jobs are very bursty in nature.
   Note: The extent of bursts largely depends on the type of job (ETL vs. BI). Bursty phases can include replication of data (either importing into HDFS or output replication) and the output of the mappers during the shuffle phase.
   - A network that cannot handle bursts effectively will drop packets, so optimal buffering is needed in network devices to absorb bursts.
   - Optimal buffering: given a large enough incast, TCP will collapse at some point no matter how large the buffer. This is well studied by multiple universities, and alternate solutions (changing TCP behavior, e.g. DCTCP) have been proposed rather than huge-buffer switches (http://simula.stanford.edu/sedcl/files/dctcp-final.pdf).
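A back-of-the-envelope sketch of the incast point (assumed numbers, not measurements from these tests; the class name, sender count, and per-sender burst size are illustrative assumptions): the question is whether a synchronized shuffle burst from N senders fits in the shared buffer of the receiving Top of Rack port, and the arithmetic shows why the incast degree, rather than average link utilization, drives the buffering requirement.

```java
// Rough incast estimate. Sender count and per-sender burst are assumptions;
// the ~9 MB buffer figure matches the N3k value cited later in the deck.
public class IncastBurstSketch {
  public static void main(String[] args) {
    int senders = 80;                        // e.g. mappers shuffling to one reducer (assumed)
    long burstPerSenderBytes = 64L * 1024;   // roughly one TCP window each (assumed)
    long torBufferBytes = 9L * 1024 * 1024;  // shared ToR buffer, ~9 MB

    long totalBurstBytes = senders * burstPerSenderBytes;
    System.out.printf("burst = %.1f MB, buffer = %.1f MB, absorbed = %b%n",
        totalBurstBytes / 1048576.0, torBufferBytes / 1048576.0,
        totalBurstBytes <= torBufferBytes);
    // 80 x 64 KB = 5 MB is absorbable; scale the incast degree up enough and
    // no realistic buffer absorbs the burst, which is why approaches such as
    // DCTCP change TCP's behavior instead of simply adding buffer.
  }
}
```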
19. Buffer use during the shuffle phase and during output replication:
   - The buffer utilization is highest during the shuffle and output replication phases.
   - Optimized buffer sizes are required to avoid packet loss leading to slower job completion times.
   Note: The aggregation switch buffer remained flat, as the bursts were absorbed at the Top of Rack layer.
20. Buffer use during the shuffle phase and during output replication (Fabric Extender):
   - The buffer utilization is highest during the shuffle and output replication phases.
   - Optimized buffer sizes are required to avoid packet loss leading to slower job completion times.
   Note: The Fabric Extender buffer utilization was roughly equivalent to that of the N3k, but the Fabric Extender has 32MB of buffer vs. the N3k's 9MB.
21. In the multi-use cluster described previously, multiple job types (ETL, BI, etc.) and importing data into HDFS can be happening at the same time. Note: Usage may vary depending on job scheduling options.
22. In the largest workloads, multiple terabytes can be transmitted across the network. Note: Data taken from the multi-use workload (multi-ETL + multi-BI + HDFS import).
23. Generally, 1GE is used largely due to cost/performance trade-offs, though 10GE can provide benefits depending on the workload. Note: Multiple 1GE links can be bonded together to increase available bandwidth.
24. Moving from 1GE to 10GE actually lowers the buffer requirement on the switching layer. Note: By moving to 10GE, the data node has a larger pipe on which to receive data, lessening the need for buffering in the network, since the total aggregate speed or amount of data does not increase substantially. This is due, in part, to the limits of I/O and compute capabilities.
25. Generally, network latency, while consistent latency is important, does not represent a significant factor for Hadoop clusters. Note: There is a difference between network latency and application latency. Optimization in the application stack can decrease application latency, which can potentially have a significant benefit.
26. For more information: www.cisco.com/go/bigdata
