
Virtualization in 4-4 1-4 Data Center Network.

4-4 1-4 delivers great performance guarantees in traditional (non-virtualized) setting, due to location based static IP address allocation to all network elements.
Download this PPT and open it in PowerPoint to view the slides with animations and without merged figures.

Published in: Engineering


  1. VIRTUALIZATION IN 4-4 1-4 DATA CENTER NETWORK. Presentation by: Ankita Mahajan
  2. AGENDA: Introduction, Previous Work, Proposed Plan, Experimental Setup, Results, Conclusion
  3. INTRODUCTION: Data center network, traditional architecture, agility, virtualization, 4-4 1-4 data center network. A data center network is a large cluster of servers interconnected by network switches, concurrently providing a large number of different services for different client organizations. Design goals: availability and fault tolerance, scalability, throughput, economies of scale, load balancing, low OpEx. Virtualization: a number of virtual servers are consolidated onto a single physical server. Advantages: each customer gets its own VM; virtualization provides agility; in case of hardware failure a VM can be cloned and migrated to a different server; synchronized VM images replace redundant servers; virtual servers are easier to test, upgrade, and move across locations; virtual devices in a DCN; reduced CapEx and OpEx. [Fig 1: Traditional data center. Fig 3: 4-replicated 4 1-4 data center]
  4. 4-4 1-4 ARCHITECTURE: 4-4 1-4 is a location-based forwarding architecture for DCNs that exploits an IP hierarchy. Packets are forwarded by masking the destination IP address bits; no routing or forwarding tables are maintained at switches, so there is no convergence overhead. It uses statically assigned, location-based IP addresses for all network nodes. [Fig: A 3-level 4-4 1-4 data center network]
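The mask-based forwarding described on this slide can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: it assumes a 3-level hierarchy in which each level owns one 8-bit field of a 32-bit address, and the port naming is hypothetical.

```python
# Illustrative sketch: forwarding by masking destination IP bits
# instead of a table lookup. Field widths (8 bits per level) and the
# "up"/"down:portN" naming are assumptions, not the paper's spec.

LEVEL_SHIFTS = [24, 16, 8]  # bit offset of the field owned by each level

def next_hop(switch_addr: int, depth: int, dst: int) -> str:
    """Decide the forwarding direction purely by masking, with no
    routing/forwarding table and hence no convergence overhead."""
    # Build a mask covering every address field from the root down to
    # this switch's depth (0 = directly below the core).
    mask = 0
    for shift in LEVEL_SHIFTS[: depth + 1]:
        mask |= 0xFF << shift
    if (switch_addr & mask) == (dst & mask):
        # Destination lies in this switch's subtree: pick the downlink
        # port named by the next-level field of the destination address.
        child = (dst >> LEVEL_SHIFTS[depth + 1]) & 0xFF
        return f"down:port{child}"
    return "up"  # destination outside the subtree: send toward the core
```

For example, a depth-0 switch addressed 10.1.0.0 keeps a packet for 10.1.2.3 in its subtree (sending it down the port named by the next field), while a packet for 11.0.0.0 fails the masked comparison and goes up.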
  5. LOCATION-IP BASED ROUTING
  6. MOTIVATION FOR THIS WORK: 4-4 1-4 delivers strong performance guarantees in a traditional (non-virtualized) setting, thanks to location-based static IP address allocation to all network elements. Agility is essential in current data centers run by cloud service providers, to reduce cost by increasing infrastructure utilization, and server virtualization provides that agility. Whether the 4-4 1-4 network still delivers its performance guarantees in a virtualized setting suited to modern data centers is the major motivation for this work.
  7. PROBLEM STATEMENT: How to virtualize the 4-4 1-4 data center network under the following constraints: use static IP allocation alongside dynamic VMs; make no modification to network elements or end hosts. Design goals: a virtualized data center on the 4-4 1-4 topology that is agile, scalable, and robust; minimizes the overhead incurred by virtualization; achieves minimum end-to-end latency and maximum throughput; and suits all data center usage scenarios: compute intensive (HPC), data intensive (video and file streaming), and balanced (geographic information systems).
  8. PROPOSED SOLUTION: Separation of location-IP and VM-IP; tunneling at the source; directory structure; query process; directory update mechanism. A packet is tunneled through the physical network using a location-IP header. [Fig: Packet sending at a server running a type-1 hypervisor]
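The separation of VM-IP from location-IP with tunneling at the source can be sketched as below. This is a hedged illustration: the dict-based directory and packet field names are stand-ins for the real headers and the directory service, not the deck's actual data structures.

```python
# Illustrative sketch of source-side tunneling. The hypervisor keeps
# VM-IPs stable across migration while switches forward only on
# statically assigned location-IPs.

def encapsulate(packet: dict, directory: dict) -> dict:
    """Source hypervisor looks up the destination VM's current physical
    location and wraps the packet in an outer location-IP header."""
    outer_dst = directory[packet["dst_vm_ip"]]  # VM-IP -> location-IP
    return {"outer_dst": outer_dst, "inner": packet}

def decapsulate(frame: dict) -> dict:
    """Destination hypervisor strips the outer header and delivers the
    unchanged inner packet to the local VM it addresses."""
    return frame["inner"]
```

A migration then only requires updating the directory entry; in-flight senders re-query and tunnel to the new location, with no change to switches or to the VM's own address.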
  9. PROPOSED SOLUTION: Directory sizing. Physical machines = 2^16. Virtual machines range from 2^17 (2 VMs/PM) to 2^20 (16 VMs/PM). Directory servers (DS) = 64; update servers (US) = 16. Hence there is one DS per 1024 PMs and one US per 4 * 1024 PMs, i.e. 64 DSs for a minimum of 131072 VMs.
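The sizing arithmetic on this slide checks out directly; the short calculation below just restates the slide's numbers.

```python
# Verifying the slide's directory sizing arithmetic.
PMS = 2 ** 16                 # physical machines
DS, US = 64, 16               # directory servers, update servers

pms_per_ds = PMS // DS        # one DS per 1024 PMs
pms_per_us = PMS // US        # one US per 4 * 1024 PMs
min_vms = 2 * PMS             # 2 VMs/PM  -> 2^17 = 131072 VMs
max_vms = 16 * PMS            # 16 VMs/PM -> 2^20 VMs
```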
  10. PROPOSED SOLUTION: Data structure of the directory. [Fig: Data structure of the directory]
  11. EXPERIMENTAL SETUP: Simulation environment is an extension of NS2/GreenCloud, which (unlike CloudSim, MDCSim, etc.) provides packet-level simulation of communications in a DCN; DCN entities are modeled as C++ and OTcl objects. Workload categories: Computation-Intensive Workload (CIW): servers are considerably loaded but inter-server communication is negligible. Data-Intensive Workload (DIW): huge inter-server data transfers but negligible load on the computing servers. Balanced Workload (BW): communication links and computing servers are proportionally loaded.
  12. EXPERIMENTAL SETUP: In CIW and BW, tasks are scheduled in round-robin fashion by the data center object onto VMs on servers that fulfill the task's resource requirement. A task is sent to its allocated VM by the DC object through core switches; the output returns to the same core switch, which forwards it to the DC object. In DIW and BW, intra-DCN communication (data transfer) is modeled by 1:1:1 TCP flows between servers. S: source and destination within the same Level-0. D: source and destination in different Level-0s but the same Level-1. R: random selection of source-destination pairs inside a Level-1.
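The round-robin placement described above can be sketched as follows. This is a hedged illustration: the "mips" resource field and the VM records are hypothetical, not the simulator's actual GreenCloud objects.

```python
from itertools import cycle

def schedule_round_robin(tasks, vms):
    """Assign each task to the next VM in round-robin order that still
    fulfills its resource requirement; a task that fits on no VM is
    left unassigned (illustrative policy, not GreenCloud's code)."""
    assignment = {}
    ring = cycle(vms)
    for task in tasks:
        for _ in range(len(vms)):          # try each VM at most once
            vm = next(ring)
            if vm["free_mips"] >= task["mips"]:
                vm["free_mips"] -= task["mips"]
                assignment[task["id"]] = vm["id"]
                break
    return assignment
```

Round-robin keeps server load roughly even in CIW/BW runs; a task whose requirement exceeds every VM's remaining capacity is simply skipped in this sketch.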
  13. SIMULATION PARAMETERS
  14. NAM SCREEN SNAPSHOT: 64-server DCN
  15. PERFORMANCE METRICS: average packet delay; network throughput; end-to-end aggregate/data throughput; average hop count; packet drop rate; normalized routing overhead.
  16. RESULTS: AVERAGE HOP COUNT
  17. RESULTS: COMPUTE-INTENSIVE WORKLOAD: DVR vs LocR on 16 servers: LocR shows 50% less delay and more throughput. 16 vs 64 servers: almost the same. Routing overhead in DVR increases with the number of servers.
  18. RESULTS: DATA-INTENSIVE WORKLOAD: Average packet delay: less in LocR than DVR; from 16 to 64 servers, delay reduces by 54%. Network throughput: more in LocR; increases by 54% from 16 to 64 servers. End-to-end aggregate throughput: more in LocR; increases by 53% from 16 to 64 servers.
  19. RESULTS: BALANCED WORKLOAD: Average packet delay: less in LocR than DVR; from 16 to 64 servers, delay reduces by 42%. Network throughput: more in LocR; increases by 42% from 16 to 64 servers. End-to-end aggregate throughput: more in LocR; increases by 41% from 16 to 64 servers.
  20. CONCLUSION: We built a packet-level simulation prototype of the 4-4 1-4 DCN in NS2/GreenCloud and modeled compute-intensive, data-intensive, and balanced workloads. Our framework for virtualization in the 4-4 1-4 DCN has the following significance: Routing overhead: no convergence overhead in location-based routing. Networking loops: the network is free of loops. Faster hop-by-hop forwarding: the per-packet, per-hop mask operation is faster than table lookup and update. Efficiency: location-IP based routing delivers two to ten times more throughput than DVR under the same traffic and topology. Scalability: in DIW and BW, performance improves by about 50% when the number of servers is quadrupled.
  21. LIMITATION: 4-4 1-4 is highly scalable for data-intensive and balanced-workload data centers but only moderately so for heavy-computing data centers. Under computation-intensive workloads, the performance of the 4-4 1-4 DCN with location-based routing either remains the same or improves only marginally.
  22. FUTURE WORK: The simulation test-bed is ready for trace-driven workloads, dynamic VM migration, optimum task scheduling for 4-4 1-4, and energy consumption studies.
  23. REFERENCES 1. A. Kumar, S. V. Rao, and D. Goswami, "4-4, 1-4: Architecture for Data Center Network Based on IP Address Hierarchy for Efficient Routing," in Parallel and Distributed Computing (ISPDC), 2012 11th International Symposium on, 2012, pp. 235-242. 2. D. Chisnall, The Definitive Guide to the Xen Hypervisor, 1st ed. Upper Saddle River, NJ, USA: Prentice Hall Press, 2007. 3. D. Kliazovich, P. Bouvry, and S. Khan, "GreenCloud: a packet-level simulator of energy-aware cloud computing data centers," The Journal of Supercomputing, pp. 1-21, 2010, doi: 10.1007/s11227-010-0504-1. Available: http://dx.doi.org/10.1007/s11227-010-0504-1 4. "The Network Simulator NS-2," http://www.isi.edu/nsnam/ns/.
  24. THANK YOU
  25. There are mysteries in the universe we were never meant to solve, but who we are, and why we are here, are not among them. Those answers we carry inside.
  26. RESULTS: DATA-INTENSIVE WORKLOAD: Average packet delay: less in LocR than DVR; from 16 to 64 servers, delay reduces by 54%. Network throughput: more in LocR; increases by 54%. End-to-end aggregate throughput: more in LocR; increases by 53%. [Chart: routing overhead using dynamic routing]
  27. RESULTS: BALANCED WORKLOAD: Average packet delay: less in LocR than DVR; from 16 to 64 servers, delay reduces by 54%. Network throughput: more in LocR; increases by 54%. End-to-end aggregate throughput: more in LocR; increases by 53%. [Chart: DVR routing overhead]
