Ecofriendly solutions at work - SMB Datacenters


  • The network underpins all of the major datacenter transformation efforts, and these initiatives require that the network evolve and keep pace with datacenter upgrades. The data center LAN faces a number of challenges: enterprises are centralizing applications and consolidating servers to simplify operations and reduce costs, while business productivity increasingly depends on operations carried out at distributed branch offices. As businesses continue to expand across the globe, downtime is not an option; a data center LAN must operate efficiently 24x7.
Centralization of data centers: to reduce costs, simplify operations, and comply with regulatory guidelines, more and more enterprises are consolidating their data centers.
Server consolidation: the number of servers is growing at a very fast annual rate and storage is doubling, both putting tremendous strain on the data center's power and cooling capacities. Most enterprise servers operate at only 20 percent capacity; new technologies like virtualization are needed to make better use of these resources.
Virtualization: a technology used to share resources. It makes a single physical resource appear as many individually separate resources; conversely, it also makes individually separate physical resources appear as one unified resource, or makes one physical resource appear, with somewhat different characteristics, as one logical resource. The benefit of virtualization is in creating more complex systems with minimal effort: it takes advantage of commodity hardware to build modular systems that easily scale and accommodate consolidation, advanced automation, security, and ease of management. It is used on four main resource categories: servers, storage, networks, and end-user desktops.
About SOX: companies must regularly provide external auditors with proof of their compliance with laws and regulations. An example is the Sarbanes-Oxley (SOX) Act, which applies to listed American companies and, generally, to non-US companies listed on a US stock exchange. These laws and regulations may aim at preserving the integrity of financial data (as with SOX and the French Law on Financial Security) or medical data (US regulation 21 CFR Part 11), or the confidentiality of personal data, etc.
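The consolidation argument in the note above can be sketched numerically. A minimal illustration, assuming the ~20 percent average utilization quoted and a hypothetical 60 percent post-consolidation target (the target figure and function are illustrative assumptions, not Allied Telesis numbers):

```python
import math

def consolidation_estimate(physical_servers, avg_utilization=0.20,
                           target_utilization=0.60):
    """Estimate hosts needed after virtualizing workloads.

    avg_utilization: today's average load per server (the ~20% cited above).
    target_utilization: hypothetical safe load per virtualized host.
    Returns (hosts_needed, hosts_freed).
    """
    total_load = physical_servers * avg_utilization          # aggregate work
    hosts_needed = math.ceil(total_load / target_utilization)
    return hosts_needed, physical_servers - hosts_needed

# 100 servers at 20% load could run on 34 hosts at 60% load, freeing 66.
print(consolidation_estimate(100))  # (34, 66)
```

The freed hosts are exactly the power and cooling relief the note is arguing for.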
  • Where does Ethernet fit? Looking across the various interconnect technologies, Ethernet fits the architecture requirements well. Ethernet is the most widely adopted interconnect technology in large and mid-size data centers, and it largely dominates interconnection in small data centers. Allied Telesis is an Ethernet company, with reliable and powerful devices that can satisfy a wide range of customer requirements.
Coupling fabrics (using MPI, the Message Passing Interface):
- Myrinet: medium bandwidth (2G) and low latency (~6.5us); proprietary; widely used.
- Quadrics: high bandwidth (10G) and low latency (~2us); proprietary; specialized applications.
- InfiniBand: high bandwidth (10G) and low latency (~7.8us); open standard, but a single chipset vendor; widely used.
- Gig-Ethernet: high bandwidth (10G) and high latency (~50us); open standard; widely used and ubiquitous.
The growing prevalence of Gigabit Ethernet (GbE) and the simplicity of deploying and managing Ethernet-based Network Attached Storage (NAS) are making iSCSI an attractive, low-cost alternative. Additionally, Ethernet-based NAS solutions more easily take advantage of virtualization to rapidly scale and provide high availability. While 4 or 8 Gbps Fibre Channel offers a speed advantage over GbE, Network Interface Cards (NICs) offering TCP offload capabilities greatly enhance iSCSI performance. In addition, the emergence and adoption of lower-cost 10 GbE allows iSCSI to outperform Fibre Channel and accommodate any high-speed storage needs.
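The interconnect figures quoted above can be captured in a small lookup, for example to filter for open-standard fabrics. The values are taken verbatim from the note; the data structure and helper are just for illustration:

```python
# Interconnect figures as quoted in the note above (illustrative lookup only).
FABRICS = {
    "Myrinet":      {"bw_gbps": 2,  "latency_us": 6.5,  "open_standard": False},
    "Quadrics":     {"bw_gbps": 10, "latency_us": 2.0,  "open_standard": False},
    "InfiniBand":   {"bw_gbps": 10, "latency_us": 7.8,  "open_standard": True},
    "Gig-Ethernet": {"bw_gbps": 10, "latency_us": 50.0, "open_standard": True},
}

def open_standard_fabrics(min_bw_gbps=10):
    """Names of open-standard fabrics meeting a bandwidth floor."""
    return sorted(name for name, f in FABRICS.items()
                  if f["open_standard"] and f["bw_gbps"] >= min_bw_gbps)

print(open_standard_fabrics())  # ['Gig-Ethernet', 'InfiniBand']
```

Only Ethernet and InfiniBand survive an "open standard" filter, which is the note's point: Ethernet trades latency for openness and ubiquity.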
  • What kind of features would one expect in data center switches?
- Latency should be very low. Eliminating the aggregation layer itself reduces latency, but that is not good enough for SAN traffic and video and voice workloads. Non-blocking or cut-through switching is expected to support real-time traffic such as video and voice. Traditionally, switches oversubscribe bandwidth; that is, they are not capable of receiving and transmitting traffic on all ports at the same time at full port bandwidth, so packets get blocked. Non-blocking switches are expected to send and receive traffic equal to the number of ports times each port's bandwidth: with ten 1G ports, a switch is expected to receive 10G of traffic and send 10G of traffic.
- 802.1Qbb (Priority-based Flow Control): when there is congestion at the receiving node, an 802.3x pause frame is normally generated, which pauses all traffic for some time. This standard allows pause frames to be generated per 802.1p priority level, letting high-priority traffic keep flowing. Switches are expected to honor and generate these kinds of frames.
- 802.1Qaz (Enhanced Transmission Selection): this standard allows bandwidth allocation to different priority levels or groups of priority levels, and lets lower-priority traffic consume higher-priority bandwidth when there is no higher-priority traffic. SAN traffic would need to go at the higher priority levels. This feature is also expected to be supported by data center switches.
- 802.1Qau (Congestion Notification): this standard allows end nodes to communicate congestion notifications, and lets the end node receiving the notification apply rate limiting to its outbound traffic. This feature is also expected to be supported by data center switches.
- Port density should be high.
- Multi-path support is required; Spanning Tree is not used in these cases, as it only provides one path.
- VEPA support will eventually be required; due to VEPA, switches may need to support C-VLANs and P-VLANs.
- Support for a large number of VLANs is required to work with other network services such as ADCs, WAN optimization, and network security (firewall, IPS, IPSec VPN, etc.).
- The ability to redirect traffic based not only on L2 and L3 fields but also on L4 fields such as TCP and UDP source and destination ports.
- Any switch architecture should cope with virtual machines migrating from one physical server to another. Public data center networks require a virtual-instance concept within the switches to reuse VLANs across different subscribers, due to the limited number of VLAN IDs.
Three tiers were necessary in earlier data center architectures because the large number of physical machines serving content required a large number of Ethernet ports. Due to the poor port density of the switches, multiple access-layer switches were necessary, which meant a lot more traffic across access-layer switches; one more hierarchy of switches enabled good throughput by eliminating a mesh of access-layer switches for inter-switch traffic. One big change is the collapse of three tiers to two tiers. The aggregation layer should disappear because:
- Virtualization technology is reducing the number of physical machines, which implies fewer ports.
- Traffic on each port is increasing: virtualization and multicore processors are enabling multiple applications on one physical machine, and it is not uncommon to see a requirement for multi-gigabit traffic on a single port.
- 10G, and soon 40G/100G, ports are facilitating a unified fabric for both kinds of traffic (application traffic and SAN traffic), thus reducing the number of ports and interconnects.
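The non-blocking requirement described above (aggregate receive and transmit equal to the number of ports times each port's bandwidth) is simple arithmetic; a sketch:

```python
def nonblocking_capacity_gbps(port_counts):
    """Aggregate (rx, tx) in Gbps that a non-blocking switch must sustain.

    port_counts maps port speed in Gbps to the number of such ports,
    e.g. {1: 48, 10: 4} for 48x1G plus 4x10G.
    """
    total = sum(speed * count for speed, count in port_counts.items())
    return total, total  # full duplex: same aggregate in each direction

# Ten 1G ports: the switch must receive 10G and send 10G, as noted above.
print(nonblocking_capacity_gbps({1: 10}))  # (10, 10)
```

An oversubscribed switch is one whose fabric capacity falls short of these totals, which is exactly when packets get blocked.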
  • In traditional top-of-rack (TOR) or end-of-row (EOR) deployments, fixed-chassis access-layer switches are used to provide high-performance, highly available services and high-density GbE and 10 GbE connections to servers in the data center.
Top-of-Rack (TOR): this configuration places high-performance switches at the top of each server rack in a row of servers. Cabling run lengths are minimized in this deployment and are simpler than in end-of-row (EOR) configurations. However, each legacy switch must be managed individually, complicating operations and adding expense, as multiple discrete 24- or 48-port switches are required to meet connectivity needs in TOR configurations.
End-of-Row (EOR): in this configuration, high-density switches are placed at the end of a row of servers. Traditional modular chassis switches have commonly been used in this deployment, where cabling is quite complex. Switch port utilization is suboptimal with traditional chassis-based switches, and most consume a great deal of power and cooling even when not fully configured or utilized. In addition, these chassis-based switches are usually large and take up a great deal of valuable data center space.
Traditionally, data centers have three tiers of switches: core switches, aggregation switches, and access switches.
Core switches: these connect to the networks that carry the WAN links. This is the switch tier farthest from the servers.
Access switches: these are also called top-of-rack switches. The servers for which the data center is built (web servers, email servers, application servers, database servers, and others) connect to the ports of these switches.
Aggregation switches: the aggregation layer is an intermediate switch layer sandwiched between the core and access layers; it aggregates the traffic between them.
Note that there can be a lot of traffic among servers (specifically among application, web, and database servers). This traffic need not be seen by the core switches; it only needs to pass among the access-layer switches. The aggregation layer keeps this traffic from being seen by every switch: core switches see only the traffic going to or coming from the WAN/corporate network, and the aggregation layer also reduces the traffic among access-layer switches.
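The traffic-locality argument above can be sketched as a classifier: east-west traffic (server to server) should terminate at the access/aggregation layers, while only north-south traffic (to or from the WAN) reaches the core. The prefix below is a hypothetical example, not from the slides:

```python
from ipaddress import ip_address, ip_network

# Hypothetical data-center prefix; anything inside it counts as a server.
DC_PREFIX = ip_network("10.0.0.0/8")

def traffic_class(src, dst):
    """'east-west' if both endpoints are servers, else 'north-south'."""
    internal = lambda addr: ip_address(addr) in DC_PREFIX
    return "east-west" if internal(src) and internal(dst) else "north-south"

print(traffic_class("10.0.1.5", "10.0.2.9"))    # stays below the core
print(traffic_class("10.0.1.5", "203.0.113.7")) # crosses the core
```

Only the second flow needs to traverse the core switches, which is why keeping east-west traffic low in the hierarchy matters.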
  • Allied Telesis is able to provide a high-performance and cost-effective solution from server connectivity to user access, with a smooth migration path to eco-friendly environments.

    1. 1. EcoFriendly solutions at work – SMB Datacenters Allied Telesis Solutions
    2. 2. Eco Friendly switches <ul><li>Why bother? </li></ul>
    3. 3. Eco Friendly switches – Why? <ul><li>In 2005, 1% of worldwide electricity was used in data centers, and the rate was growing at 14% per annum – J. Koomey, LBNL </li></ul><ul><li>2% of all CO2 generated is from computers – the same amount as air transport – Gartner 2006 </li></ul><ul><li>9.4% of US electricity is used to power the Internet – Forrester 2007 </li></ul><ul><li>3 to 5% of global energy consumption is spent to run the Internet! – PC World 2010 </li></ul><ul><li>One Google search consumes enough electricity to run an 11-watt, energy-saving lightbulb for 15 minutes </li></ul>
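The last figure on this slide is easy to sanity-check arithmetically: 11 W for 15 minutes is 2.75 Wh, or about 9.9 kJ per search.

```python
def bulb_energy_wh(power_w=11, minutes=15):
    """Energy used by an 11 W bulb over 15 minutes, in watt-hours."""
    return power_w * minutes / 60

wh = bulb_energy_wh()
print(wh, wh * 3600)  # 2.75 Wh = 9900 joules
```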
    4. 4. Allied Telesis Solutions How do you build an Eco Friendly switch ?
    5. 5. Logo
    6. 6. Change manufacturing process <ul><li>Allied Telesis builds assemblies to meet the higher RoHS level 6 (lead-free) standard. </li></ul><ul><ul><li>RoHS level 5 allows the continued use of lead solder in networking equipment </li></ul></ul><ul><ul><li>Manufacturing to RoHS level 6 requires redesigns of all PCBs (printed circuit boards), as liquid lead-free solder has very different physical properties </li></ul></ul><ul><li>Allied Telesis uses water-based technology to clean chemicals from PCBs, not petroleum/chemical-based solvents. </li></ul><ul><ul><li>An estimated 38,000 liters per year of solvent-based cleaner has been saved by adopting water-based technology </li></ul></ul><ul><li>Allied Telesis uses recycled water, and recycles water from our manufacturing facilities, ensuring minimal impact on the environment. </li></ul>
    7. 7. Go even greener
    8. 8. Hardware design Single Chip Gigabit Ethernet Switch Power Supply Ethernet Cables Connectors Line Drivers Main Cooling Fan LED Indicators LED Drivers AC Input
    9. 9. Hardware design High Efficiency Power Supply – up to 85%+
    10. 10. Hardware design <ul><li>Remove the fan, or reduce its speed: </li></ul><ul><li>Requires less power (smaller PSU). </li></ul><ul><li>Reduced noise. </li></ul><ul><li>Increases reliability due to fewer moving parts. </li></ul>
    11. 11. Hardware Design <ul><li>Line Drivers can significantly reduce power: </li></ul><ul><li>Measure and minimise – automatically measure the cable length, and only apply the necessary power. </li></ul><ul><li>Power down ports when not needed (overnight, weekends, etc.). </li></ul><ul><li>‘Sleep’ mode – reduce power when there is no active link. </li></ul>
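As a back-of-the-envelope illustration of the port power-down idea on this slide (the per-port wattage and idle schedule below are hypothetical assumptions, not measured Allied Telesis figures):

```python
def port_powerdown_savings_kwh(ports=48, watts_per_port=0.5,
                               idle_hours_per_day=14, days=365):
    """Annual kWh saved by powering down idle ports overnight and weekends.

    All numbers are illustrative assumptions, not device measurements.
    """
    return ports * watts_per_port * idle_hours_per_day * days / 1000

# ~122.6 kWh/year for a 48-port switch under these assumed numbers.
print(round(port_powerdown_savings_kwh(), 1))
```

Even modest per-port savings compound across a full access layer of such switches.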
    12. 12. Hardware Design <ul><li>Turn off LED Indicators when not needed. </li></ul><ul><li>Simple front panel switch to turn the LEDs on/off. </li></ul><ul><li>Reduces power requirements, reduces heat, increases reliability. </li></ul>
    13. 13. Hardware Design <ul><li>Use of Latest Technology Silicon: </li></ul><ul><li>High levels of integration – fewer chips </li></ul><ul><li>Better chip geometries (65nm) for lower power </li></ul>
    14. 14. “Green” User Interface
    15. 15. Results <ul><ul><li>AT-FS705LE 1.3W </li></ul></ul><ul><ul><li>AT-FS708LE 1.49W </li></ul></ul><ul><ul><li>AT-9000/28 30.14W </li></ul></ul><ul><ul><li>AT-9000/28SP 36W </li></ul></ul><ul><ul><li>AT-9000/52 46.92W </li></ul></ul><ul><ul><li>AT-8000S/24 26.5W </li></ul></ul><ul><ul><li>AT-8000S/48 32.6W </li></ul></ul><ul><ul><li>AT-8000GS/24 40W </li></ul></ul><ul><ul><li>AT-8000GS/48 65W </li></ul></ul>
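The wattage figures above translate to annual energy as watts × 8760 hours; a small helper, using two of the models listed (only the conversion itself is shown, no pricing claims):

```python
def annual_kwh(watts, hours_per_year=8760):
    """Annual energy in kWh for a device drawing `watts` continuously."""
    return watts * hours_per_year / 1000

# Wattage figures taken from the slide above.
for model, watts in [("AT-9000/28", 30.14), ("AT-8000GS/48", 65)]:
    print(model, round(annual_kwh(watts), 1), "kWh/year")
```

Multiplying by a local electricity tariff turns these directly into the running-cost comparison the deck is driving at.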
    16. 16. Let’s put it another way 52 million cups of tea boiled with the saved energy. (British cups that is)
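The tea claim can be unpacked with basic physics, assuming a 250 ml cup heated from 20 °C to 100 °C with no heat losses (these assumptions are mine, not from the slide):

```python
def cup_boil_joules(volume_l=0.25, t_start=20.0, t_boil=100.0,
                    specific_heat=4186.0):
    """Energy (J) to heat one cup of water to boiling; losses ignored."""
    mass_kg = volume_l  # 1 litre of water is about 1 kg
    return mass_kg * specific_heat * (t_boil - t_start)

per_cup_kwh = cup_boil_joules() / 3.6e6   # joules -> kWh
# Total for 52 million cups: ~1.2 million kWh under these assumptions.
print(round(per_cup_kwh * 52e6))
```

So the slide's figure corresponds to roughly a gigawatt-hour of saved energy under these idealized assumptions.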
    17. 17. Allied Telesis Solutions SMB Datacenters
    18. 18. Data Center Trends <ul><li>Customer requirements </li></ul><ul><ul><li>Reducing power consumption </li></ul></ul><ul><ul><li>CONSOLIDATION </li></ul></ul><ul><ul><li>VIRTUALIZATION </li></ul></ul>Converge Networks Energy Efficiency SOX Compliance Data Center Consolidation <ul><li>Scalability on demand </li></ul><ul><li>Network resiliency </li></ul><ul><li>Reduced TCO </li></ul>Multimedia Application Support Major Requirements SLA
    19. 19. Anatomy of a Cluster – Physical <ul><li>I/O fabric </li></ul><ul><ul><li>Ethernet </li></ul></ul><ul><ul><li>Quadrics </li></ul></ul><ul><ul><li>Myrinet </li></ul></ul><ul><ul><li>InfiniBand </li></ul></ul><ul><li>OOB management fabric </li></ul><ul><ul><li>Ethernet </li></ul></ul><ul><ul><li>Serial </li></ul></ul><ul><li>Storage </li></ul><ul><ul><li>Direct Attached Storage (DAS) </li></ul></ul><ul><ul><li>Network Attached Storage (NAS) </li></ul></ul><ul><ul><li>Full Storage Area Network (SAN) </li></ul></ul><ul><li>SAN fabric </li></ul><ul><ul><li>Fiber channel </li></ul></ul><ul><ul><li>iSCSI w/ Ethernet </li></ul></ul><ul><ul><li>InfiniBand </li></ul></ul>
    20. 20. Datacenter requirements <ul><li>Ethernet Interfaces (GbE, 10GbE, CX4/DA...) </li></ul><ul><li>High performance (low latency and non-blocking switching) </li></ul><ul><li>Protocol support (L2… </li></ul><ul><li>Mechanical/Airflow </li></ul><ul><li>End-of-Row and Top-of-Rack aggregation </li></ul><ul><li>High Availability </li></ul>
    21. 21. High Availability <ul><li>Redundant stackable core </li></ul><ul><ul><li>Link aggregation across the VC-Stack </li></ul></ul><ul><ul><li>Highly redundant network </li></ul></ul><ul><ul><li>No STP or VRRP is necessary </li></ul></ul><ul><ul><li>Easy to design, configure, and maintain </li></ul></ul>
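Link aggregation across a stack distributes flows over member links, typically by hashing packet header fields. A much-simplified illustration of the idea (this is not the actual hashing used by Allied Telesis devices):

```python
def lag_member(src_mac: bytes, dst_mac: bytes, n_links: int) -> int:
    """Pick an aggregated link by XOR-folding the MAC addresses.

    Real switches hash more fields (IPs, L4 ports); this only shows why a
    given flow always lands on the same member link.
    """
    h = 0
    for b in src_mac + dst_mac:
        h ^= b
    return h % n_links

flow = (bytes.fromhex("0000cd123456"), bytes.fromhex("0000cdabcdef"))
# The same flow always maps to the same link, preserving frame order.
print(lag_member(*flow, 2) == lag_member(*flow, 2))  # True
```

Because the hash is deterministic per flow, frames within a flow stay in order even though the aggregate spans two stacked switches.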
    22. 22. End-of-Row / Top-of-Rack / Aggregation <ul><li>Aggregate rows of servers into an End-of-Row switch </li></ul><ul><ul><li>Allows dense aggregation </li></ul></ul><ul><ul><li>Fewer switches in the network </li></ul></ul><ul><li>Connect one rack of servers to a Top-of-Rack switch </li></ul><ul><ul><li>Allows plug-and-play racks </li></ul></ul><ul><ul><li>Simplified cable management </li></ul></ul><ul><ul><li>Top-of-Rack switches interconnect through an Aggregator switch </li></ul></ul><ul><li>Rack systems are extremely packed with servers and storage </li></ul><ul><li>42 rack units of height need to be maximized for computing power </li></ul>End-of-Row switch Aggregator switch Top-of-Rack switches
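The cabling trade-off on this slide can be made concrete with a toy model (rack counts and per-server run lengths below are hypothetical, chosen only to show the shape of the comparison):

```python
def cabling_metres(racks, servers_per_rack, layout):
    """Very rough copper-run totals for TOR vs EOR (hypothetical lengths).

    TOR: each server patches ~2 m to the switch at the top of its own rack.
    EOR: each server runs ~15 m on average to the end-of-row chassis.
    """
    per_server_m = {"tor": 2, "eor": 15}[layout]
    return racks * servers_per_rack * per_server_m

# 10 racks of 40 servers: TOR needs far less cable, EOR fewer switches.
print(cabling_metres(10, 40, "tor"), cabling_metres(10, 40, "eor"))
```

TOR minimizes cable but multiplies managed devices; EOR does the reverse, which is exactly the trade-off the note on TOR/EOR deployments describes.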
    23. 23. What does it look like? Services
    24. 25. <ul><li>Thank you! </li></ul>