
IP Expo 2009 – Ethernet - The Business Imperative


  1. Ethernet: the business imperative. Olivier Vallois, Data Centre Business Development Manager, HP ProCurve
  2. Agenda (March 15, 2010)
     - A bit of history
     - Current challenges for Ethernet
     - Where is Ethernet going?
  3. A bit of history: who is next?
     Timeline diagram (1970s to 2011): 10Base, 100Base, 1000Base, 10G, CEE (2011).
     Challengers Ethernet has outlasted: Token Ring, 100VG, FDDI, ATM/LANE; FibreChannel and InfiniBand next?
     Why Ethernet keeps winning:
     - Cost
     - Speed
     - Technologies
  4. Remaining/upcoming challenges
     - Speed: 40G and 100G
     - Latency: cut-through techniques and silicon improvements
     - Lossless: CEE (2011)
     - High availability: loop-avoidance techniques. TRILL is a proposed standard; proprietary techniques fill the gap in the meantime
     - Server virtualisation:
       - Apps spread across multiple nodes: low latency, high speed, very large domain
       - Security, management and troubleshooting of virtual switches and vNICs: VEPA
       - Workload mobility: detect, log, adapt configuration, follow the VM
       - Ability to plumb any subnet to any server in any rack: very large domain
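A rough back-of-the-envelope on the latency bullet above: a store-and-forward switch must receive the entire frame before it can forward it, while a cut-through switch starts forwarding as soon as the header has been parsed. A minimal sketch (frame size, header bytes and link speed here are illustrative assumptions, not measured figures):

```python
# Illustrative only: per-hop forwarding delay for store-and-forward
# vs. cut-through switching. All sizes and speeds are assumptions.

def serialization_delay_us(frame_bytes: int, link_gbps: float) -> float:
    """Time to clock frame_bytes onto a link at link_gbps Gbit/s, in microseconds."""
    return frame_bytes * 8 / (link_gbps * 1e3)

FRAME = 1500    # full-size Ethernet payload frame (bytes)
HEADER = 64     # bytes a cut-through switch reads before forwarding (assumed)
LINK = 10.0     # 10GbE

# Store-and-forward: the whole frame must arrive before it is sent on.
saf = serialization_delay_us(FRAME, LINK)
# Cut-through: forwarding starts once the header has been parsed.
ct = serialization_delay_us(HEADER, LINK)

print(f"store-and-forward: {saf:.3f} us/hop")   # 1.200 us/hop
print(f"cut-through:       {ct:.3f} us/hop")    # 0.051 us/hop
```

Across the several hops of a large flat domain, this per-hop difference is why cut-through silicon matters for east-west traffic.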
  5. New Data Center Transformed Network (diagram built up across slides 6-16): a large, flat, high-performance VLL2 domain around a Core L3 interconnect fabric, with replication to other VLL2 domains and intelligent edge switches, offering exceptional cost/performance, connecting 1000s of servers in a single domain, with the flexibility to plumb any subnet to any server and move a workload from any server to any server. Successive overlays show consolidated, virtual server connections to the network; application delivery control as a layered service that virtualizes server tiers; virtualized network connections and fabric, managed by DCM and fabric management; and finally the entire network presented as a resource pool through a service portal and service catalog.
  16. Vision summary
      - Large, flat, high-performance domain…
      - … with exceptional cost/performance
      - … that can connect 1000s of servers in a single domain
      - … with flexibility to plumb any subnet to any server…
      - … and move a workload from any server to any server
      - … with consolidated, virtual connections to the network
      - … application delivery control provides a layered service…
      - … and virtualizes server tiers…
      - … network connections themselves are virtualized…
      - … as is the fabric…
      - … and the entire network is presented as a resource pool
  17. DC network switch thermal design: side-to-side cooling vs. front-to-back cooling (6600 series)
      - Side-to-side cooled switches draw rising warm and re-circulated hot air from inside the rack; hot air is exhausted back inside the rack, and the extra heat is leakage that must be moved.
      - Front-to-back cooled switches draw air directly from the cool aisle and exhaust it directly into the hot aisle, so no extra heat builds up inside the rack.
  18. 10Gb growth at the edge brings new topology approaches
      - Low-cost 10G connectivity will evolve repeatedly over the next 3 years
      - Home runs directly to core switches carry a huge optics cost at 10G (up to $4k more per server): expensive fiber cabling into expensive, fixed-capacity core ports
      - TOR/pod-based (6600) switches greatly reduce 10G costs: cheap copper cabling within the pod, lower-cost ports, flexible growth
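The cost argument above comes down to simple per-server arithmetic. All prices below are hypothetical placeholders, not HP figures; the point is only that per-server optics dominate in a home-run design:

```python
# Hypothetical list prices (assumptions for illustration, not HP's figures).
FIBER_OPTIC_PAIR = 4000    # $ per server: two 10G optics plus a fiber run to the core
COPPER_RUN       = 100     # $ per server: short 10G direct-attach copper inside the pod
TOR_UPLINK_SHARE = 500     # $ per server: share of the TOR switch and its uplink optics

def home_run_cost(servers: int) -> int:
    """Every server gets its own optical run to a core switch port."""
    return servers * FIBER_OPTIC_PAIR

def pod_cost(servers: int) -> int:
    """Servers connect over cheap copper to a TOR switch that aggregates uplinks."""
    return servers * (COPPER_RUN + TOR_UPLINK_SHARE)

n = 40  # one pod of servers
print(f"home run: ${home_run_cost(n):,}")  # $160,000
print(f"pod/TOR:  ${pod_cost(n):,}")       # $24,000
```

Even with generous allowances for TOR hardware, the pod design amortizes the expensive fiber and core ports across the whole pod instead of paying for them once per server.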
  19. HP ProCurve 6600 Switch Series: dramatically reducing complexity and OpEx
      - Five high-density Gig and 10Gig, data-center-optimized, 1U top-of-rack switches (6600-24G, 6600-48G and 6600-24XG models)
      - Consistent ProVision ASIC-based switch fabric features and management
      - Front-to-back, reversible airflow
      - Highly available, power-efficient hardware architecture and software features
  20. HP ProCurve Data Center Connection Manager
      - Automated, policy-based provisioning of network and server resources
      - Formalizes common data center workflow activities without organizational disruption
      - Eases compliance and troubleshooting in a virtualized, complex environment
      - Works with multivendor environments (servers and networks)
  21. Application needs span technology team 'silos' in the data center
      Enterprise applications have requirements across technology silos:
      - Network team: provides appropriate connectivity, security and bandwidth; network-level resiliency, load balancing, firewalls
      - Server team: provides hardware resources, virtualization, backup and restore; typically owns application deployment
      - Storage team: provides appropriate storage for the application directly or via the server; RAID and resiliency/backup settings
      - Compliance team: crucial in large DCs or specific 'production' environments (FSI etc.); ensures compliance with security, asset management, or regulatory standards
      The processes between these IT teams are manual, reactive and 'discovery-based'. As application provisioning and change volume increase, these processes do not scale, and troubleshooting becomes difficult with no centralized record of an application's total infrastructure requirements.
  22. Today's automation is element-focused
      Typically, the data center network's elements (intelligent switches) are plumbed statically with standard configurations; blade servers and VMs in each server pod are then applied to that plumbing via their own element management. Customizing network config and server config together is still manual.
  23. Tomorrow's automation is service-focused
      What if the network were presented as an inventory of subscriptions (virtual connections), a "virtual patch panel" of server connections, and network access were granted and configured one server/NIC at a time?
  24. DCM use case
      1) The network admin sets up policies and places new connections in the connection inventory
      2) The server admin selects an available connection
      3) The server admin subscribes to the connection through DCM
      4) The server is configured according to the subscription policies
      5) The new server or VM registers using RADIUS
      6) L3 is enforced using DHCP (policy enforcement via RADIUS/DHCP responses); network infrastructure is dynamically deployed for each connection
      7) Other network policies (router, firewall, load balancer, DLP, IDS, etc.) are automated using HP NAS
      Subscription, registration and IP-allocation events federate into UCMDB, feeding compliance checking, fault management and capacity management.
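The connection-inventory idea behind steps 1-3 can be sketched in a few lines. Every class and method name below is hypothetical; this is not DCM's actual API, only an illustration of a "virtual patch panel" that binds subscription policies to servers:

```python
# Hypothetical sketch of the DCM connection-inventory flow. Not DCM's real API.

class ConnectionInventory:
    """A 'virtual patch panel': named connections with attached network policies."""

    def __init__(self):
        self._connections = {}   # name -> {"policy": ..., "subscriber": ...}

    def add(self, name: str, policy: dict) -> None:
        # 1) Network admin sets up policies and places the connection in inventory.
        self._connections[name] = {"policy": policy, "subscriber": None}

    def available(self) -> list:
        # 2) Server admin browses connections not yet subscribed.
        return [n for n, c in self._connections.items() if c["subscriber"] is None]

    def subscribe(self, name: str, server: str) -> dict:
        # 3) Subscribing binds the server to the connection and returns the
        #    policies that drive server configuration and later enforcement.
        conn = self._connections[name]
        if conn["subscriber"] is not None:
            raise ValueError(f"{name} already subscribed by {conn['subscriber']}")
        conn["subscriber"] = server
        return conn["policy"]

inv = ConnectionInventory()
inv.add("web-tier-a", {"vlan": 110, "subnet": "10.1.10.0/24", "acl": "web-in"})
print(inv.available())          # ['web-tier-a']
policy = inv.subscribe("web-tier-a", "blade-07")
print(policy["vlan"])           # 110
```

In the real workflow the returned policies would feed RADIUS/DHCP responses and downstream device configuration (steps 4-7), rather than a simple dictionary lookup.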
  25. DCM architecture: links network configuration to server tools (diagram)
      - The network admin uses the DCM UI to set up policies and place new connections in the connection inventory; the server admin uses it to select an available connection, subscribe to it via web services, and configure the server according to the subscription policies
      - A new server registers with the network; the registration event drives authentication/registration and edge policies, and other network policies (router, firewall, load balancer, DLP, IDS, etc., from F5, Check Point, Cisco, Riverbed and others) are automated using Opsware (HP NAS); network infrastructure is plumbed for each registered connection
      - Server-side integrations: VMware VirtualCenter, Blades Essentials, Insight Dynamics, Virtual Connect, HP SAS
      - HP Software integrations: OV SC, OV NNM, Peregrine, Mercury, HP PAS
      - Events federate into UCMDB for compliance checking, fault management and capacity management
  26. Conclusion
      - Ethernet has always won
      - There are still plenty of challenges, and not only FCoE
      - HP is a key player: standards and building blocks
  27. We would like to invite you to join us on stand 610.
