IP Expo 2009 – Ethernet - The Business Imperative

Speaker notes
  • 10GE will start to enter the HPC market massively with 10G LOM; Voltaire has developed a 10GE switch. Only two vendors offer FC E-ports, Cisco and Brocade, and it will stay that way because the FC market is not a growing market. Brocade made a step toward Ethernet with its Foundry acquisition.
  • Today, you will see pictures and discussions around modern DC networking trends, in particular those related to virtualization and the move of many of our enterprise customers to more HPC-like topologies. Within these trends, you will see the move towards VLL2 (very large layer 2) networks to enable virtual workload mobility, and you will also see a trend towards flatter HPC networks to address the growth of scale-out deployments in clustering, middleware, business intelligence and multicast. I will start discussing these trends right now. In particular, what a lot of our customers are asking for right now is a large, flat layer 2 network domain. We think of it as an interconnect fabric in the core surrounded by a set of intelligent edge switches where we can do policy-based segmentation at the edge. In other words, the interconnect fabric is very high performance and relatively dumb, and it can move traffic at very low latencies. The blue ring around the edge, the intelligent edge of the network, is responsible for segmenting the network, tagging the appropriate virtual networks, dealing with access control, and making sure that traffic is separated properly, especially in a multi-tenant environment.
  • The key to building a network such as this is to achieve exceptional cost/performance over a very large number of ports, and I will discuss that in a minute. By making the core of this network relatively dumb, which we have accomplished, we can allow an extremely large domain with the flexibility for virtualization, while not burdening the core of the network with a high degree of complexity or processing overhead that dramatically increases the cost of ownership. Many networks today are much smaller than the networks that enterprises are moving to, and with larger size comes increased complexity, unless we remove complexity from the core and make it simpler. The key here is that the core L3 switch you see in the upper left is also a client of that VLL2 domain, and you can think of the VLL2 domain as a wiring service. It is a substrate of wires that is sufficiently high performance and low latency that I can afford a few extra hops across a relatively dumb fabric in order to achieve very high connectivity.
  • So, one of the purposes of this is to allow me to connect thousands of servers in a single domain. In today’s networks, this is not really feasible because of loop-avoidance techniques and the complexity that surrounds much of L2 Ethernet as we know it today. In particular, I don’t have the port density at 10G and 40G in most current switches to support a simple topology, and we are thus forced to use spanning tree, multi-instance spanning tree, or other proprietary mechanisms like per-VLAN spanning tree to avoid loops in these networks. Going forward, we think there are ways to get loop avoidance by having a virtual, simplified topology. But to achieve that simplified topology, we again have to keep the feature set minimal.
  • The purpose of having thousands of servers in a single domain is to allow me to plumb any subnet to any server. This matters when provisioning new servers: I don’t want to be shackled into leaving space in racks just because certain subnets are not available in certain racks; instead I have the flexibility to place any workload anywhere in the data center, and I can separate the process of facilities design from the process of server provisioning.
  • Also in this case, because I have a VLL2 domain, my subnets can have a very large span if I want them to, and any server in any rack can host a workload that can be mobilized and VMotioned to any other server in the DC if I have sufficient flexibility. Obviously, the offsetting problem is that in a multi-tenant environment we want to be really sure that two different networks will never see their traffic cross paths. So it is extremely important to have a segmentation model that is simple, so that it is not prone to user configuration errors, but that also lets us take advantage of physical separation within those large switches if we so choose. This need for workload mobility is one of the factors that drives us to a VLL2 domain. If I cross subnets with a workload, I change the workload’s identity: the software running in that workload tends to need to know its IP address, its own network identity, and if I change that identity I no longer have a seamless workload mobility environment. On the other hand, if I have very small L2 domains, the benefits of workload mobility are dramatically reduced. This, along with the provisioning freedom for new servers, is part of the reason we want VLL2 domains. Keeping Layer 3 inside the Layer 2 domain would force us into a store-and-forward mentality across the entire topology, and into a much greater degree of complexity in configuring all these switches. So the goal is a VLL2 domain that avoids excess complexity, has strong segmentation capability, and leaves much of the L3 mechanics to a core, external L3 switch.
  • The other key to this network is to allow endpoints to be aggregated, for the purposes of cable aggregation, segmentation, traffic aggregation and traffic reduction, by offering virtual connections to the network. You may be familiar with some of HP’s products in this area; HP Virtual Connect is one of them. HP is also proposing an IEEE standard that we call Virtual Ethernet Port Aggregation (VEPA). In this picture, those blue multiplexers are just that: they multiplex end-station traffic from servers, time-division multiplexing a shared uplink. A key to this is to keep it simple. Again, if I really care about network segmentation, I am going to force the traffic of servers in a VMware or Xen environment to leave the soft switch and go through a conventional Ethernet switch where I can do policy-based segmentation of traffic. If I don’t care about that and security is not an issue, I may dedicate an entire set of servers and hypervisors to a single virtual network. There are many ways to do this, but the key is to establish an architecture for consolidating these virtual connections and multiplexing them onto the network. I will go into more of the physics of DC design in a little bit.
  • The other key is another client of this large layer 2 domain: what we think of as application delivery control (ADC). If I take ADCs, server load balancers and firewalls as we know them today and layer them onto this network, then because the network is so low latency, so high performance and so high throughput, I can afford the hops I configure through this core to get to these devices.
  • In the process, I can actually virtualize server tiers in the network using a conventional ADC, and this can be a soft configuration. In this case, traffic from users comes in on the core L3 side; that is the blue arrow. It is forced into a particular virtual network, what we might think of as a DMZ, segmented into the L2 domain. That traffic is handled by a web server and may involve a request or a write to an application server, which you can think of as the green arrow going to the server tier front end; in this case it could be an F5 or Citrix load-balancing system. There, an entire server tier is virtualized using NAT, and the ADC at the upper right has its own private list of the servers involved in providing that application service; that is the yellow arrow going down to a server, in this case a J2EE server. So, by treating the L2 domain as a very high-performance substrate, I build functionality and capabilities on top of that low-layer service. (A toy sketch of this NAT-based tier virtualization appears after these notes.)
  • I also want to be able to soft-configure the access control and the policy-based segmentation that occurs at that outer layer. For example, if I want to host a server workload somewhere in this network, I don’t have to pin down which port that server will be attached to in order to configure the appropriate access control or VLAN tagging. Instead, I can allow the server to identify itself, using a technology I will talk about in a minute, and then configure that edge switch appropriately at that port. If that port is an uplink from one of the shared multiplexers, then I may have several tagged VLANs that are clients of that particular network port. We can do this with a standards-based architecture that takes advantage of industry standards such as RADIUS, and we can use DHCP as well to configure the actual guest OS on a virtual machine or a dedicated server. We call it Data Center Connection Management, which is what DCM stands for.
  • We can also take the fabric itself and virtualize how it appears, since again it is very high performance and very low latency, probably built as a set of cut-through crossbar fabrics. I can draw lines through this fabric that represent connections within a virtual network, and I can manage those to a certain level of bandwidth and response time, more or less as an SLA covering both performance and availability. We call this fabric management, or fabric virtualization.
  • And then the key is to take the network itself and the resources it connects, pool those physical resources, especially those that are either not yet in use or in use now but might not be in the future, classify their state, represent that state as a set of resource pools, and make it available to a service portal. This is how the network serves the self-service data center.
  • Most important to communicate on this slide: HP can now deliver end-to-end data center solutions (including HP innovation in the switching fabric) with the extension of ProCurve’s Adaptive Networks vision to the data center. HP ProCurve introduces the 6600 series of data-center-optimized switching platforms, aimed at dramatically reducing complexity in the network and thus reducing opex. This new family of switches optimizes network connectivity for the DC server access layer. The 6600 series employs the same code base and management interface as the rest of the ProVision ASIC-based switching line. Pictured are the 6600-24XG (10-Gig optimized), the 6600-24G (mid-density Gig optimized) and the 6600-48G (high-density Gig optimized). Not pictured are the 6600-24G/4XG and 6600-48G/4XG, which are identical to the 24G and 48G platforms with the addition of four 10G uplinks. Additional material (use ONLY when you MUST provide more detail): enabling unified core-to-edge Adaptive Networks in the data center that dramatically reduce complexity and lower operational expenses. Data center optimized for server access: front-to-back airflow for top-of-rack collocation; hot-swappable power supplies and fans. High performance: superior system performance and scalability; critical networking features. High availability: redundant, resilient platform; proven architecture; high-availability software features. Unified core-to-edge solution: ProVision ASIC-based architecture; ProCurve unified infrastructure and management.
  • Most important to communicate on this slide: in addition to the new 6600 series switching family, HP ProCurve announces the new Data Center Connection Manager (DCM). DCM helps customers automate the provisioning of server network connections in a controlled, automated fashion. Why is this relevant? Virtualization ‘sprawl’ drives up demand for provisioning on the rest of the infrastructure, particularly networking, when each virtual machine has unique connectivity and security requirements. The flexibility granted by virtualization technologies like VMotion and Virtual Connect stresses the static configuration of the network edge. What happens when you move a virtual machine workload from one blade system to another? The network has to be re-provisioned based on the change. Provisioning automation tools in the market today (like Cisco VFrame) require a significant investment and an overhaul of organizational structures to support their view of ‘best practices’ in the data center. One of the key elements of Data Center Connection Manager is its flexibility in terms of who does what: it can adapt to fit existing organizational roles, or help organizations transform them. In typical data center environments, the process of provisioning a network connection to a server or virtual machine can be tedious and static, ripe for human error. Data Center Connection Manager provides a tool set to help bridge the gap between the server admin and network admin teams, creating an inventory of network connections which can be applied to servers based on defined policies. This is accomplished without disrupting existing workflows. DCM works with mixed-vendor environments, both network and server, and is designed to work in concert with HP’s other provisioning tool sets. The capability can be deployed as a controller appliance or as a software image housed on a ProCurve ONE services module. DCM can directly take cost out of data center operations and provide a flexible, auditable provisioning process through which the network and server admin teams gain efficiencies. Additional material (use ONLY when you MUST provide more detail): automated, policy-based provisioning of network and server resources in the data center. Formalizes common data center workflow activities. Automates and formalizes the workflow process between network and server IT teams for application provisioning. Enables IT to do better capacity management via connection inventory planning. Eases compliance and troubleshooting in a virtualized, complex environment. Creates an authoritative, single source of information linking network topology and configuration to servers and VMs. DCM Controller, available April 2009, U.S. list price $31,999. Data Center Connection Manager ONE Software, available June 2009, U.S. list price $27,100.
  • This slide drills down on the technology silo concept introduced on the previous slide. It also uses animation, so be sure to understand how the slide flows (or remove the animation). The high-level point is that enterprise applications, particularly those taking advantage of virtualization, have dependencies across all of the technology silos in the data center. The silos give examples of the relationship between the business-level application and the technology-level products that run it. You can use examples here to illustrate the point: consider a large online ecommerce or auction site; the application that runs their business has specific networking needs, CPU and server hardware needs, and storage and backup requirements, and depending on the market and size, it might also have compliance requirements if they are dealing with credit card numbers or state/government business. Note the compliance team is ‘dotted line’, indicating that while not every data center has a specific compliance silo, it is a function most organizations perform to a greater or lesser degree; it could be as simple as asset management or as complex as full regulatory standards adherence in the financial services market. The last point to make on this slide is important: in general, the way these teams interact is usually manual, reactive and discovery-based, not normally planned proactively in advance. Application provisioning requirements are increasing due to the technology introduced on the previous slide, and these manual processes do not scale well. We can also introduce the concept of troubleshooting: without a single central repository of information it is difficult, because you first have to chase down all the details of an application’s requirements through the various silos.
  • This slide begins to explore the automation side of DCM by presenting the traditional, established data center design methods. The colored lines represent VLANs, and the intent is to show that network elements are typically statically configured with a standard configuration. Then you apply servers and other edge devices to the network via their own element management, like Virtual Center, Virtual Connect, etc. If there is a requirement for customizing the network config to the server config, as there often is, that is done manually on a case-by-case basis using the ticket-response architectures and workflow processes introduced on the ‘silo’ slide. The intent of this slide is to set up the next slide, which proposes DCM’s dynamic approach to configuring the network edge.
  • Note this slide uses animation and replaces some elements with others. The premise is: what if the network were presented to the server administrators as an inventory of subscriptions, or ‘virtual connections’? Assume they can ‘order’ the right network configuration in the same manner you would buy something online. The colors in the ‘virtual patch panels’ equate to the VLANs or connectivity settings that were plumbed manually on the previous slide. Wherever the device happens to be, when it is provisioned the server admin just has to select the right type of port, and when it connects, the port is automatically configured according to the policy for that device. When a device connects to the network, its specific access is granted one NIC, MAC or server at a time based on policy settings. This can happen dynamically, which means the infrastructure can support the mobile nature of virtualized servers and fast-scaling applications: if the server moves, its policy follows it automatically, and you do not need to manually re-provision the network. (A minimal inventory/subscription sketch appears after these notes.)
  • So far we have explained steps 1 through 3. After subscribing to the policy, the server admin builds the VM or server according to the policy, then connects it to the network. The server registers automatically using RADIUS (assume MAC authentication), and the network configuration is dynamically deployed to the edge port. Then the server’s L3 settings are enforced via DHCP. In step 7 we can automate the configuration of other network components via script calls to HP NAS (Opsware NAS). The last point is that because the connection inventory is backed by an SQL database, it is very useful from a compliance, troubleshooting and capacity-management perspective. (A registration/DHCP sketch appears after these notes.)
  • This slide overlays the common data center tools to the provisioning process. DCM offers a UI for both network admins and server admins. Server admins can use a variety of element manager tools to configure server hardware and virtual machines. HP NAS manages the automation of centralized networking components from vendors such as Cisco, F5, Checkpoint etc. HP Software has a wide offering of products in the process automation and compliance markets.
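The note on virtualized server tiers (slide 13) describes an ADC fronting a tier with NAT and a private backend list. The following is only a toy sketch of that pattern, not HP’s, F5’s or Citrix’s implementation; every class name and address below is hypothetical.

    # Toy model of an ADC virtualizing a server tier via NAT.
    import itertools

    class VirtualServerTier:
        """A published virtual IP fronting a private pool of real servers."""
        def __init__(self, vip, backends):
            self.vip = vip                                # the only address the web tier sees
            self._backends = itertools.cycle(backends)    # the ADC's private server list

        def nat_for_new_flow(self, client_ip):
            """Pick a backend round-robin and return the NAT rewrite for the flow."""
            backend = next(self._backends)
            return {"client_view": (client_ip, self.vip),
                    "rewritten": (client_ip, backend)}

    # Usage: the DMZ web server only ever talks to 10.0.50.10; the ADC rewrites
    # the destination to one of the private J2EE servers behind it.
    app_tier = VirtualServerTier("10.0.50.10",
                                 ["10.0.51.11", "10.0.51.12", "10.0.51.13"])
    print(app_tier.nat_for_new_flow("10.0.40.25"))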
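The ‘virtual patch panel’ note (slide 24) describes an inventory of connection types that a server admin subscribes to. The sketch below is a deliberately simplified assumption, not DCM’s actual data model or API; every class, field and value is hypothetical. It only shows the shape of the idea: a policy (VLAN plus access profile) is bound to a NIC at subscription time, so the edge port can later be configured from policy rather than by hand.

    # Minimal "virtual patch panel": an inventory of connection policies that
    # server admins subscribe NICs to. Hypothetical names throughout.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class ConnectionPolicy:
        name: str       # e.g. "dmz-web"
        vlan_id: int    # VLAN tagged on the edge port
        acl: str        # access-control profile applied at the intelligent edge

    class ConnectionInventory:
        def __init__(self, policies):
            self._policies = {p.name: p for p in policies}
            self._subscriptions = {}          # NIC MAC -> ConnectionPolicy

        def subscribe(self, mac: str, policy_name: str) -> ConnectionPolicy:
            """Server admin 'orders' a connection type for a NIC."""
            policy = self._policies[policy_name]
            self._subscriptions[mac.lower()] = policy
            return policy

        def policy_for(self, mac: str) -> Optional[ConnectionPolicy]:
            """Looked up later, when the NIC actually shows up on an edge port."""
            return self._subscriptions.get(mac.lower())

    inventory = ConnectionInventory([
        ConnectionPolicy("dmz-web", vlan_id=100, acl="dmz-only"),
        ConnectionPolicy("app-tier", vlan_id=200, acl="internal"),
    ])
    inventory.subscribe("00:1b:78:aa:bb:cc", "dmz-web")
    print(inventory.policy_for("00:1B:78:AA:BB:CC"))   # same policy, case-insensitive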
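The use-case note (slide 25) walks through the runtime side: the server registers via RADIUS MAC authentication, the edge port is configured from the returned attributes, and L3 settings are enforced through DHCP. The sketch below only mimics that flow in plain Python; the Tunnel-* attribute names are the standard RFC 3580 ones used for dynamic VLAN assignment, but the table contents, function names and addresses are illustrative assumptions, not DCM behaviour.

    # Illustrative registration flow only.
    # NIC MAC -> (VLAN, subnet, gateway), as recorded by the connection inventory.
    SUBSCRIPTIONS = {
        "00:1b:78:aa:bb:cc": (100, "10.0.100.0/24", "10.0.100.1"),
    }

    def mac_auth(mac):
        """Step 5: the server registers via RADIUS MAC authentication."""
        entry = SUBSCRIPTIONS.get(mac.lower())
        if entry is None:
            return None                       # Access-Reject: NIC never subscribed
        vlan, _, _ = entry
        return {                              # attributes the edge switch acts on
            "Tunnel-Type": "VLAN",
            "Tunnel-Medium-Type": "IEEE-802",
            "Tunnel-Private-Group-ID": str(vlan),
        }

    def dhcp_enforce(mac, offered_ip):
        """Step 6: enforce the workload's L3 identity via DHCP."""
        _, subnet, gateway = SUBSCRIPTIONS[mac.lower()]
        return {"ip": offered_ip, "subnet": subnet, "router": gateway}

    print(mac_auth("00:1B:78:AA:BB:CC"))                    # -> VLAN 100 attributes
    print(dhcp_enforce("00:1b:78:aa:bb:cc", "10.0.100.25"))

In a real deployment, step 7 of the notes would then push router, firewall and load-balancer policies through HP NAS; that part is outside the scope of this sketch.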

Transcript

  • 1.  
  • 2. Ethernet: the business imperative. Olivier Vallois, Data Centre Business Development Manager, HP ProCurve
  • 3. Agenda
    • A bit of history
    • Current challenges for Ethernet
    • Where is Ethernet going?
    March 15, 2010
  • 4. A bit of history. Who is next? [Timeline, 1970s to 2011: Ethernet generations 10Base, 100Base, 1000Base, 10G, CEE (2011), alongside Token Ring, 100VG, FDDI, ATM/LANE, Fibre Channel, InfiniBand?]
    • Cost
    • Speed
    • Technologies
  • 5. Remaining/upcoming challenges
    • Speed : 40G and 100G
    • Latency : cut-through techniques and Silicon improvements
    • Lossless : CEE (2011)
    • High availability: loop avoidance techniques. TRILL is a proposed standard; proprietary techniques in the meantime
    • Server virtualisation: apps spread across multiple nodes (low latency / high speed / very large domain); security, management and troubleshooting of virtual switches and vNICs (VEPA); workload mobility (detect, log, adapt configuration, follow the VM); ability to plumb any subnet to any server in any rack (very large domain)
    March 15, 2010
  • 6. New Data Center Transformed Network: March 15, 2010 VLL2 Domain Core L3 Interconnect Fabric Replication other VLL2 Large, flat, high-performance domain…
  • 7. March 15, 2010 Core L3 Intelligent Edge Switches Intelligent Edge Switches … with exceptional cost/performance, VLL2 Domain New Data Center Transformed Network:
  • 8. March 15, 2010 Core L3 Intelligent Edge Switches Intelligent Edge Switches … that can connect 1000’s of servers in a single domain VLL2 Domain New Data Center Transformed Network:
  • 9. March 15, 2010 Core L3 Intelligent Edge Switches Intelligent Edge Switches … with flexibility to plumb any subnet to any server… VLL2 Domain New Data Center Transformed Network:
  • 10. March 15, 2010 Core L3 Intelligent Edge Switches Intelligent Edge Switches … and move a workload from any server to any server VLL2 Domain New Data Center Transformed Network:
  • 11. March 15, 2010 Core L3 Intelligent Edge Switches … with consolidated, virtual connections to the network New Data Center Transformed Network: VLL2 Domain Intelligent Edge Switches
  • 12. March 15, 2010 Core L3 Intelligent Edge Switches … application delivery control provides a layered service… New Data Center Transformed Network: VLL2 Domain Intelligent Edge Switches
  • 13. March 15, 2010 Core L3 Intelligent Edge Switches … and virtualizes server tiers… New Data Center Transformed Network: VLL2 Domain Intelligent Edge Switches
  • 14. March 15, 2010 Core L3 … Network connections themselves are virtualized… New Data Center Transformed Network: DCM
  • 15. March 15, 2010 Core L3 … as is the fabric… New Data Center Transformed Network: DCM Fabric Mgmt
  • 16. March 15, 2010 Core L3 … and the entire network is presented as a resource pool New Data Center Transformed Network: DCM Fabric Mgmt Service Portal Resource Pool (s) Service Catalog
  • 17. Vision summary
    • Large, flat, high-performance domain…
    • … with exceptional cost/performance
    • … that can connect 1000’s of servers in a single domain
    • … with flexibility to plumb any subnet to any server…
    • … and move a workload from any server to any server
    • … with consolidated, virtual connections to the network
    • … application delivery control provides a layered service…
    • … and virtualizes server tiers…
    • … Network connections themselves are virtualized…
    • … as is the fabric…
    • … and the entire network is presented as a resource pool
    March 15, 2010
  • 18. DC Network Switch Thermal Design: side-to-side cooling → front-to-back cooling (6600s). March 15, 2010. Side-cooled switches draw rising warm and re-circulated hot air from inside the rack and exhaust hot air inside the rack, so the extra heat is leakage that must be moved. Front-to-back cooled switches draw air directly from the cool aisle and exhaust it directly into the hot aisle, so no extra heat builds up inside the rack.
  • 19. 10Gb growth at the edge brings new topology approaches March 15, 2010
    • 10G low-cost connectivity will evolve repeatedly for the next 3 years
    • Home-runs directly to core switches carry huge optics cost with 10G
    • Using TOR/Pod-based switches greatly reduces 10G costs
    10G home-run approach (up to $4k more per server!): home runs directly to core switches use expensive fiber cabling and expensive, fixed-capacity ports. 10G pod-based (6600) approach: cheap copper cabling within pods, lower-cost ports, flexible growth.
  • 20. HP ProCurve 6600 Switch Series
    • Five high-density Gig and 10Gig, data center optimized, 1U top-of-rack switches
    • Consistent ProVision ASIC- based switch fabric features and management
    • Front to back, reversible airflow
    • Highly available, power efficient hardware architecture and software features
    March 15, 2010 HP ProCurve 6600-48G Switch Series HP ProCurve 6600-24XG Switch HP ProCurve 6600-24G Switch Series Dramatically reducing complexity and OpEx
  • 21.
    • Automated, policy-based provisioning of network and server resources
    • Formalizes common data center workflow activities without organizational disruption
    • Eases compliance and troubleshooting in a virtualized, complex environment
    • Works with multivendor environments (servers and networks)
    March 15, 2010 HP ProCurve Data Center Connection Manager
  • 22. Application Needs Span Technology Team ‘Silos’ in the Data Center March 15, 2010
    • Network Team
    • Provide appropriate connectivity, security, bandwidth
    • Network level resiliency, load balancing, firewall
    • Server Team
    • Provide hardware resources, virtualization, backup and restore
    • Typically owns application deployment
    • Storage Team
    • Provide appropriate storage for application directly or server
    • RAID and resiliency / backup settings
    • Compliance Team
    • Crucial to large DC or specific ‘production’ environments (FSI etc)
    • Ensures compliance to security, asset management, or regulatory standards
    • Manual, reactive, ‘discovery-based’ processes between IT Tech teams
    • As application provisioning and change volume increases these processes do not scale
    • Troubleshooting becomes difficult with no centralized information for an application’s total infrastructure requirements
    Enterprise applications have requirements across technology silos
  • 23. Today’s Automation is Element-Focused. March 15, 2010. Typically, network elements are plumbed statically with standard configurations. Then servers are applied to the plumbing via their own element management. Customizing network config and server config together is still manual. [Diagram: data center network with intelligent switches, blade servers and VMs in a server pod.]
  • 24. Tomorrow’s Automation is Service-Focused. March 15, 2010. What if the network was presented as an inventory of subscriptions, i.e. virtual connections (a server connection inventory of “virtual patch panels”), and network access was granted and configured one server/NIC at a time? [Diagram: data center network with intelligent switches, blade servers and VMs in a server pod.]
  • 25. DCM Use Case. March 15, 2010. 1) Network admin sets up policies and places new connections in the connection inventory. 2) Server admin selects an available connection for a new server or VM. 3) Subscribe to the connection. 4) Configure the server according to the subscription policies. 5) Register the server using RADIUS. 6) Enforce L3 settings using DHCP; the network infrastructure is dynamically deployed for each connection through policy enforcement in the RADIUS/DHCP responses. 7) Automate other network policies (router, firewall, load balancer, DLP, IDS, etc.) using HP NAS. Subscription, registration and IP allocation events federate into UCMDB, feeding compliance checking, fault management and capacity management.
  • 26. DCM Architecture – Links Network Configuration to Server Tools. March 15, 2010. 1) Network admin sets up policies and places new connections in the connection inventory. 2) Server admin selects an available connection for a new server. 3) Subscribe to the connection. 4) Configure the server according to the subscription policies, using tools such as VMware Virtual Center, Blades Essentials, Insight Dynamics, Virtual Connect and HP SAS. 5) Register the server with the network (authentication/registration event); Data Center Connection Manager deploys the edge policies, and the network infrastructure is plumbed for each registered connection. 6) Automate other network policies (router, firewall, load balancer, DLP, IDS, etc., from F5, Checkpoint, Cisco, Riverbed, etc.) using Opsware (HP NAS). Registration events federate into UCMDB, feeding compliance checking, fault management and capacity management, alongside HP Software tools such as OV SC, OV NNM, Peregrine, Mercury and HP PAS. DCM exposes a UI to both network admins and server admins.
  • 27. Conclusion
    • Ethernet has always won
    • Still plenty of challenges – not only FCoE
    • HP is a key player: standards and building blocks
  • 28. We would like to invite you to join us on stand 610