Best Practices for Datacenter I/O Infrastructure
 

Current I/O infrastructures rely on multiple costly fabrics, server adapters, and leaf switches to provide data and storage networks. Server consolidation and virtualization exacerbate these costs by driving more I/O through each server. New products such as NextIO’s vNET can significantly reduce these costs while providing increased capabilities and throughput.

Today’s IT professionals are confronted with a variety of challenges regarding server I/O and networking. Server virtualization, the rise of the cloud, and the need for multiple fabrics (whether Fibre Channel and Ethernet, or multiple Ethernet networks) are all increasing the cost of I/O networks relative to other parts of the datacenter. The result has been a slowdown in the adoption of server virtualization and limited use of “secondary” fabrics such as Fibre Channel in the datacenter. Fabric consolidation approaches such as Fibre Channel over Ethernet (FCoE) have to date not delivered the simplification or cost savings needed. However, new I/O consolidation products based on the PCI-Express (PCI-E) bus are arriving on the scene that deliver the required simplification at a significantly lower cost than the status quo, or than consolidation over fabrics such as Ethernet and InfiniBand, can provide. Furthermore, because PCI-E based products can support any PCI-E peripheral, they offer the ability to consolidate a far greater set of I/O resources. These products can deliver significantly lower acquisition cost, higher performance, and a greater range of consolidation, with less disruption to current datacenter business practices. This session will touch upon best practices for deploying this new technology.


    Presentation Transcript

    • Best Practices for Datacenter I/O Infrastructure
    • The Problem: Before VMs, I/O was an issue for only a few applications. With VM/cloud environments, I/O has become the bottleneck, and the only solution today is to overprovision I/O.
      The Impact of I/O Overprovisioning: higher CapEx, larger server footprint, complex fabric infrastructure, higher OpEx.
    • Illustrating the Problem – 10 Servers with GbE and FC: ten 4U servers with FC and GbE require
      – 2 FC switches
      – 4 48-port GbE switches
      – 20 FC HBAs
      – 40 quad-port GbE NICs
      – 20 FC cables
      – 160 CAT5 cables
      – 46U of rack space
    • Illustrating the Problem – Ten Servers with FC and 10GbE: ten 4U servers with FC and 10GbE require
      – 2 FC switches
      – 2 20-port 10GbE switches
      – 20 FC HBAs
      – 30 single-port 10GbE NICs
      – 20 FC cables
      – 30 10GbE cables
      – 44U of rack space
    • The Answer: The Concept of Shared I/O
      – Converge I/O through a common mechanism
      – Separate compute and I/O so that they can be managed and provisioned separately
      – Provide an interface at the server that can support today’s and tomorrow’s peripherals
      The Value of Shared I/O: lower CapEx, smaller servers, simpler fabric infrastructure, lower OpEx
    • Options for Shared I/O and I/O Consolidation
      – Buy server blades: choices in the server are limited; peripheral device choices are limited; changes to compute and memory resources; vendor lock-in
      – Use FCoE: yet another fabric; great technology, but does it really reduce cost?; forklift upgrade when 40GbE comes around; does it really simplify the rack?
      – Use another fabric (InfiniBand): highest performance; proprietary drivers; costly entry point; technology lock-in
      – Use PCI Express: free in the server; supported by all OSs and VMs; supported by all peripheral devices; future-proof
    • Traditional Server I/O Comparison – traditional top-of-rack network vs. NextIO vNET top of rack
      – Dedicated Ethernet NIC and Fibre Channel HBA per server vs. network adapters shared among multiple servers
      – Dedicated network connections per server vs. a single PCIe cable carrying multiple network connections
      – 128 or more cables per rack vs. 64 cables per rack
      – 4 or more Fibre Channel and Ethernet cards per server vs. 2 PCIe pass-through Host Interface Cards
      – 4 or more fabric switches (top-of-rack or leaf) for each network vs. 2 shared I/O top-of-rack appliances
    • Illustrating the Solution – Shared I/O with 10 Servers: 2 vNET IOV appliances, 20 passive PCIe HICs, and 20 PCIe cables replace the I/O infrastructure of either traditional configuration
      – FC and GbE configuration replaced: 2 FC switches, 4 48-port GbE switches, 20 FC HBAs, 40 quad-port GbE NICs, 20 FC cables, 160 CAT5 cables, 46U of rack space
      – FC and 10GbE configuration replaced: 2 FC switches, 2 20-port 10GbE switches, 20 FC HBAs, 30 single-port 10GbE NICs, 20 FC cables, 30 10GbE cables, 44U of rack space
    • Simplification/Savings from Shared I/O Solution (FC + GbE vs. FC + 10GbE vs. vNET)
      – TOR switches: 6 (2 FC, 4 GbE) vs. 4 (2 FC, 2 10GbE) vs. 2 (vNETs)
      – Server cards: 60 (20 FC, 40 quad-port GbE) vs. 50 (20 FC, 30 10GbE) vs. 20 (passive PCIe HICs)
      – Cables: 180 (20 FC, 160 CAT5) vs. 50 (20 FC, 30 10GbE) vs. 20 (PCIe)
      – Total rack space: 46U vs. 44U vs. 28U
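
The counts in the comparison above follow directly from the per-rack figures quoted on the two "Illustrating the Problem" slides and the shared-I/O slide. The following is a minimal Python sketch of that tally; the dictionary layout and variable names are mine and used only for illustration.

# Minimal sketch: re-tally the per-rack component counts quoted on the slides
# for ten 4U servers under each of the three I/O configurations.
configs = {
    "FC + GbE": {
        "tor_switches": 2 + 4,      # 2 FC switches + 4 48-port GbE switches
        "server_cards": 20 + 40,    # 20 FC HBAs + 40 quad-port GbE NICs
        "cables": 20 + 160,         # 20 FC cables + 160 CAT5 cables
        "rack_space_u": 46,
    },
    "FC + 10GbE": {
        "tor_switches": 2 + 2,      # 2 FC switches + 2 20-port 10GbE switches
        "server_cards": 20 + 30,    # 20 FC HBAs + 30 single-port 10GbE NICs
        "cables": 20 + 30,          # 20 FC cables + 30 10GbE cables
        "rack_space_u": 44,
    },
    "vNET": {
        "tor_switches": 2,          # 2 vNET IOV appliances
        "server_cards": 20,         # 20 passive PCIe HICs (2 per server)
        "cables": 20,               # 20 PCIe cables
        "rack_space_u": 28,
    },
}

for name, c in configs.items():
    print(f"{name:10s} switches={c['tor_switches']:2d} "
          f"cards={c['server_cards']:3d} cables={c['cables']:3d} "
          f"rack={c['rack_space_u']}U")
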
    • $avings vs. Traditional Server I/O Deployment, by datacenter size (Mid-Tiered Data Center, 500 servers / Large Enterprise Data Center, 7,500 servers / Mega Enterprise Data Center, 50,000 servers)
      – Switch ports, cards & cables: $475K / $7.13M / $47.5M
      – Power/cooling: $40K/yr / $600K/yr / $4M/yr
      – Management costs: $62.5K/yr / $937K/yr / $6.25M/yr
      – Deployment costs: $75K / $1.13M / $7.5M
      – CapEx savings: Mid-Tiered $550,000; Large Tiered $8.26M; Mega Tiered $55M
      – OpEx savings: Mid-Tiered $100K; Large Tiered $1.5M; Mega Tiered $10M
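
The CapEx and OpEx savings quoted at the bottom of the slide are, allowing for rounding, the sums of the one-time items (switch ports, cards and cables, plus deployment) and of the annual items (power/cooling plus management). Here is a short Python sketch of that roll-up, with figures in thousands of dollars taken from the table; the grouping into one-time versus annual items is my reading of the slide.

# Minimal sketch: roll the per-item savings (in $K) up into the CapEx and
# OpEx savings quoted on the slide. Figures come from the table above.
tiers = {
    "Mid-Tiered (500 servers)":         {"ports_cards_cables": 475,    "deployment": 75,
                                         "power_cooling": 40,          "management": 62.5},
    "Large Enterprise (7,500 servers)":  {"ports_cards_cables": 7_130,  "deployment": 1_130,
                                         "power_cooling": 600,         "management": 937},
    "Mega Enterprise (50,000 servers)":  {"ports_cards_cables": 47_500, "deployment": 7_500,
                                         "power_cooling": 4_000,       "management": 6_250},
}

for tier, s in tiers.items():
    capex = s["ports_cards_cables"] + s["deployment"]   # one-time savings
    opex = s["power_cooling"] + s["management"]         # annual savings
    print(f"{tier}: CapEx ~${capex:,.0f}K, OpEx ~${opex:,.1f}K/yr")
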
    • vNET™ I/O Maestro: the next-generation rack-level virtualization appliance that simplifies deployment and management of complex server I/O
    • NextIO Consolidates & Virtualizes Server I/O – Extend PCIe Outside the Server
      – Standards based: no special device drivers needed; no changes to the OS, drivers, applications, or network and storage infrastructures
      – Shared resource pool concept: eliminates TOR leaf switches; reduces I/O card count; enables dynamic reallocation of I/O resources; reduces cable sprawl
      – vNET virtualizes the physical adapters (which sit inside the vNET); the virtualized adapters appear to the servers as actual physical adapters connected to the LAN and SAN
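
Because the virtualized adapters are presented to each host as ordinary PCIe devices, standard operating-system tooling enumerates them with no vendor-specific software. As a minimal sketch, assuming a Linux host, the following lists PCI network and Fibre Channel controllers through sysfs; an adapter shared over PCIe would show up here just like a locally installed NIC or HBA. This is generic Linux enumeration, not a NextIO-specific interface.

# Minimal sketch (Linux only): enumerate PCI devices via sysfs and show
# network controllers (class 0x02xxxx) and Fibre Channel controllers
# (class 0x0c04xx). Adapters presented over PCIe appear here like any
# local NIC or HBA; no vendor-specific driver is involved in this listing.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    pci_class = (dev / "class").read_text().strip()   # e.g. "0x020000"
    vendor = (dev / "vendor").read_text().strip()
    device = (dev / "device").read_text().strip()
    if pci_class.startswith("0x02") or pci_class.startswith("0x0c04"):
        print(f"{dev.name}  class={pci_class}  vendor={vendor}  device={device}")
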
    • nControl™ Management Software
      – Provides an interface to configure, monitor and manage the I/O resource pool
      – Create I/O profiles: the I/O characteristics of a server identity are applied easily to new servers
      – Failover: migrate I/O profiles to a standby server remotely; server changes are transparent to the networks
      – Enables servers to be deployed in minutes
      – Saves OpEx: reduces the time to bring up new servers and applications without networking staff; speeds time to revenue
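
To make the idea of an I/O profile concrete, here is a purely hypothetical Python sketch of the concept: a profile captures a server's virtual NIC and HBA identities, and failover simply re-binds that profile to a standby server so the network and SAN see the same identities. The class and method names are invented for illustration and do not represent nControl's actual interfaces.

# Hypothetical sketch only -- illustrates the *concept* of an I/O profile
# and profile failover, not nControl's actual interfaces or data model.
from dataclasses import dataclass, field

@dataclass
class IOProfile:
    """The I/O identity of a server: its virtual NICs and HBAs."""
    name: str
    vnics: list = field(default_factory=list)   # e.g. MAC addresses / VLANs
    vhbas: list = field(default_factory=list)   # e.g. WWPNs / SAN zones

class ResourcePool:
    """Tracks which server each I/O profile is currently bound to."""
    def __init__(self):
        self.bindings = {}                      # profile name -> server name

    def assign(self, profile: IOProfile, server: str):
        self.bindings[profile.name] = server
        print(f"profile '{profile.name}' bound to {server}")

    def failover(self, profile: IOProfile, standby: str):
        # The profile moves with its identities, so the LAN and SAN
        # continue to see the same MACs/WWPNs on the standby server.
        self.assign(profile, standby)

pool = ResourcePool()
web = IOProfile("web-01",
                vnics=["00:25:8b:aa:00:01"],
                vhbas=["50:06:0b:00:00:c2:62:00"])
pool.assign(web, "server-07")
pool.failover(web, "server-12")
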
    • NextIO – Any-to-Any Connectivity: compute (CPUs, GPUs), network (LAN), and storage (flash, disks, iSCSI, NAS, SAN)
    • Datacenter Topology with vNET I/O Maestro: add connectivity and capacity on demand
      – Deploy up to 30 servers, with vNICs and vHBAs per server; migrate resources from server to server
      – A passive PCIe host adapter card extends the PCIe signals from each server to the vNET, with server bandwidth of up to 20 Gbps
      – 10G Ethernet and 8G Fibre Channel uplink ports connect the vNET™ I/O Maestro resource manager to the corporate network and SAN core switches
      – Transparent to servers, OS, applications, and network & storage infrastructures
    • vNET™ I/O Maestro Delivers Value Quickly
      – Simplified server management: remote and unified management of all server I/O; faster server deployment speeds time to revenue
      – Lower costs: up to 40% reduction in CapEx (minimizes the total number of server adapter cards, switch ports and cables); up to 60% reduction in OpEx (lowers energy consumption and reduces server management time and effort)
      – Greater business flexibility and agility
    • Questions?
    • Best Practices for Datacenter I/O Infrastructure
    • YOUR YEAR-ROUND IT RESOURCE – access to everything you’ll need to know
    • THE WHOLE TECHNOLOGY STACK from start to finish
    • COMMENT & ANALYSIS – Insights, interviews and the latest thinking on technology solutions
    • VIDEO – Your source of live information – all the presentations from our live events
    • TECHNOLOGY LIBRARY – Over 3,000 whitepapers, case studies, product overviews and press releases from all the leading IT vendors
    • EVENTS, WEBINARS & PRESENTATIONS – Missed the event? Download the presentations that interest you. Catch up with convenient webinars. Plan your next visit.
    • Directory – A comprehensive A-Z listing providing in-depth company overviews
    • ALL FREE TO ACCESS 24/7
    • online.ipexpo.co.uk