The document provides an overview of cloud computing, data center technologies, and Cisco solutions. It defines cloud computing, discusses concepts like SaaS, PaaS, and IaaS, and examines trends driving data center evolution such as server virtualization and automation. It also introduces Cisco's Nexus switching portfolio including the Nexus 1000V virtual switch, Nexus 5000 series, and Nexus 2000 fabric extenders for scalable server access.
Converged Data Centers - Carlos Spera - October 20 - UY
1. Converged Data Center
Carlos Spera
cspera@la.logicalis.com
BDM, Data Center
Logicalis Southern Cone
Business and Technology Working as One
2. Cloud Computing (Definitions)
Wikipedia:
"Cloud computing is a paradigm for delivering computing services over the Internet."
Russ Daniels, HP:
"Horizontal scaling, fine-grained resource control, self-service, variable cost based on usage."
ServePath:
"The use of a 3rd party service to perform computing needs on a publicly accessible IP basis. Cloud computing services are usually performed in consolidated Data Centers to keep costs low while improving overall utilization."
Elements common to all the definitions:
Access over the Internet (the "cloud")
Virtualization
Scalability
Pay-per-use
3. Cloud Computing: Concepts
We define "Cloud Computing" as a style of computing in which IT resources are:
Delivered to customers as a service using Internet technologies.
Massively scalable.
Global in reach.
Dynamically distributable, "on demand," in measurable quantity and quality.
Assigned just in time.
Services for multiple customers that share the same resources (multi-tenant).
Paid for only as the service is used.
Virtualization is the foundation for moving toward cloud computing services.
4. SaaS, PaaS, IaaS?!? The "XaaS"
SaaS (Software as a Service): a single instance of a piece of software or an application that runs on the provider's infrastructure and serves multiple customer organizations.
Example: Salesforce.com
PaaS (Platform as a Service): the encapsulation and abstraction of a development environment.
Example: Amazon EC2
IaaS (Infrastructure as a Service): a means of delivering storage and compute capacity as standardized services over the network.
Example: rackspacecloud.com
5. Type of management over SaaS, PaaS, IaaS
7. Enterprises on the way to cloud computing
Enterprises will keep a dedicated infrastructure for some purposes and consume on-demand services obtained from the cloud for others.
8. Virtualization
Virtualization is the abstraction of the physical resources of a computer so that virtual machines can run on top of it.
Each of these virtual machines sees a complete server and interacts with it through the virtualization technology.
9. What can be virtualized?
Servers (VMs, the new atomic unit in the DC)
Networking (switches, load balancers)
Security (firewalls)
Storage
User desktops (virtual desktop)
Applications (e.g., the Office suite)
10. Benefits of virtualization
Reduced administration effort:
Lower operational costs
Fewer servers to manage
Rapid deployment:
Today, 1-6 weeks (purchase, setup, software, test)
With virtualization this can be reduced to hours
Reduced infrastructure and server costs.
Better utilization of resources.
Increased and improved availability.
Tools to improve security.
11. Next Generation Data Center
As IT infrastructure becomes more complex, IT requirements shift from managing technical operations to managing service operations. This creates the need to transform the DC.
Four evolutionary forces are shaping the NGDC. The next generation of data centers will be:
• An infrastructure provisioned dynamically through automated capabilities that support the company's business processes.
• Technology services built on virtual infrastructure.
• Standardized processes.
• Technology architectures that allow IT resources to be consolidated.
12. The evolution of DC architecture
Application architecture evolution: Consolidate, Virtualize, Automate
Data Center 1.0: Mainframe (Centralized)
Data Center 2.0: Client-Server and Distributed Computing (Decentralized)
Data Center 3.0: Service Oriented and Web 2.0 Based (Virtualized)
13. What are the technology trends?
10 Gb to the servers.
Unified I/O (FCoE).
Server virtualization.
Server mobility (inter- & intra-DC).
Securing virtual server farms (east-west traffic).
Application acceleration and optimization.
15. VN-Link Brings VM-Level Granularity
Problems:
• VMotion may move VMs across physical ports; policy must follow
• Impossible to view or apply policy to locally switched traffic
• Cannot correlate traffic on physical links coming from multiple VMs
VN-Link:
• Extends the network to the VM
• Consistent services
• Coordinated, coherent management
16. Cisco Nexus 1000V: Faster VM Deployment
Cisco VN-Link (Virtual Network Link): policy-based VM connectivity, mobility of network and security properties, and a non-disruptive operational model.
VM Connection Policy:
Defined in the network
Applied in Virtual Center
Linked to the VM UUID
Defined policies (examples): WEB Apps, HR, DB, Compliance
17. Cisco Nexus 1000V: Richer Network Services
VN-Link, virtualizing the network domain: policy-based VM connectivity, mobility of network and security properties, and a non-disruptive operational model.
VN-Link property mobility:
VMotion for the network
Ensures VM security
Maintains connection state
VMs need to move because of: hardware failure, SW upgrade/patch, DRS, VMotion
18. Cisco Nexus 1000V Architecture
Virtual Supervisor Module (VSM):
Virtual or physical appliance running Cisco OS (supports HA)
Performs management, monitoring, & configuration
Tight integration with VMware Virtual Center
Virtual Ethernet Module (VEM):
Enables advanced networking capability on the hypervisor
Provides each VM with a dedicated "switch port"
Replaces the VMware vSwitch on each VMW ESX host
Collection of VEMs = 1 distributed switch
Cisco Nexus 1000V enables: policy-based VM connectivity, mobility of network & security properties, and a non-disruptive operational model.
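The policy model the VSM exposes is the port profile, which Virtual Center then sees as a port group. A minimal sketch of what such a profile might look like in the Nexus 1000V CLI (the profile name and VLAN number are illustrative, not from the slides):

```
port-profile type vethernet WEB-Apps
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
```

Because the profile is defined in the network and linked to the VM, the same policy follows the VM when VMotion moves it to another VEM.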
19. Cisco Nexus 5000
Unified fabric
Distributed virtual line cards
Lossless fabric
Virtual server awareness
Wire-speed 10GE, low latency
Multipathing
23. NX-OS Non-Stop Forwarding
OS Designed to leverage
distributed hardware architecture.
Fabric & forwarding engine Supervisor
removed from supervisor.
(Control-
Plane)
Each I/O module has independent
control-plane and forwarding
hardware.
Fabrics
EOBC
Control-plane & data-plane
separation.
I/O Module
(Forwarding
Fully distributed system for non- Engine)
disruptive SSO & ISSU.
Business and Technology Working as One
24. Nexus 5K & 2K Switching Family Overview
• Nexus 5020: 56-port Layer 2 switch; 40 fixed 10GE/FCoE/DCB ports; 2 expansion module slots
• Nexus 5010: 28-port Layer 2 switch; 20 fixed 10GE/FCoE/DCB ports; 1 expansion module slot
• Nexus 5548: 48-port Layer 2 switch; 32 fixed 1/10GE/FCoE/DCB ports; 1 expansion module slot
Expansion modules (5010/5020): Ethernet (6 ports 10GbE, FCoE, DCB); Ethernet + FC (4 ports 10GbE, FCoE, DCB + 4 ports 1/2/4G FC); Fibre Channel (8 ports 1/2/4G FC); Fibre Channel (6 ports 2/4/8G FC)
Expansion modules (5548): Ethernet (16 ports 1/10GbE, FCoE, DCB); Ethernet + FC (8 ports 1/10GbE, FCoE, DCB + 8 ports 1/2/4/8G FC)
• Nexus 2248 FEX: 48 fixed 100M/1GbE ports; 4 fixed 10GbE uplinks
• Nexus 2232 FEX: 32 1/10GE Ethernet/FCoE ports; 8 10GE DCB/FCoE uplinks
• Nexus 2224 FEX: 24 fixed 100M/1GbE ports; 2 fixed 10GbE uplinks
Managed by Cisco® Data Center Network Manager (DCNM) and Fabric Manager
25. Data Center Access Layer Options
Top of Rack (ToR)
• Typically 1-RU servers
• 1-2 GE LOMs
• Mostly 1, sometimes 2 ToR switches
• Copper cabling stays within rack
• Low copper density in ToR
• Higher chance of East-West traffic hitting
aggregation layer
• Drives higher STP logical port count for
aggregation layer
• Denser server count
Middle of Row (MoR) (or End of Row)
• May be 1-RU or multi-RU servers
• Multiple GE or 10GE NICs
• Horizontal copper cabling for servers
• High copper cable density in MoR
• Larger portion of East-West traffic stays
in access
• Larger subnets, less address waste
• Keeps agg. STP logical port count low
(more EtherChannels, fewer trunk ports)
• Lower # of network devices to manage
26. Cisco Nexus 2000 Fabric Extender (FEX)
• Nexus 5000 + Nexus 2000 is a Virtual Chassis
• Nexus 2000 is a Virtual Line Card to the Nexus 5000
• No Spanning Tree between Nexus 2000 and Nexus 5000
• Nexus 5000 maintains all management and configuration
28. Cisco Nexus 5500 Series Switches
Breakthrough Innovation
Multi-protocol
Ethernet (1/10 GbE) + Storage (FC, FCoE, iSCSI, NAS)
Multi-Layer and Highly Scalable
48 & 96 port models in 1RU & 2RU
FEX-link: over 900 100M/1GbE & 600 10GbE ports
FabricPath & Layer 2 /Layer 3
Multi-purpose
Traditional Ethernet, virtualized and unified pods
Massively scalable server access or mid-market aggregation
Industry’s Highest Density & Performance for
Fixed Switches
29. Unified Ports
Dynamic and efficient port allocation
Availability:
16-port expansion module on the Nexus 5548, 5548-UP and 5596-UP
All ports on the Nexus 5548-UP and 5596-UP
Each unified port can run either native FC or lossless Ethernet (FCoE, iSCSI, NAS).
Benefits:
Simplify the switch purchase; removes the guesswork around port-type ratios
Increase design flexibility
Remove protocol-specific bandwidth bottlenecks
Use cases:
Flexible LAN & storage convergence based on business needs
Service can be adjusted based on the demand for specific traffic
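On the -UP models, moving a unified port between Ethernet and native FC is a configuration change rather than a hardware purchase. A rough sketch of the NX-OS syntax (the slot and port range are illustrative; FC ports are allocated contiguously from the high end of the slot, and the module must be reloaded for the new port personality to take effect):

```
slot 1
  port 25-32 type fc
```

This is what lets the LAN/SAN port ratio be adjusted later as traffic demand shifts.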
30. Nexus 5548P
Nexus 5548UP
Nexus 5596UP
Nexus 5500 Layer 3 Modules
N55-D160L3 / N55-M160L3
31. Cisco Nexus 2000 Fabric Extenders (FEX)
All four models are shipping, are 1 RU, and share the same uplink transceiver options: copper CX-1 (passive) at 1m, 3m, 5m; optical FET (Nexus 2200 platforms), SR, LR (distance limited to 300m).
Nexus 2148T: 4 x 10GbE SFP+ uplinks; 48 x 1GbE RJ45 host ports (1000Base-T only); no FCoE; 1.72 x 17.3 x 20.0 in; 165W operational power; no FET support; multiple port-channel member ports on a FEX not supported; scales to 576 GbE ports w/ N5010/20 (12 FEX), 768 GbE ports.
Nexus 2224TP: 2 x 10GbE SFP+ uplinks; 24 x 100/1000Base-T RJ45 host ports; no FCoE; 1.72 x 17.3 x 17.7 in; 95W; FET supported; multiple port-channel member ports supported; scales to 288 GbE ports w/ N5010/20 (12 FEX).
Nexus 2248TP: 4 x 10GbE SFP+ uplinks; 48 x 100/1000Base-T RJ45 host ports; no FCoE; 1.72 x 17.3 x 17.7 in; 110W; FET supported; multiple port-channel member ports supported; scales to 576 GbE ports w/ N5010/20 (12 FEX).
Nexus 2232PP-10G: 8 x 10GbE SFP+ uplinks; 32 x SFP/SFP+ (1/10G) host ports; FCoE supported; 1.72 x 17.3 x 17.7 in; 270W; FET supported; multiple port-channel member ports supported; scales to 384 1/10GbE ports w/ N5010/20 (12 FEX).
32. Nexus 2000 — Deployment Benefits
Nexus 2000 combines benefits of both ToR and EoR architectures
Physically resides at the top of each rack, but logically acts like an end-of-row access device
Nexus 2000 deployment benefits
Reduces cable runs
Reduces management points
Ensures feature consistency across hundreds of servers
Enables Nexus 5000 to become a high-density 1GE access layer switch
Investment protection
VN-Link capabilities
33. Nexus 3000 Series
Ultra-low-latency L2/L3 10GE/40GE data center switch
Most applications are NOT sensitive to switching latency; application latency is orders of magnitude greater than network latency. Some High Performance Computing and High Frequency Trading applications, however, are latency sensitive.
Wire rate on all ports
Latency: <1 usec
Cisco NX-OS support: HA, security, QoS, management
Flexible port configuration: 48x 10GE SFP+ and 4 QSFP (64x 10GE)
34. I/O Consolidation
Today: separate LAN, SAN A, and SAN B fabrics, each with its own adapters and cabling.
With FCoE: LAN, SAN A, and SAN B traffic is carried through a redundant pair of Nexus 5000s with N2232 fabric extenders at the access.
35. NON-Unified Fabric – Phase 0
A Segregated LAN and SAN…
In existing architectures, LAN and SAN
connectivity is segregated directly from the
Servers, where NICs and HBAs connect into
Ethernet switches and Fibre Channel Fabrics.
This may result in excess of 8+ cables to/from
each physical server
In Ethernet, redundancy relies upon
technologies such as Spanning Tree Protocol
to provide a loop-free topology...
36. Unified Fabric – Phase 1
A Unified Fabric in the Access…
The Nexus 5000 allows Ethernet and Fibre Channel to be consolidated and carried across the same physical cable, with Ethernet as the transport.
Leveraging standards-based FCoE, the Nexus
5000 is able to provide direct FCoE connectivity
from the Server through a Converged Network
Adapter (CNA) to the Nexus 5000.
The Nexus 5000 is then able to perform Ethernet
switching for regular Ethernet frames, and Fibre
Channel forwarding for FC frames...
37. Unified Fabric – Phase 2
Unified Fabric in the Data Center
Once FCoE-enabled modules become available
on the Nexus 7000 or the MDS 9500 series
platforms, multi-hop FCoE topologies may be
possible by retaining FCF capabilities across the
different platforms
Additionally, with the introduction of direct FCoE
attached targets, these may also be directly
connected to any of these FCoE-enabled
devices...
38. Key Benefits of Unified Fabric
Reduce overall DC power consumption by up to
8%. Extend the lifecycle of current data center.
Wire hosts once to connect to any network - SAN,
LAN, HPC. Faster rollout of new apps and
services.
Every host will be able to mount any storage
target. Drive storage consolidation and improve
utilization.
Rack, Row, and X-Data Center VM portability
become possible.
39. Cisco UCS Advantages
Unified Fabric – simplifies infrastructure using industry
standards.
Embedded Management – one management domain simplifies
management framework.
Large Memory Footprint – unique memory architecture allows for
faster performance and lower costs for large RAM servers.
Virtualization Adapter – improves performance and reduces NIC
infrastructure.
Service Profiles – allows for stateless computing, mobility, rapid
provisioning and rapid recovery.
40. Unified Computing System
Embedded Management (UCS Manager)
Unified Fabric (FCoE)
Expanded Memory
Stateless Computing and VM-FEX (Virtual Adapters)
Service Profiles
41. Embedded Management
Major components and relationships: a single management domain
UCS Manager: management resides in the Fabric Interconnect
UCS 6100 Fabric Interconnect: the Fabric Extender is a logical part of the Fabric Interconnect
UCS 2104 IOM: inserts into the blade chassis; the chassis is a logical part of the Fabric Extender
UCS 5108 Blade Chassis: blades insert into the chassis and are a logical part of it
UCS Blade Server: industry-standard architectures
UCS Mezzanine Adapters: VIC, Menlo (Q & E), Oplin
42. UCS FEX Architecture
LAN/SAN Uplinks
20Gb/s 40Gb/s 80Gb/s
• Wire once for bandwidth, not connectivity
• Policy-driven bandwidth allocation
• All links can be active all the time
• Integrates as a single system into your data center
43. UCS Manager
• Browser-based GUI, CLI, or
published native XML API
• Embedded in 6000 Series
Fabric Interconnects
• Clustered implementation
• Manages all UCS hardware
components
• Deploys Server Profiles to
Stateless Blades
• Scales to manage multiple
chassis
44. Cisco UCS Advantages
Unified Fabric – simplifies infrastructure using industry standards.
Embedded Management – one management domain simplifies
management framework.
Large Memory Footprint – unique memory architecture
allows for faster performance and lower costs for large RAM
servers.
Virtualization Adapter – improves performance and reduces NIC
infrastructure.
Service Profiles – allows for stateless computing, mobility, rapid
provisioning and rapid recovery.
45. Optimizing Memory with the Xeon 5600
Legacy Xeon 5500 server: 12-18 DIMMs; max 96GB at high performance, or max 192/288/384GB at low performance and high cost.
Cisco UCS with Memory Extension (Xeon 5500): 48 DIMMs; max 384GB; higher performance.
46. Savings With Memory Extension
Increased system utilization = fewer systems = lower costs
Typical system (memory constrained): higher cost; ~2x CPUs, underutilized; wasted power; more network ports; higher software costs.
Cisco UCS (Memory Extension): lower cost; fewer CPUs; more efficient; fewer network ports; lower software costs.
47. Cisco UCS Advantages
Unified Fabric – simplifies infrastructure using industry standards.
Embedded Management – one management domain simplifies
management framework.
Large Memory Footprint – unique memory architecture allows for
faster performance and lower costs for large RAM servers.
Virtualization Adapter – improves performance and
reduces NIC infrastructure.
Service Profiles – allows for stateless computing, mobility, rapid
provisioning and rapid recovery.
48. Adapter CNA
First generation, "free" SAN access for any Ethernet-equipped host: software FCoE over 10GbE. UCS 82598KR-CI (Oplin): 10 Gigabit Ethernet adapter, PCIe x16, based on the Intel 82598 controller (Ethernet only).
Second generation, existing driver stacks: 10GbE/FCoE cards exposing both FC and Ethernet. UCS M72KR-E (Menlo-E): Emulex CNA. UCS M72KR-Q (Menlo-Q): QLogic CNA.
Third generation, VM I/O virtualization and consolidation: UCS M81KR Virtual Interface Card (VIC), a unified virtual adapter and I/O consolidation card exposing multiple vNICs.
49. Virtualized Adapter (VIC)
Unified I/O with the VIC virtualized adapter:
Very high performance: full 10G speeds with 500k IOPS
Compatible with VMware, Windows, Linux
Up to 58 virtual adapters (vNICs) on a single physical adapter, in any combination of FC & Ethernet
Dynamically create I/O devices
Integration with VMware ESX:
VM-FEX eliminates the virtual switch layer by passing vNICs directly to your VMs; vNICs are exposed directly to the virtual machine, giving VM-level network visibility
Get the capability of DirectPath I/O and still have VMotion / DRS / HA
50. Use Case
Separate LAN + SAN A/B (left) versus unified fabric (right), 16 servers each; the separate design needs nearly twice the cables:
           Enet  FC  Total  |  Enet  FC  Total
Adapters    20   20    40   |   20    0    20
Switches     2    2     4   |    2    0     2
Cables      40   40    80   |   40    0    40
Mgmt Pts     2    2     4   |    2    0     2
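The totals on this slide can be reproduced with a quick sketch; the (Ethernet, FC) pairs below are read directly off the slide's table:

```python
# (Ethernet, FC) counts for a classic separate LAN/SAN build vs. a
# unified-fabric build, both serving the same 16 servers.
separate = {"adapters": (20, 20), "switches": (2, 2),
            "cables": (40, 40), "mgmt_pts": (2, 2)}
unified = {"adapters": (20, 0), "switches": (2, 0),
           "cables": (40, 0), "mgmt_pts": (2, 0)}

def totals(design):
    """Collapse each (Ethernet, FC) pair into a single total per item."""
    return {item: sum(pair) for item, pair in design.items()}

before, after = totals(separate), totals(unified)
for item in before:
    print(f"{item}: {before[item]} -> {after[item]}")
# cables drop from 80 to 40: the separate design needs twice as many
```

The FC column goes to zero on the right because FCoE carries the storage traffic over the same converged links.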
51. Cisco UCS Advantages
Unified Fabric – simplifies infrastructure using industry standards.
Embedded Management – one management domain simplifies
management framework.
Large Memory Footprint – unique memory architecture allows for
faster performance and lower costs for large RAM servers.
Virtualization Adapter – improves performance and reduces NIC
infrastructure.
Service Profiles – allows for stateless computing, mobility,
rapid provisioning and rapid recovery.
52. Stateless Computing
SAN identity: RAID settings; disk scrub actions; number of vHBAs; HBA WWN assignments; FC boot parameters; HBA firmware; FC fabric assignments for HBAs.
LAN identity: QoS settings; border port assignment per vNIC; NIC transmit/receive rate limiting; VLAN assignments for NICs; VLAN tagging config for NICs; number of vNICs; PXE settings; NIC firmware; advanced feature settings.
Server identity: UUID; boot order; IPMI settings; BIOS scrub actions; BIOS firmware; BIOS settings.
KVM/management: remote KVM IP settings; Call Home behavior; serial-over-LAN settings.
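The point of the list above is that every piece of a server's identity is state that can be bundled into a named service profile and applied to any blade. A toy illustration of the idea in Python (the field names are ours for illustration, not the UCS Manager schema):

```python
from dataclasses import dataclass, field

@dataclass
class ServiceProfile:
    """A named bundle of server identity, decoupled from physical hardware."""
    name: str
    uuid: str
    vnic_macs: list = field(default_factory=list)   # LAN identity
    vhba_wwns: list = field(default_factory=list)   # SAN identity
    boot_order: tuple = ("san", "pxe")
    bios_settings: dict = field(default_factory=dict)

def associate(profile: ServiceProfile, blade_slot: int) -> str:
    # Applying a profile makes the blade assume this identity; moving the
    # profile to a spare blade moves the identity with it (rapid recovery).
    return f"{profile.name} -> blade in slot {blade_slot}"

web01 = ServiceProfile("web01", uuid="0000-0001",
                       vnic_macs=["00:25:b5:00:00:01"])
print(associate(web01, 3))  # initial deployment
print(associate(web01, 5))  # failover: same identity on a new blade
```

Because the blade itself holds no state, the second call models rapid recovery: the replacement blade boots with the same UUID, MACs, and WWNs as the failed one.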
53. Unified Fabric – FCoE
Cost savings due to reduced components
Reduced power and cooling requirements
UCS Manager (Embedded Management)
Reduced operational costs of management tasks
Easy integration with existing management frameworks
Memory Expansion
Reduces CPU, power, cooling and software licensing costs
Higher server consolidation and larger virtual machine
density
Virtualized Adapters (VM-FEX)
Virtual machine visibility to the network
Network policy follows the virtual machine
Service Profiles
Rapid provisioning through automation
Rapid infrastructure repurposing – meet the demand shift
VM Connection Policy = Defined in the network, applied in Virtual Center
Virtual Supervisor Module (VSM): virtual or physical appliance running Cisco OS (supports HA); performs management, monitoring, & configuration; tight integration with VMware Virtual Center. Virtual Ethernet Module (VEM): enables advanced networking capability on the hypervisor; provides each VM with a dedicated "switch port". Collection of VEMs = one distributed switch.
Transcript (add 5596): So in the current portfolio we have the 5K, the 2Ks, and all of the expansion modules. Everything here we have been selling for some time. The new products on the 2K side are the 2224 FEX that we recently launched, and then we also launched the next-generation 5K, the 5548, the switch and two expansion modules that go in it. The expansion modules listed here, the Ethernet 16-port and the Ethernet + FC 8+8-port, will only work on the 5548. They're not backward-compatible with the 5010 and 5020, so you cannot mix and match the expansion modules on the left with the expansion modules on the right. The 5010 and the 5020 can only take the four expansion modules on the left, whereas the 5548 can take the two on the right, and there will be more to come. The FEXs, the 21 and 22 series, are backward-compatible with both the current and the next generation. So across the 5010, 5020 and 5548 the FEXs continue to work, and we'll talk about scale as we go on, and how we're enhancing the scalability of the FEXs with the 5548.
Cisco Unified Port technology enables ports to be dynamically allocated to support Fibre Channel, iSCSI, or FCoE, or lossless Ethernet, offering unparalleled flexibility and choice. Unified Ports mean customers do not have to predetermine the number of physical, rigid ports they require for convergence prior to making a network switch purchase; this removes the guesswork around the selection of port types and ratios and simplifies purchasing decisions. The technology provides variable connectivity options and complete flexibility, enabling customer-paced network convergence and design flexibility. With Unified Ports, customers can shift protocol support, allowing them to provide service based on the demand and bandwidth requirements of specific traffic.
Transcript: Now let's look at an actual simplified deployment model for the consolidated fabric, in this case using Fibre Channel over Ethernet. On the left we show a simplified model of a traditional data center infrastructure today. In the servers at the bottom of the left-hand side, you see a multitude of network interface modules inside the servers, typically between six and eight adapters per physical server. And in the access layer, you have many different types of switch devices: your traditional LAN switches for Ethernet, your SAN switches for your storage (Fibre Channel) traffic, and you have redundant links, naturally, in order to maximize uptime, in both the Fibre Channel space and the LAN space. This is simplified, because traditionally data centers also have many different cluster environments that are often autonomous, separate networks. In the network on the right, we've implemented I/O consolidation through the use of a unified fabric, and in this case, because it's Fibre Channel, also the use of Fibre Channel over Ethernet, or FCoE. So we go down from four switches on the left to two switches on the right (two switches for redundancy purposes), and from about six to eight adapters on the left to just two converged network adapters per server in the picture on the right. If those were six adapters on the left, this would correlate to a 66% reduction in the number of cables inside this simplified network architecture. In this environment, the Nexus 5000s take in traditional 10GE Ethernet traffic coming from the converged network adapters in the servers, as well as Fibre Channel over Ethernet, also coming from the converged network adapters. The Nexus 5000s then convert the FCoE traffic back to Fibre Channel for connectivity to the SAN A and SAN B target systems.
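The 66% figure in the transcript is just the per-server adapter consolidation ratio; a quick check with the numbers quoted there:

```python
# Six NICs/HBAs per server before consolidation, two CNAs after,
# as described in the transcript above.
adapters_before = 6
adapters_after = 2
reduction = (adapters_before - adapters_after) / adapters_before
print(f"cable reduction per server: {reduction:.1%}")  # 66.7%
```

Since each adapter implies a cable, the adapter ratio and the cable ratio are the same.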
Author’s Original Notes: Today's parallel LAN/SAN infrastructure means inefficient use of network infrastructure: 5+ connections per server (higher adapter and cabling costs); added downstream port costs, both cap-ex and op-ex; each connection adds additional points of failure in the fabric; longer lead time for server provisioning; multiple fault domains and complex diagnostics; management complexity (firmware, driver patching, versioning).
5 things. Unified Fabric: simplifies the server & reduces physical infrastructure; brings unified fabric and network to the server; reduces TCO versus traditional servers. Large Memory Footprint: higher server consolidation & larger VM density; reduces CPU, power/cooling, and SW licensing costs. Virtualization Adapter: true wire-once architecture, highly dynamic; network policy and visibility brought to VMs; hypervisor bypass support; reduces NIC and mezzanine card infrastructure. Embedded Management: management abstracted to the Fabric Interconnects; chassis have no state and hold blades; all blades share the same management domain. Service Profiles: hardware state abstracted to the Fabric Interconnects; blade identities can be duplicated, automatically moved and deployed, and failed over to another blade; firmware and BIOS included (the competition does not do this); a "stateless" environment where any blade can be defined and assume any identity; significant process/manual savings.
The UCS Manager is the management logic and interface used to manage and configure all aspects of the Unified Computing System hardware. UCS Manager offers a web-browser-launched graphical user interface, a command-line interface via Telnet or SSH, and a native XML API, for which both of the other interfaces are mere front ends. The XML API is published so that 3rd-party software such as BMC Software's BladeLogic for UCS can take full advantage of it. It is embedded in the hardware of the 6100-series Fabric Interconnects, and so does not require a separate computer to host it. The UCS Manager is also clustered between the two redundant Fabric Interconnects in a fully configured UCS system. Both Interconnects run the UCS data management engine and other application processes, with one taking a primary role and one a secondary role. All of the hardware components within the UCS system, as well as the connectivity between those components and the data center, are managed and configured by the UCS Manager. It is important to note that unlike other hardware vendors, Cisco has resolved to provide only an easily integrated layer of hardware management, rather than force customers to use a specific software package for their other operational activities like provisioning and monitoring. The UCS Manager only maintains the UCS hardware layer, and does not manage other data center elements such as LAN switch configuration, SAN zoning, storage provisioning, or operating system and application provisioning and patching. There are plenty of software solutions that customers have already adopted for those purposes, and Cisco has provided the XML API for UCS as the most comprehensive means of integrating with them.
Add slide for intra-DC VMotion (ties to slides 17, 19). Cluster running DRS: take a node down with dynamic power management and all the VMs move elsewhere, at 10Gb speed. Scaling DRS and DPM. 2 DIMMs on the left. Add graph for detail on slide: benchmark or business case.
Cisco offers a number of adapter options to give IT organizations a choice between standard configurations and new state-of-the-art adapters that support "virtual" connections. The Palo adapter ultimately serves to optimize I/O performance in virtual environments. The single biggest stumbling block in the mainstream adoption of virtualization in production environments has been concern about I/O performance; the ability to provide granular control over I/O performance across any number of virtual links mitigates those concerns. We will have three different adapter families and four different adapters. Cost: an Intel Oplin-based adapter supporting a software FCoE driver; there is currently an open-source FCoE driver available. Compatibility (2 adapters): a standard CNA architecture modified to fit our mezzanine form factor, based on an Emulex or QLogic FC ASIC (LP11xx and QL2642); the Ethernet side is the standard Intel 10GE Oplin; these don't require specific CNA drivers. Palo, the virtualized adapter, is explained in detail in a later slide. We don't require the whole California to be one kind of adapter; they can be different on the double-width blade, but it isn't recommended.
Transcript: So the Palo adapter is something we are actually quite proud of, and it is a unified I/O adapter. It's a single-chip solution: instead of Menlo being three chips – the Menlo chip, the Oplin chip, and the QLogic or Emulex chip – in Palo we do everything in one chip. That gives us a couple of benefits. One is that it uses a lot less energy than Menlo does. And because we built it later, we had the chance to define how it works and go a little deeper than we do with Menlo and Oplin. So Palo, as a hardware adapter, is actually capable of presenting the 128 virtual adapters we have been talking about for a long time. In the UCS environment those 128 never actually shipped, because we can't do that many virtual interfaces in UCS; but as a pure adapter, it is capable of 128 virtual adapters. The adapters that Palo creates can be any combination of Fibre Channel and Ethernet, so whatever mix fits your environment can be created, and they are dynamic, so they can be (inaudible) using the port profiles. As on the previous slide, where we talked about port profiles, we can set QoS parameters and rate limiting, and after FCS we can set security parameters as well. We have been working with VMware for some years now, and I have to thank Ed Bugnion for being a great leader in getting all this collaboration work going. The integration with VMware ESX 4 is basically having the embedded California distributed virtual switch in pass-through mode. In pass-through mode, every vNIC in a VM is backed by a virtual interface from Palo – a 1:1 mapping. At that point we get network visibility at the VM level, the vNICs are exposed at the switch port on a DVS, and we reduce some CPU cycles.
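The pass-through mapping described in the transcript – each VM vNIC backed 1:1 by a virtual interface (VIF) on the Palo adapter, with the port profile's policy (QoS, rate limiting) carried along with the VIF – can be sketched as follows. This is an illustrative Python model, not real Palo, VN-Link, or UCS Manager code; the profile names and policy fields are assumptions.

```python
# Sketch of VN-Link-style pass-through: one Palo VIF per VM vNIC (1:1),
# with the port-profile policy attached to each VIF.

PALO_MAX_VIFS = 128  # the adapter hardware supports up to 128 virtual adapters

# Hypothetical port profiles, as might be defined by the network admin.
port_profiles = {
    "vm-data":    {"qos_class": "gold",     "rate_limit_mbps": 2000},
    "vm-storage": {"qos_class": "platinum", "rate_limit_mbps": 4000},
}

vifs = []  # each entry backs exactly one VM vNIC

def create_vif(vm, vnic, vif_type, profile_name):
    """Back one VM vNIC with one Palo VIF; type is Ethernet or Fibre Channel."""
    if len(vifs) >= PALO_MAX_VIFS:
        raise RuntimeError("Palo VIF limit reached")
    vif = {"id": len(vifs), "vm": vm, "vnic": vnic,
           "type": vif_type, "policy": port_profiles[profile_name]}
    vifs.append(vif)
    return vif

# Any mix of Ethernet and FC interfaces can be created dynamically.
v = create_vif("vm-web", "eth0", "ethernet", "vm-data")
create_vif("vm-web", "fc0", "fibre-channel", "vm-storage")
print(v["policy"]["qos_class"])  # gold
```

The 1:1 mapping is what gives the fabric per-VM visibility: each vNIC has its own switch-side interface, so policy is enforced per VM rather than per physical NIC.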
All the policies of those port profiles are actually put in place inside the UCS Fabric Interconnect. So pass-through mode is basically a hardware instantiation of the VM link; we use the VN-Link technology to accomplish what we are planning in terms of network visibility at the VM level. Author's Original Notes:
In this example we see that where we previously had, say, a rack with 16 servers, we needed:
20 adapters for the data network and 20 for the storage network (FC)
2 switches (for redundancy) for data and 2 for storage
40 cables for each of the networks
2 management ports for each of the networks
With virtualization and Unified I/O, we can cut all of this roughly in half:
20 adapters for the data network
2 switches (redundancy) for data
40 cables for the data network
2 management ports for data
Later we will see how the data networks can be differentiated from the storage networks.
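The component-count arithmetic in the example above can be checked with a quick calculation. This is an illustrative Python sketch; the per-network counts are taken directly from the slide (the function name and structure are my own).

```python
# Per-rack component counts from the slide's 16-server example, before
# and after collapsing separate LAN and SAN fabrics into a unified fabric.
def rack_components(networks):
    """networks: list of network names kept as separate physical fabrics."""
    per_network = {"adapters": 20, "switches": 2, "cables": 40, "mgmt_ports": 2}
    return {k: v * len(networks) for k, v in per_network.items()}

before = rack_components(["data", "storage"])   # separate Ethernet and FC fabrics
after = rack_components(["unified"])            # one unified fabric carries both
print(before)  # {'adapters': 40, 'switches': 4, 'cables': 80, 'mgmt_ports': 4}
print(after)   # {'adapters': 20, 'switches': 2, 'cables': 40, 'mgmt_ports': 2}
```

As the slide says, every category is halved because the storage traffic rides the same adapters, switches, and cables as the data traffic.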