Slide 4
Oracle's Network Fabric Strategy
Oracle's approach to delivering fabrics that are tightly integrated with application infrastructure
• Leveraging Oracle's unique ability to tightly integrate network services from application to disk
• Delivering maximum application performance and scale by removing both hardware and software I/O bottlenecks
• Attacking network sprawl with network virtualization and convergence that seamlessly integrates into existing infrastructure
• Simplifying datacenter operations by integrating network management and orchestration into Oracle's application-to-disk management
Expanding on the success of Exadata and Exalogic!
Slide 5
Oracle's Network Fabric: Conceptual View
[Diagram: Oracle's Network Fabric spans server, storage, and network resources. Oracle Enterprise Manager 11g manages the full stack (applications, middleware, database, operating system, virtualization), with Ops Center handling provisioning and patching, problem diagnosis, and performance management alongside Grid Control.]
Slide 6
Oracle's Network Fabric Integration
[Stack diagram: management spans the full stack, from applications/middleware/database through the operating system and virtual machine down to the network, servers, and storage.]
• Fabric Services: Oracle Solaris and Oracle Linux; Oracle Solaris network virtualization and resource controls
• Fabric Management: Oracle Enterprise Manager Ops Center 11g; CMM, ILOM, events, monitoring, topology views
• Fabric Performance: application performance and scale; RDMA, stack bypass, CPU offload, zero buffer copies
• Fabric Virtualization: Oracle VM for x86 and Oracle VM for SPARC; server adapter and network partitioning
• Fabric Interfaces: 10GbE CNAs and InfiniBand HCAs; FCoE, iSCSI, iSER, SRP, IPoIB, RDS, SDP, EoIB
• Fabric Hardware: 10GbE switches; InfiniBand switches and gateways
Integration Across the Oracle Stack
Delivering Best-in-Class Enterprise Networking Services
Slide 10
Oracle's Network Fabric Differentiation
Ratings shown as Oracle's Network Fabric / HP Virtual Connect / Cisco UCS:
1. Architecture is application-focused: ✓✓✓ / ✗ / ✗
2. Minimizes risks with integration, testing and tuning of network services across the application stack: ✓✓✓ / ✗ / ✗
3. Simplifies datacenter operation with application-to-disk infrastructure management that includes the network fabric: ✓✓✓ / ✗ / ✗
4. Matches application I/O needs through selection and integration of the most appropriate fabric technology: ✓✓✓ / ✗ / ✗
5. Maximizes application performance with hardware and software engineered together to remove I/O bottlenecks: ✓✓✓ / ✓ / ✓
6. Increases application mobility with orchestration of server, storage and network resources: ✓✓✓ / ✓✓ / ✓✓✓
7. Eliminates networking layers and consolidates switches/cables to reduce equipment and power/cooling: ✓✓✓ / ✓✓ / ✓✓
8. Consolidates and converges smaller pipes and outmoded LAN, storage and inter-server networks: ✓✓✓ / ✓✓✓ / ✓✓✓
✓✓✓ = Strong Capabilities; ✓✓ = Medium Capabilities; ✓ = Weak Capabilities; ✗ = Not Well Aligned
Slide 11
Oracle Exadata Database Machine
• Exadata runs virtually all database applications much faster and less expensively than any other computer in the world.
  • 10-100x faster for data warehousing
  • 20x faster for OLTP applications
• InfiniBand minimizes latency and maximizes throughput
  • 40 Gb/s fault-tolerant storage and server network
  • InfiniBand capabilities with RDMA for OS bypass and CPU offload
  • Eliminates SAN data bandwidth bottlenecks
Slide 12
Oracle Exadata Database Machine
Software Enhancements to Leverage Oracle's Network Fabric
• Oracle Linux Unbreakable Enterprise Kernel speeds up InfiniBand messaging by 200%
• RDS eliminates buffer copies and enables Remote Direct Memory Access (a minimal socket sketch follows the acronym list below)
• Oracle Database RAC uses RDS to achieve 63% greater application TPS
• Exadata uses RDS for communication between the DB server and the Exadata cell
RDS = Reliable Datagram Sockets
DBRM = Database Resource Manager
ASM = Automatic Storage Management
CELLSRV = Cell Service Manager
IORM = I/O Resource Manager
iDB = Intelligent Database Protocol
TPS = Transactions Per Second
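To make the RDS model concrete, here is a minimal sketch of sending one reliable datagram over an RDS socket on Linux. The addresses and port are hypothetical, and the rds kernel module (plus rds_rdma for InfiniBand transports) must be loaded; the point is that RDS combines UDP-style per-datagram addressing with reliable, in-order delivery and no per-peer connection management.

```c
/* Minimal RDS (Reliable Datagram Sockets) sender sketch.
 * Hypothetical addresses; requires the rds kernel module. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

#ifndef AF_RDS
#define AF_RDS 21            /* not defined by older libc headers */
#endif

int main(void)
{
    /* RDS is datagram-based but reliable: SOCK_SEQPACKET, protocol 0. */
    int fd = socket(AF_RDS, SOCK_SEQPACKET, 0);
    if (fd < 0) { perror("socket(AF_RDS)"); return 1; }

    /* An RDS socket must be bound to a local IP/port before sending. */
    struct sockaddr_in local = { .sin_family = AF_INET,
                                 .sin_port   = htons(4000) };
    inet_pton(AF_INET, "192.168.10.1", &local.sin_addr);  /* hypothetical */
    if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
        perror("bind"); return 1;
    }

    /* The destination is named per datagram, as with UDP, but delivery
     * is reliable and in-order with no per-peer connection setup. */
    struct sockaddr_in peer = { .sin_family = AF_INET,
                                .sin_port   = htons(4000) };
    inet_pton(AF_INET, "192.168.10.2", &peer.sin_addr);   /* hypothetical */

    const char msg[] = "hello over RDS";
    if (sendto(fd, msg, sizeof(msg), 0,
               (struct sockaddr *)&peer, sizeof(peer)) < 0)
        perror("sendto");

    close(fd);
    return 0;
}
```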
Slide 13
Oracle Exalogic Elastic Cloud (X2-2)
• Delivers unprecedented Java application performance
  • 12x improvement for Internet applications
  • 4.5x improvement for Java messaging applications
• InfiniBand provides extreme scale, application isolation, and flexibility
  • Lossless InfiniBand connects servers and storage together, forming a single large computer
  • InfiniBand partitions and virtual lanes provide QoS, isolation, and security
  • RDMA-enabled protocols improve application performance and responsiveness
Slide 17
Oracle's Network Fabric Advantages
Thirty-two servers connected with the Sun QDR InfiniBand Gateway vs. a traditional configuration:
• Acquisition cost: $337k traditional vs. $203k (40% less)
• Power cost: $1,638/year traditional vs. $864/year (47% less)
• Management tools: 3 tools traditional vs. 1 tool (2/3 fewer)
• Performance: 18.4 Gb/s traditional vs. 32 Gb/s (74% more)
• Network sprawl: 368 network elements traditional vs. 106 network elements (71% less)
Unsurpassed performance, simplified infrastructure, lower cost
Slide 18
Sun Datacenter InfiniBand Switch 648
Industry's Densest Solution for Scalable Clusters
• Capability: 11U; 216 physical ports supporting 648 QDR InfiniBand connections
• Performance: non-blocking switch fabric; 41 Tb/s data throughput; 300 ns port-to-port latency
• Availability & Management: redundant power and cooling; enclosure management
• InfiniBand Subnet Manager: host-based subnet manager for scalability
• Scales to thousands of servers
• Modularity enables new capacity to be brought online with no downtime
Slide 19
Sun Datacenter InfiniBand Switch 72
Industry's Densest Solution for Mid-Sized Clusters
• Capability: 1U; 24 physical ports supporting 72 QDR InfiniBand connections
• Performance: non-blocking switch fabric; 4.6 Tb/s data throughput; 300 ns port-to-port latency
• Availability & Management: redundant power and cooling; enclosure management
• InfiniBand Subnet Manager: includes subnet manager for easy InfiniBand deployments
• Densest 1U InfiniBand switch; replaces six 36-port switches
Slide 20
Network Fabric Simplification
Non-Blocking High Speed 72-Node Server Cluster Fabric
• Sun Datacenter InfiniBand Switch 72: 1 switch, 1 RU, 0 switch-to-switch cables
• 36-port switch building blocks: 6 switches, 6 RU, 72 switch-to-switch cables (arithmetic sketched below)
Build a non-blocking fabric with fewer switches and cables in less space
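The six-switch, 72-cable figure for 36-port building blocks follows from standard two-tier (folded Clos) arithmetic; a sketch using only the numbers on this slide:

\[
\text{leaf switches} = \frac{72\ \text{hosts}}{36/2\ \text{host ports per leaf}} = 4,
\qquad
\text{spine switches} = \frac{4 \times 18\ \text{uplinks}}{36\ \text{ports per spine}} = 2
\]

That is 4 + 2 = 6 switches in 6 RU, joined by 4 x 18 = 72 switch-to-switch cables, versus a single Switch 72 with zero.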
Slide 21
Sun Datacenter InfiniBand Switch 36
Ultra Dense, Non-Blocking, Low Latency Switch
• Capability: 1U; 36 QDR InfiniBand connections (QSFP)
• Performance: non-blocking switch fabric; 2.3 Tb/s data throughput; 100 ns port-to-port latency
• Availability & Management: redundant power and cooling; enclosure management
• InfiniBand Subnet Manager: includes subnet manager for easy InfiniBand deployments
• High bandwidth, low latency to maximize application performance
Slide 22
Sun QDR InfiniBand Host Channel Adapters
High Bandwidth Connectivity for Servers and Storage
• 40 Gb/s QDR InfiniBand connectivity
• Supports a rich set of network and storage protocols
• Ideal for delivering network services to high performance clusters
Form factors:
• Sun InfiniBand Dual Port 4x QDR PCIe ExpressModule Host Channel Adapter M2
  • PCI ExpressModule (x8 PCIe Base 2.0)
  • Two QDR InfiniBand QSFP ports
• Sun InfiniBand Dual Port 4x QDR PCIe Low Profile Host Channel Adapter M2
  • PCI Express Low Profile (x8 PCIe Base 2.0)
  • Two QDR InfiniBand QSFP ports
Slide 23
Sun 10 Gigabit Ethernet Portfolio
Integrated Data Center Bridging Capabilities
• 10GbE Switched NEM: switched 10GbE with 24 ports
• 10GbE Network Interface Cards: designed for Sun rack and blade servers
• 10GbE ToR Switch: 72 10GbE ports (16 QSFP + 8 SFP+)
• 40GbE Transceivers and Cabling: fiber and copper solutions (copper splitter, optical splitter, copper 40GbE cables); designed to maximize performance and value
• 40GbE Cable Solution: copper or fiber optic
Slide 24
Sun Storage 10 GbE FCoE CNA ExpressModule
Solution for Adapter-Level Convergence
• SG-XEMFCOE2-Q-SR: FCoE ExpressModule, 2-port SR optical
• SG-XEMFCOE2-Q-TA: FCoE ExpressModule, 2-port TA
[Diagram: three 8Gb FC SAN adapter (HBA) connections converge onto two 10GbE CNA links.]
Slide 25
Sun 10 GbE ExpressModule
Intel 82598EB Solution for Adapter-Level Convergence
• Support for Converged Ethernet
  – DCB: Priority Grouping (ETS), PFC, DCBX
  – iSCSI initiators and remote boot
  – FCoE initiator, virtual N-Port, FCoE hardware offloads and boot, SMI-S, SNIA HBA API v2
• Stateless offloads
• Queues per port: 128 TX and 128 RX queues
• Advanced virtualization feature set (see the sketch below)
  – Hypervisor vSwitch support for up to 64 VMs
  – Hypervisor bypass support for up to 128 VFs
Part numbers: EM: X1110A-Z; FEM: X4871A-N; LP: X1107A-Z
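As an illustration of hypervisor bypass, the sketch below shows how SR-IOV virtual functions are enabled on a recent Linux kernel through the generic sriov_numvfs sysfs attribute. This is an assumption for illustration only: the PCI address is hypothetical, and drivers of this adapter's era typically exposed a module parameter (for example, max_vfs) instead.

```c
/* Hedged sketch: enabling SR-IOV virtual functions via sysfs on a
 * modern Linux kernel. The PCI address below is hypothetical. */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/bus/pci/devices/0000:03:00.0/sriov_numvfs"; /* hypothetical BDF */
    FILE *f = fopen(path, "w");
    if (!f) { perror("fopen"); return 1; }

    /* Ask the driver to spawn 8 virtual functions; each VF appears as
     * its own PCI device that a guest can own directly, bypassing the
     * hypervisor vSwitch on the data path. (Write 0 first to reset if
     * VFs are already enabled.) */
    if (fprintf(f, "8\n") < 0) perror("fprintf");
    fclose(f);
    return 0;
}
```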
Slide 26
Sun Blade 6000 Switched Network Express Module 24p
Ultra Dense, Non-Blocking, Low Latency Blade Switch
• Capability: 10 Gigabit non-blocking, low latency Ethernet switch; up to 2 NEMs per Sun Blade 6000 chassis
• Blade network connectivity: 10 x 10GbE ports; 2 x SAS2 6Gb/s ports
• External network connectivity: 3 x QSFP 10GbE ports; 2 x SFP+ 10GbE ports
• Availability & Management: Oracle ILOM service processor; Sun Ethernet Fabric Operating System (SEFOS)
• Highest port count of any commercial blade switch
• Cable reduction capability
• High bandwidth, low latency to maximize application performance
Slide 27
Sun Network 10GbE Switch 72p
Ultra Dense, Non-Blocking, Low Latency TOR Switch
• Form factor: 1 RU
• Connectivity options: 64 x 10 GbE or 16 x 40 GbE (QSFP); 8 x 10 GbE (SFP+)
• Latency: <900 ns
• Availability: redundant power; redundant, hot-pluggable fans
• Bandwidth: fully non-blocking at 1.44 Tbps
Lower cost of operation:
• Highest density
• Up to 4:1 cable reduction
• Ultra-low-latency cut-through switching
Slide 28
10GbE Network Fabric Simplification
Use Case: Connecting 52-72 Servers with a TOR Switch
• Competitor's 24-port building block (arithmetic sketched below):
  – 9 switch chassis (9 RU)
  – 144 wasted interconnect ports
  – 72 interconnect cables
• Competitor's 48-port building block:
  – 4 x 48-port + 1 x 24-port chassis (5 RU)
  – 144 wasted interconnect ports
  – 72 interconnect cables
• Oracle 72-port building block:
  – 1 x 72-port chassis (1 RU)
  – No wasted ports or cables
  – Non-blocking
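The 144 wasted ports and 72 cables for the 24-port building block follow from the same two-tier non-blocking arithmetic sketched earlier:

\[
\text{leaves} = \frac{72\ \text{hosts}}{24/2\ \text{host ports per leaf}} = 6,
\qquad
\text{spines} = \frac{6 \times 12\ \text{uplinks}}{24\ \text{ports per spine}} = 3
\]

That is 9 chassis in total, with 72 leaf uplink ports plus 72 spine ports (144 ports) consumed by the 72 interconnect cables rather than by servers; the 48-port case works out to the same 144 ports and 72 cables.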
Slide 29
Sun Blade 6000 Virtualized Multi-Fabric 10GbE M2 NEM
Cost-Effective Solution for 10GbE Cable Consolidation
• Functionality: in-chassis network and storage virtualization/consolidation
• Blade connectivity: one virtualized 10GbE NIC per blade slot; two SAS2 6Gb/s storage channels per blade slot
• External connectivity: two 10GbE uplink ports; ten 1GbE pass-thru ports
• Management & availability: Oracle ILOM service processor; up to two NEMs in each Sun Blade 6000 chassis
Lower cost of operation:
• Switch-less networking
• 10:1 cable reduction
• In-silicon virtualization
• Zero management
• Improved storage bandwidth with SAS2 connectivity between the Sun Blade 6000 Storage Module M2 and server blades
Slide 30
Sun Blade 6000 Ethernet Connectivity Selection
Sun Blade 6000 1GbE Pass-Thru NEM (NEM-10):
• Uplinks: 10 x 1GbE (10 x 1GBaseT)
• Blade connectivity: 1GbE
• Max bandwidth: 10 Gbps
• Blade NIC: LOM (1GbE)
• Virtualization: none; no virtual functions
• Link aggregation: N/A; DCB support: N/A
• Target market: simple 1GbE pass-thru
• Typical workloads: low bandwidth applications, existing switching
Sun Blade 6000 Virtualized Multi-Fabric 10GbE M2 NEM:
• Uplinks: 2 x 10GbE (2 x SFP+)
• Blade connectivity: up to 10GbE, 6 Gbps SAS
• Max bandwidth: 20 Gbps
• Blade NIC: LOM (1GbE), FEM (pass-thru PCIe)
• Virtualization: 1 PF per blade per NEM; no virtual functions
• Link aggregation: not supported; DCB support: not supported
• Target market: 1GbE switching with 10GbE uplinks
• Typical workloads: enterprise infrastructure applications such as Oracle Fusion, single-instance or clustered databases
Sun Blade 6000 Ethernet Switched NEM 24p 10GbE:
• Uplinks: 14 x 10GbE (2 x SFP+, 3 x QSFP)
• Blade connectivity: 10GbE, 6 Gbps SAS
• Max bandwidth: 480 Gbps
• Blade NIC: Intel 82599 FEM (10GbE)
• Virtualization: 2 PFs per blade per FEM; 128 virtual functions per FEM
• Link aggregation: yes; DCB support: yes
• Target market: 10GbE high throughput, low latency, convergence
• Typical workloads: network-intensive enterprise applications, web/app serving, DB access/backup
Slide 31
Network Fabric Simplification for Blade Systems
With the Sun 10GbE Switched NEM 24p
• Fully redundant
• 20-40% greater uplink bandwidth
• Up to 80% fewer network components
• Up to 85% fewer cables (see the check below)
• Eliminates 3-4 external 48-port switches
Comparison for a 40-server datacenter LAN (total servers / chassis / switch modules / available uplink bandwidth / in-rack cables / external switch ports / required external switches / total network components):
• Oracle: 40 / 4 / 8 / 1.9 Tbps / 12 / 0 / 0 / 20
• HP: 40 / 3 / 10 / 1.6 Tbps / 80 / 80 / 4 / 94
• IBM: 40 / 3 / 6 / 1.2 Tbps / 60 / 60 / 3 / 69
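The headline percentages can be checked directly against the table; for example, versus HP:

\[
1 - \frac{12}{80} = 85\%\ \text{fewer cables},
\qquad
1 - \frac{20}{94} \approx 79\%\ \text{fewer network components}
\]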
Slide 32
Sun Blade 6000 Network Fabric Simplification
NEM to TOR
• Eighty (80) blades in 82U, wired redundantly
• Wire with 16 QSFP cables (4:1 cable reduction)
• A non-blocking configuration requires only 24 cables and has 480 Gbps uplink capacity
• Easily connects to in-rack unified storage and the datacenter network
Slide 33
ZFS Storage Appliances: Second Generation Systems
• Best density: active-active controllers
• Best scalability: active-active controllers
• Best value: full suite of data services
• Best flexibility: single or dual controllers
New benefits:
• Best density and scale: industry-leading density, scale up to 1 PB for consolidation
• Flash everywhere and more of it: industry-leading flash capacity for application performance
• Doubled processing power: performance to drive enterprise data protection
Standard features (all models):
• High performance fabric support: 10GbE, InfiniBand, FC
• All data protocols: iSCSI, NFS, CIFS, WebDAV, etc.
• Advanced data services: snapshots, deduplication, compression, replication, etc.
Clients and applications (all models):
• Oracle Solaris • Oracle Linux
• Oracle Database, Middleware, and Applications
• Oracle VM • VMware • Windows
IT organizations have had to change the way they think: they are no longer providers of infrastructure; they are providers of business-driven IT services. Of course, they continue to be asked to deliver more with less.
As IT managers continue to consolidate their infrastructure, new performance bottlenecks crop up. If these bottlenecks can be addressed, consolidation can continue to provide big paybacks. These same bottlenecks also inhibit applications from scaling to meet current and future business requirements.
Application infrastructure is obviously necessary, but the less time and money spent on infrastructure issues, the more time that can be spent supporting the business. That is why a single view, from application to disk, is so important: it provides a holistic understanding of available resources, application performance, and system-wide potential for performance problems.
Oracle's network fabric strategy describes our approach to using fabrics in the datacenter and how Oracle is uniquely positioned to deliver on the promise of fabrics.
The remainder of the presentation discusses why this approach is the only one that delivers the full benefits of today's high performance fabrics, describes the high-level architecture and how pieces of Oracle's technology stack play a role, and finally provides several examples of how we are already delivering pieces of this strategy, with customers seeing real results today.
As Oracle continues to engineer the hardware and software to work together, the benefits our customers see will only get better.
The same networking cloud fabric architecture used in Exadata and Exalogic is now available to create customized private clouds, providing tight integration with application infrastructure along with high bandwidth, predictable latency, and industry-standard interfaces to the datacenter.
In Oracle's internal testing, one rack of Oracle Exalogic Elastic Cloud demonstrated (Ref. Oracle PR 09/19/2010):
12X improvement for Internet applications, to over 1 Million HTTP requests per second.
4.5X improvement for Java messaging applications, to over 1.8 Million messages per second.
The benefits of connecting 32 servers redundantly with InfiniBand, with Ethernet access via the Sun QDR InfiniBand Gateway, are enormous when compared to a competing solution of 32 servers connected with six Gigabit Ethernet ports (2 LOM) and 2 FC ports each. These include:
Lower acquisition and power costs
Up to 71% fewer network elements to buy and manage
A single tool to manage the entire network
And perhaps most importantly, up to 74% better performance
Values shown above represent configurations consisting of 1x Blade Chassis populated with 1x Blade and 2x GbE pass-thru modules. Each blade is configured with 2x E5540 CPUs, 12x 4GB DIMMs, 2x 146GB SAS HDDs and 2x (on-board) GbE Links
Power Calculator versions used (03/03/10):
Sun: On-line Sun Blade 6000 Power Calculator
HP: Blade System Power Sizing Tool v3.15.0
Virtual Machines Calculation:
Sun X6270 has 18 DIMM slots, using 8GB DIMMs max memory = 144GB
HP BL460c G6 has 12 DIMM slots, using 8GB DIMMs max memory = 96GB (*note: BL460c G6 does offer 16GB DIMMs)
Avg. 2GB memory required per virtual machine @ 70% utilization
X6270 = 144GB / 2GB per VM = 72 Virtual Machines x .70 utilization = 50.4 Virtual Machines per blade x 10 blades = 504 Virtual Machines
BL460c G6 = 96GB / 2GB per VM = 48 Virtual Machines x .70 utilization = 33.6 Virtual Machines per blade x 10 blades = 336 Virtual Machines
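Consolidating the per-vendor arithmetic above into one formula (2 GB per VM at 70% utilization across 10 blades):

\[
\text{VMs} = \frac{\text{memory per blade}}{2\ \text{GB}} \times 0.70 \times 10
\quad\Rightarrow\quad
\frac{144}{2} \times 0.70 \times 10 = 504\ (\text{Sun}),
\qquad
\frac{96}{2} \times 0.70 \times 10 = 336\ (\text{HP})
\]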
Power Calculations:
Configuration specifics:
• Sun X6270 and HP BL460c G6 blades, each with: 2x E5540 CPUs, 12x 4GB DIMMs, 2x 146GB SAS HDDs, 2x (on-board) GbE
Chassis configurations:
• Sun Blade 6000 chassis with: 2x 5600W power supply modules (N+N redundancy), 6x cooling fans, 2x GbE pass-thru NEMs, 1x chassis monitoring module
• HP c7000 chassis with: 4x 2250W power supplies (N+N redundancy), 8x cooling fans, 2x Ethernet pass-thru modules, 1x Onboard Administrator
Power draw at 100% utilization:
• Sun config: 277.9 W
• HP config: 351.5 W
Power cost calculations
Note: $0.1032/kWh is the average commercial cost per kWh for 2008 and 2009, found at
http://www.eia.doe.gov/fuelelectric.html
Sun config: 277.9 W at 100% utilization x 8,760 hours per year = 2,434 kWh/yr
2,434 kWh/yr x $0.1032 per kWh = $251.23 annual power cost
HP config: 351.5 W at 100% utilization x 8,760 hours per year = 3,079 kWh/yr
3,079 kWh/yr x $0.1032 per kWh = $317.76 annual power cost
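In formula form, using the corrected figures above:

\[
E_{\text{yr}} = \frac{P_{\text{watts}} \times 8760}{1000}\ \text{kWh},
\qquad
\text{Cost} = E_{\text{yr}} \times \$0.1032/\text{kWh}
\]

so 277.9 W yields 2,434 kWh and about \$251/yr, while 351.5 W yields 3,079 kWh and about \$318/yr.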
Carbon footprint: http://www.42u.com/efficiency/energy-efficiency-calculator.htm
The NEM and ToR were designed to be highly synergistic, as evidenced by their common use of QSFP connectors. Not shown on the slide are the variety of QSFP copper straight-through cables for point-to-point applications. Straight-through cables are ideal for NEM-to-NEM stacking and NEM-to-ToR scale-out applications. No other solution on the market provides this density and cabling innovation. QSFP will soon proliferate in the datacenter as the standard 40GbE interconnect. We have an early market advantage and have supplied a complete ecosystem of copper and optical cables and transceivers to simplify network design and deployment for Oracle application stacks.
Pictured above are unique QSFP transceivers. These devices fit into the QSFP connectors/cages and provide a standard MTP connector. We have chosen not to sell straight-through MTP-to-MTP optical cables, as these are available in bulk from a number of suppliers. However, we do stock optical splitter cables (shown in yellow) that break out the 40GbE connector into 4 discrete 10GbE connections using LC connectors.
The first generation of CNAs has focused on converging LAN and SAN networking onto a single adapter. This means that a CNA functions as both a fully featured Ethernet NIC and a fully featured Fibre Channel HBA, enabling both traffic types to travel over a shared 10GbE link. This leads to immediate cost savings, since a single CNA can replace multiple NICs and HBAs.
Enabling Fibre Channel data to reliably travel over an Ethernet link has required the development of two new technologies: Enhanced Ethernet and Fibre Channel over Ethernet.
Required for Sun environments, lower hardware TCO
Talk about architectural freedom: the ability to use the best hardware and manage it all in one place, enabled by Ops Center's ability to abstract away the intricacies of the hardware environment.