The next-generation datacenter: the software-defined datacenter
IPnett migrates the classic datacenter to a flexible, software-defined datacenter. We streamline datacenter resources and lower both capital and operational costs by reducing the number of units in the datacenter.
A presentation of software that optimizes and controls datacenters and networks.
4. Storage Virtualization
What storage virtualization was meant to be:
• New data services can be built in software
• An open software platform that pools geo-distributed heterogeneous storage arrays
[Diagram: Software Defined Data Center. Server, storage, and network resources are each virtualized and pooled into multiple virtual data centers, all run through a common SDDC management portal.]
The storage industry needs to provide:
• Storage pools and management abstracted from the hardware
• Automation that leverages the capabilities of the underlying hardware platforms
5. Storage Virtualization
ViPR: EMC Software-Defined Storage
[Diagram: ViPR architecture. On the data path, global data services (Object store, HDFS, other services). On the control path, automated storage management (metering, provisioning) driven by a tenant self-service portal and an API. Virtual storage arrays and virtual storage pools are built over VMAX, VNX, and Isilon storage as well as third-party and commodity arrays.]
8. ScaleIO
• Installs on existing servers that run databases, hypervisors, or any other applications; the ScaleIO agent has a minimal footprint
• Aggregates the application servers' local disks (see the pooling sketch below)
• Add storage and/or compute on the fly
• Eliminates the need for SAN storage, the switching fabric, and HBA cards
ScaleIO ECS is software that uses application servers to create an elastic, scalable, and resilient virtual SAN at a fraction of the cost and complexity of traditional SANs.
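To make the pooling idea concrete, here is a minimal sketch assuming nothing about ScaleIO's internals: every server contributes its spare local disks to one capacity pool, and volumes are carved from that pool. All names (`Server`, `VirtualSAN`, `carve_volume`) are hypothetical.

```python
# Illustrative sketch only: pooling servers' local disks into one virtual
# SAN capacity pool and carving volumes from it. All names are
# hypothetical, not ScaleIO APIs.
from dataclasses import dataclass, field


@dataclass
class Server:
    name: str
    local_disks_gb: list[int]      # capacities of this server's spare local disks

    @property
    def contributed_gb(self) -> int:
        return sum(self.local_disks_gb)


@dataclass
class VirtualSAN:
    servers: list[Server] = field(default_factory=list)
    allocated_gb: int = 0

    @property
    def pool_gb(self) -> int:
        # The pool is simply the aggregate of every server's local disks.
        return sum(s.contributed_gb for s in self.servers)

    def add_server(self, server: Server) -> None:
        # "Add storage and/or compute on the fly": a joining server
        # immediately grows the pool.
        self.servers.append(server)

    def carve_volume(self, size_gb: int) -> bool:
        # A volume is created only if enough free pooled capacity remains.
        if self.allocated_gb + size_gb > self.pool_gb:
            return False
        self.allocated_gb += size_gb
        return True


san = VirtualSAN()
san.add_server(Server("db-01", [500, 500]))
san.add_server(Server("hv-02", [1000]))
assert san.carve_volume(1500)          # fits: 2000 GB pooled
assert not san.carve_volume(1000)      # exceeds the remaining 500 GB
```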
9. Core, fundamental features
• Converged
– Runs alongside applications
• Highly scalable
– Scales to 100s or 1000s of nodes
– High I/O parallelism
– Scales linearly with applications (eliminates I/O bottlenecks)
• Elastic
– Add, move, or remove nodes or disks "on the fly"
– Auto-rebalance
• Resilient
– 2-copy "mesh" mirroring
– Fast auto-rebuild
– Extensive failure handling / HA
• Platform agnostic (physical or virtual)
– Runs on any x86 server (Windows support coming very soon) and ARM
– Bare metal or virtual (e.g., ESX, Xen, Hyper-V)
– SSDs/PCIe or HDDs
– Agnostic to network speed
• Ease of management
– Storage becomes just another application
– Any IT admin can manage the entire datacenter stack
• Partitioning / tiering / multi-tenancy
– Protection domains
– Storage pools for multi-tiering
– Bandwidth/IOPS limiter (see the sketch after this list)
• Snapshots (writeable)
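The list above mentions a bandwidth/IOPS limiter. A token bucket is one standard way to implement such a cap; the sketch below shows the general technique, not ScaleIO's actual mechanism, and the class name and parameters are illustrative.

```python
# Hypothetical token-bucket IOPS limiter: tokens refill at the configured
# rate, each admitted I/O consumes one token, and a full bucket bounds the
# allowed burst. Illustrative only, not ScaleIO's implementation.
import time


class IopsLimiter:
    def __init__(self, iops_limit: float, burst: float):
        self.rate = iops_limit        # tokens (I/Os) refilled per second
        self.capacity = burst         # maximum burst of back-to-back I/Os
        self.tokens = burst
        self.last = time.monotonic()

    def allow_io(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1          # admit this I/O
            return True
        return False                  # over the limit: throttle or queue it


limiter = IopsLimiter(iops_limit=500, burst=50)
admitted = sum(limiter.allow_io() for _ in range(100))  # ~50 pass instantly
```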
10. Life without ScaleIO
[Diagram: the traditional stack, assuming bare metal for simplicity. On the host, application(s) sit on a file system (file-system semantics), volume manager, block device drivers (block semantics), and an HBA; the HBA crosses a multi-switch fabric to an external storage subsystem, while the local DAS stays mostly unutilized, containing only OS files.]
11. ScaleIO Data Client (SDC)
The ScaleIO Data Client (SDC) is a block device driver that exposes ScaleIO shared block volumes to the applications, speaking the ScaleIO protocol over the NIC/IB. Access to the OS partition may still be done "regularly" through the existing stack (file system, volume manager, block device drivers, HBA to the external storage subsystem). A sketch of the idea follows.
[Diagram: host stack with the SDC sitting alongside the native block device drivers.]
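Conceptually, the SDC is a thin block-device layer that turns volume reads and writes into ScaleIO-protocol requests to whichever SDS owns the addressed chunk. The sketch below is a toy model of that idea; the 1 MiB chunk size, class name, and chunk-to-SDS map are assumptions, not ScaleIO's actual design.

```python
# Hypothetical sketch of the SDC concept: split a block read on chunk
# boundaries and forward each piece to the SDS that owns that chunk.
CHUNK_SIZE = 1 << 20   # assumed 1 MiB chunks, purely illustrative


class ScaleIODataClient:
    def __init__(self, chunk_owner: dict[int, str]):
        self.chunk_owner = chunk_owner   # chunk index -> owning SDS address

    def read(self, offset: int, length: int) -> bytes:
        data = b""
        while length > 0:
            chunk, within = divmod(offset, CHUNK_SIZE)
            n = min(length, CHUNK_SIZE - within)
            # Forward this piece of the request to the chunk's owner.
            data += self._request_from_sds(self.chunk_owner[chunk], within, n)
            offset, length = offset + n, length - n
        return data

    def _request_from_sds(self, sds: str, within: int, n: int) -> bytes:
        # Stand-in for the network round-trip an SDC would make to an SDS.
        return b"\x00" * n


sdc = ScaleIODataClient({0: "SDS1", 1: "SDS2"})
assert len(sdc.read(offset=CHUNK_SIZE - 10, length=20)) == 20  # spans 2 chunks
```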
12. ScaleIO Data Server (SDS)
The ScaleIO Data Server (SDS) is a daemon/service that owns the local storage contributing to the ScaleIO storage pool and serves it over the ScaleIO protocol. Only the space allocated to ScaleIO on the host's DAS is used; the rest of the stack (applications, file system, volume manager, block device drivers, HBA to the external storage subsystem) is unchanged.
[Diagram: host stack with the SDS daemon exporting the ScaleIO-allocated part of the DAS over NIC/IB.]
13. ScaleIO Data Server (SDS), continued
The local storage an SDS contributes could be dedicated disks, partitions within a disk, or even files.
14. Co-located SDC & SDS
An SDC and an SDS can live together on the same host: the SDC serves the I/O requests of the resident host's applications, while the SDS serves the I/O requests of various SDCs across the cluster.
[Diagram: host stack running both the SDC and the SDS, connected over the ScaleIO protocol.]
15. Fully converged configuration
[Diagram: every node runs its application together with both a ScaleIO client (C) and server (S), all interconnected over ETH/IB.]
17. 2-layer configuration
[Diagram: application nodes run only the client (C); a separate tier of nodes runs only the server (S), all over ETH/IB.]
Similar to a traditional storage subsystem box, but:
• Software based
• Highly scalable
• and …
20. Any mixture will do
[Diagram: converged nodes (client + server), client-only nodes, and server-only nodes coexist in one ETH/IB cluster.]
21. Any mixture will do, continued
Additionally, asymmetric nodes are fully supported (e.g., nodes with different numbers of spindles).
22. Volume layout (no redundancy)
[Diagram: the chunks of Vol1 and Vol2 spread over SDS1 … SDS100.]
• Chunks are spread across the cluster in a balanced manner (a placement sketch follows)
• No hot spots, no I/O splitting
• Asymmetric nodes supported
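The slides say chunks are spread in a balanced manner but do not give the algorithm; round-robin is the simplest scheme that exhibits the property, so this sketch uses it purely for illustration.

```python
# Sketch of balanced, redundancy-free chunk placement: a volume's chunks
# are spread round-robin across every SDS so no single node becomes a hot
# spot. The real layout algorithm is not described in these slides.
def place_chunks(num_chunks: int, sds_nodes: list[str]) -> dict[int, str]:
    return {c: sds_nodes[c % len(sds_nodes)] for c in range(num_chunks)}


nodes = [f"SDS{i}" for i in range(1, 101)]             # SDS1 .. SDS100
layout = place_chunks(1000, nodes)
assert layout[0] == "SDS1" and layout[100] == "SDS1"   # wraps around evenly
```

A capacity-weighted variant of the same idea would cover the asymmetric-node case, where nodes with more spindles receive proportionally more chunks.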
24. 2-copy mirror scheme
[Diagram: each chunk of Vol1 and Vol2 has two copies spread over SDS1 … SDS100.]
• "Mesh" mirroring: the mirrors, too, are distributed across the cluster in a balanced manner (see the sketch below)
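Extending the placement sketch above, "mesh" mirroring can be illustrated by giving each chunk a second copy on a different node, with the mirror partner varying per chunk so copies spread over the whole cluster rather than over fixed node pairs. The offset formula here is an assumption chosen only to show the property.

```python
# Sketch of 2-copy "mesh" mirroring: each chunk's primary and mirror land
# on two different nodes, and the mirror partner varies per chunk so every
# node mirrors with many others instead of with one fixed partner.
def place_mirrored(num_chunks: int, nodes: list[str]) -> dict[int, tuple[str, str]]:
    n = len(nodes)
    layout = {}
    for c in range(num_chunks):
        primary = c % n
        # Offset in [1, n-1] guarantees the mirror is never the primary.
        mirror = (primary + 1 + c % (n - 1)) % n
        layout[c] = (nodes[primary], nodes[mirror])
    return layout


pairs = place_mirrored(1000, [f"SDS{i}" for i in range(1, 101)])
assert all(p != m for p, m in pairs.values())   # copies never co-located
```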
25. ScaleIO – Rebuild
• Multiple spindles behind multiple CPUs do any-to-any mesh rebalancing (sketched below)
• Tens of gigabytes per second (network limited)
• Failure handling:
– 4 TB drive: re-establishing the copy scheme/policy of the affected objects does not take long
– Likewise, re-establishing the copy scheme after replacing an entire node does not have to take too long
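A toy model of any-to-any rebuild under the 2-copy layout sketched earlier: when a node fails, each chunk that lost a copy is re-mirrored from its surviving copy onto some other surviving node, so many node pairs copy in parallel and the rebuild is bounded by the network rather than by a single spare disk. All logic here is assumed, not ScaleIO's actual algorithm.

```python
# Sketch of any-to-any mesh rebuild: spread the replacement copies of a
# failed node's chunks evenly across all surviving nodes.
from itertools import cycle


def rebuild(layout: dict[int, tuple[str, str]],
            failed: str) -> dict[int, tuple[str, str]]:
    survivors = sorted({n for pair in layout.values() for n in pair} - {failed})
    targets = cycle(survivors)                   # round-robin rebuild targets
    repaired = {}
    for chunk, (a, b) in layout.items():
        if failed not in (a, b):
            repaired[chunk] = (a, b)             # untouched by the failure
            continue
        source = b if a == failed else a         # the surviving copy
        new = next(t for t in targets if t != source)
        repaired[chunk] = (source, new)          # re-mirror onto a new node
    return repaired


layout = {0: ("A", "B"), 1: ("B", "C"), 2: ("C", "A"), 3: ("A", "C")}
fixed = rebuild(layout, failed="A")
assert all("A" not in pair for pair in fixed.values())
```

Because every surviving node both sends and receives some chunks, the rebuild bandwidth grows with the cluster size, matching the "tens of gigabytes per second" claim above.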
29. Juniper Networks & VMware NSX partnership
• Mutual technology alliances
• Joint development and integration
Vision: advance data center network transformation by optimizing physical and virtual networks
Goals:
• Drive simplification in the data center
• Optimize physical and virtual networks
30. Flat data center design with VXLAN
• Hypervisor-based environment, terminating VXLAN tunnels on the MX and on the servers' virtual switches
• Each tenant has its own virtual network slice (see the encapsulation sketch below)
• Connects cloud assets on the LAN with customers coming in from the WAN
• Virtualized L2 and L3
[Diagram: tenants reach the WAN through a DC gateway; ToR switches front the intra-DC network under an SDN orchestration & controller. A high-scale multitenant VXLAN implementation whose overlay hides the LAN's complexities.]
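The per-tenant slice comes from the 24-bit VXLAN network identifier (VNI) in the 8-byte VXLAN header (RFC 7348) that the MX or the server's virtual switch prepends to each tenant frame before sending it in an outer UDP/IP packet (UDP destination port 4789). A minimal sketch of that encapsulation, with the inner frame and VNI value chosen arbitrarily:

```python
# Sketch of VXLAN encapsulation: wrap the tenant's L2 frame with the
# 8-byte VXLAN header (I flag set, 24-bit VNI = the tenant's slice).
# The outer UDP/IP headers (UDP destination port 4789) are omitted here.
import struct

VXLAN_FLAGS = 0x08000000   # "I" flag set: the VNI field is valid


def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    assert 0 <= vni < 2 ** 24        # 24 bits -> ~16M tenant segments
    header = struct.pack("!II", VXLAN_FLAGS, vni << 8)
    return header + inner_frame


payload = vxlan_encap(vni=5001, inner_frame=b"\x00" * 64)
assert len(payload) == 8 + 64
```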
31. Juniper & VMware: connecting and automating virtual data centers
[Diagram: the VMware NSX Controller and vCenter Suite (with vCenter modules) manage the virtual network, while Juniper's Network Director manages the physical network, whether multi-tier L2/L3, flat fabric L2/L3, or an all-L3 IP fabric.]
32. Juniper Contrail introduction: bringing networks into the cloud era
Contrail is a highly scalable SDN solution that automates the creation of virtual networks:
1. Simple integration with physical & virtual
2. Open approach using proven standards
3. Agile delivery of network and services
34. Juniper Networks Contrail overview: simple, open, agile
[Diagram: the SDN controller bundles configuration, control, and analytics functions; it is horizontally scalable, highly available, and federated (BGP clustering). It speaks REST to the orchestrator, XMPP to the virtualized servers hosting VMs, and BGP + Netconf to the MX WAN gateway, all over an IP fabric (underlay network).]
35. Components of Contrail functionality
Four key components of the Contrail family:
• Contrail SDN Controller: open, standards-based controller
• Contrail vRouter: VM engine that handles the forwarding-plane work on the compute node
• Contrail Analytics: real-time analytics engine across the various protocols between any network elements
• Gateway element: an MX Series (or other router) or EX9200 can serve as the gateway, eliminating the need for a software gateway and improving scale
[Diagram: the Contrail Controller manages vRouters on virtualized servers over the IP fabric (underlay: QFX, QFabric, EX), with the MX as the gateway element to the WAN.]
37. Underlay – QFX5100
• Strategic ToR family
• Multiple 10GbE/40GbE port-count options
• Supports multiple data center switching architectures
Compatibility highlights:
• EX4300: mixed 1GbE/10GbE Virtual Chassis
• QFabric: performance enhancements
New innovations:
• Virtual Chassis Fabric
• Topology-independent in-service software upgrades (ISSU)
• Insight module
• Low latency
• Rich L2/L3 features (including MPLS)
• Optimized FCoE
• SDN-ready
38. Underlay – QFX5100: scalable DC architectures
Juniper architectures (managed as a single switch):
• Virtual Chassis: up to 10 members (QFX5100, QFX3x00 & EX4300)
• VC Fabric: up to 20 members (QFX5100, QFX3x00, EX4300)
• QFabric: up to 128 QFabric nodes (QFX5100, QFX3x00)
Open architectures:
• Spine-Leaf: QFX5100 or EX9200 spine; QFX5100, QFX3x00, or EX4300 leaf
• Layer 3 fabric: QFX5100, QFX3x00, or EX4300 members
39. Underlay – QFX5100: Insight module captures microbursts
• Captures microburst events that exceed defined thresholds
• Adjustable sampling intervals
• Reports the microburst events instantaneously via:
– CLI
– Syslog
– Log file (human-readable format)
– Streaming (JavaScript Object Notation (JSON), CSV, or TSV formats)
[Diagram: buffer utilization monitoring and reporting. Queue depth or queue latency over time crossing a high threshold and later falling below a low threshold marks a microburst; a detection sketch follows.]
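The diagram implies threshold-based detection with hysteresis: an event opens when the sampled queue depth crosses the high threshold and closes when it drops below the low threshold. The sketch below shows that general idea only; it is not Juniper's implementation, and the sample values and thresholds are invented.

```python
# Sketch of microburst detection with hysteresis: a burst opens above the
# high threshold and closes below the low threshold (avoiding flapping on
# samples that hover between the two).
def detect_microbursts(samples, high, low):
    events, start = [], None
    for i, depth in enumerate(samples):
        if start is None and depth > high:
            start = i                       # burst begins
        elif start is not None and depth < low:
            events.append((start, i))       # burst ends; report the window
            start = None
    return events


# Queue-depth samples taken at a fixed (adjustable) sampling interval.
samples = [5, 10, 80, 95, 90, 40, 8, 5, 70, 85, 9]
print(detect_microbursts(samples, high=60, low=10))   # [(2, 6), (8, 10)]
```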
40. Underlay – QFX5100: hitless operations
Topology-independent ISSU keeps the data center efficient during switch software upgrades, improving network resiliency and performance relative to competitive ISSU approaches.
High-level QFX5100 architecture: x86 hardware runs a Linux kernel with kernel-based virtual machines hosting a master and a backup Junos VM, both driving the Broadcom Trident II PFE.
41. Underlay – QFX5100: software features
Planned FRS features* (Q4 2013):
• L2: xSTP, VLAN, LAG, LLDP/MED
• L3: static routing, RIP, OSPF, IS-IS, BGP, vrf-lite, GRE
• Multipath: MC-LAG, L3 ECMP
• IPv6: Neighbor Discovery, router advertisement, static routing, OSPFv3, BGPv6, IS-ISv6, VRRPv3, ACLs
• MPLS, L3VPN, 6PE
• Multicast: IGMPv2/v3, IGMP snooping/querier, PIM-Bidir, SSM, Anycast, MSDP
• QoS: classification, CoS/DSCP rewrite, WRED, SP/WRR, ingress/egress policing, dynamic buffer allocation, FCoE/lossless flow, DCBx, ETS, PFC, ECN
• Security: DAI, PACL, VACL, RACL, storm control, control plane protection
• 10G/40G FCoE, FIP snooping
• Microburst monitoring, analytics
• sFlow, SNMP
• Python
Planned post-FRS features (Q1 2014):
• Virtual Chassis
– 10-member VC: mix of QFX5100, QFX3500/3600, EX4300
– VCF, 20 nodes at FRS
– VC features: parity with standalone capabilities
• HA: NSR, NSB, GR for routing protocols, GRES
• ISSU on standalone QFX5100 and all QFX5100 VC/VCF
• NSSU in a mixed VC/VCF
• 64-way ECMP
• VXLAN gateway*
• OpenStack, CloudStack integration*
* After the Q1 time frame. Please refer to the release notes and manuals for the latest information.
42. Underlay – QFX5100: Virtual Chassis deployment
[Diagram: Virtual Chassis in ring topology (shipping Q3 2013) and in mesh topology / VCF (Q1 2014), mixing QFX, EX4300, and Opus members beneath L2/L3 or L3 aggregation layers.]
43. Underlay – QFX5100: spine & leaf with IP underlay
Scales to 100,000s of 10GbE ports at 3:1 oversubscription (a worked example follows).
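To show what the oversubscription ratio means at the leaf, here is a worked example with hypothetical port counts (the 48 x 10GbE and 4 x 40GbE figures are chosen only to make the arithmetic concrete):

```python
# Hypothetical leaf: 48 x 10GbE server-facing ports feed 4 x 40GbE uplinks,
# so up to 480G of access traffic shares 160G of fabric capacity, i.e. 3:1.
access_gbps = 48 * 10     # downlink (server-facing) capacity
uplink_gbps = 4 * 40      # uplink (spine-facing) capacity
ratio = access_gbps / uplink_gbps
print(f"{access_gbps}G : {uplink_gbps}G = {ratio:.0f}:1 oversubscription")
```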