2. NetApp Employees and NetApp Partners Only
NetApp Confidential Information – Limited Use. Access to this document
is restricted to NetApp employees and NetApp partners who are under
NDA obligations. DO NOT share this document with anyone else without
prior written permission from the NetApp Competitive Advantage Team.
Failure to comply with this notice may be considered a violation of
NetApp’s Terms of Use.
3. Get out of the functional silo discussion
− Specific VPLEX limits
− Specific NetApp advantages
− Things to remember
6. Cloud requires standardization
− Automation and orchestration require standards
− Gets the CIO out of managing infrastructure
− Creates room for business innovation
Agile Data Infrastructure is standardization:
− One platform for calibrated scale
− One platform providing all required services
− The end of silos of functional storage
8.–10. EMC: Cloud Requires “Federation”
[Diagram, built up over three slides: virtual clients, virtual applications, private cloud, and public cloud, joined by information, virtualization, and security layers plus “Federation.” Build annotations: “Standardized” and “Federation (another layer of complexity) – ??”]
11. Why “Federate” when you can:
1. Standardize the existing and new storage platform,
2. Rid the data center of the extra moving parts,
3. Simplify operations and lower OPEX, and
4. Get IT out of the way of the business?
A good question…
12.–13. Bullets in the Gun
Clustered ONTAP:
− Does what VMAX cannot – at VNX price points
− Over 8,000 MetroClusters
− Highest level of resiliency
− Set and forget
− Non-Disruptive Operations: never migrate again, any workload
EMC has no answer to this, so they pigeon-hole the conversation.
15.–17. EMC positioning of siloes of storage functions
[Diagram, built up over three slides: Prod, BUR, DR, and Dev/Test copies of production data, each served by a separate product silo]
− Data mobility: VPLEX
− Replication: RecoverPoint (other options: MirrorView, Replicator, SRDF)
− Mirroring, local or remote (other options: VNX Snapshot, SnapView, SnapSure, SAN Copy, TimeFinder)
18. VPLEX: Metro, Geo, Global
MOBILITY: move and relocate VMs, applications, and data over distance
− Disaster avoidance
− Data center migration
− Workload rebalancing
AVAILABILITY: maintain availability and non-stop access by mirroring across locations
− High availability
− Eliminate storage operations from failover
COLLABORATION: enable concurrent read/write access to data across locations (“access anywhere”)
− Instant and simultaneous data access over distance
− Streamline workflows
19. The moving parts - simplified
• Hosts
• VPLEX Clusters & Engines
• FC SAN (no NAS!)
• Intercluster ISL & Cross Connect ISL
• Block Storage Arrays
• VPLEX Witness
20.–23. Distributed Cache Coherency – the key to VPLEX
[Diagram, built up over four slides: two engines, each with local caches (A, C, E, G) and cache directories A–H; each engine holds a cache coherency directory mapping block addresses (1, 2, 3, …) to the caches that hold them. The builds step through a new write to block 3, then reads of block 3.]
Directory-based distributed cache coherency efficiently maintains cache state consistency across all engines.
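The mechanism the slides illustrate can be sketched as a toy directory. This is a minimal sketch of the concept only; the class and method names are ours, not EMC's implementation:

```python
# Toy sketch of directory-based cache coherency: an engine-level
# directory maps each block address to the caches holding a valid copy.

class CoherencyDirectory:
    def __init__(self):
        self.owners = {}  # block address -> set of caches with a valid copy

    def record_read(self, block, cache):
        # A read pulls the block into another cache; the directory
        # now tracks one more holder.
        self.owners.setdefault(block, set()).add(cache)

    def record_write(self, block, cache):
        # A new write invalidates every other cached copy so all
        # engines stay consistent; only the writer's copy remains valid.
        self.owners[block] = {cache}

    def holders(self, block):
        return self.owners.get(block, set())

d = CoherencyDirectory()
d.record_read(3, "Cache A")     # block 3 cached by A
d.record_read(3, "Cache E")     # ...and by E
d.record_write(3, "Cache C")    # the slides' "New Write: Block 3"
print(d.holders(3))             # {'Cache C'} - other copies invalidated
d.record_read(3, "Cache A")     # a subsequent read repopulates A
print(sorted(d.holders(3)))     # ['Cache A', 'Cache C']
```

The point of the directory is that a writer only needs to consult (and invalidate) the caches the directory lists, rather than broadcasting to every engine.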
24. VPLEX can do these things
− Use FC over IP for the back-end fabric
− Use existing SANs for the back-end fabric
− Encapsulate existing LUNs on arrays without migration (must be a 4K multiple)
− Local HA protection with up to 4 engines (8 directors) per site
− “Stretch” a volume over distance, read/write
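The 4K-multiple constraint on encapsulated LUNs is easy to pre-check. A quick sanity check, sizes in bytes; the helper name is illustrative, not a VPLEX tool:

```python
# LUNs encapsulated without migration must be a multiple of 4K;
# check before claiming a device.

def encapsulatable(lun_size_bytes, block=4096):
    return lun_size_bytes % block == 0

print(encapsulatable(10 * 1024**3))    # 10 GiB -> True
print(encapsulatable(10_000_000_000))  # 10 "decimal GB" -> False
```

Note that decimal-sized LUNs (10 GB = 10,000,000,000 bytes) are not 4K-aligned even though binary-sized ones are.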
26. Complex Installation – 112 Pages
Local:
− Physical setup
− 17 main tasks, at least 75 discrete tasks
− Additional tasks per LUN, per host, per WWN
− Six separate tools
Metro or Geo:
− Physical setup – 2X
− 34 main tasks, at least 175 discrete tasks
− Additional tasks per LUN, per host, per WWN
− Six separate tools
27. Where VPLEX falls down - 1
Single-writer workloads:
− vSphere: good
− Databases: not good
− Oracle RAC: the magic is in RAC, not VPLEX
VPLEX introduces a new management UI:
− IONIX and Unisphere cannot manage VPLEX
VM restart in A/A clusters when:
− The VPLEX cluster at A fails – VMs must restart on B
− The back end at A is lost – VMs must restart on B
Stretched ESX clusters require Metro HA
VPLEX back-end path load balancing is round robin only
Rollback scenarios: async consistency groups and “dirty cache”
28. Where VPLEX falls down - 2
Witness (avoiding “split brain”):
− Metro only
− Synchronous consistency groups only (no async failover support)
− Geo: diagnostics only
− Independent from the other clusters
All directors should see all volumes:
− Massive increase in initiator counts
Lose ONE director in a cluster and:
− Asymmetric back-end visibility = “degraded mode”
− HA or NDU prevented
− Performance is negatively impacted
29. Failure handling in a vSphere deployment
Host failure:
− Cross-Connect Metro (<1 ms): restart hosts
− Metro (<10 ms): restart hosts
VPLEX cluster failure:
− Cross-Connect Metro: no interruption; alternate path and witness (usually)
− Metro: manual restart on the HA side (sometimes automatic)
Disk failure:
− Either configuration: no interruption; VPLEX path to the remote disk
Witness failure:
− Cross-Connect Metro: no interruption; VPLEX invokes static rules
− Metro: suspend the “non-preferred” VPLEX and restart on the preferred one
Intercluster link failure:
− Cross-Connect Metro: static rule invoked, suspend on cluster, restart hosts (no distributed volume support)
− Metro: preferred site sees no interruption; in the non-preferred site the guest OS fails and VMs restart in the preferred site
No vSphere Fault Tolerance support yet, but it is on the roadmap.
30. VPLEX Increases OPEX
Create a LUN: create it on the array, assign it to VPLEX, map it to the host
Resize a LUN: resize it on the array, resize it on VPLEX, resize it on the host
Snapshot of a LUN (EMC arrays only): make a snap of the LUN on the array, then a clone of the LUN on the array
VPLEX drives OPEX UP, and NAS is not supported.
How do you NDU migrate to a new VPLEX cluster?
32. Best Practices – Conflicts and Impossibilities
VMAX – “go wide”: each host maps to each director
VPLEX – “NDU”: each director sees all volumes
− Each host needs 4 paths to each LUN
vSphere – each LUN visible to all cluster members
Common sense time:
− How many initiators can VPLEX handle?
− How many does the solution require?
− How many best practices will be broken?
− How brittle will the resulting solution be?
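The “common sense” questions above are mostly multiplication: when every director exports every volume and each host keeps four paths per LUN, counts grow multiplicatively. A back-of-envelope sketch with hypothetical inputs, not sizing guidance:

```python
# Rough path/initiator math behind the best-practice conflict.

def host_side_paths(hosts, luns, paths_per_lun=4):
    # Total host-side paths the fabric and VPLEX front end must carry.
    return hosts * luns * paths_per_lun

def initiators(hosts, hbas_per_host=2):
    # Initiator count as seen by the VPLEX front-end ports
    # (2 HBAs per host is an assumed, typical configuration).
    return hosts * hbas_per_host

print(host_side_paths(hosts=32, luns=100))  # 12800 paths
print(initiators(hosts=32))                 # 64 initiators
```

Plugging in a cluster’s real host and LUN counts quickly shows whether the solution exceeds platform initiator limits or breaks fan-out best practices.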
33. VPLEX or SRM?
Disaster Avoidance (accomplished via VPLEX and vMotion):
− You know in advance
− Goal is to be non-disruptive
− The entire process can be slow (hours to days)
− Examples: incoming hurricane, power grid maintenance, datacenter migration (over time)
Disaster Recovery (accomplished via SRM):
− You don’t know in advance
− Always somewhat disruptive
− The entire process needs to be FAST
− Examples: unexpected floods, unexpected hardware failures, datacenter migration (all at once)
34. Disaster Avoidance Requirements – VPLEX
− Supported synchronous active/active storage architecture
− R/W on both ends; use a Witness
− Stretched Layer 2 connectivity
− 622 Mbps bandwidth (minimum) between sites
− Latency requirements:
− <1 ms if Cross-Connected Metro HA
− <10 ms with vSphere 5 / Metro vMotion
− <50 ms for Geo
− Single vCenter (with vCenter Heartbeat)
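The distance requirements above lend themselves to a quick pre-qualification check. The thresholds (622 Mbps minimum; <1, <10, and <50 ms) come from the slide; the function and mode names are ours:

```python
# Which VPLEX deployment modes does a given inter-site link qualify for?

LATENCY_LIMITS_MS = {
    "cross_connect_metro_ha": 1.0,   # <1 ms round trip
    "metro_vmotion": 10.0,           # vSphere 5 / Metro vMotion
    "geo": 50.0,                     # Geo
}
MIN_BANDWIDTH_MBPS = 622

def supported_modes(latency_ms, bandwidth_mbps):
    if bandwidth_mbps < MIN_BANDWIDTH_MBPS:
        return []                    # below the stated minimum link size
    return [mode for mode, limit in LATENCY_LIMITS_MS.items()
            if latency_ms < limit]

print(supported_modes(latency_ms=5, bandwidth_mbps=1000))
# ['metro_vmotion', 'geo'] - too slow for cross-connect, fine for the rest
```

A 5 ms link rules out Cross-Connected Metro HA but still qualifies for Metro vMotion and Geo; anything under 622 Mbps disqualifies all modes.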
35. VPLEX Networking Recommendations
− Plan for different I/O traffic patterns
− Look at OTV and LISP: the real magic for application mobility
− Put management traffic onto a vSwitch
− Minimize latency
− Use Cross Connect with Metro if possible (make them!)
− The link requires its own ISL and physical network
− Do not share it with the intercluster link
36. Takeaways: What problem does VPLEX solve?
− SAN only
− Does not simplify the storage infrastructure
− Highly complex installation
− Increases OPEX
− No storage pooling support
− Limited business continuity
− No storage efficiency
− No cloning or tiering
37. Resources
Field Portal – fieldportal.netapp.com
− NetApp product and solution information
Communities – communities.netapp.com
− Solution and Technology spaces
− Experts live here