FAST VP Deep Dive
Version 1.0, Jan 2014
Kevin Wang
An Introduction to the Latest Cutting-Edge Technology in Tiered Storage
Advanced FAST VP Features: An Introduction
Contents
• Documentation
• FAST Specific Errors
• FAST VP Concepts Review
• VP Compression and Time to Compress
• FAST VP with FTS
• FAST VP Allocation by Policy
• FAST VP SRDF Coordination
• Case Study
• Q & A
Documentation
• Detailed documents and whitepapers on FAST VP
can be found on support.emc.com. The following
slides will reference some of these, including:
– FAST VP for EMC Symmetrix VMAX Theory and Best Practices for Planning and
Performance
– Implementing Fully Automated Storage Tiering for Virtual Pools (FAST VP) for
EMC Symmetrix VMAX Series Arrays
– EMC Solutions Enabler Symmetrix Array Controls CLI Product Guide (latest
version)
– Best Practices for Fast, Simple, Capacity Allocation with EMC Symmetrix Virtual
Provisioning
• For other training material, refer to the FAST VP
Solution Support Session: FAST VP Step by Step
FAST Specific Errors
• Ucode:
– General VP errors: 7F10, 7F3F, 7F43
– Error sent by engine: 24AF, 20AF, 04DA
• Engine:
– The engine can go into degraded mode if it cannot perform some
function.
– When we go into degraded mode the GUI on the SP will show that we
are in this state.
– symfast -sid xxx list -state will also show this state
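• Example (the SID is a placeholder): the same degraded-state check run from a management host, typically the first thing to verify when FAST appears to have stopped moving data:
– symfast -sid 1234 list -state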
FAST VP Concepts
Review
Two Variations on FAST
• FAST (also referred to as FAST DP) supports
disk group provisioning for Symmetrix VMAX:
– Full LUN movement of disk group provisioned Thick Devices
– Supports FBA and CKD devices
– Introduced in Enginuity 5874
– Not applicable to VMAX 10K arrays
• FAST VP supports virtual provisioning for
Symmetrix VMAX:
– SubLUN movement of Thin Devices
– Introduced in Enginuity 5875 with support for FBA devices
– Enginuity 5876 added support for CKD devices
When to use FAST and FAST VP
• Workloads with a higher skew will benefit more
from FAST or FAST VP:
– Workloads with skew above 80/20 are considered good
candidates.
– Unbalanced workloads direct a higher percentage of I/O to a
small percentage of the storage allocated.
– Heavily utilized devices are moved to faster technologies, to
reduce response time.
– Underutilized devices are moved to less expensive
technologies, to reduce cost.
• Workloads with a lower skew may not benefit:
– Workloads with a skew closer to 50/50 (uniform workload) are
less likely to contain candidates for promotion/demotion.
Sample Performance Data (94% of workloads >= 80/20 skew)
• Heavy skew (95% of I/O on 5% of data; ~12% of workloads): capacity mix EFD 3% / FC 0% / SATA 97%. Result: 30% more performance, 80% less footprint, 20% lower costs.
• Moderate skew (90% of I/O on 10% of data; ~45% of workloads): capacity mix EFD 3% / FC 15% / SATA 82%. Result: 40% more performance, 60% less footprint, 15% lower costs.
• Low skew (80% of I/O on 20% of data; ~37% of workloads): capacity mix EFD 3% / FC 27% / SATA 70%. Result: 20% more performance, 50% less footprint, same costs.
FAST VP for Symmetrix
• Without FAST VP, a Thin Device is bound to a pool which
contains disks of the same technology, RAID protection, and
rotational speed.
• With FAST VP, busier Thin Device extents are moved to pool(s) in
a faster storage tier, though the Thin Device stays bound to its
original pool.
[Diagram: untiered VP storage, with busy and less busy Thin Device extents residing in the same pool, versus tiered virtually provisioned storage, with the busier extents placed on the faster thin pools: Tier 0 (EFD), Tier 1 (FC), Tier 2 (SATA)]
Elements of FAST
• Symmetrix Tier – a shared storage resource with common
technologies
• FAST Policy – manages data placement and movement across
Storage Types to achieve service levels for one or more Storage
Groups
• Storage Group – logical grouping of standard devices for common
management
[Diagram: Thin Devices grouped into Storage Groups (ThinProd1_SG, ThinProd2_SG, ThinDev_SG), associated through FAST VP Policies (Production: 25% / 50% / 25%; Development: 25% / 100%) with FAST VP Tiers: EFD R5 Thin Tier (R53_EFD_Pool), FC R6 Thin Tier (R66_FC_Pool), and SATA R6 Tier (R614_SATA_Pool)]
VP Compression and Time to
Compress
VP Compression
• Saves space within a thin pool
• Works with all TDEVs
– Fixed Block Architecture (FBA), including D910 on IBM i
– Count Key Data (CKD)
• Supported with local and remote replication
products
– TimeFinder
– Symmetrix Remote Data Facility (SRDF)
• Supported with internal data movement products
– Virtual LUN VP mobility (VLUN)
– FAST for Virtual Pools (FAST VP)
VP Compression Details
• Requires Enginuity 5876 code (Seine) and SE
7.5+
• Pools enabled for VP compression at creation or
by setting the attribute on an existing pool
• Once enabled, a background task reserves
capacity in the pool to temporarily uncompress
data
– This capacity is called the Decompress Read Queue (DRQ)
– Capacity ranges between 76 and 3000 MB depending on pool
size
• Compression can be initiated
– Manually using SYMCLI or Unisphere
– FAST VP will automatically compress infrequently used data
Considerations When Using VP Compression
• Limit of 10 terabytes of compressed data per VMAX engine
• Compression can be disabled when no longer
needed
• Disabling compression does not uncompress
data
– Data must be uncompressed before disabling compression
– Space reserved for DRQ returned to pool
• Allocated, but unwritten space will be reclaimed
• Persistent allocations cannot be compressed
• FTS Encapsulated devices cannot be
compressed
Data Access
• Read
– Uncompresses the track into a reserved area in the pool
– Space in the reserved area is controlled by a Least Recently Used (LRU) algorithm
– LRU ensures that space is always available to uncompress a track
– Recompression is not required
• Write
– Written in uncompressed form to the thin device
– If under FAST control, data will be compressed based on time of last access
– Can be manually compressed
Migration
• Source compression enabled, target compression enabled: Compressed tracks are migrated to the target as compressed tracks. Target pool: utilized space increases and free space decreases by the compressed size. Source pool: utilized space decreases and free space increases by the compressed size.
• Source compression enabled, target compression disabled: A compressed device cannot be migrated to a pool with compression disabled. The device must be uncompressed before it can be migrated.
• Source compression disabled, target compression enabled: Uncompressed tracks are migrated to the target pool as uncompressed tracks. Target pool: utilized space increases and free space decreases by the uncompressed size. Source pool: utilized space decreases and free space increases by the uncompressed size.
• Source compression disabled, target compression disabled: Uncompressed tracks are migrated to the target pool. Target pool: utilized space increases and free space decreases by the uncompressed size. Source pool: utilized space decreases and free space increases by the uncompressed size.
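• For reference, such moves are driven with the VLUN VP mobility command symmigrate. A sketch (session name, SG, target pool, and SID are placeholders, and the flags here are from memory, so verify them against the Solutions Enabler Array Controls guide):
– symmigrate -sid 1234 -name mig1 -sg ESXsg -tgt_pool -pool FC_T2_P1 validate
– symmigrate -sid 1234 -name mig1 -sg ESXsg -tgt_pool -pool FC_T2_P1 establish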
Enabling/Disabling VP Compression - SYMCLI
• To create a new pool with compression enabled
– symconfigure -sid 78 -cmd "create pool 101_SATAR6, type = thin, vp_compression = Enable;" commit
• To enable compression on an existing pool
– symconfigure -sid 78 -cmd "set pool 101_SATAR6, type = thin, vp_compression = Enable;" commit
• To disable compression on an existing pool
– symconfigure -sid 78 -cmd "set pool 101_SATAR6, type = thin, vp_compression = Disable;" commit
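• To verify that the attribute took effect, the pool can be inspected (SID and pool name reuse the example above; this is the same symcfg query referenced later in this deck):
– symcfg show -pool 101_SATAR6 -sid 78 -thin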
Manual Compression - SYMCLI
• SYMCLI Syntax
symdev -sid 265 -file archive.txt compress
symdev -sid 265 -devs 025:02A compress -stop
symsg -sid 265 -sg ESXsg compress
symdg -sid 265 -g ESXdg uncompress
symcg -sid 265 -cg VMcg uncompress -stop
• Stopping the compress action does not
uncompress data that has been compressed
– Manual intervention is required
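• Example teardown sequence, a sketch assuming symdev accepts uncompress the same way symdg and symcg do above (device range and pool name are placeholders): uncompress the data first, then disable compression on the pool:
– symdev -sid 265 -devs 025:02A uncompress
– symconfigure -sid 265 -cmd "set pool 101_SATAR6, type = thin, vp_compression = Disable;" commit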
Enabling Compression
[Diagram: when the command is issued to enable compression for a thin pool, extents are allocated for the Decompress-Read-Queue (DRQ); the DRQ initially holds no data]
Compression Flow
[Diagram: a TDEV with extents A-E, a compressed extent with directory header, and the DRQ]
• User issues a command to compress the TDEV, and a compressed extent is allocated.
• Extent A is evaluated: its compressed data is stored, pointers are updated, and the uncompressed extent A is reclaimed.
• Extent B is evaluated: it contains all-zero data, so it is zero-reclaimed.
• Extent C is evaluated: its compressed data is stored, pointers are updated, and the uncompressed extent C is reclaimed.
• Extent D is evaluated and skipped (less than 50% compressible).
• Extent E is evaluated: its compressed data is stored, pointers are updated, and the uncompressed extent E is reclaimed.
Read Flow
[Diagram: a TDEV with compressed and uncompressed extents, a compressed extent with directory header, and the DRQ holding extent C's data]
• Host requests data from extent D. Extent D is uncompressed, so the data is returned as usual.
• Host requests data from extent C. Extent C is compressed, so its data is uncompressed into an unused (or the least recently used) extent in the DRQ, and extent C's uncompressed data is returned to the host from the DRQ.
• Note: extent C's data remains compressed, in the event the extent allocated from the DRQ is required to service another read from a compressed extent.
Write Flow
[Diagram: a TDEV with compressed and uncompressed extents, a compressed extent with directory header, and the DRQ]
• Host writes to extent D. Extent D is uncompressed, so the write flow is handled normally.
• Host writes to extent A. Extent A is compressed, so a new extent must be allocated to decompress the data. After extent A is decompressed, pointers are updated to reflect the data's new location, and the write to extent A continues as normal.
• NOTE: extent A will not be automatically recompressed.
Time to Compress
• VP Compression was introduced in 5876 code (Seine).
• FAST VP's implementation of the feature automates VP
Compression at the sub-LUN level for thin devices that
are under FAST control.
• The Time to Compress control parameter is what
enables/disables the feature.
• The feature is disabled by default: the parameter
defaults to "Never" (= disabled).
• To enable the feature, Time to Compress is set to a
time value. Any FAST extents that are idle for longer
than this value are candidates for automatic compression.
• Even if the extents qualify for compression, the data will
only get compressed if the pool is enabled for VP
Compression.
Time to Compress
• For customers, Time to Compress can be set to a minimum
of 40 days and a maximum of 400 days.
• For testing, the Time to Compress can be set to much
lower values.
• Every FAST performance move now decompresses the
data first before moving it (even if compression is not
active on the system).
• For more detailed info on Time to Compress, see pages
45 and 46 of "Implementing Fully Automated Storage
Tiering for Virtual Pools (FAST VP) for EMC Symmetrix
VMAX Series Arrays".
Time to Compress
• Enabling compression on a pool:
– When creating the pool via symconfigure: "create pool xxx, type=thin, vp_compression=ENABLE"
– If the pool is already present, via symconfigure: "set pool xxx, type=thin, vp_compression=ENABLE"
• Setting the Time to Compress:
– symfast -sid xxx set -control_parms -time_to_compress <NumDays>
– Customers are not allowed to set it below 40 days.
– In-house it can be set to a minimum of 1 day via the CLI, but the
following needs to be put into the API options file:
"SYMAPI_MIN_TIME_TO_COMPRESS = 1"
– This variable should be entered into the options file on both the host
and the Symm Service Processor.
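• Putting it together, a minimal sketch for a lab box (SID, pool name, and the 60-day value are examples):
– symconfigure -sid 1234 -cmd "set pool MY_SATA_POOL, type=thin, vp_compression=ENABLE;" commit
– symfast -sid 1234 set -control_parms -time_to_compress 60
– FAST extents that stay idle longer than 60 days then become candidates for automatic compression.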
Compression Rate
• The FAST VP Compression Rate determines the
aggressiveness with which data is compressed.
• It can be configured between 1 and 10, with 5 as
the default. The lower the value, the more
aggressive the rate of compression.
• To set via SYMCLI:
– symfast -sid xxx set -control_parms -fast_compression_rate <value>
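• Example (SID is a placeholder): making compression more aggressive than the default of 5:
– symfast -sid 1234 set -control_parms -fast_compression_rate 3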
FAST VP with
FTS
Federated Tiered Storage Overview
• FTS allows external storage arrays to be used as
back-end disks for Symmetrix VMAX arrays
• LUNs on external arrays can be used by the
Symmetrix as:
– Raw storage space for the creation of Symmetrix devices
– Data sources that can be encapsulated and the information made
available to a host accessing the Symmetrix
• Symmetrix presents the external storage as
unprotected volumes
– Data protection is provided by the external array
• FTS is a free Enginuity feature
Components for FTS - 1
• DX Directors
– Stands for DA eXternal and behaves just like a DA
– Handles external LUs as though they are Symmetrix
drives
– Runs on Fiber Optic SLICs just like FA and RF
emulations
[Diagram: a VMAX 40K engine with a DX director pair, ports 7E0, 7E1 and 8E0, 8E1]
Components for FTS - 2
• eDisks
– Associated with an external SCSI logical unit
– Accessible through the SAN
– Belong to virtual, unprotected RAID groups
– Also referred to as “external spindle”
• External Disk Group
– Virtual groups created to contain eDisks
– Group numbers start with 512
• Virtual RAID Group
– Created for each eDisk
– Not locally protected in the Symmetrix
– Relies on protection provided by the external array
FTS Virtualization - 1
• Two modes of operation for external storage
– External provisioning uses the storage as raw capacity; existing
data is lost
– Encapsulation allows preservation of the data on external
storage
• Standard Encapsulation
• Virtual Provisioning Encapsulation
• External provisioning
– External disk (spindle) is created and used as raw capacity
for new Symmetrix devices
– External disk groups have numbers starting with 512
– External disks are displayed as unprotected drives, RAID
protection is expected to be provided by the remote array
– Virtual Provisioning Data Devices (TDATs) can be created
on external disks
FTS Virtualization - 2
• Standard Encapsulation
– Creates an eDisk (spindle) for each external LUN and adds it to an
external disk group
– A Symmetrix device is also created at the same time
– Access to user data on the device is permitted through the
Symmetrix device
• Virtual Provisioning Encapsulation
– Creates an eDisk (spindle) for each external LUN and adds it to an
external disk group
– A data device and a fully allocated thin device are also created
– This thin device can be used for data migration using VLUN
migration for Virtual Provisioning
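• A sketch of creating an eDisk via symconfigure (the WWN is a placeholder, and the exact keywords are from memory, so confirm them in the Array Controls guide for your SE version):
– symconfigure -sid 1234 -cmd "add external_disk wwn=5000097300059188, encapsulate_data=YES;" commit
– With encapsulate_data=NO the LUN would instead be externally provisioned as raw capacity.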
FAST VP with FTS
• FAST VP (and VLUN) are fully supported with FTS.
• Prior to 5876.159.102 and SE 7.4, an FTS tier was considered the lowest
storage tier regardless of its actual technology and performance (a FAST
policy can have 4 tiers: EFD -> FC -> SATA -> external FTS tier).
• Starting with that release (and SE 7.5), an FTS tier can be any tier in a
FAST VP policy, a.k.a. User Defined FTS.
• When an external tier is created, a technology type (EFD, FC, or SATA)
can be specified, in addition to the external location. By specifying a
technology type, a related performance expectation is associated with
the tier. This then affects the tier's ranking among the other 3 tiers
when added to a FAST VP policy.
• For the 6 possible tier types that can be included in a FAST VP policy, the
rankings, in descending order of performance, are as follows:
– Internal EFD -> External EFD -> Internal FC -> External FC -> Internal SATA ->
External SATA
FAST VP with FTS
• After an external tier has been created, the technology type can be
modified. If the type is changed, the ranking of the tier changes within
its policy; as such, an external tier can be upgraded or downgraded
within a policy. (Note: the technology type can only be modified for an external tier.)
• Enginuity executes an initial performance discovery of the tier when it is
first added to a FAST VP policy. This ensures the performance
of the external tier is in line with the expectations for the external
technology (EFD = 3 ms / FC = 14 ms / SATA ~ 20 ms+). Subsequent
lighter performance discoveries are done periodically to validate or
incrementally adjust the previously discovered performance.
• If the performance of an external tier falls below expectations, an event
is triggered alerting users (event ID 1511). Users can resolve the
discrepancy either by addressing the cause of the degraded
performance on the external tier, or by lowering the expectations of the
tier.
• symaudit list -sid <Box#> -text shows a user whether a tier is
underperforming (below, FTS tier Ext_FC1 is an external tier with FC
drives but defined as EFD, while FTS tier Ext_FC2 is defined as FC):
– 03/14/13 18:15 Fast Other SE29b FAST Tier (Ext_FC1) performing worse than expected (LOW) Actual Response Time: 28.09 ms Expected Response Time: 3
ms (or less)
– 03/14/13 18:25 Fast Other SE29d FAST Tier (Ext_FC2) performing worse than expected (LOW) Actual Response Time: 38.9 ms Expected Response Time: 14
ms (or less)
FAST VP with FTS
• FTS devices are added to the bin as disk Group Number 512.
• Inlines A7 displays an eDisk TDAT FTS device.
• We can create a virtually provisioned pool of TDATs using the devices. The
mirror type for FTS devices is "NORMAL" (unprotected); only FTS devices can
have an unprotected mirror/RAID type. Only externally provisioned data devices
can be added to FAST VP tiers (encapsulated data devices cannot).
• To add the pool to a tier we use:
– symtier -sid <Box #> create -name <Tier Name> -external -tgt_unprotected -technology
FC -vp -pool <Pool Name>
– 8D,,,FAST,LIST,TIER (Tier 8 / Pool A = External (Y), set as FC and unprotected)
• CLI commands such as symtier list, symfast list -fp -v, and symcfg show -pool <Pool Name>
-thin show similar information.
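• Example (SID, tier name, and pool name are placeholders): creating an external FC tier over an FTS pool using the template above:
– symtier -sid 1234 create -name Ext_FC_Tier -external -tgt_unprotected -technology FC -vp -pool Ext_Pool1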
FAST VP with FTS
• 98,FAST,SUMM (or 8D,,,FAST,LIST,POOL,RESP) shows External Tier 8 / Pool A
set as FC (and color coded accordingly: white = EFD, blue = FC, purple = SATA;
red plus white, blue, or red for FTS).
• To modify a tier's technology (and performance expectation):
– symtier -sid <Box #> modify -tier_name <Tier Name> -technology <EFD|FC|SATA>
– Here Tier 8 has been changed from FC, as above, to EFD (note: its actual technology on the external array is
FC).
• 98,FAST,MOVE,READ,<SG Number> shows the Movement Policy and the
associated tier rankings in the policy. Here the ranking is
EFD -> External EFD -> FC -> SATA,
so our external tier is ranked 2nd because we defined it
as EFD (above the internal FC and SATA tiers, although
it actually is FC on the remote FTS array).
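• Example (names are placeholders): raising the external tier's expectation to EFD, as described above, and confirming with the tier list:
– symtier -sid 1234 modify -tier_name Ext_FC_Tier -technology EFD
– symtier -sid 1234 list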
FAST VP Allocation by
Policy
Allocation by Policy
• Allows thin devices to be allocated in any of the pools under the FAST
policy with which the thin device is associated.
• Introduced in 5876 (Seine).
• If preferred, all the thin devices under FAST control can be bound to one
pool in the policy. When that pool fills up, allocations will
automatically spill over into the other pools (in the policy). The criteria
for choosing the pool to allocate from are:
– If performance metrics are available for the extent, allocate from a pool in
the appropriate tier
– If performance metrics are not available, allocate from the bound pool
– If the bound pool is full, choose the tier that has the lowest capacity in the policy
• Compliance is honored unless all other pools are full.
• A detailed description of the feature can be found on pages 41-43 of
"Implementing Fully Automated Storage Tiering for Virtual Pools
(FAST VP) for EMC Symmetrix VMAX Series Arrays".
Allocation by Policy
• Feature is disabled by default
• To enable via SYMCLI:
– symfast -sid xxx set -control_parms -vp_allocation_by_fp ENABLE
Allocation Flow
[Flowchart, reconstructed as steps:]
• An allocation request arrives.
• If Allocation by Policy is not enabled: try to allocate from the bound pool; if that fails, the allocation fails.
• If Allocation by Policy is enabled and the extent has a valid tier and is in compliance: try to allocate from all pools in the extent's assigned tier.
• Otherwise (or if all pools in that tier fail): select the tiers in the policy from smallest to largest and try to allocate from each pool in each tier in turn.
• If all pools in all tiers fail, the allocation fails.
FAST VP allocation by policy – misc.
• FAST VP controlled allocations will not obey
PRC (Pool Reserved Capacity).
– Allocations will violate PRC, since PRC exists only to
protect against FAST movements and is not designed
to block host allocations.
– It is therefore possible to exhaust some of the
higher-performing tiers (like EFD) if heavy new
allocations occur on a system that has a 100%
capacity-to-policy match.
FAST VP SRDF
Coordination
FAST VP SRDF Coordination
• By default, FAST VP will act completely independently on each side
of an SRDF link. Typically, the R1 and R2 devices in an SRDF
pairing will undergo very different workloads - read/write mix for the
R1 and writes only for the R2. As a result, decisions regarding data
placement on each side of the link could also, potentially, differ.
• Enginuity 5876 introduces SRDF awareness for FAST VP, allowing
performance metrics to be periodically transmitted from R1 to R2,
across the SRDF link. These R1 metrics are merged with the R2
metrics, allowing FAST VP promotion/demotion decisions for R2 data
to account for R1 workload.
• SRDF coordination can be enabled or disabled per storage group,
with the default being disabled.
FAST VP SRDF Coordination
• Example: RDF device 1B40.
• 98,FAST,STAT,MOVE,1B40,1,SHOW (showing Movement Policy Scores) on
the R1 side.
• 98,FAST,STAT,MOVE,1B40,1,SHOW on the R2 side (note that the scores here
differ from the R1).
• 98,FAST,STAT,PROF,1B40,1,SHOW (displays IO profiles and counts for the
device), here for the R1.
FAST VP SRDF Coordination
• 98,FAST,STAT,PROF,1B40,1,SHOW (the R2 device has different IO profiles and
counts from the R1 above).
• To enable RDF coordination (issue on the R1 SG):
– symfast -sid <Box#> modify -sg <SG Name> -rdf_coordination ENABLE -fp_name <Policy Name>
• The CLI command symfast -sid <Box#> show -association -sg <SG Name> shows whether RDF
coordination is currently enabled or disabled. It can also be verified at Inlines with 8D,,,FAST,LIST,ASSN
(a flag of "R" indicates RDF coordination).
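• Example (SID, SG name, and policy name are placeholders): enabling coordination on the R1 side and confirming it:
– symfast -sid 1234 modify -sg ProdR1_SG -rdf_coordination ENABLE -fp_name Gold_FP
– symfast -sid 1234 show -association -sg ProdR1_SG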
FAST VP SRDF Coordination
• Later, the IO profiles and counts on the R2 side look like the R1 side (R2
screenshot below). Also note the addition of an RDF profile for each extent.
• Now the R2 Movement Policy Scores look like the R1 side (R2 screenshot
below).
• In this mode, the tier allocations of R1 devices and their respective R2 devices will look
very similar (as seen with symfast -sid <Box#> list -association -sg <SG
Name> -demand): allocation of the data for the R2 devices will be much
closer to that of the R1 data. As the policies may be different on each array,
FAST VP may not match the allocations completely.
Case
Study
Case Study – Westpac Banking, SR 58924478
• Symmetrix VMAX 20K
• The customer was forced to use FAST VP to move data
from pool VP_GREEN to pool VP_BLUE because
of a known bug with VLUN migration (KB 92545),
and was told to use FAST VP as a workaround until
an upgrade to code 5876.229 could be scheduled.
FAST VP did not move the data as they wanted, so
remote support was engaged.
• The PSE Lab was engaged and worked for more than
a month to help the customer with this issue.
Case Study – Westpac Banking, SR 58924478
• 23 GB of data is out of compliance (OoC) and cannot be moved into the
FT_BLUE tier (VP_BLUE is the only thin pool in this tier).
• The storage group used in this FAST VP configuration is I_tiermove_sg,
and device 1D55 is the only member of that SG.
Case Study – Westpac Banking, SR 58924478
• Preliminary analysis (why FAST VP cannot move data in this scenario):
Case Study – Westpac Banking, SR 58924478
• Initially, FAST was implemented to move extents from one FC tier (VP_GREEN) to
another FC tier (VP_BLUE). This was not suitable, as FAST does not cater for
this type of movement between disks of the same type and technology.
• Preliminary conclusion and suggestion:
• The reason FAST VP cannot move the data is that FAST does not cater for
this type of movement between disks of the same type and technology.
• EMC's suggestion was to create a new FAST VP policy to
move data from the existing FC tier VP_GREEN to the EFD tier VP_RED and
FC tier VP_BLUE, and then from those two tiers back to the new destination
FC tier VP_BLUE.
Case Study – Westpac Banking, SR 58924478
• The customer followed the suggestion, but a new problem was found:
• A single FAST policy was created per EMC's suggestion, and FAST was enabled with
the extent percentages set to 100/100/0. The customer wanted all the LUN extents located in FC pool
VP_GREEN in SG I_tiermove_sg moved to the EFD pool VP_RED and then back down to VP_BLUE by
changing the policy later. They did this because data cannot be moved between
VP_GREEN and VP_BLUE via FAST VP, as both are FC disk technology.
• FAST VP did not move all the data to the target tier under the new policy, so we had to find out why.
Case Study – Westpac Banking, SR 58924478
• As we can see, device 1D55 still has some extents located in pool
VP_GREEN.
Case Study – Westpac Banking, SR 58924478
• Further analysis (why FAST VP stopped moving data again):
• The PSE dialed into the box and determined that not all extents had been moved because the R/T of the
EFD disks was, on average, not better than 50% of the response time of the FC tiers,
so some of the moves were blocked.
• FAST VP will not promote data to the next pool unless the response time to be gained is greater than
x% according to these rules:
– EFD <= 50% FC
– EFD <= 30% SATA
– FC <= 50% SATA
• Here you can see that pool 1, which is the EFD pool, is giving a 4.8 ms response time and FC pool
2 is giving an 8.1 ms response time: EFD > 50% FC.
• As such, FAST VP will not move data into the EFD pool.
Case Study – Westpac Banking, SR 58924478
• Solution for the response-time checking issue:
• As a workaround, the PSE disabled the R/T check so that it would not block
any remaining moves. The extents located in VP_GREEN then moved
as expected to the EFD pool VP_RED and back to the FC pool VP_BLUE
100%, after the FAST policy was adjusted to 0/100/0.
• Finally, all the data tracks of device 1D55 were moved to pool VP_BLUE.
• The FC/EFD response time is only one of the parameters that FAST uses to
determine which extents get moved. Here the policy was basically used to act like VLUN
Migration, which is hardly a normal FAST workload. The R/T check would need to be blocking FAST
movement in a real-world implementation (i.e., a typical workload);
engineering would not approve drive swaps purely on the basis that
Gen2 has a longer R/T than Gen3.
• The reason the customer could not perform a VLUN Migration in this case:
• Device 1D55 started off in the VP_GREEN pool. Using VLUN migrate to move the data
to VP_BLUE without any modification would hit the bug listed in KB 92545.
• They tried to create a FAST tier, assign the TARGET pool to it, and retry the VLUN
migrate, but it failed because device 1D55 was already bound to VP_BLUE, not
VP_GREEN.
Case Study – Capital Group, SR 59368480
• Symmetrix VMAX 20K
• The customer reports that after removing Thin Pool
FC_T2_P1_49 from FAST VP and running symmigrate
to move data out of this pool into FC_T2_P1, he still
sees new allocations in pool FC_T2_P1_49
from TDEVs that are not bound to this pool.
• The PSE Lab and SSG were engaged for this issue.
Case Study – Capital Group, SR 59368480
• Root cause:
• This could be related to the fact that "any new
device allocation task to the pool will be put in the
task queue. If the pool is disassociated from the
tier while the device allocation tasks are still in the
queue, FAST will complete the task regardless of
whether the pool is still under FAST control. That's
why there were new allocations to the pool after it was
removed from the tier. However, there should be
no more new allocations to the pool after the tasks
in the queue have been completed."
Q&A
THANK YOU
More Related Content

What's hot

Symm configuration management
Symm configuration managementSymm configuration management
Symm configuration management
.Gastón. .Bx.
 
102550121 symmetrix-foundations-student-resource-guide
102550121 symmetrix-foundations-student-resource-guide102550121 symmetrix-foundations-student-resource-guide
102550121 symmetrix-foundations-student-resource-guide
Amit Sharma
 
Vnx series-technical-review-110616214632-phpapp02
Vnx series-technical-review-110616214632-phpapp02Vnx series-technical-review-110616214632-phpapp02
Vnx series-technical-review-110616214632-phpapp02
Newlink
 

What's hot (20)

Time finder
Time finderTime finder
Time finder
 
SCSI-3 PGR Support on Symm
SCSI-3 PGR Support on SymmSCSI-3 PGR Support on Symm
SCSI-3 PGR Support on Symm
 
Symm configuration management
Symm configuration managementSymm configuration management
Symm configuration management
 
EMC VNX
EMC VNXEMC VNX
EMC VNX
 
102550121 symmetrix-foundations-student-resource-guide
102550121 symmetrix-foundations-student-resource-guide102550121 symmetrix-foundations-student-resource-guide
102550121 symmetrix-foundations-student-resource-guide
 
EMCSymmetrix vmax-10
EMCSymmetrix vmax-10EMCSymmetrix vmax-10
EMCSymmetrix vmax-10
 
VNX Overview
VNX Overview   VNX Overview
VNX Overview
 
EMC FAST VP for Unified Storage Systems
EMC FAST VP for Unified Storage Systems EMC FAST VP for Unified Storage Systems
EMC FAST VP for Unified Storage Systems
 
Storage Area Networking: SAN Technology Update & Best Practice Deep Dive for ...
Storage Area Networking: SAN Technology Update & Best Practice Deep Dive for ...Storage Area Networking: SAN Technology Update & Best Practice Deep Dive for ...
Storage Area Networking: SAN Technology Update & Best Practice Deep Dive for ...
 
EMC Vnx master-presentation
EMC Vnx master-presentationEMC Vnx master-presentation
EMC Vnx master-presentation
 
Emc
EmcEmc
Emc
 
Emc isilon technical deep dive workshop
Emc isilon technical deep dive workshopEmc isilon technical deep dive workshop
Emc isilon technical deep dive workshop
 
Emc data domain technical deep dive workshop
Emc data domain  technical deep dive workshopEmc data domain  technical deep dive workshop
Emc data domain technical deep dive workshop
 
Xiv overview
Xiv overviewXiv overview
Xiv overview
 
Vmax architecture
Vmax architectureVmax architecture
Vmax architecture
 
EMC: VNX Unified Storage series
EMC: VNX Unified Storage seriesEMC: VNX Unified Storage series
EMC: VNX Unified Storage series
 
Vnx series-technical-review-110616214632-phpapp02
Vnx series-technical-review-110616214632-phpapp02Vnx series-technical-review-110616214632-phpapp02
Vnx series-technical-review-110616214632-phpapp02
 
Emc vnx2 technical deep dive workshop
Emc vnx2 technical deep dive workshopEmc vnx2 technical deep dive workshop
Emc vnx2 technical deep dive workshop
 
Ibm spectrum virtualize 101
Ibm spectrum virtualize 101 Ibm spectrum virtualize 101
Ibm spectrum virtualize 101
 
IBM SAN Volume Controller Performance Analysis
IBM SAN Volume Controller Performance AnalysisIBM SAN Volume Controller Performance Analysis
IBM SAN Volume Controller Performance Analysis
 

Similar to FAST VP Deep Dive

Citrix Synergy 2014 - Syn232 Building a Cloud Architecture and Self- Service ...
Citrix Synergy 2014 - Syn232 Building a Cloud Architecture and Self- Service ...Citrix Synergy 2014 - Syn232 Building a Cloud Architecture and Self- Service ...
Citrix Synergy 2014 - Syn232 Building a Cloud Architecture and Self- Service ...
Citrix
 
Cisco at v mworld 2015 vmworld - cisco mds and emc xtrem_io-v2
Cisco at v mworld 2015 vmworld - cisco mds and emc xtrem_io-v2Cisco at v mworld 2015 vmworld - cisco mds and emc xtrem_io-v2
Cisco at v mworld 2015 vmworld - cisco mds and emc xtrem_io-v2
ldangelo0772
 
Ceph Day Shanghai - On the Productization Practice of Ceph
Ceph Day Shanghai - On the Productization Practice of Ceph Ceph Day Shanghai - On the Productization Practice of Ceph
Ceph Day Shanghai - On the Productization Practice of Ceph
Ceph Community
 

Similar to FAST VP Deep Dive (20)

Advanced performance troubleshooting using esxtop
Advanced performance troubleshooting using esxtopAdvanced performance troubleshooting using esxtop
Advanced performance troubleshooting using esxtop
 
Citrix Synergy 2014 - Syn232 Building a Cloud Architecture and Self- Service ...
Citrix Synergy 2014 - Syn232 Building a Cloud Architecture and Self- Service ...Citrix Synergy 2014 - Syn232 Building a Cloud Architecture and Self- Service ...
Citrix Synergy 2014 - Syn232 Building a Cloud Architecture and Self- Service ...
 
FlashSystem 7300 Midrange Enterprise for Hybrid Cloud L2 Sellers Presentation...
FlashSystem 7300 Midrange Enterprise for Hybrid Cloud L2 Sellers Presentation...FlashSystem 7300 Midrange Enterprise for Hybrid Cloud L2 Sellers Presentation...
FlashSystem 7300 Midrange Enterprise for Hybrid Cloud L2 Sellers Presentation...
 
Ceph Day Beijing - Ceph all-flash array design based on NUMA architecture
Ceph Day Beijing - Ceph all-flash array design based on NUMA architectureCeph Day Beijing - Ceph all-flash array design based on NUMA architecture
Ceph Day Beijing - Ceph all-flash array design based on NUMA architecture
 
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA ArchitectureCeph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture
 
z/VM Performance Analysis
z/VM Performance Analysisz/VM Performance Analysis
z/VM Performance Analysis
 
Presentation data domain advanced features and functions
Presentation   data domain advanced features and functionsPresentation   data domain advanced features and functions
Presentation data domain advanced features and functions
 
INF7827 DRS Best Practices
INF7827 DRS Best PracticesINF7827 DRS Best Practices
INF7827 DRS Best Practices
 
Virtual Storage Center
Virtual Storage CenterVirtual Storage Center
Virtual Storage Center
 
Session 7362 Handout 427 0
Session 7362 Handout 427 0Session 7362 Handout 427 0
Session 7362 Handout 427 0
 
Cisco at v mworld 2015 vmworld - cisco mds and emc xtrem_io-v2
Cisco at v mworld 2015 vmworld - cisco mds and emc xtrem_io-v2Cisco at v mworld 2015 vmworld - cisco mds and emc xtrem_io-v2
Cisco at v mworld 2015 vmworld - cisco mds and emc xtrem_io-v2
 
Introducing MFX for z/OS 2.1 & ZPSaver Suite
Introducing MFX for z/OS 2.1 & ZPSaver SuiteIntroducing MFX for z/OS 2.1 & ZPSaver Suite
Introducing MFX for z/OS 2.1 & ZPSaver Suite
 
Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B...
Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B...Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B...
Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B...
 
VMworld 2013: How SRP Delivers More Than Power to Their Customers
VMworld 2013: How SRP Delivers More Than Power to Their Customers VMworld 2013: How SRP Delivers More Than Power to Their Customers
VMworld 2013: How SRP Delivers More Than Power to Their Customers
 
DB2 for z/OS - Starter's guide to memory monitoring and control
DB2 for z/OS - Starter's guide to memory monitoring and controlDB2 for z/OS - Starter's guide to memory monitoring and control
DB2 for z/OS - Starter's guide to memory monitoring and control
 
VMworld 2013: Just Because You Could, Doesn't Mean You Should: Lessons Learne...
VMworld 2013: Just Because You Could, Doesn't Mean You Should: Lessons Learne...VMworld 2013: Just Because You Could, Doesn't Mean You Should: Lessons Learne...
VMworld 2013: Just Because You Could, Doesn't Mean You Should: Lessons Learne...
 
SanDisk: Persistent Memory and Cassandra
SanDisk: Persistent Memory and CassandraSanDisk: Persistent Memory and Cassandra
SanDisk: Persistent Memory and Cassandra
 
Tổng quan công nghệ Net backup - Phần 2
Tổng quan công nghệ Net backup - Phần 2Tổng quan công nghệ Net backup - Phần 2
Tổng quan công nghệ Net backup - Phần 2
 
Ceph Day Shanghai - On the Productization Practice of Ceph
Ceph Day Shanghai - On the Productization Practice of Ceph Ceph Day Shanghai - On the Productization Practice of Ceph
Ceph Day Shanghai - On the Productization Practice of Ceph
 
Platform Security Summit 18: Xen Security Weather Report 2018
Platform Security Summit 18: Xen Security Weather Report 2018Platform Security Summit 18: Xen Security Weather Report 2018
Platform Security Summit 18: Xen Security Weather Report 2018
 

FAST VP Deep Dive

  • 1. 1EMC CONFIDENTIAL—INTERNAL USE ONLY. FAST VP Deep Dive Version 1.0, Jan 2014 Kevin Wang An Introduction of the latest cutting-edge technology in Tiered Storage Advanced FAST VP Feature Introduction
  • 2. 2EMC CONFIDENTIAL—INTERNAL USE ONLY. Contents Component Slide # Documentation 3 FAST Specific Errors 4 FAST VP Concepts Review 5 VP Compression and Time to Compress 11 FAST VP with FTS 27 FAST VP Allocation by Policy 37 FAST VP SRDF Coordination 42 Case Study 47 Q & A 59
  • 3. 3EMC CONFIDENTIAL—INTERNAL USE ONLY. Documentation • Detailed documents/whitepapers on FAST VP can be found in support.emc.com, will reference some of these in the following slides which include: – FAST VP for EMC Symmetrix VMAX Theory and Best Practices for Planning and Performance – Implementing Fully Automated Storage Tiering for Virtual Pools (FAST VP) for EMC Symmetrix VMAX Series Arrays – EMC Solutions Enabler Symmetrix Array Controls CLI Product Guide (latest version) – Best Practices for Fast, Simple, Capacity Allocation with EMC Symmetrix Virtual Provisioning • Other training material can refer to FAST VP Solution Support Session: FAST VP Step by Step
  • 4. 4EMC CONFIDENTIAL—INTERNAL USE ONLY. FAST Specific Errors • Ucode: – General VP errors: 7F10, 7F3F, 7F43 – Error sent by engine: 24AF, 20AF, 04DA • Engine: – The engine can go into degraded mode if it cannot perform some function. – When we go into degraded mode the GUI on the SP will show that we are in this state. – symfast –sid xxx list –state will also show this state
  • 5. 5EMC CONFIDENTIAL—INTERNAL USE ONLY. FAST VP Concepts Review
  • 6. 6EMC CONFIDENTIAL—INTERNAL USE ONLY. Two Variations on FAST • FAST (also referred to as FAST DP) supports disk group provisioning for Symmetrix VMAX: – Full LUN movement of disk group provisioned Thick Devices – Supports FBA and CKD devices – Introduced in Enginuity 5874 – Not applicable to VMAX 10K arrays • FAST VP supports virtual provisioning for Symmetrix VMAX: – SubLUN movement of Thin Devices – Introduced in Enginuity 5875 with support for FBA devices – Enginuity 5876 added support for CKD devices
  • 7. 7EMC CONFIDENTIAL—INTERNAL USE ONLY. When to use FAST and FAST VP • Workloads with a higher skew will benefit more from FAST or FAST VP: – Workloads with skew above 80/20 are considered good candidates. – Unbalanced workloads direct a higher percentage of I/O to a small percentage of the storage allocated. – Heavily utilized devices are moved to faster technologies, to reduce response time. – Under utilized devices are moved to less expensive technologies, to reduce cost. • Workloads with a lower skew may not benefit: – Workloads with a skew closer to 50/50 (uniform workload) are less likely to contain candidates for promotion/demotion.
  • 8. 8EMC CONFIDENTIAL—INTERNAL USE ONLY. • 30% More Performance • 80% Less Footprint • 20% Lower Costs • 40% More Performance • 60% Less Footprint • 15% Lower Costs • 20% More Performance • 50% Less Footprint • Same Costs Sample Performance Data (94% >= 80/20) Heavy Skew 95% of IO on 5% of data ~12% of workloads EFD 3% FC 0% SATA 97% Capacity 1 Moderate Skew 90% of IO on 10% of data ~45% of workloads EFD 3% FC 15% SATA 82% Capacity 2 Low Skew 80% of I/O on 20 % of data ~37% of workloads EFD 3% FC 27% SATA 70% Capacity 3
  • 9. 9EMC CONFIDENTIAL—INTERNAL USE ONLY. FAST VP for Symmetrix • Without FAST VP a Thin Device is bound to a pool which contains disks with same technology, RAID protection and rotational speed. • With Fast VP busier Thin Device extents are moved to pool(s) in a faster storage tier though Thin Device stays bound to original pool. Untiered VP Storage with busy and less busy Thin Device Extents residing in same Pool Tiered Virtually Provisioned Storage with busier extents on Faster tiers 0 0 0 000 0 0 0 01 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 Tier 0 (EFD) Tier 1 (FC) Tier 2 (SATA) T H I N POOLS 9
  • 10. 10EMC CONFIDENTIAL—INTERNAL USE ONLY. Elements of FAST • Symmetrix Tier – a shared storage resource with common technologies • FAST Policy – manages data placement and movement across Storage Types to achieve service levels for one or more Storage Groups • Storage Group – logical grouping of standard devices for common management 10 FAST VP Tiers FAST VP Policies Storage Groups Thin Devices ThinProd1_SG ThinProd2_SG ThinDev_SG R53_EFD_Pool EFD R5 Thin Tier R66_FC_Pool FC R6 Thin Tier R614_SATA_Pool SATA R6 Tier Production 25% 50% 25% Development 25% 100%
  • 11. 11EMC CONFIDENTIAL—INTERNAL USE ONLY. VP Compression and Time to Compress
  • 12. 12EMC CONFIDENTIAL—INTERNAL USE ONLY. VP Compression • Saves space within a thin pool • Works with all TDEVs – Fixed Block Architecture (FBA), including D910 on IBM i – Count Key Data (CKD) • Supported with local and remote replication products – TimeFinder – Symmetrix Remote Data Facility (SRDF) • Supported with internal data movement products – Virtual LUN VP mobility (VLUN) – FAST for Virtual Pools (FAST VP)
  • 13. 13EMC CONFIDENTIAL—INTERNAL USE ONLY. VP Compression Details • Requires Enginuity 5876 code (Seine) and SE 7.5+ • Pools enabled for VP compression at creation or by setting the attribute on an existing pool • Once enabled, a background task reserves capacity in the pool to temporarily uncompress data – This capacity is called the Decompress Read Queue (DRQ) – Capacity ranges between 76 and 3000 MB depending on pool size • Compression can be initiated – Manually using SYMCLI or Unisphere – FAST VP will automatically compresses infrequently used data
  • 14. 14EMC CONFIDENTIAL—INTERNAL USE ONLY. Considerations When Using VP Compression• Limit of 10 terabytes of compressed data per VMAX engine • Compression can be disabled when no longer needed • Disabling compression does not uncompress data – Data must be uncompressed before disabling compression – Space reserved for DRQ returned to pool • Allocated, but unwritten space will be reclaimed • Persistent allocations cannot be compressed • FTS Encapsulated devices cannot be compressed
  • 15. 15EMC CONFIDENTIAL—INTERNAL USE ONLY. Data Access Read Write • Uncompresses the track into reserved area in the pool • Space in the reserved area is controlled by a Least Recently Used (LRU) algorithm • LRU ensures that space is always available to uncompress a track • Recompression is not required • Written in uncompressed form to the thin device • If under FAST control, data will be compressed based on time of last access • Can be manually compressed
  • 16. 16EMC CONFIDENTIAL—INTERNAL USE ONLY. Migration Source State Target Compression Enabled Target Compression Disabled Compressi on Enabled Compressed tracks are migrated to the target as compressed tracks. Target pool: Utilized space increases by the compressed size and the free space decreases by the compressed size. Source pool: Utilized space decreases by the compressed size and the free space increases by the compressed size. Compressed device cannot be migrated to a pool with compression disabled. Compressed device must be uncompressed before it can be migrated. Compressi on Disabled Uncompressed tracks are migrated to the target pool as uncompressed tracks. Target pool: Utilized space increases by the uncompressed size and the free space decreases by the uncompressed size. Source pool: Utilized space decreases by the uncompressed size and the free space increases by the uncompressed Uncompressed tracks are migrated to the target pool. Target pool: Utilized space increases by the uncompressed size and the free space decreases by the uncompressed size. Source pool: Utilized space decreases by the uncompressed size and the free space increases by the uncompressed
  • 17. 17EMC CONFIDENTIAL—INTERNAL USE ONLY. Enabling/Disabling VP Compression - SYMCLI • To create a new pool with compression enabled – symconfigure –sid 78 -cmd “create pool 101_SATAR6, type = thin, vp_compression = Enable;” commit • To enable compression on an existing pool – symconfigure –sid 78 –cmd “set pool 101_SATAR6, type = thin, vp_compression = Enable;” commit • To disable compression on an existing pool – symconfigure –sid 78 –cmd “set pool 101_SATAR6, type = thin, vp_compression = Disable;” commit
  • 18. 18EMC CONFIDENTIAL—INTERNAL USE ONLY. Manual Compression - SYMCLI • SYMCLI Syntax symdev –sid 265 –file archive.txt compress symdev –sid 265 –devs 025:02A compress –stop symsg –sid 265 –sg ESXsg compress symdg –sid 265 –g ESXdg uncompress symcg –sid 265 –cg VMcg uncompress –stop • Stopping the compress action does not uncompress data that has been compressed – Manual intervention is required
  • 19. 19EMC CONFIDENTIAL—INTERNAL USE ONLY. Enabling Compression TDEV ED CBA Extents allocated for Decompress-Read-Queue (DRQ)Command issued to enable compression for thin pool DRQ: no data DRQ: no data DRQ: no data
  • 20. 20EMC CONFIDENTIAL—INTERNAL USE ONLY. Dir. Header Compression Flow TDEV Compressed ExtentED CBA E C A DRQ: no data DRQ: no data DRQ: no data User issues command to compress TDEVAllocate compressed extentEvaluate extent AStore compressed data for extent A and update pointers Reclaim uncompressed extent AEvaluate extent BZero-Reclaim extent B, which contains all zero data Evaluate extent CStore compressed data for extent C and update pointers Reclaim uncompressed extent CEvaluate extent DSkip extent D (less than 50% compressible)Evaluate extent EStore compressed data for extent E and update pointers Reclaim uncompressed extent E
  • 21. 21EMC CONFIDENTIAL—INTERNAL USE ONLY. Dir. Header Read Flow TDEV Compressed ExtentD E C A DRQ: no data DRQ: no data DRQ: no dataDRQ: C C > (DRQ) Host requests data from extent DExtent D is uncompressed, so the data is returned as usual Host requests data from extent CExtent C is compressed, so its data is uncompressed into an unused – or the least recently used – extent in the DRQ Extent C’s uncompressed data is returned to the host from the DRQ Note, extent C’s data remains compressed in the event the extent allocated from the DRQ is required to service another read from a compressed extent
  • 22. 22EMC CONFIDENTIAL—INTERNAL USE ONLY. Dir. Header Write Flow TDEV Compressed ExtentD E C A DRQ: no data DRQ: no data DRQ: no dataDRQ: C C > (DRQ) A Host writes to extent DExtent D is uncompressed, so write flow is handled normally Host writes to extent AExtent A is compressed, so a new extent must be allocated to decompress the data After extent A is decompressed, pointers are updated to reflect the data’s new location Write to extent A continues as normalNOTE: Extent A will not be automatically recompressed
  • 23. 23EMC CONFIDENTIAL—INTERNAL USE ONLY. Time to Compress • VP Compression was introduced in 5876 code (Seine) • FAST VP’s implementation of the feature is to automate VP Compression at the sub lun level for thin devices that are under Fast control. • The Time to Compress control parameter is what enables/disables the feature • Feature is set to disabled by default, the parameter defaults to “Never” = disabled • To enable the feature the time to Compress is set to a “time” value. Any FAST extents that are idle for greater than this value are candidates for automatic compression. • Even if the extents qualify for compression the data will only get compressed if the pool is enabled for VP Compression.
  • 24. 24EMC CONFIDENTIAL—INTERNAL USE ONLY. Time to Compress • For customers time to compress can be set to a min of 40 days and a max of 400 days • For testing the time to compress can be set to much lower values. • Every FAST performance move now decompresses the data first before moving it (even if compression is not active on the system) • For more detailed info on Time to Compress see pages 45 & 46 in “Implementing Fully Automated Storage Tiering for Virtual Pools (FAST VP) for EMC Symmetrix VMAX Series Arrays”
  • 25. 25EMC CONFIDENTIAL—INTERNAL USE ONLY. Time to Compress • Enabling Compression on a pool • When creating the pool via symconfigure: – “create pool xxx, type=thin, vp_compression=ENABLE” • If pool already present via symconfigure: – “set pool xxx, type=thin, vp_compression=ENABLE” • Setting the Time to Compress • symfast –sid xxx set –control_parms –time_to_compress <NumDays> • Customers not allowed to set below 40 days • Inhouse can set to a minimum of 1 day via cli but need to put the following into the API options file “SYMAPI_MIN_TIME_TO_COMPRESS = 1” • Should enter this variable into options file both on host and Symm Service Processor
  • 26. 26EMC CONFIDENTIAL—INTERNAL USE ONLY. Compression Rate • The FAST VP Compression Rate determines the aggressiveness with which data is compressed • Can be configured between 1 and 10 with 5 as the default. The lower the value the more aggressive the rate of compression. • To set via Symcli: – symfast –sid xxx set –control_parms –fast_compression_rate <value>
  • 27. 27EMC CONFIDENTIAL—INTERNAL USE ONLY. FAST VP with FTS
  • 28. 28EMC CONFIDENTIAL—INTERNAL USE ONLY. Federated Tiered Storage Overview • FTS allows external storage arrays to be used as back-end disks for Symmetrix VMAX arrays • LUNs on external arrays can be used by the Symmetrix as: – Raw storage space for the creation of Symmetrix devices – Data sources that can be encapsulated and the information made available to a host accessing the Symmetrix • Symmetrix presents the external storage as unprotected volumes – Data protection is provided by the external array • FTS is a free Enginuity feature
  • 29. 29EMC CONFIDENTIAL—INTERNAL USE ONLY. Components for FTS - 1 • DX Directors – Stands for DA eXternal and behaves just like a DA – Handles external LUs as though they are Symmetrix drives – Runs on Fiber Optic SLICs just like FA and RF emulations 8E0, 8E1 7E0, 7E1 DX Director Pair VMAX 40K Engine
  • 30. 30EMC CONFIDENTIAL—INTERNAL USE ONLY. Components for FTS - 2 • eDisks – Associated with an external SCSI logical unit – Accessible through the SAN – Belong to virtual, unprotected RAID groups – Also referred to as “external spindle” • External Disk Group – Virtual groups created to contain eDisks – Group numbers start with 512 • Virtual RAID Group – Created for each eDisk – Not locally protected in the Symmetrix – Relies on protection provided by the external array
  • 31. 31EMC CONFIDENTIAL—INTERNAL USE ONLY. FTS Virtualization - 1 • Two modes of operation for external storage – External provisioning uses storage as raw capacity, data is lost – Encapsulation allows preservation of data on external storage • Standard Encapsulation • Virtual Provisioning Encapsulation • External provisioning – External disk (spindle) is created and used as raw capacity for new Symmetrix devices – External disk groups have numbers starting with 512 – External disks are displayed as unprotected drives, RAID protection is expected to be provided by the remote array – Virtual Provisioning Data Devices (TDATs) can be created on external disks
  • 32. 32EMC CONFIDENTIAL—INTERNAL USE ONLY. FTS Virtualization - 2 • Standard Encapsulation – Creates an eDisk (spindle) for each external LUN and adds it to an external disk group – A Symmetrix device is also created at the same time – Access to user data on the device is permitted through the Symmetrix device • Virtual Provisioning Encapsulation – Creates an eDisk (spindle) for each external LUN and adds it to an external disk group – A data device and a fully allocated thin device are also created – This thin device can be used for data migration using VLUN migration for Virtual Provisioning
  • 33. 33EMC CONFIDENTIAL—INTERNAL USE ONLY. FAST VP with FTS • FAST VP (and VLUN) and are fully supported with FTS • FTS tier was considered the lowest storage tier regardless of its actually technology and performance prior to 5876.159.102 and SE 7.4 (Fast Policy can have 4 Tiers :- EFD -> FC -> SATA -> external FTS tier) • Starting with that release (and SE 7.5), a FTS tier can be any tier in a FAST VP policy - a.k.a User Defined FTS. • When an external tier is created, a technology type (EFD, FC, or SATA) can be specified, in addition to the external location. By specifying a technology type, a related expectation of performance is associated with the tier. This will then affect the tiers ranking amongst the other 3 tiers when added to a FAST VP policy. • For the 6 possible tier types that can be included in a FAST VP policy, the rankings, in descending order of performance, are as follows:-  Internal EFD -> External EFD -> Internal FC -> External FC -> Internal SATA -> External SATA
  • 34. 34EMC CONFIDENTIAL—INTERNAL USE ONLY. FAST VP with FTS • After an external tier has been created, the technology type can be modified. If the type is changed, the ranking of the tier will change within its Policy. As such, an external tier can be upgraded or downgraded within a Policy. (note: The technology type of a tier can only be modified for an external tier) • Enginuity executes an initial performance discovery of the tier when it is first added to a FAST VP policy. This is done to ensure the performance of the external tier is in line with the expectations of the external technology (EFD = 3ms / FC = 14ms / Sata ~ 20ms+). Subsequent lighter performance discoveries are done periodically to validate or incrementally adjust the previously discovered performance. • If the performance of an external tier falls below expectations, an event is triggered alerting users to this (event ID 1511). Users can resolve the discrepancy by either addressing the cause of the degraded performance on the external tier, or by lowering the expectations of the tier. • symaudit list -sid <Box#) -text , would detail a User if a Tier was underperforming (below, FTS Tier Ext_FC1 is an external Tier with FC drives but defined as EFD , FTS Tier Ext_FC2 is defined as FC :- – 03/14/13 18:15 Fast Other SE29b FAST Tier (Ext_FC1) performing worse than expected (LOW) Actual Response Time: 28.09 ms Expected Response Time: 3 ms (or less) – 03/14/13 18:25 Fast Other SE29d FAST Tier (Ext_FC2) performing worse than expected (LOW) Actual Response Time: 38.9 ms Expected Response Time: 14 ms (or less)
  • 35. 35EMC CONFIDENTIAL—INTERNAL USE ONLY. FAST VP with FTS
    • FTS devices are added to the bin file under disk group number 512
    • Device A7 is an eDisk TDAT FTS device
    • We can create a virtually provisioned pool of TDATs using these devices. The mirror type for FTS devices is "NORMAL" (Unprotected) - only FTS devices can have an unprotected mirror/RAID type. Only externally provisioned data devices can be added to FAST VP tiers (encapsulated data devices cannot).
    • To add the pool to a tier:
      – symtier -sid <Box#> create -name <Tier Name> -external -tgt_unprotected -technology FC -vp -pool <Pool Name>
      – 8D,,,FAST,LIST,TIER (Tier 8 / Pool A = External (Y), set as FC and unprotected)
    • CLI commands such as symtier list, symfast list -fp -v, and symcfg show -pool <Pool Name> -thin display similar information.
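    • For context, a minimal sketch of building the thin pool beneath such a tier (the pool name and device range are hypothetical; the symconfigure thin-pool syntax follows standard Virtual Provisioning usage):
      – symconfigure -sid 1234 -cmd "create pool ExtPool type=thin;" commit
      – symconfigure -sid 1234 -cmd "add dev 0A7:0AF to pool ExtPool type=thin, member_state=ENABLE;" commit
      – The symtier create command above then points the new external tier at this pool.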
  • 36. 36EMC CONFIDENTIAL—INTERNAL USE ONLY. FAST VP with FTS
    • 98,FAST,SUMM (or 8D,,,FAST,LIST,POOL,RESP) shows that External Tier 8 / Pool A is set as FC (and is color coded accordingly - white = EFD, blue = FC, purple = SATA; red plus white, blue, or red for FTS)
    • To modify a tier's technology (and performance expectation):
      – symtier -sid <Box#> modify -tier_name <Tier Name> -technology <EFD|FC|SATA>
      – Here Tier 8 has been changed from FC, as above, to EFD (note: its actual technology on the external array is FC).
    • 98,FAST,MOVE,READ,<SG Number> shows the movement policy and the associated tier rankings within the policy. Here the ranking is EFD -> External EFD -> FC -> SATA, so our external tier is ranked 2nd: it sits above the internal FC and SATA tiers because we defined it as EFD, even though it is actually FC on the remote FTS array.
  • 37. 37EMC CONFIDENTIAL—INTERNAL USE ONLY. FAST VP Allocation by Policy
  • 38. 38EMC CONFIDENTIAL—INTERNAL USE ONLY. Allocation by Policy
    • Allows thin devices to be allocated from any of the pools under the FAST policy that the thin device is associated with
    • Introduced in 5876 (Seine)
    • If preferred, all thin devices under FAST control can be bound to one pool in the policy; when that pool fills up, allocations automatically spill over into the other pools in the policy. The criteria for choosing the pool to allocate from are:
      – If performance metrics are available for the extent, allocate from a pool in the appropriate tier
      – If performance metrics are not available, allocate from the bound pool
      – If the bound pool is full, choose the tier that has the lowest capacity in the policy
    • Compliance is honored unless all other pools are full
    • A detailed description of the feature can be found on pages 41-43 of "Implementing Fully Automated Storage Tiering for Virtual Pools (FAST VP) for EMC Symmetrix VMAX Series Arrays"
  • 39. 39EMC CONFIDENTIAL—INTERNAL USE ONLY. Allocation by Policy
    • The feature is disabled by default
    • To enable via SYMCLI:
      – symfast -sid xxx set -control_parms -vp_allocation_by_fp ENABLE
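    • For example (hypothetical SID; treating the list -control_parms verification step as an assumption), enabling and then confirming the setting:
      – symfast -sid 1234 set -control_parms -vp_allocation_by_fp ENABLE
      – symfast -sid 1234 list -control_parms
      – The second command lists the FAST controller settings, where the VP allocation by FAST policy attribute should now show as Enabled.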
  • 40. 40EMC CONFIDENTIAL—INTERNAL USE ONLY. Allocation Flow
    • An allocation request is handled as follows:
      1. If Allocation by Policy is not enabled, try to allocate from the bound pool; if that fails, the allocation fails.
      2. If enabled and the extent has a valid tier and is in compliance, try to allocate from each pool in the extent's assigned tier.
      3. Otherwise, try to allocate from the bound pool.
      4. If all pools tried so far have failed, select tiers in the policy from smallest to largest and try each pool in each tier in turn.
      5. If all tiers fail, the allocation fails.
  • 41. 41EMC CONFIDENTIAL—INTERNAL USE ONLY. FAST VP allocation by policy - misc.
    • FAST VP controlled allocations will not obey PRC (Pool Reserved Capacity):
      – Allocations can violate PRC, since PRC exists only to protect against FAST movements and is not designed to block host allocations
      – It is therefore possible to exhaust the higher-performing tiers (such as EFD) if heavy new allocations occur on a system whose capacity matches its policy 100%
  • 42. 42EMC CONFIDENTIAL—INTERNAL USE ONLY. FAST VP SRDF Coordination
  • 43. 43EMC CONFIDENTIAL—INTERNAL USE ONLY. FAST VP SRDF Coordination • By default, FAST VP will act completely independently on each side of an SRDF link. Typically, the R1 and R2 devices in an SRDF pairing will undergo very different workloads - read/write mix for the R1 and writes only for the R2. As a result, decisions regarding data placement on each side of the link could also, potentially, differ. • Enginuity 5876 introduces SRDF awareness for FAST VP, allowing performance metrics to be periodically transmitted from R1 to R2, across the SRDF link. These R1 metrics are merged with the R2 metrics, allowing FAST VP promotion/demotion decisions for R2 data to account for R1 workload. • SRDF coordination can be enabled or disabled per storage group, with the default being disabled.
  • 44. 44EMC CONFIDENTIAL—INTERNAL USE ONLY. FAST VP SRDF Coordination
    • Example: RDF device 1B40
    • 98,FAST,STAT,MOVE,1B40,1,SHOW (showing movement policy scores), here on the R1 side
    • 98,FAST,STAT,MOVE,1B40,1,SHOW on the R2 side (note that the scores differ from the R1)
    • 98,FAST,STAT,PROF,1B40,1,SHOW (displays the IO profiles and counts for the device), here for the R1
  • 45. 45EMC CONFIDENTIAL—INTERNAL USE ONLY. FAST VP SRDF Coordination
    • 98,FAST,STAT,PROF,1B40,1,SHOW (the R2 device has different IO profiles and counts to the R1 above)
    • To enable RDF coordination (issue against the R1 SG): symfast -sid <Box#> modify -sg <SG Name> -rdf_coordination ENABLE -fp_name <Policy Name>
    • The CLI command symfast -sid <Box#> show -association -sg <SG Name> shows whether RDF coordination is currently enabled or disabled. This can also be verified at Inlines with 8D,,,FAST,LIST,ASSN (a flag of "R" indicates RDF coordination).
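    • Putting the commands together (the SID, storage group, and policy names below are hypothetical), enabling coordination on the R1 side and confirming it:
      – symfast -sid 1234 modify -sg prod_sg -rdf_coordination ENABLE -fp_name Gold_Policy
      – symfast -sid 1234 show -association -sg prod_sg
      – The association output should now report RDF coordination as Enabled for the storage group.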
  • 46. 46EMC CONFIDENTIAL—INTERNAL USE ONLY. FAST VP SRDF Coordination
    • Later, the IO profiles and counts from the R2 side look like those on the R1 side (R2 screen shot below). Also note the addition of an RDF profile for each extent.
    • Now the R2 movement policy scores also look like the R1 side (R2 screenshot below).
    • In this mode, the tier allocations of R1 devices and their respective R2 devices will look very similar (as seen with symfast -sid <Box#> list -association -sg <SG Name> -demand): the placement of R2 data will be much closer to that of the R1 data. Because the policies may differ on each array, FAST VP may not match the allocations exactly.
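    • To compare the resulting per-tier demand on both sides (hypothetical SIDs and SG names):
      – symfast -sid 1234 list -association -sg prod_sg -demand (on the R1 array)
      – symfast -sid 5678 list -association -sg prod_sg -demand (on the R2 array)
      – With coordination enabled and comparable policies on each array, the tier allocations reported for the R1 and R2 groups should converge over time.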
  • 48. 48EMC CONFIDENTIAL—INTERNAL USE ONLY. Case Study - Westpac Banking, SR 58924478
    • Symmetrix VMAX 20K
    • Because of a known bug with VLUN migration (KB 92545), the customer was told to use FAST VP as a workaround to move data from pool VP_GREEN to pool VP_BLUE until an upgrade to code 5876.229 could be scheduled. FAST VP did not move the data as they wanted, so remote support was engaged.
    • The PSE Lab was engaged and worked for more than one month to help the customer with this issue.
  • 49. 49EMC CONFIDENTIAL—INTERNAL USE ONLY. Case Study - Westpac Banking, SR 58924478
    • 23 GB of data is out of compliance (OoC) and cannot be moved into the FT_BLUE tier (VP_BLUE is the only thin pool in this tier)
    • The storage group used in this FAST VP configuration is I_tiermove_sg; device 1D55 is the only member of that SG
  • 50. 50EMC CONFIDENTIAL—INTERNAL USE ONLY. Case Study - Westpac Banking, SR 58924478
    • Preliminary analysis (why FAST VP could not move data in this scenario):
  • 51. 51EMC CONFIDENTIAL—INTERNAL USE ONLY. Case Study - Westpac Banking, SR 58924478
    • Initially, FAST VP was implemented to move extents from one FC tier (VP_GREEN) to another FC tier (VP_BLUE). This was not suitable, because FAST does not cater for movement between disks of the same type and technology.
    • Preliminary conclusion and suggestion:
      – FAST VP could not move the data because both tiers are built on the same drive type and technology.
      – EMC suggested creating a new FAST VP policy containing the EFD tier VP_RED and the FC tier VP_BLUE, moving the data out of the existing FC tier VP_GREEN into those two tiers, and then consolidating it into the destination FC tier VP_BLUE.
  • 52. 52EMC CONFIDENTIAL—INTERNAL USE ONLY. Case Study - Westpac Banking, SR 58924478
    • The customer followed the suggestion, but a new problem was found:
      – A single FAST policy was created per EMC's suggestion, and FAST was enabled with the tier percentages set to 100/100/0. The intent was to move all extents of SG I_tiermove_sg located in FC pool VP_GREEN up to EFD pool VP_RED, and then back down to VP_BLUE by changing the policy later. This indirect route was needed because data cannot be moved between VP_GREEN and VP_BLUE via FAST VP, as both are FC disk technology.
    • FAST VP did not move all the data to the target tier under the new policy, so the next step was to find out why.
  • 53. 53EMC CONFIDENTIAL—INTERNAL USE ONLY. Case Study - Westpac Banking, SR 58924478
    • As we can see, device 1D55 still has some extents located in pool VP_GREEN.
  • 54. 54EMC CONFIDENTIAL—INTERNAL USE ONLY. Case Study - Westpac Banking, SR 58924478
    • Further analysis (why FAST VP stopped moving data again):
      – The PSE dialed into the box and determined that not all extents had been moved because the response time of the EFD disks was, on average, not better than 50% of the response time the customer was getting from the FC tiers, so some of the moves were blocked.
      – FAST VP will not promote data to a faster pool unless the response time to be gained satisfies these rules:
        • EFD <= 50% of FC
        • EFD <= 30% of SATA
        • FC <= 50% of SATA
      – Here pool 1 (the EFD pool) is giving a 4.8 ms response time and FC pool 2 is giving 8.1 ms, so EFD is more than 50% of FC.
      – As such, FAST VP will not move data into the EFD pool.
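    • Worked check using the numbers above: the EFD promotion threshold is 50% of the FC response time, i.e. 0.5 x 8.1 ms = 4.05 ms. The EFD pool is delivering 4.8 ms, and 4.8 / 8.1 is roughly 59%, which exceeds the 50% rule, so the promotion is blocked. The EFD pool would need to respond in 4.05 ms or less for the moves to proceed.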
  • 55. 55EMC CONFIDENTIAL—INTERNAL USE ONLY. Case Study - Westpac Banking, SR 58924478
    • Solution for the response-time check issue:
      – As a workaround, the PSE disabled the response-time check so it would no longer block the remaining moves. The extents in VP_GREEN then moved to the EFD tier VP_RED as expected, and back to the FC tier VP_BLUE once the FAST policy was adjusted to 0/100/0.
      – Finally, all the data tracks of device 1D55 were moved to pool VP_BLUE.
      – The FC/EFD response time is only one of the parameters FAST uses to determine which extents get moved. Using a policy to act like VLUN migration is hardly a normal FAST workload; the response-time check would need to be blocking FAST movement in a real-world scenario (i.e. a typical workload) before engineering would act, and drive swaps would not be approved purely because Gen2 drives have a longer response time than Gen3.
    • Why the customer could not perform a VLUN migration in this case:
      – Device 1D55 started off in the VP_GREEN pool. Using VLUN migration to move the data to VP_BLUE without any modification would hit the bug listed in KB 92545.
      – They tried to create a FAST tier, assign the target pool to it, and retried the VLUN migration, but it failed because device 1D55 was by then already bound to VP_BLUE, not VP_GREEN.
  • 56. 56EMC CONFIDENTIAL—INTERNAL USE ONLY. Case Study - Capital Group, SR 59368480
    • Symmetrix VMAX 20K
    • The customer reports that after removing thin pool FC_T2_P1_49 from FAST VP and running symmigrate to move data out of this pool into FC_T2_P1, he still sees new allocations in pool FC_T2_P1_49 from TDEVs that are not bound to it.
    • PSE Lab and SSG were engaged on this issue.
  • 58. 58EMC CONFIDENTIAL—INTERNAL USE ONLY. Case Study - Capital Group, SR 59368480
    • Root cause:
      – Any new device allocation task targeting the pool is placed in the task queue. If the pool is disassociated from the tier while device allocation tasks are still in the queue, FAST will complete those tasks regardless of whether the pool is still under FAST control. That is why new allocations appeared in the pool after it was removed from the tier. Once the queued tasks have completed, there should be no further new allocations to the pool.
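    • One way to watch the queued allocations drain (hypothetical SID; the -detail flag is an assumption) is to poll the pool until its allocated track count stops growing:
      – symcfg -sid 1234 show -pool FC_T2_P1_49 -thin -detail
      – Once the queued tasks have completed, repeated runs of this command should show no further increase in allocations, and the pool can then be drained or removed.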

Editor's Notes

  1. Good day, everyone. My name is Kevin Wang, and I'm a Product Support Engineer from the Enterprise & Mid-range Systems Division (EMSD). Over the last year, I've written some support materials to help TSEs gain a better understanding of tiered storage within Symmetrix, and it became clear to me that there's a lot of messaging I want to get out to the rest of the world. So, over the next hour, I hope to walk you through an overview of the Solution Support and, more importantly, give you some useful information to understand FAST VP deeply.
  4. In building FAST VP, we collected a lot of performance data: millions of I/O traces from over 3,500 storage arrays. What we learned gave us the roadmap for what would meet the performance needs. As shown here, some of the systems we examined have a very heavy skew: 95% of the workload was going to 5% of the LUNs. This was true for more than 10% of the arrays we examined. It was very easy to move this workload to a small area of EFD and put the rest on ATA - we took so much of the workload onto the EFD drives that the ATA drives were very capable of handling the rest. Almost half of the customers showed more moderate skew, where 90% of the workload went to 10% of the LUNs. These systems needed a little Fibre Channel capacity added to the mix to handle a bit more of the workload, since the I/O density was not high enough to justify placing all of the hot data on EFD. Another third of the customers showed a much lower skew, where 80% of the workload went to 20% of the LUNs. These systems needed a bit more Fibre Channel capacity to handle the still fairly warm data. In all of these configurations, we were able to use 3% flash to take on the hottest I/O. Some then used FC capacity to handle the remaining warm data, and ATA was able to handle the rest. These workloads made up over 94% of the systems that were measured. Based on this, EMC plans for the 80/20 skew, and so we start with a storage profile of 3% EFD, 27% FC, and 70% ATA. Of course, Tier Advisor can be used to produce more detailed models of specific workloads. However, many customers are interested in a starting 'safe' configuration that can be used for a variety of workloads, both those that are on the systems today and those yet to be built. These statistics allow us to recommend this 3/27/70 profile as a well-designed 'safe' configuration.
  7. Starting with the Enginuity 5876 Q4 2012 SR and Solutions Enabler 7.5, data on virtually provisioned volumes, or TDEVs, can be compressed to save space in the thin pool. Compression is supported on all TDEVs (FBA, IBM i, and CKD). VP Compression works in combination with the TimeFinder and SRDF replication products. VLUN and FAST VP can take advantage of the compression feature. Solutions Enabler and Unisphere for VMAX support the configuration, management and reporting of compression for thin pools and individual TDEVs.
  8. VP Compression requires Enginuity 5876 code (Seine). Compression is supported on existing data when Enginuity is upgraded to this version. Compression can be enabled as a pool attribute when creating a new thin pool. It can also be set on an existing pool and can be disabled if no longer needed. Once compression is enabled on a pool, a background process will run to reserve storage in the pool that will be used to temporarily uncompress data. No other compression processes can run until the background process completes. After the Decompress Read Queue has been created, data stored on thin devices can be compressed. Only allocations that have been written will be compressed. Any tracks that have not been written to will be reclaimed during the compression process, therefore, when a device is uncompressed, reclaimed allocations will no longer exist. Compression on a device is done as a background task, and can be initiated either manually or by FAST as part of its processing.
  9. Compressed data is limited to 10 TB per VMAX engine. Because all DA processors participate in compression operations, this is a total-array limit and is not related to where the data actually sits within the VMAX. For example, a 4-engine VMAX can have 40 TB of compressed data in total, located physically anywhere in the array. If compression is no longer desired on a pool, compression can be disabled by the user. When this is done, all compressed allocations are uncompressed and the space reserved for the DRQ is returned to the thin pool as free space. The persistent allocation type can be set on volumes to prevent them from being compressed: persistently allocated tracks of a TDEV cannot be compressed, and the unset command can be run to remove the persistent setting so compression can be performed. Devices that are encapsulated using Federated Tiered Storage are not compressible.
  10. A read of a compressed track temporarily uncompresses the track into the reserved storage area maintained in the pool. The space in this reserved area is controlled by a Least Recently Used (LRU) algorithm, ensuring that a track can always be uncompressed and that the most recently used tracks remain available in uncompressed form. Recompression is not required, since the data also remains in its compressed form. Writes always go to uncompressed allocations on the thin device. Over time, if the data has not been accessed, compression may occur again if the device is under FAST control. The device can also be manually compressed at any time.
  11. The symmigrate command will allow the migration of thin devices, both from and to pools that have compression enabled. The table displayed in this slide details how compression will factor into the thin device migrations. Prior to migrating any allocations, the total allocations for all relevant thin devices will be summed using the sizes outlined in the table, and must be less than the remaining free tracks in the target pool. If not, an error will be returned and no migrations will take place.
  12. The symconfigure syntax for enabling VP compression while creating a new thin pool, or on an existing pool, is shown in this slide. VP compression can also be disabled on a compression-enabled thin pool.
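  Illustrative sketch of that syntax (the vp_compression attribute name is recalled from the product guide and should be treated as an assumption; the pool name and SID are hypothetical):
      symconfigure -sid 1234 -cmd "create pool CompPool type=thin, vp_compression=ENABLE;" commit
      symconfigure -sid 1234 -cmd "set pool CompPool type=thin, vp_compression=ENABLE;" commit
      symconfigure -sid 1234 -cmd "set pool CompPool type=thin, vp_compression=DISABLE;" commit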
  13. Manually compressing or uncompressing data with SYMCLI can be started or stopped on a device file using the -devs option, or on storage groups, device groups and composite groups. Examples of the syntax are shown on the slide. Stopping the compress action does not uncompress data that has already been compressed; that requires a manual uncompress.
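  Since the slide's syntax examples are not reproduced in this text, the following is an assumed sketch of the manual compress/uncompress commands (the device range, SG name, and SID are hypothetical; confirm the exact verbs and flags in the Solutions Enabler Array Controls CLI guide):
      symdev -sid 1234 compress -devs 0100:010F -start
      symdev -sid 1234 uncompress -devs 0100:010F -start
      symsg -sid 1234 -sg prod_sg compress -stop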
  14. Federated Tiered Storage (FTS) is a free feature available with Enginuity 5876. This feature allows VMAX 10K, VMAX 20K and VMAX 40K arrays to use the storage capacity of an external array. FTS allows LUNs that exist on external arrays to provide physical storage for a Symmetrix VMAX array. The external LUNs can be used as raw storage space for the creation of Symmetrix devices in the same way internal Symmetrix physical drives are used; once brought into the configuration, these external LUNs are referred to as eDisks. Data on the external LUNs can also be preserved and accessed through Symmetrix devices. This allows the use of Symmetrix Enginuity functionality such as local replication, remote replication, storage tiering, data management, and data migration with data that resides on external arrays.
  15. An emulation referred to as DX (for DA eXternal) adapts the traditional DA emulation model to act on external logical units as though they were physical drives. The fact that a DX is using external LUNs rather than a DA using internal drives is transparent to other director emulations and to the Enginuity infrastructure in general. With respect to most non-drive-specific Enginuity functions, a DX behaves the same as a DA.
  16. An eDisk is a virtual external disk that is created when an external LUN is brought into the configuration. The terms “eDisk” and “external spindle” both refer to this external LUN once it has been placed in an external disk group and a virtual RAID group. External disk groups are virtual disk groups that are created by the user to contain eDisks. Exclusive disk group numbers for external disk groups start at 512. External spindles and internal physical spindles cannot be mixed in a disk group. An unprotected virtual RAID group gets created for each eDisk that gets added to the system. The RAID group is virtual because eDisks are not protected locally by the VMAX; they rely on the protection provided by the external array.
  17. FTS has two modes of operation depending on whether the external LUN will be used as raw storage space or has data that must be preserved and accessed through a VMAX device. External Provisioning allows the user to access LUNs existing on external storage as raw capacity for new Symmetrix devices. These devices are called externally provisioned devices. Encapsulation allows the user to preserve existing data on external LUNs and access it through Symmetrix volumes. These devices are called encapsulated devices. External Provisioning: When using FTS to configure an external LUN, Enginuity creates an eDisk and adds it to the specified external disk group. External disk groups are separate from disk groups containing internal physicals and start at disk group number 512. Because RAID protection is provided by the external array, eDisks are added to unprotected virtual RAID groups. Virtual Provisioning can be configured using external provisioning by creating data devices (TDATs) using an external disk group. Other than the fact that the TDATs are created on eDisks, the process for configuring VP is otherwise the same.
  18. There are two different options with encapsulation: With standard encapsulation the external spindle is created and added to the specified external disk group and unprotected RAID group. Symmetrix devices are also created at the same time, allowing access to the data that has been preserved on the external LUN. Virtual Provisioning encapsulation and standard encapsulation share the fact that the external spindle is created and added to the specified external disk group and to an unprotected RAID group. However, they differ in that with Virtual Provisioning encapsulation, data devices (TDATs) are then created and added to a specified thin pool. Fully, non-persistently allocated thin devices (TDEVs) are also created and bound to the pool. Extents are allocated to the external LUN through the TDAT.
  19. This concludes the training. Thank you for your participation. Please feel free to contact me if you have any questions. Kevin Wang (kevin.y.wang@emc.com)