Here we have an AFF system. This is a modern data center, so we have modern performance expectations.
As you add workloads, you start expanding its storage. Before long, your data footprint grows … as does the overall cost. As of now, you’re paying for high-performance flash, but an increasing percentage of your data is inactive ... which is a waste of flash’s potential.
With FabricPool, that cold, inactive data is automatically tiered off to the cloud tier as objects.
Performance Workloads on High-Performance Flash Aggregates
Capacity Workloads Move to the Cloud
Combines the performance and capacity tiers into one storage pool that manages data seamlessly and reduces the total cost of ownership
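The single-pool model above comes down to two steps in the ONTAP CLI: define the object store, then attach it to a flash aggregate. A sketch (the object-store name, bucket, keys, and aggregate name are placeholders; verify flags against your ONTAP version's command reference):

```
::> storage aggregate object-store config create -object-store-name my-s3 \
      -provider-type AWS_S3 -server s3.amazonaws.com \
      -container-name my-bucket -access-key <key> -secret-password <secret>
::> storage aggregate object-store attach -aggregate aggr1 -object-store-name my-s3
```

Once attached, tiering is driven by per-volume policies; no separate policy engine to manage.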
Smaller granularity results in better performance and functionality
Other tiering solutions are file-based
Competitive difference
Everyone understands files
Simple concept ≠ simple use
(The policy engine is a huge performance hit. Lots of warnings about enabling tiering and its performance impact.)
Conflict-resolution headaches: two (or more) policies impact the same file. What happens then? It gets worse at scale, or with multiple tenants. Not impossible to solve, but it requires a significant amount of work by the storage admin.
Average storage efficiency as reported by Active IQ. (dedupe, compression, compaction + clones)
Optimize use of SSD. Cold data = inefficient and expensive use of flash
10–15% snapshots (snapshot show)
1,023 snapshots
CapEx vs OpEx
Simplicity. Automated.
Customer story
Secondary data (data protection volumes that are SnapMirror [disaster recovery] or SnapVault [backup] destination targets) is frequently stored on secondary clusters that share a 1:1 or greater ratio with the primary data that they are protecting. (One baseline copy and multiple Snapshot copies.)
ONTAP Cloud: Simplicity. Automated. Tiers expensive EBS to inexpensive S3.
ONTAP Select: Remote/Back office; Cheap and deep archive
TCO calculator
Object storage is less expensive than traditional storage
Shift cost from CAPEX to OPEX
Massively simplifies lifecycle operations.
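The CapEx-to-OpEx shift can be made concrete with a back-of-the-envelope comparison. This is only a sketch: the $/GB/month figures below are placeholder assumptions, not NetApp or cloud list prices.

```python
# Illustrative placeholder prices -- substitute your own quotes.
FLASH_COST_PER_GB_MONTH = 0.20   # assumed amortized all-flash cost (CapEx)
OBJECT_COST_PER_GB_MONTH = 0.02  # assumed object-store cost (OpEx)

def monthly_cost(total_gb: float, cold_fraction: float, tiered: bool) -> float:
    """Monthly storage cost, with or without tiering the cold fraction to objects."""
    if not tiered:
        return total_gb * FLASH_COST_PER_GB_MONTH
    cold_gb = total_gb * cold_fraction
    hot_gb = total_gb - cold_gb
    return hot_gb * FLASH_COST_PER_GB_MONTH + cold_gb * OBJECT_COST_PER_GB_MONTH

# 100TB footprint, 70% of it cold:
all_flash = monthly_cost(100_000, 0.7, tiered=False)
with_tiering = monthly_cost(100_000, 0.7, tiered=True)
print(f"all-flash: ${all_flash:,.0f}/mo, tiered: ${with_tiering:,.0f}/mo")
```

With these assumed rates, tiering 70% of a 100TB footprint cuts the monthly bill from $20,000 to $7,400; the exact savings obviously depend on real pricing and the measured cold fraction.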
ONTAP 9.4 and ONTAP 9.5
9.4: GA in June
Why not less than 2 days? Background storage efficiency and Mirror/Vault schedules need time to run first.
Why not more than 63 days? The higher the number, the less likely data will ever tier.
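The 2–63 day window discussed above maps to the per-volume cooling setting. A sketch, with placeholder SVM/volume names (confirm the option name and range against your ONTAP release's volume command reference):

```
::> volume modify -vserver svm1 -volume vol1 -tiering-policy auto \
      -tiering-minimum-cooling-days 14
```

Blocks untouched for the cooling period become cold and eligible to tier; lowering the value tiers sooner, raising it tiers less.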
Ops vs Storage Costs language
Other features:
Defrag (unreclaimed-space-threshold)
Compaction
Bypass CA Certs
Managed as a single volume
Server side since 9.2
9.5 brings client-side encryption
Other features
Aggregate tiering percentage
Performance is a major competitive difference
Other tiering solutions are closer to a cold archive, where access is the exception, not the rule
Uniquely positioned as a SAN/NAS gateway between ONTAP and the object store
Do not expect similar transparency from other solutions
We’ll talk more about it in a little bit, but ideally you’re scheduling your mirror and vault processes before data is tiered
We don’t want to have to read it from the cloud to back it up.
(Slower and possibly more expensive)
A 100TB volume on a 10TB FabricPool aggregate
Ingress/egress can be problematic
The 60GB FabricPool cache queues up data to be transferred
The 1:20 ratio still applies
AWS
368TB/360TB per node in an HA pair
96TB aggr
Azure
252TB instance
48TB aggr
10Gb pipe
40MB vs. 2.56GB
Proactive sizing
Works against non-FabricPool aggregates
No need to permanently attach the object store to an aggregate
storage aggregate object-store profiler start -object-store-name <name> -node <name>
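The profiler run sketched above can be checked while it executes; a minimal workflow, assuming advanced privilege and placeholder names (verify against your release's command reference):

```
::*> storage aggregate object-store profiler start -object-store-name my-s3 -node node1
::*> storage aggregate object-store profiler show
```

Because the profiler only needs a configured object store, not an attached one, it can validate latency and throughput to the bucket before any aggregate is committed to FabricPool.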
Auto in 9.4
FlexGroup volumes in 9.5
>50% Primary, >90% Secondary
IDR (Inactive Data Reporting) and OSP (Object Store Profiler) both in 9.4