#4 Ability to create an indelible (can't be deleted) Snapshot copy on the primary that is retained for the defined period of time.
Levels the playing field with Pure.
In order for recovery points to be valid, transactional applications will need to be quiesced prior to the creation of the Snapshot copy.
Will be CLI and REST API only in 9.12.1; System Manager support will be added in a future version (see the CLI sketch at the end of this item).
Requires a SnapLock license
SnapLocked Snapshot copies can be created on the primary data volumes or can be scheduled on secondary volumes for simple, indelible recovery points.
FabricPool-enabled tiers are not supported in the 9.12.1 release.
Please note, these are SnapLocked Snapshot copies: there is no way to delete a Snapshot copy before its expiration date, so take care when applying the SnapLock policy to a Snapshot copy.
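For reference, a minimal CLI sketch of the workflow; the SVM, volume, and Snapshot names here are illustrative assumptions, and exact option names should be confirmed against the 9.12.1 documentation:

  # Enable Snapshot copy locking on an existing volume (requires the SnapLock license)
  volume modify -vserver vs1 -volume vol1 -snapshot-locking-enabled true
  # Create a Snapshot copy that cannot be deleted before its expiry time
  volume snapshot create -vserver vs1 -volume vol1 -snapshot snap_locked -snaplock-expiry-time "11/30/2023 09:00:00"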
#7 9.12.1 delivers only LockVault; legal hold and event-based retention will be delivered later.
In 9.11.1, you could lock a file or folder with SnapLock inside a FlexGroup volume. In 9.12.1, NetApp LockVault allows you to lock an entire volume.
That is compliance for a specific file versus protection of a whole volume from a rogue administrator.
LockVault allows customers to create a Snapshot copy on a FlexGroup volume and replicate that Snapshot copy into a SnapLock volume, where it is protected with a SnapLock policy. This provides a known good recovery point that is protected from all forms of attack (see the CLI sketch at the end of this item).
For example: "I need to snap a datastore once a day and lock that image for 60 days, so I know I at least have 60 days of good recovery points that are indelible even from admins."
Pretty important for customers using FlexGroup volumes, but it is overall a minor feature.
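A rough CLI sketch of that workflow; all names, sizes, and the 60-day retention (echoing the example above) are illustrative assumptions:

  # Create a SnapLock Compliance FlexGroup destination volume for the vault
  volume create -vserver vs2 -volume fg1_vault -aggr-list aggr1,aggr2 -size 100TB -type DP -snaplock-type compliance
  # Set the default retention period applied to the vaulted Snapshot copies
  volume snaplock modify -vserver vs2 -volume fg1_vault -default-retention-period 60days
  # Replicate the FlexGroup's Snapshot copies into the SnapLock volume
  snapmirror create -source-path vs1:fg1 -destination-path vs2:fg1_vault -policy XDPDefault
  snapmirror initialize -destination-path vs2:fg1_vault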
#10
ONTAP 9.12.1 adds support for S3 access to file data at rest. Data can be modified, but S3 constructs such as buckets cannot be created. Data must first be ingested and managed as file data and then accessed as S3 objects; the file structure is flattened and merged into object names (see the CLI sketch after the use cases).
Use cases:
Allows users and applications to run different workflows on the data without duplicating it.
Object analytics services, such as OCR, can recognize the objects in images and videos, tag those objects, and create metadata describing what is actually in those images.
Medical imaging recognition with OCR to identify all “chest X-rays or MRIs.”
Financial: recognizing checks that end users deposit through a phone camera.
Some customers are doing this today, but they currently do it by copying the data (Cloud Sync can copy from a file share to an object store).
File/object multiprotocol allows them to do this at rest, without making a copy, which is far more efficient!
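A minimal sketch, assuming an S3 object-store server is already configured on the SVM; the bucket and path names are illustrative:

  # Expose an existing NAS volume path as an S3 bucket; the file data is not copied
  vserver object-store-server bucket create -vserver vs1 -bucket analytics -type nas -nas-path /vol1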
#12
SVM-DR versus SVM Data Mobility
SVM-DR stops the SVM at the source and starts it at the destination, but the cutover causes downtime.
SVM Data Mobility moves a live SVM from one cluster to another in the same data center (see the CLI sketch at the end of this item).
Allows customers with a small number of nodes in a cluster to move clients around without the clients being aware and without scheduling an outage.
Ideal for service providers who need to move live SVMs from cluster to cluster in order to maintain SVM performance or to perform lifecycle operations on the ONTAP clusters.
An SVM move will cause SMB clients to disconnect and require reconnection; this is similar to the behavior during a controller failover: a brief drop in connection followed by easy reconnection of the SMB shares.
If customers have specific needs for this feature, please submit an FE-FPVR.
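A minimal CLI sketch of the move, run from the destination cluster; the cluster and SVM names are illustrative:

  # Start the live SVM migration, pulling vs1 from the source cluster
  vserver migrate start -vserver vs1 -source-cluster cluster1
  # Monitor progress through cutover
  vserver migrate show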
#14 FlexGroup rebalance is valuable to any customer who uses FlexGroup volumes today, wishes to grow the FlexGroup by adding member volumes, and wants the data distributed across all members to preserve or increase performance (see the CLI sketch at the end of this item).
The way some data is loaded into FlexGroup volumes can also result in unequal loading across the member volumes; rebalancing ensures that no member fills up and that the FlexGroup volume is evenly distributed.
Ideal for high-performance file serving environments and large unstructured data sets.
Will be able to see the level of balance in System Manager.
Customers who convert a FlexVol volume to a FlexGroup volume typically do so because they have reached the 100TB limit. Conversion lets them add more member volumes, but the original volume would still be close to 100TB; rebalancing redistributes that data.
Closes a competitive advantage that the competition had over FlexGroup volumes.
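A quick CLI sketch; the SVM and volume names are illustrative:

  # Start a nondisruptive capacity rebalance across the FlexGroup member volumes
  volume rebalance start -vserver vs1 -volume fg1
  # Check progress and the resulting balance
  volume rebalance show -vserver vs1 -volume fg1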
#15 Early specs were 4 nodes max.
Now supports 12 nodes.
A 12-node cluster is as big as you can go on any cluster that serves SAN (SAN of any type).
NAS-only clusters can go up to 24 nodes.
I use this as an opportunity to reiterate that we pioneered the NVMe movement and continue to be leaders in this space. NVMe adoption is growing, which is why we now support much larger clusters. I also mention that these larger clusters add simplicity: environments this large would force others (Pure) to manage multiple separate systems, likely resulting in trapped capacity and performance issues.
#16
Node scope means that only connections from the same node can share the same session ID and client ID. Spanning nodes is a future feature with no planned release date at this time.
#17
Big companies are reaching the limit, as are customers using Kubernetes.
A question you tend to get here is “Are we changing the limit per node so that each of our hardware platforms has a maximum number of volumes per node?”
No, you still have to have enough nodes in your cluster to get to 10,000 or to get to 30,000, so that doesn't change.
There is no change to the maximum number of volumes per controller or aggregate.
The NetApp MetroCluster volume limit remains the same (for now).
#19 In NetApp Active IQ Unified Manager, this adds a topology view of MetroCluster with drill-downs.
Hardware failure alerts and events
View MetroCluster configurations
Overall health status
#21 At one point, DP was all that we had.
Now most customers have already converted DP to XDP.
The NetApp A-team said this should not be an issue at all because, basically, all customers already have XDP.
History: SnapMirror initially had a replication engine such that whatever Snapshot copies I had on the primary site were exactly what I had on the secondary site. So, if I wanted so many hourlies, so many weeklies, and so many monthlies, that was exactly what I had on both the primary storage and the secondary storage, which was great for disaster recovery purposes but not great for backup or archive operations.
Then we produced SnapVault, which used a different replication engine. It allowed us to have different recovery points, so we could have hourlies and dailies on the primary and then weeklies, monthlies, and yearlies on the secondary. We didn't have to have the same Snapshot copies on the two sites; but again, it used a different replication engine.
The vault engine used to deliver lower performance than the mirror engine, but that difference has gone away, so performance-wise they're identical.
Because they are identical, we are doing away with the one that does the exact mirror (DP).
#27 System Manager features supported in ONTAP 9.12
Advanced network configuration
LIF creation and placement options
Advanced aggregate configuration
Flash Pool management
Storage pool management
Network enhancements
Broadcast domains and subnets management
SnapLock management
Configure
Set up compliance clock
Create SnapLock Compliance or SnapLock Enterprise volumes
Create mirrored relationships for volumes of same SnapLock type
The System Manager redesign in 9.7 oversimplified this; 9.12 adds SnapLock management back into the UI
Dark theme
Great for people working in low-light environments, such as data centers
Hardware visualization
A great way to see how the cables are plugged in without physically looking at them.