This slide describes the hardware components of the Axiom architecture.
Be sure to point out that the competition does not do any of this. They have "two controllers do it all" architectures (except 3Par, which can scale to 8). This is the slide to start focusing on the large amount of cache and the distributed RAID.
Be sure to point out that the Axiom is architected for 5-9's availability. There is no single point of failure. Every other vendor (e.g., EMC, HP, NetApp, Dell EqualLogic) makes the 5-9's claim. We have the same level of redundancy as they do, if not better.
The Pillar Axiom is highly scalable and modular. Tried and tested, it is now in its 5th generation of software, so quality has been honed through the development and delivery process.
Its modularity is built on three basic hardware components (Pilot, Slammer, and Brick), all within a single "model"; you don't need to buy another model as you scale hardware.
You can start with a system as small as 1 Slammer, 1 Brick, and 1 Pilot, and scale to as many as 4 Slammers (8 active-active control units) and 64 Bricks (832 drives, or 1.6PB of capacity) with 1 Pilot. You can intermix SATA, FC, or SSD storage classes.
The Axiom is a single model that scales from 12TB all the way to 1.6PB. You don't need to do forklift upgrades to new models, since the system is built on common modularity, a common software base, and a common storage pool. Axiom technology upgrades can be done non-disruptively in almost all cases, i.e. you can add capacity or upgrade technology without losing access to your data.
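A quick sanity check of those maximums, using only the figures quoted above (simple arithmetic, shown as a short Python sketch):

# Back-of-the-envelope check of the quoted maximum configuration.
max_bricks = 64
max_drives = 832
max_capacity_tb = 1600            # 1.6 PB expressed in TB

drives_per_brick = max_drives / max_bricks        # 832 / 64 = 13 drives per Brick
avg_tb_per_drive = max_capacity_tb / max_drives   # ~1.9 TB per drive on average

print(f"{drives_per_brick:.0f} drives per Brick, ~{avg_tb_per_drive:.1f} TB per drive")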
The “business value” differentiators are driven by technology differentiators built into the Pillar Axiom.
Pillar's QoS (Quality of Service) is patented. It prioritizes application IO fulfillment and provides unique contention management, which enables multiple application data types and workloads to be consolidated and to co-exist on Axiom storage. This translates to big benefits. QoS finally breaks the age-old FIFO paradigm: storage I/O resources are assigned according to the associated application's business value, not "first come, first served." Axiom's QoS is a system-wide approach, from I/O request handling to data placement: striping across drives, RAID levels, and even down to the disk spindle level (inner band versus outer band on HDDs).
Modular "modern" architecture. The Axiom is modular, not monolithic. Because it is modular, you can scale without having to do highly disruptive "forklift" upgrades. Capacity scales from 12TB to 1.6PB. You can grow and rebalance your Axiom storage pool as your environment changes. This provides flexibility and elasticity that is unique.
Distributed RAID. Linear scaling of performance and capacity is achieved via a unique distribution of RAID controllers: 2 in every Brick, up to 128 in a system. Performance stays high even when drive rebuilds take place, since the system is architected to provide one RAID controller of rebuild power for every 6 drives. Competitors use an archaic "two controller" architecture across their entire system; when rebuilds occur, that old architecture can topple a competitor's system. Not so with Axiom.
This is an example of how QoS should work in practice. On a plane journey, the more money you are willing to pay (because you deserve it), the more "service" you are going to get. It's not just where you sit on the plane, which can be like using SSD (first class), FC (business), or SATA (coach), but the other privileges you get in terms of how quickly you board and exit, seat comfort, number of attendants, etc.
Similar to the QoS example seen before, the Axiom storage system provides a QoS model that not only places data optimally on drives, but also allocates adequate resources to service critical business application data.
The animation on the left shows how other vendors typically process I/O. Note that the important application (RED) IO will have to wait for other applications to receive their IO.
The Axiom prioritizes IOs via multiple queues with 5 priority levels. This means important applications receive IO service at higher rates, since QoS allows higher-priority IOs to be processed ahead of others. The Axiom will reduce IOs to lower-priority applications, so higher-priority applications get up to 6x more IOs executed over time.
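A minimal sketch of the scheduling idea (not Pillar's actual algorithm): five weighted priority queues where high-priority IO jumps ahead but low-priority IO still drains, so nothing starves. The level names and the 6:1 weight spread are assumptions chosen only to echo the "up to 6x" figure above.

import heapq
import itertools

# Hypothetical QoS levels and weights, chosen only to echo the "up to 6x" figure above.
PRIORITY_WEIGHT = {"premium": 6, "high": 5, "medium": 4, "low": 2, "archive": 1}

class QosScheduler:
    """Toy weighted scheduler: higher-priority IO is dispatched sooner, but every
    level keeps draining, so no application is starved outright."""

    def __init__(self):
        self._arrivals = itertools.count(1)
        self._heap = []                              # (rank, arrival, io_request)

    def submit(self, priority, io_request):
        arrival = next(self._arrivals)
        rank = arrival / PRIORITY_WEIGHT[priority]   # heavier weight -> smaller rank -> served earlier
        heapq.heappush(self._heap, (rank, arrival, io_request))

    def dispatch(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = QosScheduler()
sched.submit("archive", "backup write")   # arrives first...
sched.submit("premium", "OLTP read")      # ...but the premium IO is dispatched first
print(sched.dispatch())                   # -> OLTP read

Contrast this with the plain FIFO shown for other vendors, where the red application's IO simply waits its turn.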
Beyond managing the service queues based on the importance of the IO, the Axiom provides end-to-end QoS within the system to ensure that you are "treated" according to the service level you requested, just like the airline journey example.
The Slammers manage how much CPU and cache they allocate based on the priority level, the type of data, and the access pattern (sequential access is allocated more cache).
Resources are allocated in ratios tied to the priority level.
Pillar Axiom QoS gives you deterministic IO prioritization for applications, instead of the archaic FIFO model and equal resource allocation.
Here is how the LUNs will be striped based on the QoS settings you select.
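Conceptually, the QoS setting drives a placement decision something like the sketch below. The tier names, RAID choices, stripe widths, and band placements are illustrative assumptions, not the product's exact rules; the point is that one QoS setting fans out into several layout decisions.

# Hypothetical mapping from a QoS setting to LUN layout decisions (illustrative only).
QOS_PLACEMENT = {
    "premium": {"raid": "RAID 10", "stripe": "wide (across many Bricks)", "hdd_band": "outer"},
    "medium":  {"raid": "RAID 5",  "stripe": "moderate",                  "hdd_band": "middle"},
    "archive": {"raid": "RAID 5",  "stripe": "narrow",                    "hdd_band": "inner"},
}

def describe_lun(name, qos):
    p = QOS_PLACEMENT[qos]
    return (f"LUN {name} ({qos}): {p['raid']}, {p['stripe']} striping, "
            f"{p['hdd_band']} band of the platters")

print(describe_lun("oltp01", "premium"))
print(describe_lun("backup01", "archive"))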
With QoS, you are allowed to have multiple service levels, just like the plane. If everyone were equal, like on a low-budget airline, your service level (and your performance) would suffer, but you would expect it.
QoS enables multi-tenancy, the ability to provide multiple service levels. So you can have a Tier 1 OLTP application, a business-critical Exchange environment, and a tertiary archive application all on the same storage system, each with distinct QoS and performance levels.
The ability to store multiple applications' data on the same storage system (without storage silos) allows you to increase the utilization of the system: get all the seats full. This allows you to reduce your CapEx (less storage to buy) and OpEx (less storage to manage, power, and cool) in the process.
The Pillar Axiom provides pre-defined application workload profiles that simplify real-world storage configuration and support best practices. Different apps have varying data performance characteristics, and the profiles help customers understand how the various storage components should be set up to best support their application needs, e.g. read- vs. write-intensive workloads, large vs. small blocks. Profiles can be changed on the fly without changing the storage infrastructure.
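For illustration, an application profile can be thought of as a named bundle of workload characteristics plus a QoS level. The field names and values below are hypothetical, not the shipping Axiom profile definitions.

# Hypothetical sketch of what a pre-defined application profile bundles together.
ORACLE_OLTP = {"io_pattern": "random",     "read_write_mix": "70/30",
               "block_size_kb": 8,         "priority": "premium"}
BACKUP      = {"io_pattern": "sequential", "read_write_mix": "10/90",
               "block_size_kb": 256,       "priority": "archive"}

def provision_lun(name, profile):
    # A real system would translate this into queue, cache, and striping settings.
    print(f"{name}: {profile['priority']} priority, {profile['io_pattern']} I/O, "
          f"{profile['block_size_kb']} KB blocks, {profile['read_write_mix']} read/write")

provision_lun("erp_data", ORACLE_OLTP)
provision_lun("nightly_backup", BACKUP)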
You can start really small (1 Slammer and a few Bricks) and scale the system as the need for capacity and performance grows with your business.
Unlike most storage systems out there, performance and capacity scale linearly as you add these modular components.
You increase bandwidth, IOPS, and ports by adding Slammers.
You increase IOPS and capacity by adding Bricks.
Bring it together: to enable consolidation, you need a system that scales as additional applications and workloads are added to it.
Add Slammers for bandwidth (MB/sec), e.g. streaming video from a website.
Add Bricks for additional IOPS (throughput).
Add more Bricks for IOPS and capacity scale; a structural sketch follows below.
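A structural sketch of the linear scaling, using only ratios quoted in these notes (2 control units per Slammer, 2 RAID controllers per Brick, and 13 drives per Brick, the latter derived from 832 drives across 64 Bricks):

# Structural scaling: what grows when you add Slammers vs. Bricks.
def axiom_config(slammers, bricks):
    return {
        "control_units": slammers * 2,    # bandwidth, ports, and cache scale with Slammers
        "raid_controllers": bricks * 2,   # IOPS and rebuild power scale with Bricks
        "drives": bricks * 13,            # capacity scales with Bricks
    }

print(axiom_config(slammers=1, bricks=2))    # a small starter system
print(axiom_config(slammers=4, bricks=64))   # maximum: 8 CUs, 128 RAID controllers, 832 drives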
This slide really shows the differentiation between us and all the “Two controllers do it all” companies. This is the slide where the customer’s light bulb goes off. They really see the difference in the basic architectures between us and everyone else. Any customer who is looking to scale beyond a few bricks should see the advantages of our architecture.
Point out:
-- we put two RAID controllers in each Brick and maintain a 6:1 drive-to-controller ratio no matter how many drives we have. The competition will scale to hundreds of drives per controller, some to as many as 500 drives per RAID controller (see the quick arithmetic after these bullets).
-- use this slide to point out our drive rebuild scheme. It's clear to see how we isolate the rebuild within a Brick, while the competition has to use 50% of their controllers to rebuild a drive.
-- also point out that if a SATA drive fails, since the rebuild is isolated within that SATA Brick, the performance of the LUNs residing on FC drives is not impacted. With the competition, a SATA failure is handled by one of the controllers (50% of the controllers). That controller probably owns 50% of the LUNs in the system, which means that for the hours or days the rebuild takes, all of the LUNs associated with that controller have degraded performance. It doesn't matter whether they're FC or SATA LUNs.
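The arithmetic behind that talking point, using only figures quoted in these notes (the 500-drives-per-controller number is the competitor example cited above):

# Drives per RAID controller at full scale, from the figures in these notes.
axiom_ratio = 832 / 128        # 64 Bricks x 2 RAID controllers each -> ~6.5 drives per controller
competitor_ratio = 500         # "some as many as 500 drives per RAID controller"

print(f"Axiom: ~{axiom_ratio:.1f} drives per RAID controller")
print(f"Dual-controller array: up to {competitor_ratio} drives per controller")

During a rebuild, only the two controllers in the affected Brick are busy; a dual-controller array ties up half of its controllers (and the LUNs they own) for the duration.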
For all Oracle databases, HCC should be leveraged along with Axiom's Application Profiles, which are pre-built management profiles for Oracle applications.
OVM 2.0 is fully supported with a Storage Connect plug-in to allow you to manage Axioms and virtual server infrastructure from the same user interface.
Oracle Enterprise Manager can be used to manage Axioms using the plug-in developed for the Axiom.
An example of how this vision translates to value is Oracle's Hybrid Columnar Compression, which originated in the Exadata product and is now available on both the ZFSSA and the Pillar Axiom. Early benchmarks project a 3-5x speedup, plus clear efficiency gains, versus primary competitors NetApp and EMC.
HCC only works with Oracle storage, a distinct differentiator and a clear example of the value of integration.
Simple multi-site, multi-tier scenario shown, without shared QFS clients
The local-site SAM-QFS MDS creates a local disk archive copy (1) and a tape archive copy (2), and uses a file transfer feature to create remote disk archive (3) and tape archive (4) copies.
This basic architecture can scale to thousands of SAN clients, hundreds of file systems, billions of files, petabytes of disk cache, and, most importantly, unlimited archive capacity.
Next, we’ll take a look at core SAM functionality
---------------------------------------other notes if needed---------------------------------
Any 3rd party disk that is supported in Solaris can be used as a disk cache or disk archive
Similarly, many 3rd party tape drives and optical disk drives have been tested and qualified to serve as archive targets behind SAM
Many customers use 3rd-party tools to move data (e.g., rsync, which only replicates changed blocks) and parallel FTP to replicate the disk cache across the WAN, where it is ingested into the remote-site QFS file system like a new application workload.
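As an illustrative sketch only (the host name, paths, and bandwidth cap are hypothetical, and a real deployment would add scheduling, logging, and retries), a customer script pushing the disk cache to the DR site might look like this:

# Illustrative only: replicate the local QFS disk cache to the DR site with rsync.
import subprocess

subprocess.run(
    ["rsync", "-a", "--partial", "--bwlimit=50000",  # ~50 MB/s cap across the WAN
     "/qfs/cache/",                                   # local disk cache (source)
     "drsite:/qfs/ingest/"],                          # remote-site ingest area (hypothetical)
    check=True,
)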
(DR) Remote Site (not SAM Remote)
QFS servers talk to each other using the SAM-RFTD feature; only the TAR file is transferred.
Export SAMFSDMP for DR / full QFS file system restore
Requires remote file system only to enable transfer (for daemon to start up) and cache
SAM running on remote server to archive to tape
Works on 2 MDS and creates pipe between them
NFS across the network using parallel FTP to the remote server (multithreaded). Performance issues over the IP wire? There are throttling controls.
Local site can only see one copy at remote site
Remote site makes own copy to tape archive (local site doesn’t know about remote tape archive)