Webinar: Untethering Compute from Storage

Enterprise storage infrastructures are gradually sprawling across the globe, and consumers of data increasingly require access to remote storage resources. Solutions for mitigating the pain associated with this growth are out there, but their performance varies. This webinar takes a look at these challenges, reviews the available solutions, and compares performance test results.

Published in: Technology

Slide notes
  • Next let’s talk about the consolidated management benefit that Avere provides. An important part of this is our global namespace (GNS), which joins many NAS systems, even ones from multiple heterogeneous vendors, into a single file system namespace. Rather than having each of your clients mount multiple different NAS systems, your clients can have a single mount point on Avere, and the Avere FXT Edge filer maps all the NAS systems into a single global namespace. These Core filers can be in the same data center as the Avere cluster or in a different geography connected over a WAN. There are other GNS solutions available in the market (for example, Acopia ARX), but ours is unique in that it is very easy to install in an existing NAS environment (and easy to remove if necessary). We are also unique in simultaneously providing both a GNS and performance acceleration; other GNS solutions have a negative impact on performance. With the GNS in place, we can provide transparent data migration, which enables moving data between Core filers for load balancing, for adding new storage, or for moving data to an archive. FlashMove physically moves the data between storage systems, but the data remains in the same logical location thanks to the GNS, so the movement is non-disruptive to clients and applications. We also provide mirroring, which enables implementing DR. In this example the /cust export is mirrored to both a primary storage system (NetApp in this case) and a secondary storage system (EMC Isilon in this case) at a remote DR site. FlashMirror creates a baseline copy of /cust from the primary to the secondary, and from that point forward all changes to /cust are made on both the primary and the secondary.
  • Avere has delivered performance acceleration and scaling to many different customers in many different industries and applications, including VMware, Oracle, rendering, transcoding, software build, chip design and verification, seismic processing, financial simulations, genomic sequencing, and more. This is a place in the presentation where you may want to insert a customer case study from our library that is relevant to your audience. In the standard presentation we used the SPEC SFS benchmark, since it is the most relevant workload across the broad range of customers we sell to. In the world of file systems there is a well-known benchmark called SPEC SFS that is used to compare the performance of NAS systems. All the NAS vendors use the benchmark and post their results on the website shown at the bottom of this slide. SPEC does a great job of providing a detailed, apples-to-apples comparison of NAS products running in a “typical” enterprise-class NAS environment. This slide compares the three top performance results on the SPEC site. Avere is the current record holder, with almost 1.6 million ops/sec achieved on a 44-node FXT cluster. Note that this is not a maximum cluster from Avere; we used just enough nodes to achieve the top spot. Today we can go to 50 nodes per cluster and will go beyond this in the future. In second place is NetApp with a maximum 24-node cluster-mode system. In third place is EMC/Isilon with a maximum 140-node S-Series cluster. While achieving the highest performance was an important point for Avere, our primary point was the efficiency of our solution. Just look at the sizes of the systems: we are faster than NetApp and Isilon in a fraction of the space, 2.5 racks and 6 feet wide for us, 14 feet wide for Isilon, 24 feet wide for NetApp. If you scan across the orange row you can see the details of our performance advantage and our higher efficiency: Avere has the highest performance, the lowest latency, the lowest cost, and uses the least amount of space and power.
  • Transcript

    • 1. Breaking All The Rules: Untethering Compute from Storage
      Bernie Behn, Technical Marketing Engineer
      AVERE SYSTEMS, INC
      5000 McKnight Road, Suite 404, Pittsburgh, PA 15237
      (412) 635-7170
      averesystems.com
      April 23, 2013
    • 2. Agenda
      • Application challenges of data separation anxiety
      • Poll: Have you ever… ?
      • Current methods to overcome these challenges
      • Poll: How do you… ?
      • The need for an Edge-Core Architecture
      • Why the Edge-Core Architecture works
      • The SPECsfs2008 benchmark test
      • Avere FXT 3800 Edge filer results
      • Poll: What do you think… ?
      • Wrap-up and Q&A
    • 3. Data Separation Anxiety
      • Waiting for I/O is bad! Any process running on a CPU that has to wait for file I/O is stalled
      • The further the CPU is from the data it needs, the larger the cost of waste
      • Historical approach has been to put the data as close to the CPU as possible, usually on local storage
      • NAS file sharing has created the environment where productivity can be increased by enabling parallel computing
      The Cost of I/O (at 2.4 GHz):
      L1 cache: 3 cycles | L2 cache: 14 cycles | RAM: 250 cycles | Disk: 41,000,000 cycles | Network: 120,000,000 cycles | Cloud: 1,200,000,000 cycles
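      As a quick sanity check, the cycle counts on slide 3 can be converted to wall-clock time for a single 2.4 GHz core. This is a minimal sketch in Python; the cycle figures and the 2.4 GHz clock are taken directly from the slide.

      # Rough conversion of slide 3's cycle counts into wall-clock time,
      # assuming the single 2.4 GHz clock quoted on the slide.
      CLOCK_HZ = 2.4e9

      cycles = {
          "L1 cache": 3,
          "L2 cache": 14,
          "RAM": 250,
          "Disk": 41_000_000,
          "Network": 120_000_000,
          "Cloud": 1_200_000_000,
      }

      for tier, c in cycles.items():
          seconds = c / CLOCK_HZ
          print(f"{tier:>8}: {c:>13,} cycles = {seconds * 1e3:12.6f} ms")
      # Disk comes out around 17 ms, Network around 50 ms, and Cloud around 500 ms,
      # which is the "cost of waste" the slide is illustrating.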
    • 4. Really? More efficient?
      • Sharing data leads to higher efficiency by enabling multiple processing elements to work in parallel
      • As access to the shared data takes longer, efficiency gains start to erode
      • Spinning up processing farms and/or users at remote sites requires local-like access to data to maintain efficiency
      [Diagram: HQ NAS serving the HQ compute farm and HQ users, with new remote users and compute reaching it over a low-throughput, high-latency WAN]
    • 5. Poll #1
      NAS protocols like CIFS and NFS were neither designed nor intended to effectively handle high-latency/low-bandwidth paths to the file servers.
      • Q: Have you ever tried to access an NFS or CIFS filesystem remotely over a Wide Area Network with high latency and low throughput?
      (Please respond within 30 seconds)
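      To see why latency dominates here, a simple illustrative model (an assumption for illustration, not a figure from the deck): a client that keeps only one synchronous NFS request in flight pays at least one network round trip per operation, so per-stream throughput collapses as RTT grows. The 150 ms case matches the WAN latency used later in the benchmark setup.

      # Illustrative model only: one synchronous request in flight at a time
      # means each operation costs at least one round trip.
      def max_sync_ops_per_sec(rtt_ms: float) -> float:
          return 1000.0 / rtt_ms

      for rtt_ms in (0.2, 1.0, 10.0, 75.0, 150.0):  # LAN through intercontinental WAN
          print(f"RTT {rtt_ms:6.1f} ms -> at most {max_sync_ops_per_sec(rtt_ms):7.1f} ops/s per request stream")
      # At 150 ms RTT a single synchronous stream tops out near 7 ops/s.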
    • 6. Solutions to this problem?
      • Manual data replication using common tools: rsync/robocopy
        – Lots of management overhead to keep everything in sync
      • Filesystem container mirroring (SnapMirror, SyncIQ, ZFS send/recv)
        – Often mirroring bulk sets of data, when only subsets of data are needed
      • Just deal with it! It’s not that bad!
        – What kind of solution is that?
      • WAN optimization technologies at the network layer
        – Why optimize at the network layer when dealing with filesystems?
      • Host your data in cloud-based storage
        – Does not solve the latency problem, actually makes it worse
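      For context on the first option, a hypothetical sketch of what "manual data replication" tends to look like in practice; the hosts, paths, and script are invented for illustration, not taken from the webinar. Every new site, dataset, or schedule adds another entry to maintain, which is the management overhead the slide calls out.

      # Hypothetical sketch (hosts and paths invented): push one dataset to
      # every remote site with rsync and track which pushes failed.
      import subprocess

      REMOTE_SITES = ["nas-lon.example.com", "nas-sgp.example.com", "nas-nyc.example.com"]
      DATASET = "/exports/eng/src/"   # trailing slash: sync the contents, not the directory itself

      def push_to_all_sites():
          results = {}
          for host in REMOTE_SITES:
              cmd = ["rsync", "-a", "--delete", DATASET, f"{host}:{DATASET}"]
              results[host] = subprocess.run(cmd).returncode   # 0 means this site is now in sync
          return results

      if __name__ == "__main__":
          for host, rc in push_to_all_sites().items():
              status = "ok" if rc == 0 else f"FAILED (rc={rc})"
              print(f"{host}: {status}")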
    • 7. The Edge-Core NAS Architecture
      • Put an end to manual dataset copies and replicate-everywhere tasks
      • Use Core filers to centralize data storage management
      • Deploy Edge filers to provide efficient access to data
      [Diagram: a primary HQ datacenter, a remote office, a secondary/partner/colo datacenter, cloud computing (Amazon, Rackspace, SuperNAP), and cloud storage (private NAS cloud; Amazon S3 in the future)]
    • 8. Why Edge-Core solves these problems
      • Hybrid NAS
        – “Cheap and Deep” Core filers for cold and archival data
        – High performance for the “hot dataset” delivered by the Edge filer
      • Edge filesystem capabilities with Local Directories
        – Handle all directory updates at the Edge
      • Clustering with linear scalability
        – Cluster sizes up to 50 nodes, 100+ GByte/sec read throughput
      • Data Management with FlashMove™ and FlashMirror™
        – Handle the moving target of where to stash your at-rest data
    • 9. Consolidated Management
      Global Namespace
      • Simplify namespace management *and* accelerate and scale performance
      • Single mount point for all Core filers & exports
      • Easy to add/remove Avere to/from a NAS environment
      • Create junctions (e.g. /eng) for improved management
      FlashMove
      • Non-disruptively move exports (e.g. /src) between Core filers, with the logical path unchanged
      FlashMirror
      • Mirror write data to two locations (e.g. /cust to /cust’) for disaster recovery
      [Diagram: clients use a single mount point (/ on the Avere FXT Edge filer) with junctions such as /eng (/hw, /sw, /mech, /src), /sales, /finance, and /support; the exports /mech, /src, /pipe, /cust, /fy2012, and /staff live on NetApp, EMC/Isilon, HDS/BlueArc, and Oracle/ZFS Core filers in the data center and at a remote site reached over the WAN]
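      To make the junction idea concrete, here is a minimal conceptual sketch in Python (not Avere's implementation; the filer names and the exact junction-to-export mapping are invented): logical paths resolve through a table of junctions, so a FlashMove-style relocation only edits the table and client-visible paths never change.

      # Conceptual sketch only; filer names and mappings are illustrative.
      JUNCTIONS = {
          "/eng/src":  ("netapp-dc",  "/src"),
          "/eng/mech": ("bluearc-dc", "/mech"),
          "/sales":    ("isilon-dc",  "/pipe"),
          "/support":  ("zfs-remote", "/cust"),
      }

      def resolve(logical_path):
          """Map a client-visible path to (core filer, physical path)."""
          # Longest-prefix match so nested junctions win over shorter ones.
          for junction in sorted(JUNCTIONS, key=len, reverse=True):
              if logical_path == junction or logical_path.startswith(junction + "/"):
                  filer, export = JUNCTIONS[junction]
                  return filer, export + logical_path[len(junction):]
          raise FileNotFoundError(logical_path)

      print(resolve("/eng/src/main.c"))               # ('netapp-dc', '/src/main.c')
      JUNCTIONS["/eng/src"] = ("zfs-remote", "/src")  # FlashMove-style relocation
      print(resolve("/eng/src/main.c"))               # ('zfs-remote', '/src/main.c'), same logical path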
    • 10. Poll #2
      Shuffling entire datasets, or subsets of data, across the globe can be a time- and resource-intensive activity. It also leads to NAS sprawl and increased costs.
      • Q: How do you currently solve your problem of geographically dispersed access to remote filesystems?
      (Please respond within 30 seconds)
    • 11. Performance is King!
      • The SPECsfs2008 NAS benchmark is designed to exercise all performance and scalability traits of file servers
      SFS Aggregate Results for 16 Client(s), Mon Mar 18 16:44:46 2013 (NFS Protocol Version 3)

      NFS Op       Target  Actual   Logical       Physical      Op      Mean Resp  Std Dev   Std Error    Pcnt of
      Type         Mix %   Mix %    Success Cnt   Success Cnt   Errors  (ms/op)    (ms/op)   (+/- ms/op)  Total Time
      getattr      26.0%   26.0%    124201594     124201594     0        0.54        6.67    0.00          2.3%
      setattr       4.0%    4.0%     19108469      19108469     0        2.30       27.70    0.00          1.6%
      lookup       24.0%   24.0%    114650346     114650346     0        0.74       10.41    0.00          3.0%
      readlink      1.0%    1.0%      4775340       4775340     0        0.56        8.36    0.00          0.1%
      read         18.0%   18.0%     85985497     105752765     0       18.77       65.52    0.00         58.6%
      write        10.0%   10.0%     47765007      57779912     0       14.02       61.99    0.00         24.4%
      create        1.0%    1.0%      4778818       4778818     0       11.39       76.50    0.01          2.0%
      remove        1.0%    1.0%      4771305       4771305     0       10.78      113.56    0.01          1.9%
      readdir       1.0%    1.0%      4775784       4775784     0        1.92       23.34    0.00          0.3%
      fsstat        1.0%    1.0%      4774482       4774482     0        0.54        8.01    0.00          0.1%
      access       11.0%   11.0%     52556425      52556425     0        0.54        6.29    0.00          1.0%
      readdirplus   2.0%    2.0%      9557286       9557286     0       14.26      168.35    0.01          4.7%

      SPEC SFS 2008 AGGREGATE RESULTS SUMMARY
      SFS NFS THROUGHPUT: 1592334 Ops/Sec    AVG. RESPONSE TIME: 5.8 Msec/Op
      TCP PROTOCOL (IPv4)    NFS MIXFILE: [ SFS default ]
      AGGREGATE REQUESTED LOAD: 1650000 Ops/Sec    TEST TIME: 300 Sec
      TOTAL LOGICAL NFS OPERATIONS: 477700353    TOTAL PHYSICAL NFS OPERATIONS: 507482526
      PHYSICAL NFS IO THROUGHPUT: 1691608 Ops/sec    NUMBER OF SFS CLIENTS: 16
      TOTAL FILE SET SIZE CREATED: 198016272.0 MB
      TOTAL FILE SET SIZE ACCESSED: 59408280.0 - 60472775.2 MB (100.00% to 101.79% of Base)
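      A quick cross-check of the report above, using only the mix percentages and per-operation mean response times it quotes: the overall average response time should equal the per-operation means weighted by the operation mix.

      # Cross-check: weight each operation's mean response time by its share of
      # the mix (all values copied from the SPEC SFS 2008 report above).
      op_mix = {                       # op: (mix %, mean response time in ms/op)
          "getattr": (26.0, 0.54),  "setattr": (4.0, 2.30),
          "lookup":  (24.0, 0.74),  "readlink": (1.0, 0.56),
          "read":    (18.0, 18.77), "write":   (10.0, 14.02),
          "create":  (1.0, 11.39),  "remove":  (1.0, 10.78),
          "readdir": (1.0, 1.92),   "fsstat":  (1.0, 0.54),
          "access":  (11.0, 0.54),  "readdirplus": (2.0, 14.26),
      }

      avg_ms = sum(pct / 100.0 * ms for pct, ms in op_mix.values())
      print(f"weighted average response time = {avg_ms:.1f} ms/op")   # ~5.8, matching the summary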
    • 12. Comparing SFS08 MegaOp Solutions*
      Product          Throughput  Latency/ORT  List Price   $/IOPS  Disk      Rack   Cabinets  Product Config
                       (IOPS)      (ms)                              Quantity  Units
      Avere FXT 3800   1,592,334   1.24         $3,637,500   $2.3    549       76     1.8       32-node cluster, cloud storage config
      NetApp FAS 6240  1,512,784   1.53         $7,666,000   $5.1    1728      436    12        24-node cluster
      EMC Isilon S200  1,112,705   2.54         $11,903,540  $10.7   3360      288    7         140-node cluster
      [Chart callouts: Avere $2.3/IOPS, NetApp $5.1/IOPS, EMC Isilon $10.7/IOPS; the Avere result was run with 150 ms of latency to the Core filer]
      *Comparing the top SPEC SFS results for a single NFS file system/namespace (as of 08Apr2013). See www.spec.org/sfs2008 for more information.
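      The $/IOPS column can be re-derived from the list prices and throughputs quoted in the table above (a small sketch; all figures come from the slide).

      # Re-derive $/IOPS from list price and SPEC SFS throughput (both from slide 12).
      systems = {
          "Avere FXT 3800":  (3_637_500,  1_592_334),
          "NetApp FAS 6240": (7_666_000,  1_512_784),
          "EMC Isilon S200": (11_903_540, 1_112_705),
      }

      for name, (list_price_usd, ops_per_sec) in systems.items():
          print(f"{name:15s}: ${list_price_usd / ops_per_sec:5.2f} per SPEC SFS op/s")
      # -> about $2.28, $5.07, and $10.70: the $2.3, $5.1, and $10.7 shown on the slide.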
    • 13. Avere FXT 3800 Edge filer Cloud Config
      Avere Systems 32-node SPEC SFS 2008 configuration:
      • 16 load-generation clients (Load Gen Client 01–16)
      • 72-port 10 GbE Ethernet switch
      • 32 Avere Edge filers (Avere Edge filer 01–32)
      • ZFS Core filer plus shelf (23 x 4 TB SATA) and two SAS expansion shelves (23 x 4 TB SATA each)
      • 10 GbE network path to the Core filer with 150 ms of simulated latency (150 ms RTT), implemented with a software WAN-simulation kernel module
    • 14. Poll #3
      Benchmarks are a great way to establish apples-to-apples comparisons of available solutions; however, not all workloads are alike. Truth: traditional NAS architectures are not adept at handling large distances between clients and the data they are trying to access.
      • Q: What do you think about 1.59 MegaOps of SFS2008 NFS, at 1.24 milliseconds ORT, using less than 2 racks of gear, with the Core filer 150 milliseconds away?
      (Please respond within 30 seconds)
    • 15. Wrap-up / Q&A
      • If you are facing any of these challenges, do not despair; there is now an easy and efficient solution!
      • Contact info@averesystems.com for more information
      • Thank you for attending!
      Q&A
