Panasas Storage Smooths Turbulence for ICME at Stanford University
An existing storage system hindered the compute performance of this research organization's work in designing systems free of performance and safety issues related to turbulence. The storage system often hung and limited the productivity of the cluster. Critical requirements for a replacement were fast installation and ease of integration. The fully integrated software/hardware solution to this problem comprised the Panasas Operating Environment and the PanFS parallel file system with the Panasas DirectFLOW protocol.



Customer Success Story: Stanford University

"By leveraging the object-based architecture, we were able to completely eliminate our storage I/O bottleneck."
— Steve Jones, Technology Operations Manager, Stanford Institute for Computational and Mathematical Engineering

Stanford University Institute for Computational and Mathematical Engineering

The Stanford University Institute for Computational and Mathematical Engineering (ICME) is a multidisciplinary organization designed to develop advanced numerical simulation methodologies that will facilitate the design of complex, large-scale systems in which turbulence plays a controlling role. Researchers at the Institute use high performance computing as part of a comprehensive research program to better predict flutter and limit cycle oscillations for modern aircraft, to better understand the impact of turbulent flow on jet engines, and to develop computational methodologies that can be expected to facilitate the design of naval systems with lower signature levels. Their research can lead to improvements in the performance and safety of aircraft, reduction of engine noise, and new underwater navigation systems.

Researchers at the Institute are using the 164-node Nivation Linux cluster to tackle large-scale simulations. The Institute has recently benefited from deploying Rocks software on several large-scale clusters in support of its work, including the Nivation cluster. Rocks provides a collection of integrated software components that can be used to build, maintain, and operate a cluster. Its core functions include installing the Linux operating system, configuring the compute nodes for seamless integration, and managing the cluster.

SUMMARY

Industry: Computer Aided Engineering

THE CHALLENGE
An existing storage system hindered the compute performance of this research organization's work in designing systems free of performance and safety issues related to turbulence. The storage system often hung and limited the productivity of the cluster. Critical requirements for a replacement were fast installation and ease of integration.

THE SOLUTION
The fully integrated software/hardware solution included the Panasas® Operating Environment and the PanFS™ parallel file system with the Panasas DirectFLOW® protocol.

THE RESULT
  • Elimination of storage bottleneck
  • Maximized cluster utilization
  • Ease of integration and management
  • Exceptional price/performance

The Challenge

While deployment of Rocks software helped the Institute maximize the compute power of the Nivation cluster, an existing storage solution hindered overall system performance. Behind the Nivation cluster, an NFS server was initially deployed to store and retrieve data. At first, the solution appeared capable of supporting the number of cluster nodes and their associated data; however, it quickly became apparent that it could not meet the load requirements of the system. As a result, the storage system often hung and limited the productivity of the cluster. "It's imperative that our clusters be fully operational at all times," said Steve Jones, Technology Operations Manager at the Institute. "The productivity of our organization is dependent upon each cluster running at peak optimization."

Adding a second NFS server was an initial option, but it was quickly dismissed because of poor scalability and increased management burden. The Institute needed a solution that could scale in both capacity and the number of cluster nodes supported while providing exceptional random I/O performance. "High performance is critical to our success at the Institute," said Jones. "We needed a storage solution that would allow our cluster to maximize its CPU capabilities." Finally, since Jones supports several clusters by himself, ease of installation and management was essential. A critical goal was to quickly install a storage system and immediately move data.
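The single-server bottleneck described above can be sketched in a few lines of Python. This is an illustrative model only, not Panasas or ICME code, and every name in it is hypothetical: when all compute nodes funnel requests through one server, I/O serializes behind that server's queue; when each node can reach its data directly, requests proceed in parallel.

```python
# Illustrative sketch (hypothetical names, not Panasas code):
# contrasting a single shared-server path with direct per-node access.
import threading
from concurrent.futures import ThreadPoolExecutor

NODES = 8
# Toy "disk": one data block per node.
BLOCKS = {f"block{i}": bytes([i]) * 4 for i in range(NODES)}

# Model 1: every node reads through one server whose request
# handling is serialized by a single lock (the bottleneck).
server_lock = threading.Lock()

def read_via_server(name):
    with server_lock:       # all nodes wait their turn here
        return BLOCKS[name]

# Model 2: each node reads its block directly, with no shared
# lock in the data path, so reads proceed fully in parallel.
def read_direct(name):
    return BLOCKS[name]

def run(reader):
    # One worker thread per "node" issues a read concurrently.
    with ThreadPoolExecutor(max_workers=NODES) as pool:
        return list(pool.map(reader, sorted(BLOCKS)))

# Both paths return the same data; only the concurrency differs.
assert run(read_via_server) == run(read_direct)
```

The point of the sketch is that correctness is identical in both models; what changes is that the locked path admits one request at a time, which is why adding a second server (splitting one lock into two) scales so poorly compared with removing the serialization point altogether.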
The Solution

The Institute selected the Panasas® Storage Cluster to eliminate the I/O bottleneck hindering overall performance of the Nivation cluster. As an integrated software/hardware solution, the Panasas Storage Cluster was set up and in production within a matter of hours. This was a key benefit for Steve Jones. "We don't have a huge team and weeks of time to integrate discrete systems," said Jones. "We do science here, not IT." The Panasas Storage Cluster enables quick deployment by combining a next-generation distributed file system with an integrated hardware solution tuned to deliver outstanding performance.

To achieve high data bandwidth, the Institute installed the Panasas DirectFLOW® protocol on each node in the cluster. The direct node-to-disk access offered by the DirectFLOW protocol allowed the Institute to achieve an immediate order-of-magnitude improvement in performance. "By leveraging the object-based architecture, we were able to completely eliminate our storage I/O bottleneck," said Jones. The Panasas out-of-band DirectFLOW data path moves file system capabilities to Linux compute clusters, enabling direct and highly parallel data access between the nodes and the Panasas Storage.
This achieves up to a 30X increase in data throughput for the most demanding applications.

The Result

Since installing the Panasas Storage Cluster, the Institute has been able to maximize the productivity of the Nivation cluster, consistently running at 100% utilization. This allows the Institute to deliver accelerated results to their customers. "Our user community is dependent upon our clusters to deliver results as quickly as possible," said Jones. Panasas has been a key ingredient in maximizing the research capabilities of the Nivation cluster.

Additionally, the ease of installation and management of the system cannot be overlooked. Jones independently manages several clusters, so a solution that is simple to deploy and administer is of critical importance.

Finally, the direct node-to-disk access offered by the DirectFLOW protocol provides a wealth of long-term architectural opportunities for the Institute. "Previously, we had to limit the growth of our clusters because of I/O issues," said Jones. "With the object-based architecture, we are empowered to build the largest, fastest clusters possible. We now have a shared storage resource that can scale in both capacity and performance."

About Panasas

Panasas, Inc., the leader in high-performance scale-out NAS storage solutions, enables enterprise customers to rapidly solve complex computing problems, speed innovation, and bring new products to market faster. All Panasas solutions leverage the patented PanFS™ storage operating system to deliver exceptional performance, scalability, and manageability.

PW-10-21300 | Phone: 1-888-PANASAS | © 2010 Panasas Incorporated. All rights reserved. Panasas is a trademark of Panasas, Inc. in the United States and other countries.