ALICE Supercomputer Runs Computationally Intensive Research with Panasas

The University of Leicester consolidated its high performance computing (HPC) capabilities into a private cloud infrastructure with Panasas ActiveStor, a high performance parallel storage system running over 10GbE and InfiniBand. The centralized supercomputer, Advanced Leicester Information and Computational Environment (ALICE), allows multi-disciplinary researchers to run computationally intensive research projects quickly and efficiently while resulting in significant cost savings for the university.

Customer Success Story: University of Leicester
University of Leicester and the ALICE Supercomputer
May 2011
Background

Dr. Chris Rudge had big challenges to overcome when he was tasked with creating a central supercomputing facility at the University of Leicester, one of the first universities in the UK to centralize its high performance computing (HPC) systems inside a private cloud infrastructure. Most academics were used to owning their own HPC capabilities, or to sharing departmental facilities between small groups of colleagues. To succeed, Dr. Rudge had to deliver a system that was better than the custom HPC clusters scattered across the campus if departmental users were to accept the new centralized facility.

In challenging economic times, the new system would also have to deliver significant cost savings over the many individual HPC storage systems it replaced. Another major consideration was that an increasing number of HPC users did not work in traditional "computation heavy" disciplines such as physics, astronomy, chemistry and engineering. The new system therefore had to scale out performance on a single system, but also be reliable and easy to use, particularly for academics who did not have a background in using HPC clusters.

"HPC clusters have developed to a level where they are now professional products which we should expect to 'just work.' The complete solution chosen by the University of Leicester does exactly that, with best-of-breed storage from Panasas. Ease of management allows researchers to optimize their use of the system without having to spend valuable time simply keeping the system running. This is very important on an HPC system which supports a broad range of research across a diverse set of academic disciplines."
– Dr. Chris Rudge, University of Leicester

Previously, Dr. Rudge had been responsible for the largest HPC system at the university, which was dedicated to the Physics & Astronomy Department. He was selected to manage the project because he understood the challenges of running a small HPC system within a single department.

"Academic departments typically struggle to secure research funding to employ skilled IT support staff simply to administer their HPC systems," commented Rudge. "Without dedicated support staff, HPC systems do not usually function well and researchers find it difficult to make best use of these expensive resources."

By centralizing HPC support, Dr. Rudge assembled a team of six to manage the new system, the Advanced Leicester Information and Computational Environment, dubbed ALICE. The larger team dramatically hastened the process of designing, procuring and installing the HPC infrastructure, and the final system was delivered on time and on budget, something that individual departments often fail to achieve with smaller systems.

The Solution

By consolidating its high performance storage and compute budget into a centralized private cloud, the university avoided the cost of over-provisioning for peak workloads that is typical within individual departments. The facility is far bigger than would be possible for any individual departmental system, allowing researchers to test new ideas with smaller problem sizes and then carry out a small number of large runs that leverage all of ALICE's processing cores. The team worked with a leading HPC server provider to create a system with 256 compute nodes, two login nodes and two management nodes, all connected by a fast InfiniBand network fabric. Each compute node offers a pair of quad-core 2.67GHz Intel Xeon X5550 CPUs, providing 2,048 CPU cores for running jobs. The popular 64-bit Scientific Linux distribution, a version of Red Hat Enterprise Linux, was chosen as the operating system.
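For readers checking the arithmetic, the node and socket counts above multiply out to the 2,048-core figure quoted. The short Python snippet below simply restates that calculation, using only numbers given in the case study.

```python
# Back-of-the-envelope check of ALICE's compute capacity,
# using only the figures quoted in the case study above.
compute_nodes = 256      # compute nodes (login and management nodes excluded)
sockets_per_node = 2     # dual-socket Intel Xeon X5550
cores_per_socket = 4     # quad-core, 2.67 GHz

total_cores = compute_nodes * sockets_per_node * cores_per_socket
print(total_cores)       # 2048 CPU cores available for running jobs
```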
Perhaps one of the most important considerations was selecting the accompanying data storage system. The optimal solution would need to deliver data quickly to the powerful computer system while providing the flexibility to meet the diverse needs of many researchers. A high performance storage system was critical: Dr. Rudge knew that while the compute nodes would perform very similarly regardless of supplier, storage performance could differ materially. The need to store data for HPC projects from all university departments in a global namespace required a solution that delivered all of this at a very competitive cost.

"Panasas storage performance and manageability were the key reasons that the University of Leicester chose Panasas for ALICE. HPC clusters have largely standardized on the use of x64 CPUs with dual-socket nodes and InfiniBand interconnect. The type of storage and its integration into the complete system was seen as one of the few areas where there would be differences between suppliers. Selecting the wrong product could have potentially crippled the system; however fast the CPUs are, they are rendered useless if there is no data to process. ALICE is the largest HPC system ever purchased by the University, and benchmarking during the tender process demonstrated clearly that Panasas offered the best performance, both on a per-node basis and without degradation of aggregate performance when accessed from multiple nodes."
– Dr. Chris Rudge, University of Leicester

Panasas® ActiveStor™ Parallel Storage

The team developed a suite of benchmarks during the tender process, including one designed to stress the storage and the network infrastructure linking the compute nodes to the storage. This critical benchmark was designed to run with varying degrees of parallelism and on different numbers of nodes, ranging from one job on a single node to eight jobs on each of 32 nodes. It would find the limits of bandwidth per node, as well as identify any degradation in performance as the number of jobs per node and the number of nodes running the benchmark increased.

Two storage solutions utilizing parallel file systems were shortlisted for benchmarking: Panasas ActiveStor, a high performance parallel storage system running over 10GbE and InfiniBand, and an open source Lustre-based solution employing commodity hardware. At first glance the Lustre system seemed to offer the greatest flexibility and lowest cost, but further testing proved it would not hold its own on the performance or simplicity fronts. As soon as the benchmark was scaled across several nodes, Panasas ActiveStor demonstrated dramatically higher parallel file system performance than the Lustre-based system. Dr. Rudge knew that although he could not predict how many jobs would run on the cluster at any one time, he had seen an increasing number of data mining research tasks, jobs that would likely replicate the demands of the benchmark running over many nodes.

Simple-to-use storage management was another consideration. Dr. Rudge was keen to ensure that the storage solution selected would allow ALICE to maximize time for research projects, commenting that "improved management will produce greater research output per pound sterling, providing better value, even if the initial cost is the same."
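The tender benchmark itself is not reproduced in the case study, but its shape, a streaming I/O job launched as one to eight concurrent copies per node across up to 32 nodes with per-job bandwidth recorded, can be sketched. The Python sketch below is illustrative only: the block size, file size, scratch path and launch mechanism are assumptions, not details of the Leicester benchmark.

```python
#!/usr/bin/env python3
"""Illustrative sketch of a storage-stress benchmark of the kind described
above: each concurrent job streams a large file to and from the shared
parallel file system and reports its own bandwidth, so that per-node limits
and any degradation with rising job and node counts can be observed.
Sizes, paths and the launch method are assumptions, not Leicester's actual
benchmark."""

import os
import sys
import time

BLOCK = 4 * 1024 * 1024          # 4 MiB per write/read call (assumed)
TOTAL = 8 * 1024 * 1024 * 1024   # 8 GiB streamed per job (assumed)


def stream(path):
    """Write then read TOTAL bytes sequentially; return (write, read) in MB/s."""
    buf = os.urandom(BLOCK)

    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        for _ in range(TOTAL // BLOCK):
            f.write(buf)
        os.fsync(f.fileno())
    write_mbs = TOTAL / (time.perf_counter() - start) / 1e6

    # NB: to measure real storage (rather than page-cache) read bandwidth,
    # the file should be read back on a different node from the one that wrote it.
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while f.read(BLOCK):
            pass
    read_mbs = TOTAL / (time.perf_counter() - start) / 1e6

    os.remove(path)
    return write_mbs, read_mbs


if __name__ == "__main__":
    # One invocation per job; a wrapper script (or the batch scheduler) starts
    # 1..8 copies per node across 1..32 nodes, each with a unique tag.
    tag = sys.argv[1] if len(sys.argv) > 1 else f"{os.uname().nodename}-{os.getpid()}"
    w, r = stream(f"/panfs/scratch/bench.{tag}")   # hypothetical scratch mount
    print(f"{tag}: write {w:.0f} MB/s, read {r:.0f} MB/s")
```

Aggregating the per-job figures across runs at different node and job counts reproduces the two quantities the tender cared about: bandwidth per node and how gracefully aggregate performance holds up as access scales out.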
Panasas ActiveStor is a high performance storage system that brings plug-and-play simplicity to large scale storage deployments while pushing aggregate performance to 150GB/s, with industry-leading system throughput per gigabyte of storage. Dr. Rudge knew this scale-out storage system would provide the cost-effective performance, reliability, flexibility and critical ease of use needed to support expansion within his limited project budget, and he built a plan that allowed for scalable, pay-as-you-grow capacity. In addition, with features like user quotas and per-user chargeback, the Panasas HPC storage system was optimized for a shared cloud environment.

Eventually it became clear that the performance requirements and ease-of-use considerations meant there was no real alternative to Panasas ActiveStor. With research projects expected to place increasing demands on storage system performance, it was critical that the storage infrastructure did not introduce bottlenecks that would reduce ALICE's ability to deliver results quickly and efficiently.

Conclusion

The consolidation of its HPC systems into a centralized cloud infrastructure allowed the University of Leicester to achieve significant cost savings while dramatically increasing the peak performance available to individual departments. Typically, ALICE serves around 300 jobs at any one time, with perhaps 10 jobs occupying half the cluster and the remaining jobs running on an average of two to three CPUs. Users love the system's ability to run demanding algorithms quickly and efficiently; one economist ran 300,000 one-hour jobs in the first six months of operation, the equivalent of more than 30 years of processing on a single CPU. The facility supports a very broad range of research projects under a simple-to-manage global namespace: economists model the world's stock markets, geneticists analyze genome sequences and astrophysicists model star formation, for example. By any measure, ALICE has been a fantastic success.
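As a quick check on the workload figure quoted in the conclusion, 300,000 one-hour jobs correspond to roughly 34 years of serial computation, consistent with the "more than 30 years" claim:

```python
# The economist's workload from the conclusion, restated as serial CPU time.
jobs, hours_per_job = 300_000, 1
cpu_hours = jobs * hours_per_job
print(cpu_hours / (24 * 365))   # ~34.2 years on a single CPU core
```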