IBM Systems and Technology Case Study

MASSIVE sheds new light on complex research
Delivering faster data analysis and visualization from hybrid architecture

Overview

The need
Monash University (MU), the Australian Synchrotron (AS) and their research partners needed a powerful, massively parallelized HPC system to process imaging data and to increase the efficiency of data-collection tasks by enabling near real-time processing of collected data.

The solution
Implementation of two IBM System x® iDataPlex® dx360 clusters, with both intelligent Intel® Xeon® processors and NVIDIA GPUs, at AS and MU.

The benefit
Researchers now get near real-time previews and analysis of CT, MRI and electron microscope scans, enabling them to ensure that they are capturing all the data they need within limited windows of time.

The city of Melbourne in the Australian state of Victoria has developed a reputation for innovations in the field of High Performance Computing (HPC). Monash University (MU), CSIRO and the Australian Synchrotron (AS) are research institutions at the forefront of this scientific endeavor, and are home to a number of powerful HPC platforms. MU, CSIRO and AS work with the Victorian Partnership for Advanced Computing (VPAC) to run the Multi-modal Australian ScienceS Imaging and Visualization Environment, known as MASSIVE.

The Victorian Government supported the establishment of MASSIVE, and the National Computational Infrastructure (NCI) supports MASSIVE as a specialized facility for researchers across Australia.

Growing data demands

Australian scientists have access to a range of high-resolution imaging instruments, including the Imaging and Medical Beamline at the Australian Synchrotron. The MASSIVE partnership was formed to help scientists extract the most from what these instruments produce by providing a powerful, massively parallel HPC system optimized to process imaging data.

"The Australian scientific community is fortunate to have access to a range of amazing instruments, including the Australian Synchrotron," explains Wojtek James Goscinski, Coordinator of the MASSIVE project. Over the past few years, there has been a huge increase in the availability of imaging equipment, such as new MRI and CT facilities and new-generation instruments such as the Imaging and Medical Beamline at AS. Researchers use these facilities to perform high-resolution 3D scans of research samples, which can be anything from live organs to rock fragments. The three-dimensional images that are produced, called "data volumes", are enormous.
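To put "enormous" in perspective, here is an illustrative back-of-envelope calculation; the resolution and bit depth are assumptions chosen for the example, not figures from the case study:

    # Rough size of one reconstructed data volume. The 2048^3 resolution
    # and 16-bit voxels are assumed values for illustration only.
    voxels_per_side = 2048                 # assumed cubic scan, 2048^3 voxels
    bytes_per_voxel = 2                    # assumed 16-bit grayscale
    size_bytes = voxels_per_side ** 3 * bytes_per_voxel
    print(f"{size_bytes / 2**30:.1f} GiB per volume")   # -> 16.0 GiB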
"In the past, getting meaningful results from a complex series of scans could take weeks or even months to achieve," says Goscinski. "Cutting down the time it takes to process such crucial data can have a real impact on delivering new insights ahead of other research groups."

The MASSIVE partners wanted to help researchers get the most out of increases in the prevalence and performance of imaging modalities. To achieve this, the partners had to develop an HPC platform tailored to the specific demands of processing high-resolution data volumes.

High performance, high visibility

MASSIVE produced a detailed schematic of the HPC solution it required, and put it out to tender. "We needed a massively parallelized cluster, built on a hybrid GPU/CPU infrastructure," says Goscinski. "As publicly funded institutions, we had a duty to ensure that the IT solution met strict green IT requirements. IBM offered a high floating-point computational performance to power ratio for our IT spend, which impressed us during tender."

The full solution comprises two linked clusters, MASSIVE1 and MASSIVE2, managed as a single system. Each MASSIVE cluster has 42 IBM System x iDataPlex dx360 servers. Each of these iDataPlex nodes has two six-core Intel Xeon 5600 Series processors running at 2.66 GHz (for a total of 504 cores per cluster) and two NVIDIA M2070 GPUs (for a total of 84 GPUs per cluster).

The Intel Xeon processors provide industry-leading performance combined with extreme energy efficiency. iDataPlex is also a highly efficient solution, offering a unique half-depth form factor that maximizes the effect of cooling, enabling more processors to be packed reliably into a smaller space. The iDataPlex nodes can run Microsoft Windows Server or Linux depending on the individual requirements of a simulation, with volume reconstruction running in a Windows HPC environment and the core services on Linux. IBM General Parallel File System (GPFS™) provides high-performance parallelized access to data for both operating systems.

"The CT scan reconstruction algorithms that are used by imaging scientists to create 3D volumes are well parallelized on a GPU," explains Goscinski. "The reconstruction algorithms actually run so quickly that the challenge is getting data into the GPUs fast enough. We configured MASSIVE with an optimal ratio of GPUs to file-system performance, providing the best possible price-performance ratio."
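The bottleneck Goscinski describes, compute outpacing I/O, is commonly handled by prefetching: file reads are overlapped with GPU work so the accelerators are never left waiting. The sketch below shows only the general pattern; read_chunk, reconstruct_on_gpu and the buffer sizes are hypothetical stand-ins, not MASSIVE's actual pipeline.

    # Overlap file-system reads with GPU compute using a bounded queue.
    # All functions here are placeholders for illustration.
    import queue
    import threading

    CHUNK_COUNT = 64        # slabs of projection data to process
    PREFETCH_DEPTH = 4      # slabs buffered ahead of the GPU

    def read_chunk(i):
        """Stand-in for a parallel GPFS read of one slab of raw data."""
        return bytes(1024)  # placeholder payload

    def reconstruct_on_gpu(chunk):
        """Stand-in for a GPU-parallelized CT reconstruction step."""
        return len(chunk)

    buffer = queue.Queue(maxsize=PREFETCH_DEPTH)

    def prefetcher():
        # I/O thread: keep the buffer topped up so compute never stalls.
        for i in range(CHUNK_COUNT):
            buffer.put(read_chunk(i))
        buffer.put(None)    # sentinel: no more data

    threading.Thread(target=prefetcher, daemon=True).start()

    while (chunk := buffer.get()) is not None:
        reconstruct_on_gpu(chunk)   # compute overlaps the next reads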
Solution components

Hardware
● IBM System x® iDataPlex® dx360 class server
● IBM System x3650 class server
● Intel® Xeon® processors
● IBM System Storage® DS3500 Turbo
● IBM System Storage SAN24B-4 Express
● IBM System Networking RackSwitch G8124, G8100
● Mellanox IS5200 QDR Switch

Software
● IBM General Parallel File System (GPFS™)
● Extreme Cloud Administration Toolkit (xCAT)
● Linux

Services
● IBM ANZ STG Lab Services

He adds: "We also use the GPUs in more conventional ways to perform real-time or offline rendering for visualization projects. At the lower end of the workload spectrum, we also offer an interactive desktop environment running on individual nodes, with a whole range of tools that researchers can use to process their data."

One of the most important functions of the MASSIVE system is its ability to provide a real-time preview of scan data (a generic illustration of this idea appears in the sketch at the end of this section). "One of the major inefficiencies in high-resolution imaging experiments was that it was difficult to know if you'd captured your data correctly," says Goscinski. "With the visualization capabilities of our IBM iDataPlex solution, we've given researchers the chance to check that they're collecting all the data they want, allowing them to get maximum value from their allotted scanning slots."

Throughout the implementation, MASSIVE worked closely with the IBM team. "We found our local IBM representative to be highly motivated and professional," says Goscinski. "The whole process went smoothly, and we're very satisfied with the results."

Clear benefit

MASSIVE's resources are shared between the Australian Synchrotron, CSIRO, Monash University and VPAC. In addition, a portion of the total resources was also purchased by the National Computational Infrastructure, one of the three peak Australian supercomputing facilities, for leading researchers across Australia. Computing time is allocated to projects based on their scientific merit. While some researchers use hundreds of thousands of computing hours, others require only a few hours using a desktop interface, making MASSIVE a highly flexible HPC solution.

"We support a lot of neuroimaging research, as well as engineering and materials science. Scientists who use microscopy and electron microscopy also use the system," continues Goscinski. "We're also getting increasing demand from researchers across Australia in the fields of molecular dynamics and astrophysics, who use MASSIVE's parallelized GPUs to run complex simulations."

With MASSIVE's support from the Victorian Government and the National Computational Infrastructure, Australian researchers benefit from the exchange of knowledge and ideas on the use and application of massively parallel HPC. "MASSIVE is a key specialized HPC resource in Australia," says Goscinski. "Data generated from imaging modalities across the country can now be processed far more effectively, generating more significant insights, faster."
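As noted above, one simple, generic way to deliver a near real-time preview is to downsample the volume by striding, producing a small, render-ready array long before the full-resolution reconstruction is complete. This is a common technique sketched under assumed names and sizes, not MASSIVE's actual preview pipeline.

    # Coarse preview of a 3D scan by striding; numpy arrays stand in for
    # real scan data. Sizes and names are illustrative assumptions.
    import numpy as np

    def quick_preview(volume, factor=8):
        """Return a preview with factor**3 times fewer voxels."""
        return volume[::factor, ::factor, ::factor]

    scan = np.zeros((512, 512, 512), dtype=np.uint16)  # stand-in scan data
    preview = quick_preview(scan)                      # 64^3, cheap to render
    print(preview.shape)                               # -> (64, 64, 64)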
Future resolutions

Inspired by the success of the new clusters, MASSIVE is now looking to extend its capacity. "We're planning to scale up in a year's time by doubling the size of MASSIVE2," concludes Goscinski. "MASSIVE has established itself as an invaluable tool for accelerating data analysis and visualization. By continuing our relationship with IBM, we're confident the system will meet the processing demands of the future."

For more information

Contact your IBM sales representative or IBM Business Partner, or visit us at: ibm.com/systems/x/hardware/idataplex

For more information about Monash University visit: monash.edu.au

© Copyright IBM Corporation 2012

IBM Corporation
Systems and Technology Group
Route 100
Somers, NY 10589

Produced in the United States of America
March 2012

IBM, the IBM logo, ibm.com, GPFS, System Storage and System x are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at ibm.com/legal/copytrade.shtml

Microsoft, Windows and Windows NT are trademarks of Microsoft Corporation in the United States, other countries, or both.

Intel, the Intel logo, Xeon and Xeon Inside are trademarks of Intel Corporation in the U.S. and/or other countries.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

THE INFORMATION IN THIS DOCUMENT IS PROVIDED "AS IS" WITHOUT ANY WARRANTY, EXPRESS OR IMPLIED, INCLUDING WITHOUT ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT. IBM products are warranted according to the terms and conditions of the agreements under which they are provided.

This document is current as of the initial date of publication and may be changed by IBM at any time. Not all offerings are available in every country in which IBM operates.

The client examples cited are presented for illustrative purposes only. Actual performance results may vary depending on specific configurations and operating conditions. It is the user's responsibility to evaluate and verify the operation of any other products or programs with IBM products and programs.

Please Recycle

XSC03104-USEN-00
