HPC Press Slides

  1. Accelerate your business. Preserve your data. PLEASE EMBARGO INFORMATION UNTIL JANUARY 20, 2010
  2. StorNext
     • Successful track record since 2001 (significant traction in the last 3 years)
     • Over 4,000 customer installations
     • Over 50,000 File System clients deployed
     • Over 400 archive systems deployed
     • Over 100 PB under management
     • Broad support of OS platforms and storage technologies
       • Windows, Linux, UNIX, Mac (OS platforms)
       • Quantum, Sun/STK, IBM, HP, Dell, Sony, Spectra Logic, Qualstar (tape technologies)
       • Server and disk vendor agnostic
     • Utilized throughout rich data content industries
       • Broadcast, Cable & CCTV
       • Movie and Television Production, Post-Production & Graphics
       • Science and Engineering (Oil & Gas research, satellite imaging)
       • High Performance Computing (Life Sciences: genome sequencing)
       • Audio and Video Surveillance and Mining
       • Government Data Mining projects
       • Internet Streaming
       • Pre-Press, Digital Printing
       • CAD/CAM
  3. StorNext Data Management Software Overview
     • Simplifies data sharing via high-speed Fibre Channel (SAN) and NAS
     • Reduces storage costs and complexity by unifying storage tiers
     • Preserves choice across operating systems, disk vendors and tape vendors
     • Independently scales to thousands of nodes and PBs of data
     [Diagram: StorNext SAN clients (FC/iSCSI) and StorNext LAN clients (CIFS/NFS, GigE TCP/IP, InfiniBand IP) share a primary tier; an optional StorNext policy engine and a metadata controller tier data to a secondary tier and a tape library archive vault]
  4. StorNext Data Management Software: StorNext File System
     • Heterogeneous shared file system presents pools of storage as locally attached
     • Purpose-built for high-performance ingest, distribution and computing environments
     • Optional policy engine for file-based replication and near-line deduplication
     [Diagram: StorNext SAN clients (FC/iSCSI) access the primary tier; an optional StorNext policy engine moves data to a secondary tier; a high-availability (HA) metadata controller (MDC) pair coordinates the file system]
  5. StorNext Data Management Software: StorNext Storage Manager
     • Delivers transparent tiered storage and archive for preservation
     • Directory-level policy engine drives data movement (sketched below)
     • Supports disk, NAS, VTL, MAID and tape target devices
     [Diagram: StorNext SAN clients (FC/iSCSI) on the primary tier; the optional StorNext policy engine migrates data to a secondary tier and a tape library archive vault, coordinated by the metadata controller]
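The directory-level policy idea on this slide is easy to picture in code: scan a managed directory and demote files that have gone cold to a cheaper tier. The Python sketch below is a hypothetical illustration only; the paths, the 30-day threshold and the `migrate_cold_files` helper are invented, not StorNext's policy syntax, and a real Storage Manager keeps the file visible in the namespace so the move stays transparent to applications.

```python
import os
import shutil
import time

def migrate_cold_files(primary_dir, secondary_dir, age_days=30):
    """Demote files not accessed within `age_days` to the secondary tier."""
    cutoff = time.time() - age_days * 86400
    for root, _, files in os.walk(primary_dir):
        for name in files:
            src = os.path.join(root, name)
            if os.stat(src).st_atime < cutoff:           # file has gone cold
                rel = os.path.relpath(src, primary_dir)
                dst = os.path.join(secondary_dir, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)                    # relocate to cheaper tier

# Hypothetical mount points for illustration only
migrate_cold_files("/stornext/primary", "/stornext/nearline", age_days=30)
```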
  6. StorNext Data Management Software: StorNext Distributed LAN Client (DLC)
     • Extends collaboration beyond the SAN by supporting thousands of nodes across the LAN
     • Highly resilient "clustering" via gateway servers
     • Load balancing for optimum performance (sketched below)
     [Diagram: scales to thousands of StorNext distributed LAN clients (GigE TCP/IP, InfiniBand IP) reaching vendor-agnostic primary and secondary tiers (FC/iSCSI) and a tape library archive through DLC gateway servers, with an optional StorNext policy engine and a metadata controller]
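A rough way to picture gateway load balancing and resiliency: a client routes each session to the least-loaded reachable gateway and retries against the survivors when one disappears. The snippet below is a hypothetical sketch of that selection step, not the actual DLC protocol; the gateway names and load figures are made up.

```python
def pick_gateway(gateways):
    """Return the least-loaded reachable gateway (load None = unreachable)."""
    live = {gw: load for gw, load in gateways.items() if load is not None}
    if not live:
        raise RuntimeError("no gateway servers reachable")
    return min(live, key=live.get)

gateways = {"gw1": 0.72, "gw2": 0.31, "gw3": None}  # gw3 is down
print(pick_gateway(gateways))                       # -> "gw2"
```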
  7. StorNext 4.0 – Key New Features
     Features  Benefits
     • Data Replication  Protection and Distribution
     • File System Deduplication  Nearline Storage
     • Distributed Data Mover  Increased SM Performance
     • Web Services-based GUI  Improved Usability
  8. StorNext Data Management Software: File-based Replication
     File-based replication for File System and Storage Manager environments
     • File system policy-based for protection, distribution and consolidation
     • Supports one-to-one, many-to-one and one-to-many StorNext environments (see the sketch below)
     • Asynchronous, host-based, over IP and FC connectivity
     [Diagram: remote offices replicate over the WAN into a central datacenter (many-to-one), and a central datacenter replicates out to remote offices (one-to-many)]
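Asynchronous, one-to-many replication can be pictured as a queue of changed files that a background worker fans out to every target, so application writes never wait on the WAN. This is a conceptual sketch under assumed paths and target mounts, not StorNext's replication engine.

```python
import os
import queue
import shutil
import threading

targets = ["/replica/siteA", "/replica/siteB"]   # hypothetical target mounts
work = queue.Queue()                             # changed files awaiting propagation

def replicator():
    while True:
        rel_path, src = work.get()
        try:
            for t in targets:                    # one-to-many fan-out
                dst = os.path.join(t, rel_path)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)
        except OSError as err:
            print(f"replication of {rel_path} failed: {err}")  # a real engine would retry
        finally:
            work.task_done()

threading.Thread(target=replicator, daemon=True).start()
work.put(("project/scene1.mov", "/stornext/primary/project/scene1.mov"))
work.join()   # returns once the queued change has been attempted on all targets
```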
  9. StorNext Data Management Software: File System Deduplication
     • Policy-based file system deduplication for protection and nearline storage
     • Integrated with the replication policy engine for flexibility
     • Directory-level deduplication based upon time, size and file type (see the sketch below)
     [Diagram: StorNext SAN clients (FC/iSCSI) on a vendor-agnostic primary tier; the StorNext policy engine writes to a deduplicated repository on a vendor-agnostic secondary tier, coordinated by the metadata controller]
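To make the dedup policy concrete, the sketch below applies a size-and-type filter, hashes qualifying files, and groups identical content so a single copy could be kept in a deduplicated repository. It is a hypothetical whole-file illustration; the `find_duplicates` name, thresholds and extensions are assumptions, and real deduplication typically works on sub-file segments rather than whole files.

```python
import hashlib
import os

def find_duplicates(directory, min_size=1 << 20, exts=(".mov", ".dpx")):
    """Group files with identical content, honoring a size/type policy."""
    by_digest = {}
    for root, _, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getsize(path) < min_size or not name.endswith(exts):
                continue                         # outside policy: leave alone
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            by_digest.setdefault(digest, []).append(path)
    return {d: paths for d, paths in by_digest.items() if len(paths) > 1}

# e.g. find_duplicates("/stornext/primary/projects")
```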
  10. StorNext Data Management Software: StorNext Storage Manager Distributed Data Mover (DDM)
     • Turns Linux SAN clients into Storage Manager data movers (sketched below)
     • Improves throughput performance to tiers of storage, especially tape
     • Reduces the need for large, high-performance Metadata Controller (MDC) servers
     [Diagram: StorNext SAN clients and DDM servers attach over FC/iSCSI to the primary and secondary tiers; multiple DDM servers, including the MDC acting as a DDM, drive FC paths to the tape library archive]
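The throughput win from distributed data movers comes from parallelism: instead of the MDC copying every file to tape itself, copy jobs are dealt out across mover hosts. A minimal round-robin sketch, with invented host and file names:

```python
import itertools

movers = ["mdc", "ddm1", "ddm2", "ddm3"]   # the MDC can also act as a mover
assign = itertools.cycle(movers)           # round-robin job distribution

jobs = [f"/stornext/primary/file{i:04d}" for i in range(8)]
for job in jobs:
    host = next(assign)
    print(f"{host} -> copy {job} to tape")
```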
  11. StorNext Data Management Software: Web Services-based Graphical User Interface (GUI)
     • Improved usability
       • 50% page reduction vs. v3.x
       • More user-friendly, for reduced complexity
       • Advanced configuration wizard simplifies setup and configuration
     • Improved monitoring
       • File system operations
       • At-a-glance system health status
       • At-a-glance capacity utilization
       • Improved reporting and graphing capabilities
     • Web Services architecture (XML), illustrated below
       • Improved interoperability with 3rd-party applications based on a web services architecture
       • Platform-independent control of Storage Manager environments (Linux, Windows)
       • SNAPI support will eventually be replaced by a full Web Services API
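Since the slide only says the new GUI sits on an XML web services architecture, here is a purely illustrative client that polls a management endpoint and parses an XML status document. The URL, path and element names are invented for the example and are not StorNext's actual Web Services API.

```python
import urllib.request
import xml.etree.ElementTree as ET

def filesystem_health(base_url):
    """Fetch a hypothetical XML status document and map file systems to states."""
    with urllib.request.urlopen(f"{base_url}/status") as resp:   # invented endpoint
        doc = ET.fromstring(resp.read())
    # Assumes elements like <filesystem name="snfs1" state="online"/>
    return {fs.get("name"): fs.get("state") for fs in doc.iter("filesystem")}

# e.g. filesystem_health("http://mdc.example.com:8080/api")
```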
  12. Management Console View
     [Screenshot: StorNext management console]
  13. StorNext Data Management Software
      Finish Projects Faster: high-speed, heterogeneous file sharing
      Reduce Storage Costs: multi-tier storage and archiving
      Prevent Complexity: transparent, automated data movement
      Safeguard Critical Data: data protection, monitoring, alerting
      Preserve Choice: platform and storage vendor independent
  14. Customer Case Examples
  15. StorNext User: Lime Pictures (Media & Entertainment)
     Key Challenges
     Many of the company's productions operate with tight deadlines and a high frequency of output. HD brings its own challenges, with a four-fold increase in content and bandwidth requirements.
     Project Objectives
     • Real-time tapeless workflow by filming studio-based material directly onto the Storage Area Network (SAN)
     • Simultaneous workflow: making content available at the same time to all aspects of production
     Quantum Solution/Benefits
     • Quantum StorNext File System and Storage Manager, Quantum Scalar i2000 tape library
     • Get content to air faster and share content more efficiently and cost-effectively
     • Effectively store and share content, and deploy the solution into the existing workflow
     • Transform tapeless workflow reliability and usability
     • Establish multi-tier archives, automatically moving data between the disk and Scalar i2000 tape resources
     Why Quantum/StorNext
     • Proven in the market
     • Open architecture provides the flexibility to choose whichever technologies are desired
     • Offers rich acquisition, ingest, and production editing functionality
     • Provides the resilience and reliability needed
     "Reliability and usability have been transformed since we deployed the digital workflow solution. We haven't experienced any downtime and our productivity has increased significantly. Our confidence in the system has certainly risen by 100 percent." -Mike Horan, SAN Support Engineer, Lime Pictures
  16. StorNext User: Baylor College of Medicine (HPC – Life Sciences)
     Key Challenges
     Sequencing the 3 billion chemical building blocks that make up the DNA of the 24 different human chromosomes, this research center needed to access, share and manage hundreds of terabytes of data for analysis at any time.
     Project Objectives
     • Centrally manage a complex heterogeneous environment of servers, networks and storage technology
     • Expand Baylor College's Human Genome Sequencing Center's data storage capabilities
     Quantum Solution/Benefits
     • Quantum StorNext File System and Storage Manager, Quantum Scalar i2000 tape library
     • Enabled simultaneous access to huge volumes of data without impacting system users
     • Provided cost-effective content creation through automated data management
     • Allowed centralized management of the heterogeneous environment
     • Protected prior investments by integrating legacy resources
     • Provided a scalable foundation to meet anticipated storage growth of up to 20 PB over the next 2-3 years
     Why Quantum/StorNext
     • Reputation in high-performance, multi-petabyte environments
     • Support for existing storage hardware, with no significant investment needed for additional hardware
     • Easy-to-manage system
     "By combining high-speed data sharing and cost-effective content retention in a single solution, StorNext has enabled our researchers to access the data they need quickly and easily and eliminated the significant management overhead we incurred with our legacy system." -Geraint Morgan, Director of Information Systems, Baylor College
  17. StorNext User: CERN (HPC – Science & Engineering)
     Key Challenges
     Collecting and processing large amounts of scientific data from collisions between billions of particles demands a fast, reliable and scalable IT infrastructure.
     Project Objectives
     • Manage high-volume scientific data
     • High-speed, shared workflow operations
     • Large-scale, multi-tier archiving
     Quantum Solution/Benefits
     • Quantum StorNext File System
     • 20 InfoTrend 4 Gb/s Fibre Channel disk arrays; 80 servers running Linux
     • High-speed, shared workflow operations with quicker file processing
     • Processing massive amounts of data (1 PB of data per month)
     • Access data quickly and easily on all hosts
     • Aggregated bandwidth and scalability in the number of client nodes
     • Direct access to the Fibre Channel connection, increasing performance
     • Affinity allows directing data traffic to pre-determined disks (all machines operating at maximum performance)
     Why Quantum/StorNext
     • Offered significantly higher performance than the competitor because it makes direct use of the Fibre Channel connection
     "Data is the most precious commodity CERN has. Quantum StorNext is instrumental in collecting that data quickly and reliably, thereby enabling the scientific community to understand and exploit new ideas and discoveries." -Pierre Vande Vyvre, Project Leader, CERN
