High-Performance Storage for the Evolving Computational Requirements of Energy Exploration
Solution Profile



Richer Data Requires Intelligent Storage

Continued explosive growth of raw field data, combined with the use of more compute-intensive analysis algorithms for oil and gas exploration, is placing new demands on energy industry IT infrastructures.

Energy Demands Continue to Grow

Much of the world is expected to increase energy consumption significantly over the next decade. As a result, the need to find new energy sources continues to grow. To meet this demand, the industry has turned to more sophisticated analysis tools. Exploration equipment, including new technologies such as downhole sensors, improves the chances of success and reduces time to discovery. Re-examining existing data using more advanced methods can also yield new results.

These new technologies generate a vast amount of data that needs to be quickly analyzed and visualized. In particular, raw data from exploration tools is typically rendered and interpreted using sophisticated seismic imaging analysis software to produce detailed 3-D and 4-D models of the earth's subsurface.

Naturally, speed is essential when exploration organizations try to determine the commercial viability of tapping new reservoirs to meet increasing energy demand. With today's competition to find energy sources, the goal is to better interpret more data in less time.

The key to accelerating analysis and quality decision-making is the ability to store rapidly expanding volumes of seismic data. Making data accessible to the appropriate parties for analysis is critical to avoid performance bottlenecks that can add days or weeks to the discovery process.
Hitachi Data Systems delivers network attached storage solutions to meet these challenges. Their performance and advanced data management features address the issues of today's data-intensive oil and gas exploration activities and strict project deadlines.

Keep Pace With Evolving Requirements

Advances in seismic imaging are helping organizations speed the discovery of new energy sources. In particular, new technologies like reverse-time migration and waveform inversion are producing larger volumes of richer data. At the same time, getting actionable information with these techniques is much more compute-intensive. Today, more processing power is necessary to accommodate the growth in data volume and sophisticated seismic analysis algorithms.

Traditionally, organizations have relied on advances in high-performance computing (HPC), which provided more computational muscle to work a problem. Specifically, organizations routinely would use clusters of multicore CPU Linux servers. Over time, energy industry imaging algorithms and their implementations have evolved faster than the available hardware. For example, algorithms can be selectively tuned to get a hardware-assisted boost by running on field-programmable gate arrays (FPGAs), graphics processing units (GPUs) or Cell processors. This development has led to continued efforts to improve the algorithms and to take advantage of new processor technology. Tailoring the processing algorithms to run on these new platforms reduces cycle times by more than an order of magnitude, which constitutes a significant acceleration in the industry. Simply put, such hardware accelerates seismic analysis computational workflows to the point where computation is no longer the limiting factor in the workflow. Unfortunately, I/O performance bottlenecks can occur when traditional NAS and SAN storage systems host the data and feed the computational workflows.
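The scale-out pattern described above, where independent units of seismic work fan out across multicore cluster nodes, can be sketched in miniature with Python's multiprocessing module. This is purely illustrative: the shot data, the `migrate_shot` kernel and the function names are invented stand-ins, not any real imaging code.

```python
import multiprocessing as mp

def migrate_shot(shot):
    # Stand-in for a per-shot imaging kernel (illustrative only):
    # here we just sum the squared sample amplitudes.
    return sum(s * s for s in shot)

def process_survey(shots, workers=4):
    # Each shot is independent, so the work farms out cleanly across
    # CPU cores, much as real imaging jobs fan out across cluster nodes.
    with mp.Pool(workers) as pool:
        return pool.map(migrate_shot, shots)

if __name__ == "__main__":
    survey = [[float(i % 7) for i in range(1000)] for _ in range(32)]
    images = process_survey(survey)
    print(len(images))  # one partial result per shot
```

The key property, as the text notes, is that once compute parallelizes this easily, the shared storage feeding every worker becomes the limiting factor.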
Such systems do not deliver the performance or scalability needed to support today's energy exploration efforts. This lack of performance becomes especially evident during peak load times in a shared environment where multiple demands simultaneously stress the storage system.

Attempts to accommodate the growing data sets and to boost performance of such systems often fall short. The common approach is to simply add more storage devices, which adds complexity. Additional devices must be managed, and significant amounts of time must be dedicated to moving and managing the data across the various devices. Operating costs such as data center facilities charges and electricity to power and cool the systems also increase.

What Is Needed

A storage solution that can help accelerate discovery in oil and gas exploration must have several important characteristics. These characteristics include:

High Performance

To keep today's HPC clusters satiated, a storage solution must offer high data throughput and I/O. Storage must also be able to accommodate many simultaneous reads against the same stored data set as calculations are distributed across hundreds of servers or more. In addition, performance must remain consistent as the number of reads, systems and file sizes scale. In particular, there is a need for a storage infrastructure that uses a hardware assist with built-in parallelism to accelerate performance.

Essentially, the storage infrastructure must match the changing performance requirements of the applications running on the computational systems. Applications evolve. Just as the HPC systems have increased performance by using hardware acceleration (for example, FPGAs, GPUs and so forth) and multicore CPU clusters, so too must the storage solution. Most importantly, the storage solution must keep pace in multiple dimensions. It must be able to address IOPS as well as throughput, and be able to scale independently in capacity as well as throughput.
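The "many simultaneous reads against the same stored data set" requirement can be illustrated in miniature: below, several threads each open their own handle on one shared file and pull disjoint slices concurrently, the way many compute nodes read from one shared volume. The file contents, chunking scheme and function names are illustrative assumptions, not any vendor API.

```python
import concurrent.futures
import os
import tempfile

def read_slice(path, offset, length):
    # Each worker opens its own handle and reads a disjoint slice,
    # mimicking many compute nodes pulling from one shared data set.
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

def parallel_read(path, n_workers=8):
    # Split the file into n_workers contiguous slices and read them
    # concurrently; the last slice absorbs any remainder.
    size = os.path.getsize(path)
    chunk = size // n_workers
    with concurrent.futures.ThreadPoolExecutor(n_workers) as ex:
        futures = [
            ex.submit(read_slice, path, i * chunk,
                      chunk if i < n_workers - 1 else size - i * chunk)
            for i in range(n_workers)
        ]
        return b"".join(f.result() for f in futures)

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(b"seismic-trace-data" * 1024)
        path = tmp.name
    assert parallel_read(path) == b"seismic-trace-data" * 1024
    os.unlink(path)
```

On a laptop the local disk absorbs this easily; the point of the passage is that at cluster scale, with hundreds of such readers, only a storage system with built-in parallelism keeps throughput consistent.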
Massive Scalability

New seismic imaging techniques can produce data sets that easily run into hundreds of terabytes each, often multiple petabytes in aggregate. Whether it is used for data acquisition, processing or interpretation, any storage platform has to scale seamlessly to accommodate the rapidly growing data sets.

Reduced Cost of Ownership

A storage solution must be able to consolidate petabytes of data onto a single platform with a common set of management tools. The use of industry-standard protocols increases interoperability and choice, while lowering costs. The solution must also be energy efficient to help rein in power consumption costs.

Figure 1. High-Performance Solutions for Media Workflows

Simplified Data Management

Managing massive amounts of data is often difficult and can introduce significant delays that impact workflows. It can also make technology transitions very slow and difficult. Substantial cost savings and improved efficiencies can be achieved by leaving the data in place. Instead, that data can be referenced through virtualized redirection, or policy-based processes can automatically, without manual intervention, ensure that the data is in the right place.

Multiple Operating System Support

Today's exploration organizations frequently examine and interpret data sets using different applications on various operating systems, including UNIX, Linux and Microsoft® Windows®. As such, a storage solution must support standard protocols and accommodate applications running on all operating systems, even when they are accessing the same data set.

Availability and Data Protection

Beyond performance, the other major potential storage bottleneck is downtime, which can add days or weeks to the discovery process. To avoid downtime, storage solutions must offer a set of high-availability features, such as redundant components and paths, multiple RAID levels and cluster failover across multiple nodes.

Hitachi Data Systems as Your Technology Partner

Hitachi Data Systems is a provider of high-performance network storage with a flexible architecture suited to the needs of the oil and gas industry. Today's leading energy exploration organizations use our systems.
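The policy-based placement idea above, moving data to the right place automatically rather than by hand, can be sketched with a toy last-access-age rule. Everything here is an illustrative assumption: local directories stand in for storage tiers, and `apply_tier_policy` is an invented name, not a real management interface.

```python
import os
import shutil
import time

def apply_tier_policy(online_dir, nearline_dir, max_idle_days=30):
    # Move files not accessed within max_idle_days from the "online"
    # tier to the "nearline" tier. Directories stand in for real tiers.
    cutoff = time.time() - max_idle_days * 86400
    moved = []
    for name in sorted(os.listdir(online_dir)):
        src = os.path.join(online_dir, name)
        if os.path.isfile(src) and os.path.getatime(src) < cutoff:
            shutil.move(src, os.path.join(nearline_dir, name))
            moved.append(name)
    return moved
```

A production system differs in the crucial respect the text emphasizes: the migration stays behind a single namespace, so applications never see the file change location, whereas this sketch visibly moves it between directories.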
Hitachi Data Systems network attached storage systems, whether part of our Hitachi Unified Storage (HUS) family or our Hitachi NAS Platform (HNAS) family, are built upon a patented hardware-accelerated architecture. Additionally, we offer low latency and simplified data access through standard protocols. These capabilities dramatically accelerate seismic imaging analysis and oil and gas exploration. Hitachi NAS Platform systems are ideal for production-oriented, high-performance environments with reliability and data management requirements. Using our network storage servers, organizations can remove storage I/O constraints and eliminate the need for specialized infrastructures.

To that point, the Hitachi Unified Storage, Hitachi Virtual Storage Platform (VSP) and Hitachi NAS Platform families of storage platforms are designed for unmatched scalability, performance and flexibility in the storage market (see Figure 1). A core element of the platforms is Hitachi NAS Silicon File System technology, which provides hardware acceleration for core file system functionality. It enables enterprise-class data management and protection in combination with industry-leading performance for shared network storage environments. Additionally, a comprehensive suite of management, provisioning and disaster recovery tools contributes to a lower total cost of ownership and higher availability.

Hitachi NAS Platform solutions support multiple protocols, including NFS, SMB and iSCSI. This protocol support enables seismic analysis applications running on different operating systems to seamlessly access all data relevant to an exploration effort. Costly duplicate efforts can be avoided by promoting information sharing via fast, secure access to a central pool of files and databases that can scale up to multiple petabytes.
HNAS provides unique functionality to virtualize and migrate data at the file level by interfacing through the NFS protocol and incorporating the virtualized files into a global namespace. This virtualized file access allows end users to leave the data where it is. It enables access via HNAS regardless of whether the data is stored on an HNAS system, a system from a different vendor, or even an open source system such as Lustre. The system simply has to provide NFS-based access.

End users can also choose to leverage the data migration functionality of HNAS to migrate the data by policy, which will automatically, and in the background, move the data to the correct place without manual intervention. This functionality dramatically simplifies technology transitions and enables end users to extract value from existing infrastructure and maintain operations while upgrading to new technology.

For data that must be retained and re-examined over time, tiered storage is central to an effective data management strategy. With Hitachi storage systems, tiers can be built with different performance characteristics in mind or for better cost-effectiveness. Storage tiering by itself offers only limited advantages. The value is in the ability to transparently move data from tier to tier, keeping a single file system presentation to the hosts, end users and applications. This ability eliminates the need for changes, such as redirecting an application to a new drive or volume when a file is moved. Using Hitachi NAS Platform intelligent file tiering and automated data migration capabilities, online, nearline and archival data can reside on any combination of solid state, serial attached SCSI (SAS) and nearline (NL)-SAS disks. Additionally, our intelligent tiered storage lets you optimize storage efficiency by matching the storage media to the specific requirements of each supported workload. Policy-based management automatically and intelligently performs transparent data migration between the tiers.

With these capabilities, Hitachi Data Systems networked storage solutions meet the performance and data management requirements of today's energy exploration environments. In particular, our storage gives organizations a way to satiate computational workflows and automate data migration while ensuring minimal disruption for users and improved performance for applications. These capabilities are essential for meeting deadlines and sustaining progress in finding new sources of energy.

Learn More

Hitachi Solutions for Oil and Gas

Next Steps

For further information regarding Hitachi high-performance storage for energy exploration, contact your Hitachi Data Systems representative.

© Hitachi Data Systems Corporation 2014. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. Innovate With Information is a trademark or registered trademark of Hitachi Data Systems Corporation. Microsoft and Windows are trademarks or registered trademarks of Microsoft Corporation. All other trademarks, service marks, and company names are properties of their respective owners. Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems Corporation. SP-089-D DG April 2014

Corporate Headquarters: 2845 Lafayette Street, Santa Clara, CA 95050-2639 USA
Regional Contact Information: Americas: +1 408 970 1000; Europe, Middle East and Africa: +44 (0) 1753 618000; Asia Pacific: +852 3189 7900