Meet the Data Processing Workflow Challenges of Oil and Gas Exploration with Advanced Data Storage Solution Profile
Halliburton Landmark SeisSpace Software and Hitachi Storage Match New Levels of Sophistication in the Energy Industry

To meet growing worldwide demand, oil and gas exploration and production (E&P) organizations are under increasing pressure to find new sources of energy. And while exploration has always been expensive, today's programs operate in increasingly hostile environments, making speed to discovery much more urgent. To expedite their efforts, E&P organizations rely on sophisticated geophysical technologies, including reverse time migration, waveform inversion, and 3-D and 4-D downhole sensors, to support higher-quality decisions. Growing volumes of 3-D data must be analyzed in shorter time frames to make efficient use of resources deployed in the field.

This explosive growth in data volumes presents new processing challenges. To get the most out of raw data, most E&P organizations have turned to 3-D seismic processing applications, and many use Landmark SeisSpace software in particular. A division of Halliburton, Landmark designs seismic processing solutions that scale from field quality control up through full-volume, real-time production processing. These software applications can be optimized for specific processing throughput requirements. In particular, Landmark software is optimized for tasks including:

■■ General quality control (QC), target investigations, and field-specific data integrity workflows.
■■ Conventional time processing, Kirchhoff calculations, and amplitude versus offset (AVO).
■■ Production-scale processing.
■■ Seismic coverage validation for illumination studies, acquisition planning, and targeted imaging workflows.
■■ 3-D prestack time and depth migration, velocity analysis imaging, and finite difference forward modeling.
Unlike conventional processing technologies, Landmark offerings place special emphasis on high-performance, interactive processing algorithms for today's high-performance computing environments. Organizations need a shared network data storage and management solution that scales to accommodate growing data volumes from seismic equipment, leverages the parallel performance characteristics of Landmark SeisSpace, and provides the performance to feed these high-throughput computational workflows.

To better understand the potential performance gains the right storage solution can offer, Hitachi Data Systems joined with Landmark to set up a test bed for experimenting with different system configurations. Exploiting the unique features of the network storage solution accelerated workflows, significantly cutting the processing time required to derive results. The testing also found that the Hitachi system could run both primary and secondary Landmark storage workloads at the same time, something no other vendor had been able to achieve.

Test Environment

When matching a suitable storage solution to a seismic analysis solution, it is important to keep in mind that narrow benchmarks cannot be relied upon. Real-world application workloads are complex, and overall analysis throughput can vary greatly over time, depending on something as minor as an application's configuration settings. With these issues in mind, the two companies jointly examined the challenges, nuances and potential benefits of integrating and optimizing seismic analysis systems. Storage and data management solutions and high-throughput workflows were also tested. In particular, the tests explored how to take advantage of specific application features to boost analysis workflows and reduce typical processing times, searching for ways to exploit the performance-enhancing capabilities of Landmark SeisSpace.
Landmark SeisSpace was designed with new parallel distributed-memory architectures in mind. It supports the JavaSeis prestack format, which allowed the development of algorithms suited for true volume processing. The key to these parallel efficiencies lies in the software's ability to leverage the parallel memory I/O benefits of JavaSeis. Essential to this feature are a storage filer and network with throughput capacities that are unlikely to be overwhelmed by the I/O transactions or storage speeds of seismic processing. To take advantage of the software's enhancements, however, an E&P organization must strike a delicate balance between a computing system's processing capabilities and the IT infrastructure's bandwidth and IOPS.

To evaluate the impact of fine-tuning a storage solution to match the performance capabilities of the software, Hitachi Data Systems set up a test bed infrastructure (see Figure 1). The initial setup consisted of a Hitachi NAS Platform (HNAS) 3090 cluster with a single storage pool containing 180 x 600GB 15k SAS drives. There were 10GbE link aggregation control protocol (LACP) connections into a 10GbE switch and 2 Fibre Channel connections per HNAS 3090 node to the back-end storage. The compute nodes consisted of 32 Linux-based systems (CentOS v5.6), each attached via 1GbE. Each compute node had 2 mounts, to a primary and a secondary file system. Each file system resided on its own enterprise virtual server (EVS), which enabled easy migration of the mount points.

With this configuration, 2 baseline testing runs were conducted. The 1st was a read/write test against seismic shot data; the 2nd was a read/write/sort function against a similar subset of data. The 1st test run completed in approximately 55 minutes, and the 2nd ran in excess of 4 hours.
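The two per-node mounts described above might look like the following on a compute node. This is a sketch only: the EVS host names, export paths and mount points are illustrative assumptions, not details from the test report.

```shell
# Hypothetical NFS mounts from one compute node to the two EVSs.
# Host names (evs-pri, evs-sec) and paths are assumptions for illustration.
OPTS="rw,hard,proto=tcp,vers=3"

# One mount per file system; each resides on its own EVS, so the mount
# point can be migrated between cluster nodes independently.
echo "mount -t nfs -o $OPTS evs-pri:/seisspace/primary /mnt/primary"
echo "mount -t nfs -o $OPTS evs-sec:/seisspace/secondary /mnt/secondary"
```

The commands are echoed rather than executed so the sketch can run without root privileges or a live filer.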
These results were consistent with previous experiences, but with a performance edge over other storage vendors. Only Hitachi Data Systems networked storage has demonstrated the ability to meet the requirements for Landmark SeisSpace primary and secondary storage with a single solution, allowing an organization to consolidate its storage infrastructure.

Figure 1. The test configuration with Hitachi NAS Platform 3090 met the performance requirements for both the primary and secondary storage for Halliburton Landmark SeisSpace software.
A number of configuration changes were then made to fine-tune performance. The 1st change was to upgrade the existing HNAS 3090 networked storage system to the latest release of the HNAS system software, v8. One thing that sets Hitachi Data Systems apart from competitors is its firmware approach, with hardware acceleration through field programmable gate arrays (FPGAs). This capability allows administrators to change characteristics normally associated with hardware through a software upgrade. The HNAS system software also helps end users analyze data access patterns and then improve performance. At each stage of the testing, standardized performance reports were gathered against the primary and secondary file systems.

The Landmark team adjusted networking parameters in the compute nodes. The NFS mount parameters were optimized for larger block sizes, which resulted in up to a 15% performance improvement. The parameters for sparse file system functionality were also adjusted. SeisSpace requires sparse file functions for accurate application reporting, including the ability to report the actual space used (sparseness) versus the assumed (thin-provisioned) space utilization.

In subsequent tests, performance peaked near the specified HNAS 3090 limits (72,921 IOPS; 1,100MB/sec throughput without the performance accelerator). At this point, all EVSs were also migrated to a single physical node to demonstrate the same performance, even without the failover ability of a 2nd cluster node. The original read/write shots test decreased from 55 minutes of runtime to just over 20 minutes (at 1,035MB/sec throughput), a 63% improvement (see Figure 2).
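The sparse-file reporting requirement can be illustrated on any Linux file system that supports sparse files: the apparent size an application sees differs from the blocks actually allocated. The commands below are a local illustration only, not a description of the HNAS implementation.

```shell
# Create a sparse file: 1 GiB apparent size, almost no blocks allocated.
f=$(mktemp)
truncate -s 1G "$f"

apparent=$(stat -c %s "$f")             # size reported to applications (bytes)
actual=$(( $(stat -c %b "$f") * 512 ))  # space actually allocated (bytes)

echo "apparent=$apparent actual=$actual"
```

A filer serving SeisSpace must preserve and report both numbers, so that the application can distinguish sparseness from thin-provisioned utilization.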
The 2nd test, the sort, also yielded a performance improvement of more than 60%. The tests further demonstrated that the HNAS 3090 system (even as a single node) could run both sets of Landmark workloads (primary and secondary) simultaneously. No other vendor has been able to successfully maintain this performance.

Figure 2. After applying best-practices configuration testing, performance was improved by 63% using HNAS 3090.

Hitachi NAS Platform

Hitachi NAS Platform is an advanced, integrated network attached storage (NAS) solution. It is a powerful tool for file sharing as well as file server consolidation, data protection and business-critical NAS workloads. With HNAS, you can solve challenges associated with data growth while achieving a low total cost of ownership (TCO).

Features

■■ Powerful hardware-accelerated file system for multiprotocol file services, dynamic provisioning, intelligent tiering, virtualization and cloud infrastructure.
■■ High performance and scalability: up to 2GB/sec and 140,000 input/output operations per second (IOPS) per node, with up to 16PB of usable capacity.
■■ File-level virtualization in a global namespace isolates the user from technology or vendor dependencies. It also enables unified access to data stored on storage systems from other vendors or open source solutions such as Lustre.
■■ Policy-based, universal file migration simplifies deploying new technology and migrating data, without impacting application workflows.
■■ Seamless integration with Hitachi SAN storage, Hitachi Command Suite and Hitachi Data Discovery Suite for advanced search and indexing across HNAS systems.
■■ Integration with Hitachi Content Platform for active archiving, regulatory compliance and large object storage for cloud infrastructure.

Benefits

■■ Simplifies your IT infrastructure by allowing you to consolidate NAS devices or file servers and migrate data by policy across multiple vendors and technologies.
■■ Reduces the complexity of storage management and lowers your TCO.
■■ Significantly improves efficiency, agility and utilization across NAS environments through advanced virtualization and data protection capabilities.
■■ Offers exceptional performance and improves productivity for Halliburton Landmark SeisSpace environments.

Figure 3. Highly scalable Hitachi Unified Storage 150 with Hitachi NAS Platform.
© Hitachi Data Systems Corporation 2014. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. Innovate With Information is a trademark or registered trademark of Hitachi Data Systems Corporation. All other trademarks, service marks and company names are properties of their respective owners. Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems Corporation. SP-090-C DG April 2014

Corporate Headquarters: 2845 Lafayette Street, Santa Clara, CA 95050-2639 USA. www.HDS.com | community.HDS.com
Regional Contact Information: Americas: +1 408 970 1000 or info@hds.com; Europe, Middle East and Africa: +44 (0) 1753 618000 or info.emea@hds.com; Asia Pacific: +852 3189 7900 or hds.marketing.apac@hds.com

To that point, Hitachi Data Systems networked storage platforms with Hitachi Unified Storage infrastructure (see Figure 3) and Hitachi Virtual Storage Platform (VSP) are designed for massive scalability, performance, flexibility and enterprise-class data management. Additionally, a comprehensive suite of management, provisioning and disaster recovery tools contributes to a lower TCO.

All Hitachi Data Systems network storage solutions support multiple industry-standard protocols, including NFS, CIFS and iSCSI, so seismic analysis applications running on different operating systems can seamlessly access all data relevant to an exploration effort. This capability avoids costly duplicate efforts by promoting information sharing via fast, secure access to a central pool of files and databases that can scale up to multiple petabytes.

For data that must be retained and re-examined over time, tiered storage is central to an effective data management strategy. Hitachi storage system tiers can be architected with different performance characteristics in mind or for optimal cost-effectiveness.
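Policy-based tier migration of this kind can be sketched with ordinary shell tools. HNAS performs migration transparently inside the file system rather than with host-side scripts, so the directories, file names and 90-day policy below are purely illustrative assumptions.

```shell
# Two local directories stand in for a fast tier and a capacity tier.
HOT=$(mktemp -d)
COLD=$(mktemp -d)

# An old survey (access time forced into the past) and a fresh data set.
touch -a -d "2000-01-01" "$HOT/old_survey.dat"
touch "$HOT/new_shots.dat"

# Policy: move anything not accessed in 90 days to the capacity tier.
find "$HOT" -type f -atime +90 -exec mv -t "$COLD" {} +

ls "$COLD"   # old_survey.dat has migrated; new_shots.dat stays on the hot tier
```

The point of the real product is that, unlike this sketch, the move does not change the path seen by hosts, users or applications.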
Storage tiering by itself offers only limited advantages. What really provides value is the ability to transparently move data from tier to tier or across vendors, keeping a single file system presentation to the hosts, users and applications. This approach eliminates the need for changes such as redirecting an application to a new drive or volume when a file is moved. Using Hitachi Data Systems intelligent tiered storage, online, nearline and archival data can reside on any combination of solid-state, SAS and NL-SAS disks. This intelligent tiered storage also allows organizations to optimize storage efficiency by matching the storage media to the specific requirements of each supported workload. Policy-based management automatically and intelligently performs transparent data migration between the tiers.

With these capabilities, a Hitachi Data Systems networked storage solution meets the performance and data management requirements of today's energy exploration environments. It:

■■ Responds rapidly to changing demands as data sets grow and new analysis methods are deployed.
■■ Optimizes the use of different storage technologies through intelligent and transparent tiering.
■■ Implements and enforces data retention policies without manual intervention through automated data management.

Hitachi Data Systems as Your Technology Partner

Hitachi Data Systems provides network storage solutions for Landmark SeisSpace with a flexible architecture well suited to the needs of the oil and gas industry. Leading energy E&P organizations use Hitachi systems today, and Hitachi solutions are ideal for high-performance oil and gas exploration environments. Using Hitachi Data Systems network storage, organizations have been able to remove storage I/O constraints and eliminate the need for specialized infrastructures.
In particular, Hitachi provides a way to feed demanding computational workflows and automate data migration, with minimal disruption for users and optimized performance for applications. Innovation of this kind is essential to sustain progress in finding new sources of energy.

LEARN MORE: Hitachi Solutions for Oil and Gas