Solution Profile

High-Performance Storage for the Evolving Computational Requirements of Energy Exploration

Richer Data Requires Intelligent Storage

Continued explosive growth of raw field data, combined with the use of more compute-intensive analysis algorithms for oil and gas exploration, is placing new demands on energy industry IT infrastructures.
Energy Demands Continue to Grow

Much of the world is expected to increase energy consumption significantly over the next decade. As a result, the need to find new energy sources continues to grow. To meet this demand, the industry has turned to more sophisticated analysis tools. Exploration equipment, including new technologies such as downhole sensors, improves the chances of success and reduces time to discovery. Re-examining existing data using more advanced methods can also yield new results.
These new technologies generate a vast amount of data that needs to be quickly analyzed and visualized. In particular, raw data from exploration tools is typically rendered and interpreted using sophisticated seismic imaging analysis software to produce detailed 3-D and 4-D models of the earth’s subsurface.
Naturally, speed is essential when exploration organizations try to determine the commercial viability of tapping new reservoirs to meet increasing energy demand. With today’s competition to find energy sources, the goal is to better interpret more data in less time.

The key to accelerating analysis and quality decision-making is the ability to store rapidly expanding volumes of seismic data. Making data accessible to the appropriate parties for analysis is critical to avoid performance bottlenecks that can add days or weeks to the discovery process.
Hitachi Data Systems delivers network attached storage solutions to meet these challenges. Their performance and advanced data management features address the issues of today’s data-intensive oil and gas exploration activities.
Keep Pace With Evolving Computational Requirements
Advances in seismic imaging are helping organizations speed the discovery of new energy sources. In particular, new technologies like reverse-time migration and waveform inversion are producing larger volumes of richer data. At the same time, getting actionable information with these techniques is much more compute-intensive.

Today, more processing power is necessary to accommodate the growth in data volume and sophisticated seismic analysis algorithms. Traditionally, organizations have relied on advances in high-performance computing (HPC) for more computational muscle to work a problem, routinely using clusters of multicore CPU Linux servers.
Over time, energy industry imaging algorithms and their implementations have evolved faster than the available hardware. For example, algorithms can be selectively tuned to get a hardware-assisted boost by running on field-programmable gate arrays (FPGAs), graphics processing units (GPUs), or Cell processors. This development has led to continued efforts to improve the algorithms and to take advantage of new hardware platforms.

Tailoring the processing algorithms to run on these new platforms reduces cycle times by more than an order of magnitude, which constitutes a significant acceleration in the industry. Simply put, such hardware speeds up seismic analysis to the point where computation is no longer the limiting factor in the workflow.
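As a deliberately simplified illustration of this kind of offload, the sketch below advances a toy 2-D acoustic wave field with a finite-difference stencil, the basic building block of reverse-time migration. It is written against NumPy; swapping in CuPy (our assumption here, not something the profile specifies) runs the identical stencil on a GPU.

# Toy 2-D acoustic wave-propagation step of the kind used in
# reverse-time migration. Replacing the import with
# `import cupy as np` moves the same stencil onto a GPU.
# Names and sizes are illustrative only.
import numpy as np

def wave_step(p_prev, p_curr, velocity, dt, dx):
    """Advance the pressure field one time step with a
    second-order finite-difference Laplacian (interior points)."""
    lap = (
        p_curr[:-2, 1:-1] + p_curr[2:, 1:-1] +
        p_curr[1:-1, :-2] + p_curr[1:-1, 2:] -
        4.0 * p_curr[1:-1, 1:-1]
    ) / dx**2
    p_next = p_curr.copy()  # boundaries held fixed in this sketch
    p_next[1:-1, 1:-1] = (
        2.0 * p_curr[1:-1, 1:-1] - p_prev[1:-1, 1:-1]
        + (velocity[1:-1, 1:-1] * dt) ** 2 * lap
    )
    return p_next

Production imaging codes use higher-order stencils, absorbing boundaries and hand-tuned kernels; the point here is only that the same array code can be retargeted to accelerator hardware.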
Unfortunately, I/O performance bottlenecks can occur when traditional NAS and SAN storage systems host the data and feed the computational workflows. Such systems do not deliver the performance or scalability needed to support today’s energy exploration efforts. This lack of performance becomes especially evident during peak load times in a shared environment, where multiple demands simultaneously stress the storage infrastructure.
Attempts to accommodate the growing data sets and to boost the performance of such systems often fall short. The common approach is to simply add more storage devices, which adds complexity: additional devices must be managed, and significant amounts of time must be dedicated to moving and managing the data across them. Operating costs, such as data center facilities charges and the electricity to power and cool the systems, also increase.
What Is Needed

A storage solution that can help accelerate discovery in oil and gas exploration must have several important characteristics, described below.
Performance

To keep today’s HPC clusters satiated, a storage solution must offer high data throughput and I/O. Storage must also be able to accommodate many simultaneous reads against the same stored data set as calculations are distributed across hundreds of servers or more. In addition, performance must remain consistent as the number of reads, systems and file sizes scale.
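To make the concurrent-read pattern concrete, here is a minimal sketch, with a hypothetical file path and record size, of many workers reading disjoint slices of one shared volume, as compute nodes would against an NFS-served data set:

# Illustrative sketch: parallel workers each read their own slice
# of one shared seismic volume. The path and record size are
# hypothetical; on a cluster the readers would be separate nodes.
import os
from concurrent.futures import ProcessPoolExecutor

DATA_FILE = "/mnt/seismic/survey_block_01.bin"  # hypothetical shared path
RECORD_SIZE = 64 * 1024 * 1024  # 64 MiB per read, illustrative

def read_record(index):
    """Each worker opens the shared file and reads only its slice,
    so many readers can hit the same data set concurrently."""
    with open(DATA_FILE, "rb") as f:
        f.seek(index * RECORD_SIZE)
        return len(f.read(RECORD_SIZE))

if __name__ == "__main__":
    n_records = os.path.getsize(DATA_FILE) // RECORD_SIZE
    with ProcessPoolExecutor(max_workers=32) as pool:
        total = sum(pool.map(read_record, range(n_records)))
    print(f"read {total / 2**30:.1f} GiB across 32 workers")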
In particular, there is a need for a storage infrastructure that uses a hardware assist with built-in parallelism to accelerate performance. Essentially, the storage infrastructure must match the changing performance requirements of the applications running on the computational systems. Applications evolve. Just as the HPC systems have increased performance by using hardware acceleration (for example, FPGAs and GPUs) and multicore CPU clusters, so too must the storage solution. Most importantly, the storage solution must keep pace in multiple dimensions: it must address IOPS as well as throughput, and it must be able to scale capacity and throughput independently.
Scalability

New seismic imaging techniques can produce data sets that easily run into hundreds of terabytes each, and often multiple petabytes in aggregate. Whether it is used for data acquisition, processing or interpretation, any storage platform has to scale seamlessly to accommodate these rapidly growing data sets.
Reduced Cost of Ownership
A storage solution must be able to consolidate petabytes of data onto a single platform with a common set of management tools. The use of industry-standard protocols increases interoperability and choice, while lowering costs. The solution must also be energy efficient to help rein in power consumption costs.

Figure 1. High-Performance Solutions for Energy Exploration Workflows
Simplified Data Management
Managing massive amounts of data is often difficult and can introduce significant delays that impact workflows. It can also make technology transitions very slow and difficult. Substantial cost savings and improved efficiencies can be achieved by leaving the data in place: the data can be referenced through virtualized redirection, or policy-based processes can ensure that the data is in the right place automatically, without manual intervention.
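A minimal sketch of such a policy-based process appears below; the tiers, paths and 90-day threshold are invented for illustration, and a platform like HNAS applies equivalent policies internally rather than through scripts:

# Minimal sketch of a policy-based placement rule: files untouched
# for more than 90 days move from a fast tier to a capacity tier.
# Paths and the age threshold are illustrative only.
import shutil
import time
from pathlib import Path

FAST_TIER = Path("/mnt/tier1/seismic")      # hypothetical fast pool
CAPACITY_TIER = Path("/mnt/tier2/seismic")  # hypothetical capacity pool
MAX_AGE_DAYS = 90

def apply_policy():
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for path in list(FAST_TIER.rglob("*")):  # snapshot before moving
        if path.is_file() and path.stat().st_atime < cutoff:
            dest = CAPACITY_TIER / path.relative_to(FAST_TIER)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(dest))  # data moves; the logical view stays put

apply_policy()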
Multiple Operating System Support
Today’s exploration organizations frequently examine and interpret data sets using different applications on various operating systems, including UNIX, Linux and Microsoft Windows. As such, a storage solution must support standard protocols and accommodate applications running on all of these operating systems, even when they are accessing the same data set.
Availability and Data Protection
Beyond performance, the other major potential storage bottleneck is downtime, which can add days or weeks to the discovery process. To avoid downtime, storage solutions must offer a set of high-availability features, such as redundant components and paths, multiple RAID levels and cluster failover across multiple nodes.
Hitachi Data Systems as Your Partner
Hitachi Data Systems is a provider of high-performance network storage with a flexible architecture suited to the needs of the oil and gas industry. Today’s leading energy exploration organizations use our systems.

Hitachi Data Systems network attached storage systems, whether part of our Hitachi Unified Storage (HUS) family or our Hitachi NAS Platform (HNAS) family, are built upon a patented hardware-accelerated architecture. Additionally, we offer low latency and simplified data access through standard protocols. These capabilities dramatically accelerate seismic imaging analysis and oil and gas discovery.
Hitachi NAS Platform systems are ideal for production-oriented, high-performance environments with reliability and data management requirements. Using our network storage servers, organizations can remove storage I/O constraints and eliminate the need for specialized infrastructures.

To that point, the Hitachi Unified Storage, Hitachi Virtual Storage Platform (VSP) and Hitachi NAS Platform families of storage platforms are designed for unmatched scalability, performance and flexibility in the storage market (see Figure 1). A core element of the platforms is Hitachi NAS Silicon File System technology, which provides hardware acceleration for core file system functionality. It enables enterprise-class data management and protection in combination with industry-leading performance for shared network storage environments. Additionally, a comprehensive suite of management, provisioning and disaster recovery tools contributes to a lower total cost of ownership and higher availability.
Hitachi NAS Platform solutions support multiple protocols, including NFS, SMB and iSCSI. This protocol support enables seismic analysis applications running on different operating systems to seamlessly access all data relevant to an exploration effort. Costly duplicate efforts can be avoided by promoting information sharing via fast, secure access to a central pool of files and databases that can scale up to multiple petabytes.
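To illustrate, the following sketch (with hypothetical share and mount names) shows the same file reached over SMB from Windows and over NFS from Linux or UNIX, with both clients seeing one copy of the data:

# Sketch: one seismic volume reached from different operating
# systems through different protocols (NFS on Linux/UNIX, SMB on
# Windows). Paths are hypothetical; the underlying file is one copy.
import platform

if platform.system() == "Windows":
    DATA = r"\\hnas01\seismic\survey_block_01.bin"    # SMB share
else:
    DATA = "/mnt/hnas01/seismic/survey_block_01.bin"  # NFS mount

with open(DATA, "rb") as f:
    header = f.read(4096)  # both clients see the same bytes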
HNAS provides unique functionality to virtualize and migrate data at the file level by interfacing through the NFS protocol and incorporating the virtualized files in a global namespace. This virtualized file access allows end users to leave the data where it is and still access it via HNAS, regardless of whether the data is stored on an HNAS system, a system from a different vendor, or even an open-source system like Lustre; the system simply has to provide NFS-based access.
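The following sketch conveys the global-namespace idea with hypothetical mount points: logical paths resolve to whichever NFS back end holds the data, so clients never need to know where a file physically lives.

# Sketch of a global namespace: one virtual tree that maps logical
# paths onto different NFS back ends (HNAS, third-party, or Lustre).
# Mount points and survey names are hypothetical.
from pathlib import Path

NAMESPACE = {
    "surveys/gulf_2012": Path("/mnt/hnas/pool1"),      # HNAS-resident data
    "surveys/northsea":  Path("/mnt/vendor_nas/arch"), # third-party NAS
    "scratch":           Path("/mnt/lustre/scratch"),  # Lustre via NFS
}

def resolve(logical_path: str) -> Path:
    """Translate a logical path into its physical NFS location."""
    for prefix, mount in NAMESPACE.items():
        if logical_path.startswith(prefix):
            return mount / logical_path[len(prefix):].lstrip("/")
    raise FileNotFoundError(logical_path)

print(resolve("surveys/gulf_2012/shot_0001.segy"))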
End users can also choose to leverage the data migration functionality of