Seismic Data Interpretation
Page | I of Seismic Data Interpretation, Submitted by Honey Sharma of UPES
Seismic Data Interpretation
An internship report submitted in partial fulfillment of the requirements for the Master of
Business Administration (Oil and Gas Management), July 2016
Under the guidance of
Internal Guide: External Guide:
Mr. Vibhav Prasad Mathur Mr. Debasis Nayak
Assistant Professor, UPES Senior technical consultant,
Greenojo Consulting Private
Limited
Submitted by
Honey Sharma
SAP id: 500044339
Enrollment Number: R020215109
Master of Business Administration (Oil and Gas Management)
2015-17
College of Management & Economic Studies, UPES
Student Declaration
I hereby declare that this submission is my own work and that, to the best of my knowledge and
belief, it contains no material previously published or written by another person nor material
which has been accepted for the award of any other degree or diploma of the university or other
institute of higher learning, except where due acknowledgment has been made in the text.
Honey Sharma
SAP id: 500044339
Enrollment Number: R020215109
Master of Business Administration (Oil and Gas Management)
2015-17
College of Management & Economic Studies, UPES
Acknowledgement
In the process of carrying out this research I was helped, motivated and guided by my mentor,
Mr. Debasis Nayak, and my consultant, Mr. Sabyasachee Panda. They guided my research and
prototype development and educated me about the scope of the work and the workarounds involved
in prototype development. Special mention goes to Miss Sobhana Mohapatra, who provided her
guidance and boosted my confidence and motivation in tougher times.
An honorary mention goes to our esteemed internal university mentor, Mr. Vibhav Prasad Mathur,
who has been a source of inspiration and has provided the willpower to never quit. And last but
not least, Dr. Geo Jos Fernandez, without whose assistance my project report would not have been
finished.
Thank you, everyone, for your input in completing my project research and developing this
prototype, which might perhaps grow into a solution that contributes to quenching the thirst for
energy of a country like ours, India.
Certificate
This is to certify that the summer internship report entitled "Seismic Data Interpretation",
submitted by Honey Sharma to UPES in partial fulfillment of the requirements for the Master of
Business Administration (Oil and Gas Management), is a bona fide record of the internship work
carried out by him under my supervision and guidance. The content of the report, in full or in
part, has not been submitted to any other institute or university for the award of any other
degree or diploma.
Mr. Debasis Nayak
Senior Technical Consultant
Greenojo Consulting Private Limited
College of Management & Economic Studies, UPES
Certificate
This is to certify that the summer internship report entitled "Seismic Data Interpretation",
submitted by Honey Sharma to UPES in partial fulfillment of the requirements for the Master of
Business Administration (Oil and Gas Management), is a bona fide record of the internship work
carried out by him under my supervision and guidance. The content of the report, in full or in
part, has not been submitted to any other institute or university for the award of any other
degree or diploma.
Mr. Vibhav Prasad Mathur
Assistant Professor
College of Management & Economic Studies, UPES
Table of Contents
1. List of Tables
2. List of Figures
3. List of Variables
4. Executive Summary
5. Introduction
6. Literature Review
7. Background of the Study and Objectives
8. Research Methodology
9. Data Analysis
10. Conclusion
11. Bibliography
12. Appendices
List of Tables
Table 1: Literature review table containing details of the research papers and PDFs referred to.
List of Figures
Fig-I: Process flowchart
Fig-II: Functional matrix flowchart, part 1
Fig-III: Functional matrix flowchart, part 2
Fig-IV: Functional architecture
Fig-V: Technical architecture
Fig-VI: IBM Bluemix login page
Fig-VII: IBM Bluemix user interface screenshot
Fig-VIII: IBM Bluemix catalog page
Fig-IX: dashDB credentials creation page screenshot
Fig-X: dashDB homepage screenshot
Fig-XI: Data loading homepage for dashDB
Fig-XII: Preview page for loaded data table
Fig-XIII: Target window screenshot for dashDB
Fig-XIV: Final window for dashDB account creation
Fig-XV: Screenshot of R coding console depicting integration of R with dashDB
Fig-XVI: Screenshot of R console for object creation
Fig-XVII: Screenshot of R console for scatter plot function
Fig-XVIII: Scatter plot graph
List of Variables
Executive Summary
“Today, with the launch of the seismic data interpretation software solution, a new era of the
oil and gas upstream industry has begun. With the software integrated with the cloud, the cost of
processing raw seismic data has decreased by ‘X%’, providing a level playing field to everyone
in this sphere of the industry, be it a giant or a start-up” – The New York Times, Monday,
August 12, 2017. This will be the headline of every paper in the world once this prototype
ventures into the market. To be precise, it is a revolution in the field of seismic data
interpretation that will bring down the cost of processing raw seismic data by “X million US$”.
Accessible to almost everyone in the industry for a fraction of the cost, this software will
enable the industry to explore the territory beneath the earth’s surface with more precision and
accuracy, and may add another decade or two to the “hydrocarbon age” before migration to a new
source of energy.
With the cost of seismic data processing ever increasing in times of falling crude oil prices,
seismic data interpretation delivers what it claims to achieve by synchronizing technology with
analytics. Raw, unstructured data is processed by an open-source seismic processing and
interpretation tool. After processing of the raw data and removal of unwanted information, the
processed file is stored in ASCII format. This is the point from which “seismic data
interpretation” carves out an identity distinct from conventional seismic data processing tools.
The processed file parameters are extracted in ASCII format and converted into the desired
format (XLS, CSV, XLSX) so that they can be fed as input to an analytical tool. The cloud
provides the infrastructure to store a mammoth amount of data and to speed up the analytical
process. The analytical tool is then integrated with the cloud and the data is analyzed in real
time. Parameters are analyzed against a common variable, and the resulting graph is interpreted
with the help of a presentation tool.
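The format-conversion step described above (ASCII export from the processing tool into CSV for the analytical tool) can be sketched minimally as follows. The report performed this step manually and does not specify the ASCII layout, so this Python sketch assumes a hypothetical whitespace-delimited file with one frequency/power pair per line:

```python
import csv
import io

def ascii_to_csv(ascii_text, header=("Frequency", "Power")):
    """Convert a whitespace-delimited ASCII parameter export into CSV text.

    ascii_text: text exported from the processing tool (hypothetical layout:
    one 'frequency power' pair per line). Malformed lines are skipped.
    """
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(header)
    for line in ascii_text.splitlines():
        fields = line.split()
        if len(fields) == len(header):  # keep only well-formed rows
            writer.writerow(fields)
    return out.getvalue()

# Example: two frequency/power pairs from a hypothetical processed trace
sample = "10.0 0.85\n20.0 0.42\n"
print(ascii_to_csv(sample))
```

A file produced this way could then be uploaded to the cloud database in the CSV format the loader accepts.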
To summarize seismic data interpretation in one crisp line, it is “Business Process as a
Service” (BPaaS). The complete seismic data processing workflow is tweaked and pushed onto the
cloud, and the extracted parameters are presented in the form of a graph. The process can be
customized to the user’s requirements, covering processing, storage, analysis and
interpretation. Charges apply per GB of data processed or drawn, right from the input of the
data to the point of output desired by the user. The solution can be developed further to
discover many unexplored territories: for example, estimating the quantum of the hydrocarbon
reserve beneath the earth’s surface, locating the sweet spot that can provide the maximum rate
of extraction of the hydrocarbon present in the rock structure, and interpreting deep-water
seismic data are some of the spheres that can be explored in future.
I. Introduction
“The Stone Age did not end due to lack of stone, and the Oil age will end long before the
world runs out of oil.” So said Sheikh Ahmed Zaki Yamani, former Saudi Arabian oil
minister in a statement delivered to media.
Sixteen years later, Yamani’s words define the troubled state of the oil and gas industry. It
is evident that the industry is going through one of its most profound transformations, one
that will change the industry as we know it. The dip in oil prices below US$40 (at the end of
2015) makes it very difficult for producers to sustain operations and bear the cost of
production. It also indicates rampant supply amidst weak global demand on account of slow
economic growth.
The oil and gas exploration (upstream) sector has taken the worst of the hit: with the
enormous cost (US$X) of processing raw SEG-Y data and a low return on investment, the
exploration sector has come to a standstill. With the Seismic Data Interpretation prototype, I
propose a solution to this arduous problem: a solution that is not only x% less expensive than
the traditional method, but also equips interpreters with a more powerful lens to make more
accurate predictions about the unidentified hydrocarbon reserves beneath the surface of the
earth.
With the advent of the prevailing oil crisis, I was determined to provide a solution to this
problem, and when my consultant advised me to work in this direction during the interview
session, the idea began to take a more solid form. With several months of preparation and two
months of rigorous churning of thoughts between me, my mentor and, of course, my consultant,
we were able to carve the plan of action into a work breakdown structure. In the beginning,
the base was built by understanding the structure of the SEG-Y file, followed by the steps
involved in refining the raw unstructured data, then identification of the vital parameters,
extraction of these parameters, analysis of these parameters and, finally, the presentation.
Significant research work has been done in the proposed field of interpretation and analysis
of seismic data by Daniel Patel, Christopher Giertsen, John Thurmond, John Gjelberg and
M. Eduard Gröller, Member, IEEE, in their research paper "The Seismic Analyzer: Interpreting
and Illustrating 2D Seismic Data". Similar efforts have been made by Kulbhushan and Monalisa
Mitra in their research paper "Basics of Land Seismic Data Interpretation". My attempt is not
to cross paths with their individual research or to extend their individual findings, but to
build my own idea with my own thoughts, findings and comprehension, and to provide a new
sphere, a new outlook, to seismic data interpretation and processing.
Flowchart –
(Fig-I) The process flow, recovered from the figure:
 Study of the structure of the seismic data from the field/seismic surveys
 List out the steps involved in the processing of the raw unstructured data
 Comprehending the application of these steps on the raw seismic data
 Listing out the necessary steps and the workarounds if required
 Decision: if the steps are necessary, proceed to refining of raw seismic data
 Refining of raw seismic data
 Extraction of these parameters
 Decision: if found critical, proceed to analysis
 Analysis of parameters
 Presentation of the achieved relation through a graph
II. Literature Review
To successfully complete the prototype building process the first and foremost requirement
was to apprehend the seismic data processing and to understand the chronological steps to be
followed so that these steps can be inculcated into the prototype functioning as well. To
apprehend this I referred various research papers and text books (digital format) of the
renowned authors such as Jon F. Claerbout’s, “Fundamental of Geophysical Data
processing”, Ozdogan Yilmaz’, “Seismic data Processing”.
While exploring this reference materials for the knowledge of seismic data processing, I
embarked upon the fact that amplitude and frequency are the two most important attributes
that shape the information stored in the seismic data. As these are the characteristics features
which change considerably mapping the under the earth topography. Hence the detail
understanding of these two attributes was required to map back the changes brought about in
the seismic waves received at the receiver geophones. For this purpose of the project I
referred to the following research papers and text books. Hongliu Zeng’s, “Frequency-
Dependent Seismic Stratigraphy for High-Resolution Interpretation of Depositional
Sequences”, Steve Henry’s, "Understanding Seismic Amplitudes”.
Last but not least was decoding the seismic interpretation process and understanding the steps
involved. It should be brought to the notice of the readers of this report that interpreting
changes in seismic data is a very general term, understood and practiced at will among
geologists and geophysicists; there are no standard procedures for this process. I referred to
some textbooks to provide my readers a general chronology of the steps that could be followed
to interpret changes in seismic data. These steps are not standard and may be company-specific
or G&G-team-specific; I provide them only to give a general understanding of the process. To
derive this chronology I referred to Laurence R. Lines and Rachel T. Newrick's "Fundamentals
of Geophysical Interpretation".
III. Background of the study and Objectives
The oil industry, with its hikes and falls, is suffering from its deepest downturn since the
1990s, if not earlier. The price of crude oil has recovered from its low of US$26.21 in
February this year to US$47 for Brent and US$46 for WTI, but prices are still below what is
required to drill a profitable well. Most consumer industries have enjoyed the dispensation of
low crude oil prices, but several petroleum companies have suffered huge losses. These losses
have ripple effects that further harm the economy of the country. The oil and gas giants have
taken the worst of the hit, owing to the sophisticated machinery involved in drilling and
extracting oil (at fixed cost) while selling that same crude at lower, market-determined
prices, thus incurring heavy losses. Share prices of some integrated oil and gas giants have
tumbled in this downturn, with ExxonMobil falling 8.2%, Chevron 13.63%, British Petroleum
12.21%, Total 17.08% and Phillips 8.57%, to name a few.
And even with the crude oil market stabilizing, the cost of oil and gas exploration would
remain the same. This stands out as one of the biggest business problems in today's oil and
gas industry: how to reduce the cost of seismic data processing in times of falling crude oil
prices, while also reducing the risk in hydrocarbon exploration and production?
With the introduction of the seismic data interpretation prototype solution, this problem can
be answered. Seismic Data Interpretation does not follow the conventional protocol of
processing the raw seismic data from the survey and then projecting the refined seismic image
to a team of geologists and geophysicists. My objective is "to provide a cheaper and more
reliable solution as compared to the conventional seismic data interpretation process".
This milestone is achieved by tweaking the existing seismic data processing and interpretation
process in such a fashion that the resources needed to interpret the processed image are
drastically reduced, the processing cost plummets, and a huge volume of data is processed
quickly and more reliably to delineate a hydrocarbon reserve. Input was taken in the form of
raw, unstructured data, which was fed into an open-source seismic interpretation tool
(OpendTect 6.0), saving the cost of buying licensed software. The processed files yielded some
critical parameters, which were then pushed into cloud storage and integrated with analytical
software (R 3.3.1) for analysis; the relation was plotted as a graph and presented to the
user, who can then take a decision to drill or not to drill. This saves the fee paid to a
geologist or geophysicist. Further, the time required for analysis is reduced by storing the
huge amount of data in the cloud, and the prediction of the coordinates to be drilled in order
to find the hydrocarbon reserve is more reliable and carries less risk of failure.
In designing and executing this proposed solution, I encountered various hurdles, which were
either resolved or worked around to keep the progress of the solution within the time limits.
My research began with the study of seismic data in the form of the SEG-Y file, to understand
which components of the SEG-Y file can be exploited for the stored geological information. I
then had to comprehend what noise or unwanted information might have crept into this
geological information and could lead to erroneous prediction of the drilling coordinates.
This was accompanied, in parallel, by the study of seismic surveying, seismic exploration,
seismic processing and seismic interpretation. Further to this objective, I studied the steps
involved in processing and cleansing the data, and finally how the processed seismic file is
interpreted for geologists and geophysicists. Comprehending the commands and operation of the
tools followed later, once we began operating on the input data as guided by the technical
architecture.
Achieving this arduous objective of developing a prototype to cater to the defined business
problem was no easy task, and it required the assistance of my mentor and my consultant. The
path to this destination was full of research problems, but the most significant among them
was "to define the hierarchy of steps to be followed in processing the raw unstructured
data". The conception of seismic processing laid the foundation on which the functional matrix
was structured, and the functional architecture later on; hence this was the most critical
problem for my research and prototype building.
IV. Research Methodology
Sr. No | Traditional Research Methodology | 5-P Model
1      | Research Design                  | Proof of concept
2      | Hypothesis Formulation           | Point of view
3      | Methods of data collection       | NA
4      | Analytical tools used            | Prototype
5      | Scope of study                   | Point of view
6      | Testing of Hypothesis            | Pilot
7      | Limitations of the study         | NA
8      | --                               | Package
I adopted applied research, and the entire research was broken down into the 5-P model. The
five phases are as follows:
 Point of View/Analysis – In the first stage of prototype development, the functionality of
the subject was defined and the hierarchy of steps to be followed was outlined. To achieve the
objective of the prototype solution, a functionality design was established.
 Proof of Concept/Design – In the second stage of prototype building, once the foundation was
laid by the functional matrix, a functional architecture was built on that foundation to
define the activity flow.
 Prototype/Develop – With the functional matrix and functional architecture in place, a
prototype was developed in this stage: software that could process a test seismic file and
produce the claimed results, while also defining new avenues for the future development of the
software solution.
 Pilot/Testing – This stage was marked by testing and refining the prototype to produce a
more scalable software solution.
 Package/Release – Release of the software to the market with a predefined business model and
earning revenue from it. This was the final stage of prototype development.
A). Point of view/Analysis –
Rigorous study was done to understand the structure of the seismic data obtained from the
field trace or observer's log. The study comprised the various applications and operations to
be performed on the raw unstructured data to extract the valuable information. These steps
include editing the raw data, removing noise, performing geometric corrections, velocity
analysis and enhancing the seismic resolution by boosting frequency content.
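The noise-removal step mentioned above can be illustrated with a toy example. The report does not specify the filter used; the following Python sketch applies a simple centered moving average to a trace, a deliberately simplified stand-in for the far more sophisticated filters used in real seismic processing:

```python
def moving_average(trace, window=3):
    """Smooth a trace of amplitude samples with a centered moving average.

    A toy stand-in for the noise-removal step described in the text; at the
    edges the window shrinks so every sample still gets a smoothed value.
    """
    if window < 1 or window % 2 == 0:
        raise ValueError("window must be a positive odd integer")
    half = window // 2
    smoothed = []
    for i in range(len(trace)):
        lo = max(0, i - half)
        hi = min(len(trace), i + half + 1)
        segment = trace[lo:hi]
        smoothed.append(sum(segment) / len(segment))
    return smoothed

# A hypothetical noisy trace: alternating spikes are damped by the filter
noisy = [0.0, 1.0, 0.0, 1.0, 0.0]
print(moving_average(noisy))
```

The same idea scales to real traces: each output sample is the mean of its neighbourhood, so high-frequency noise is attenuated while the slower signal survives.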
These steps were then jotted down in the form of a flowchart, with the process described in
detail on the adjoining sheet. Similarly, the steps involved in the interpretation of the
seismic data were worked out, and a flowchart was prepared with the process description on the
next sheet.
This analysis was done to comprehend which operations need to be performed before the data is
actually presented to geologists and geophysicists for interpretation, so that the same
functions could be incorporated in the prototype and the software could deliver them with
precision.
(Fig-II)
(Fig-III)
B). Proof of concept/Design –
(Fig-IV)
Based on the functional matrix, the hierarchy of steps was decided and the plan of action for
how those steps should be delivered was framed. The design was divided into four layers: the
infrastructure layer, the database layer, the application layer and the presentation layer.
Raw seismic data resides in storage; it is processed by the tool in the application layer,
converted into the required format, exported into an analysis tool, and finally presented in
the presentation layer.
The technology associated with the design is presented in the technical architecture below,
which specifies the technology we followed in our prototype development for each layer.
Note, however, that it is the functionality which remains permanent; the technology can be
modified according to affordability and availability.
(Fig-V)
C). Prototype/Development –
Development of the full-fledged prototype began with an idea: to decrease the cost of
processing seismic data so that overall profitability could be enhanced. As we learn in
economics, profit earned by reducing the cost of production becomes the soul of a business
architecture which no rival can copy.
This business idea then took the shape of a "scope of work", and the skeleton of a definite
solution to a definite business problem was fabricated. This outlined the tools to be used at
different stages and the milestones to be achieved as the internship program progressed. The
objective to be achieved was clearly defined.
With the scope of work outlined, the entire work to be done over the span of two months was
broken down into a work breakdown structure. With time allotted to the different deliverables
and limits set, a pace was established for the prototype development. Times for review
meetings were also set, to check that progress stayed synchronized with the outlined
objective.
Following approval of the work breakdown structure, a functional matrix defining the
functionality of the prototype was designed and approved, with each step explained in detail
and agreed upon through mutual discussion between the mentee, the mentor and the consultant.
Based on the foundation of the functional matrix, a functional architecture was designed to
project how the functionality defined in the functional matrix was to be achieved, and a
technical architecture defined the technology to be used at every step in conformance with the
functional architecture. Workarounds were decided accordingly wherever a particular technology
did not seem viable over the definite time span of two months. Operations were performed as
defined by the functional matrix, guided by the functional architecture, with the assistance
of the tools mentioned in the technical architecture. A lot of time, brainstorming, intellect
and sweat went into the project, and with dedicated hard work a prototype was developed over
the span of two months. It took great courage and determination for a three-member team of
mentee, mentor and consultant to achieve this difficult task.
D). Pilot/Testing –
Pilot phase is the presentations of the software to the customer and conditioning its functions
according to the requirement of the user. There can be customer who might possess their own
seismic interpretation and processing tool and would like to integrate the process with their own
tool. Similarly different user may not like to see the graph instead would like to see a processed
seismic image to be presented to their geologists and geophysicist team.
These customizations can be incorporated within the value network and modifications can be
done to the fundamental design architecture of the prototype. Charges applied can be breakdown
into processing fee, customization fee, installation fee, and utilities such as storage and
presentation. Charges will be applicable on per GB of data input and per unit of processing time
consumed for a user. Also charges could be applicable for every upload and download of the
data.
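The per-GB and per-unit-of-time charging scheme described above can be made concrete with a small sketch. The report fixes only the scheme, not actual prices, so the rates below are hypothetical placeholders:

```python
def usage_charge(gb_processed, processing_minutes,
                 rate_per_gb=5.0, rate_per_minute=0.25):
    """Illustrative per-usage charge for the proposed BPaaS model.

    Charge = (GB of data input) * rate_per_gb
           + (processing time consumed) * rate_per_minute.
    Both rates are hypothetical; real pricing would be set commercially.
    """
    return gb_processed * rate_per_gb + processing_minutes * rate_per_minute

# e.g. 100 GB of input data and 30 minutes of processing time
print(usage_charge(100, 30))  # 500.0 + 7.5 = 507.5
```

Per-upload/download fees, mentioned as a further option, would simply add terms to the same sum.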
E). Packaging –
This prototype could be sold as a complete business process, or Business Process as a Service
(BPaaS), with an integrated seismic data processing and analytical tool along with added
services of storage and presentation for different customers.
V. Data Analysis
The essence of my prototype solution lies in analyzing the parameters extracted from the
processed seismic files and then predicting the most fruitful coordinates to drill from that
analysis. Until it crosses this boundary, seismic data interpretation delivers nothing
different from what other software in the market has to offer; it is this ability that makes
the prototype stand out from the crowd.
Data analysis begins with data acquisition and storage. Processed seismic files from
OpendTect 6.0 are exported in ASCII format and then manually converted into Excel files in XLS
format. These files are then loaded into a dashDB account on IBM Bluemix (cloud). The steps
for creating an online dashDB account on the cloud are as follows:
 First, log in to your IBM Bluemix account with your username and password credentials.
(Fig-VI)
You will be guided to an interface window as shown below, which introduces the user to the
services offered by IBM Bluemix as a cloud.
(Fig-VII)
 From there, go to the catalog and select the dashDB icon from the available options.
(Fig-VIII)
 The next window will prompt you to enter credentials such as space, app, service name,
credentials name and the entry plan. Choose as you like and click "Create".
(Fig-IX)
 A dashDB instance will be created in your account, where you can perform various operations
such as storage and analysis. You need to push data into the cloud because the seismic data
from a single survey runs into petabytes, and if you have to process data for multiple clients
the data volume is unimaginable; hence the cloud is the only option for fast results. To load
data into the cloud, launch dashDB and load your data into it. The data to be loaded must be
in XLS, XLSX or CSV format.
(Fig-X)
Various options for loading your data are highlighted in the screenshot above. In my research
I loaded the data from the desktop, but as the scalability of the solution is enhanced, data
could be loaded from one cloud to another.
 After you specify the location of the data to be loaded, the next window will prompt you to
select the file from your local storage location, set the row characteristics and select
Preview.
(Fig-XI)
 The next window shows a preview of the loaded table, limited to 10 rows. Carefully examine
the data for any discrepancies and click Next.
(Fig-XII)
 The next window will prompt you to select the target where the data is to be loaded, i.e.
whether you want your data stored in an existing table or in a new table. Choose according to
your requirements; in my research I had to create a new table for the data.
(Fig-XIII)
 As soon as you hit Next, the following window shows a demo of the columns and the name of
the table that will be created. This is the cue to change the column names or the table name
as you like, or as required for further analysis.
(Fig-XIV)
 Once the table name, column names and serial number are fixed, click Finish to complete the
process. You can load more data tables for analysis in the same fashion. This finishes one
phase of the analysis; the second phase is connecting dashDB with R and carrying out further
analysis.
To set up a connection in R, the steps are as follows:
 Open RStudio and install the ibmdbR package from the packages console, so that IBM Bluemix
can be integrated with R and analysis can be done on the data in real time.
(Fig-XV)
 You need to create objects in R into which data is pulled from the cloud in real time, and
the analysis is then carried out; the code for this is shown below. For my analysis I merged
two data sets into one common data set so that a graph could be plotted, the change of the
parameters (power1, power2) examined against a common parameter (Frequency), and a comparative
analysis done.
(Fig-XVI)
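The merge described above can be sketched as follows (the column names frequency, power1 and power2 follow the parameters named in the text; the sample values are invented for illustration):

```r
# Sketch: merging two parameter tables on the common frequency column
# so that power1 and power2 can be compared side by side.
# file1_df and file2_df stand in for the two data sets loaded from dashDB.
file1_df <- data.frame(frequency = c(10, 20, 30), power1 = c(0.8, 0.6, 0.4))
file2_df <- data.frame(frequency = c(10, 20, 30), power2 = c(0.7, 0.5, 0.2))

# Base R merge() performs an inner join on the named key column
merged <- merge(file1_df, file2_df, by = "frequency")
print(merged)  # one row per frequency, with power1 and power2 columns
```

With both power columns aligned on a single frequency axis, a comparative plot is straightforward.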
 Finally, a scatter-plot graph was generated, but first you have to install the scatter-plot package from the package installer.
(Fig-XVII)
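A minimal version of such a comparative plot can be produced with base R's plot() alone (used here in place of an add-on scatter-plot package; the data frame below stands in for the merged dashDB tables):

```r
# Sketch: comparative scatter plot of two power parameters against
# a common frequency axis. Values are invented for illustration.
merged <- data.frame(frequency = c(10, 20, 30, 40),
                     power1    = c(0.8, 0.6, 0.4, 0.3),
                     power2    = c(0.7, 0.5, 0.2, 0.1))

plot(merged$frequency, merged$power1, col = "blue", pch = 16,
     xlab = "Frequency", ylab = "Power",
     ylim = range(c(merged$power1, merged$power2)))
points(merged$frequency, merged$power2, col = "red", pch = 17)
legend("topright", legend = c("power1 (file 1)", "power2 (file 2)"),
       col = c("blue", "red"), pch = c(16, 17))
```

Each seismic file contributes one series, so divergence between the two series at a given frequency is immediately visible.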
This will present a graph in the viewer which can be interpreted as a comparative study of two parameters, from two different seismic files, against a common parameter. Note that a petroleum engineer who is familiar with seismic waves and their behaviour below the surface of the earth can interpret the graph so formed; a geologist or a geophysicist is not a necessity.
(Fig-XVIII)
(Fig-XIX)
VI. Conclusion
“Don’t pray for tasks equal to your powers; pray for powers equal to your tasks” – Phillips
Brooks. This was the quote I could think of when I concluded my research on seismic data
interpretation. The future of seismic data interpretation looked dull and hazy in the light of
falling crude oil prices, which had dipped from a peak of around US$115 per barrel to
US$35. This collapse in oil prices has led to a drop in investment in the oil and gas
industry, from about US$700 billion in 2014 to US$550 billion in 2015, and sharp declines
in other sectors have also contributed to overall slow economic growth. At the same time,
the price of seismic data processing rose by about 1.5% annually over the three years to
2015. With the demand for energy on the rise, especially in developing economies, oil and
gas exploration is a necessary burden that must be carried on the shoulders of the industry.
From the study of the scholarly articles, the conclusion was reached that the oil and gas
exploration industry is in desperate need of a solution that can provide fast and accurate
processing of seismic data at comparatively low cost. My study was therefore directed at
enhancing the seismic data processing method, for which one must have a thorough
understanding of the seismic data processing workflow. The scholarly articles I followed
for my research educated me about the operations that must be performed on the raw data
to improve its quality. Further, my study enlightened me about the integration of
conventional data processing activities with analytical tools, so as to enhance the
probability of discovering a hydrocarbon-rich rock structure.
A limitation of the study, owing to constraints of time and expertise, was that only some of
the many parameters affecting seismic data processing could be extracted and analyzed.
The more parameters extracted and analyzed, the higher the accuracy and reliability of the
analysis in the analytical tool. Guidelines explaining the graphs generated from the
analyzed parameters were also missing; such guidelines would have familiarized geologists
and geophysicists with the output and facilitated its interpretation.
In future, this solution might see the light of day as an integration of an open source seismic
data processing tool with an analytical tool. Its scope could also be extended to predicting
the quantum of a hydrocarbon reservoir beneath the surface, and the sweet spot to drill for
maximum hydrocarbon recovery.
VII. Bibliography
1. Hongliu Zeng, “Frequency-Dependent Seismic Stratigraphy for High-Resolution Interpretation of Depositional Sequences” (AAPG Datapages). Key learnings: effect of frequency on seismic resolution. Remark: changes in the frequency of a wave when it encounters a hydrocarbon reserve.
2. Jon F. Claerbout, “Fundamentals of Geophysical Data Processing” (PDF). Key learnings: comprehension of the various data processing steps. Remark: application of processes such as deconvolution, NMO correction and DMO correction.
3. Laurence R. Lines and Rachel T. Newrick, “Fundamentals of Geophysical Interpretation” (www.academia.edu). Key learnings: comprehension of the steps involved in data interpretation. Remark: understanding discontinuities and the interpretation steps.
4. Ozdogan Yilmaz, “Seismic Data Processing” (Google Drive). Key learnings: flowchart of the steps involved in seismic data processing. Remark: study of the hierarchy of steps.
5. Steve Henry, “Understanding Seismic Amplitudes” (AAPG Datapages). Key learnings: factors affecting the amplitude of seismic waves. Remark: understanding of subsurface phenomena that change seismic amplitude.
6. Friedrich Nietzche, “Common Techniques for Quantitative Seismic Interpretation” (www.academia.edu). Key learnings: processes involved in seismic data processing. Remark: process and common pitfalls in conventional interpretation.
VIII. Appendices
In seismology, waveform cross correlation has been used for years to produce high-precision
hypocenter locations and for sensitive detectors. Because correlated seismograms generally
are found only at small hypocenter separation distances, correlation detectors have
historically been reserved for spotlight purposes. However, many regions have been found to
produce large numbers of correlated seismograms, and there is growing interest in building
next-generation pipelines that employ correlation as a core part of their operation. In an effort
to better understand the distribution and behavior of correlated seismic events, we have cross
correlated a global dataset consisting of over 300 million seismograms. This was done using
a conventional distributed cluster, and required 42 days. In anticipation of processing much
larger datasets, we have re-architected the system to run as a series of MapReduce jobs on a
Hadoop cluster. In doing so we achieved a factor of 19 performance increase on a test
dataset. We found that fundamental algorithmic transformations were required to achieve the
maximum performance increase. Whereas in the original IO-bound implementation, we went
to great lengths to minimize IO, in the Hadoop implementation where IO is cheap, we were
able to greatly increase the parallelism of our algorithms by performing a tiered series of very
fine-grained (highly parallelizable) transformations on the data. Each of these MapReduce
jobs required reading and writing large amounts of data. But, because IO is very fast, and
because the fine-grained computations could be handled extremely quickly by the mappers,
the net was a large performance gain.
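The waveform cross correlation the abstract refers to can be illustrated at a small scale with base R's ccf(): the lag at which the cross-correlation peaks estimates the time shift between two similar traces. The synthetic signals below are invented for illustration only.

```r
# Sketch: cross-correlating two synthetic seismograms, where the second
# trace is a lagged, noisier copy of the first. The lag at the peak of
# the cross-correlation recovers the imposed time shift.
set.seed(1)
n <- 500
trace1 <- sin(seq(0, 20, length.out = n)) + rnorm(n, sd = 0.1)
trace2 <- c(rep(0, 25), trace1[1:(n - 25)]) + rnorm(n, sd = 0.1)

cc <- ccf(trace1, trace2, lag.max = 60, plot = FALSE)
best_lag <- cc$lag[which.max(cc$acf)]
# best_lag recovers the 25-sample shift (sign follows ccf's convention
# that lag k correlates trace1 at time t+k with trace2 at time t)
print(best_lag)
```

Production pipelines of the kind described above apply essentially this computation across millions of trace pairs, which is why distributing it (e.g. with MapReduce) pays off.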
Project Report - Final Draft

  • 1. Seismic Data Interpretation Page | I of Seismic Data Interpretation, Submitted by Honey Sharma of UPES Seismic Data Interpretation An Internship report submitted in partial fulfillment of requirements for Masters of Business Administration (Oil and Gas Management) July, 2016 Under the guidance of Internal Guide: External Guide: Mr. Vibhav Prasad Mathur Mr. Debasis Nayak Assistant Professor, UPES Senior technical consultant, Greenojo Consulting Private Limited Submitted by Honey Sharma SAP id: 500044339 Enrollment Number: R020215109 Master of Business Administration (Oil and Gas Management) 2015-17 College of Management & Economic Studies, UPES
  • 2. Seismic Data Interpretation Page | II of Seismic Data Interpretation, Submitted by Honey Sharma of UPES Student Declaration I hereby declare that this submission is my own work and that, to the best of my knowledge and belief, it contains no material previously published or written by another person nor material which has been accepted for the award of any other degree or diploma of the university or other institute of higher learning, except where due acknowledgment has been made in the text. Honey Sharma SAP id: 500044339 Enrollment Number: R020215109 Master of Business Administration (Oil and Gas Management) 2015-17 College of Management & Economic Studies, UPES
  • 3. Seismic Data Interpretation Page | III of Seismic Data Interpretation, Submitted by Honey Sharma of UPES Acknowledgement In the process of carrying out any research, I was helped, motivated and guided by my mentor Mr. Debasis Nayak, my consultant Mr. Sabyasachee Panda, they guided my research and prototype development and educated me about the scope and work around in the process of prototype development. Special mention to Miss Sobhana Mohapatra, who provided her guidance and boosted my confidence and motivation in tougher times. Honorary mention to our esteemed internal university mentor Mr. Vibhav Prasad Mathur who has been the source of inspiration and has provided the will power to never quit. And last but not the least Dr. Geo Jos Fernandez, without his assistance my project report would not have finished. Thank you everyone for your input to complete my project research and develop this prototype that might perhaps develop into a solution and would contribute to quench the thirst for energy of a country like ours, India.
  • 4. Seismic Data Interpretation Page | IV of Seismic Data Interpretation, Submitted by Honey Sharma of UPES Certificate This is to certify that the summer internship report entitled Seismic Data Interpretation submitted by Honey Sharma to UPES for partial fulfillment of requirements for Masters of Business Administration (Oil and Gas Management) is a bonafide record of the internship work carried out by him under my supervision and guidance. The content of the report, in full or parts have not been submitted to any other Institute or University for the award of any other degree or diploma. Mr. Debasis Nayak Senior Technical Consultant Greenojo Consulting Private Limited College of Management & Economic Studies, UPES
  • 5. Seismic Data Interpretation Page | V of Seismic Data Interpretation, Submitted by Honey Sharma of UPES Certificate This is to certify that the summer internship report entitled Seismic Data Interpretation submitted by Honey Sharma to UPES for partial fulfillment of requirements for Masters of Business Administration (Oil and Gas Management) is a bonafide record of the internship work carried out by him under my supervision and guidance. The content of the report, in full or parts have not been submitted to any other Institute or University for the award of any other degree or diploma. Mr. Vibhav Prasad Mathur Assistant Professor College of Management & Economic Studies, UPES
  • 6. Seismic Data Interpretation Page | VI of Seismic Data Interpretation, Submitted by Honey Sharma of UPES Table of Content Sr. No Content Description Page Number 1 List of Tables VII 2 List of Figures VIII 3 List of Variables IX 4 Executive summary X 5 Introduction XI 6 Literature review XIII 7 Background of the study and objectives XIV 8 Research methodology XVI 9 Data Analysis XXII 10 Conclusion XXIX 11 Bibliography 12 Appendices
  • 7. Seismic Data Interpretation Page | VII of Seismic Data Interpretation, Submitted by Honey Sharma of UPES List of Tables Sr. No Content Description Page Number 1 Literature review table containing details of the research papers and the .pdf referred. XIII
  • 8. Seismic Data Interpretation Page | VIII of Seismic Data Interpretation, Submitted by Honey Sharma of UPES List of Figures Sr. No Content Description Page Number Fig-1 Process Flow-chart XII Fig-II Functional matrix flow chart-Part01 XVII Fig-III Functional matrix flow chart-Part02 XVIII Fig-IV Functional architecture XIX Fig-V Technical Architecture XX Fig-VI IBM Bluemix login page XXII Fig-VII IBM Bluemix user interface screenshot XXIII Fig-VIII IBM Bluemix catalog page XXIII Fig-IX Dash DB credentials creation page screenshot XXIV Fig-X Dash DB homepage screenshot XXIV Fig-XI Data loading homepage for dash DB XXV Fig-XII Preview page for loaded data table XXV Fig-XIII Target window screenshot for dash DB XXVI Fig-XIV Final window for dash DB account creation XXVI Fig-XV Screenshot of R coding console depicting integration of R with dash DB XXVII Fig-XVI Screenshot of R console for object creation XXVII Fig-XVII Screenshot of R console for scatter plot function XXVII Fig-XVIII Scatter plot graph XXVIII
  • 9. Seismic Data Interpretation Page | IX of Seismic Data Interpretation, Submitted by Honey Sharma of UPES List of Variables Sr. No Stated Variable Description Page Number 1
  • 10. Seismic Data Interpretation Page | X of Seismic Data Interpretation, Submitted by Honey Sharma of UPES Executive Summary “Today with the launch of seismic data interpretation software solution, a new era of the oil and gas upstream industry has began”. With the software integrated with cloud cost of processing the raw seismic data has decreased by “X%” and has provided a level playing field to everyone in the business be it a giant or a start-up in this sphere of the industry – The New York times, Monday, August 12, 2017. This will be the headline of every paper in the world once this prototype ventures into the market. To be precise it is actually a revolution in the field of seismic data interpretation which will bring down the cost of processing the raw seismic data by, “X million US$”. Accessible to almost everyone in the industry for a fraction of a cost this software will enable the industry to explore the territory beneath the earth surface with more precision and accuracy and may add another decade or two to the, “hydrocarbon age” before migration to a new source of energy. With the cost of seismic data processing ever increasing in the times of falling crude oil prices seismic data interpretation delivers what it claims to achieve, by the synchronization of technology with analytics. Raw, unstructured data is processed by an open source seismic processing and interpretation tool. After processing of the raw data and removal of unwanted information this processed file is then stored in the ASCII format. This is the point from where; “seismic data interpretation” carves out its own identity different from the conventional seismic data processing tools. These processed file parameters are extracted in the ASCII format and converted into the desired format (XLS, CSV, XLSX) so that can be fed as an input to an analytical tool. Cloud provides the infrastructure for the storage of a mammoth amount of data and also to speed up the analytical process. 
Analytical tool is then integrated with the cloud and analysis is done on the data on the real time basis. Parameters are analyzed against a common variable and the graph so plotted is then interpreted with the help of another presentation tool. To summarize seismic data interpretation in a crisp one liner, it is “Business process as a service” (BPaaS). Complete seismic data processing process is tweaked and pushed online onto the cloud and the parameters so extracted are presented in the form of a graph. This process can be customized according to the requirement of the user, such as processing, storage, analysis and interpretation. Charges are applied on per GB of data processed or drawn right from the input of the data to the point of output as desired by the user. This solution can be further developed and many unexplored territory can be discovered, for example the quantum of the hydrocarbon reserve beneath the earth’s surface, location of the sweet spot that can provide maximum rate of extraction of the hydrocarbon present in the rock structure, interpretation of the deepwater horizon seismic data are some of the few spheres that can be explored in future.
  • 11. Seismic Data Interpretation Page | XI of Seismic Data Interpretation, Submitted by Honey Sharma of UPES I. Introduction “The Stone Age did not end due to lack of stone, and the Oil age will end long before the world runs out of oil.” So said Sheikh Ahmed Zaki Yamani, former Saudi Arabian oil minister in a statement delivered to media. Sixteen years later Yamani’s words define the troubled state of oil and gas industry. It is evident that the industry is going through one of its phenomenal transformations which will change the industry as we know it. The dip in oil prices below US$40 (at the end of 2015) makes it very difficult for the producers to sustain and bear the cost of production. This also indicates the rampant supply amidst the weak global demand on account of slow economic growth. Oil and gas exploration (the upstream) sector has taken the worst of the hit, with the enormous cost US$X of processing the raw segy data and low return on investment oil and gas exploration sector has come to a standstill. With Seismic Data Interpretation prototype, I propose the solution to this arduous problem. A solution that not only x% less expensive that the traditional method, but also equips the interpreters with more powerful angle to make more accurate predictions about the unidentified hydrocarbon reserves beneath the surface of the earth. With the advent of the prevailing oil crisis, I was adamant to provide a solution to this problem. And when my consultant advised me to work in this direction during the interview session this idea began to take more solid form. With several months of preparation and two months of rigorous churning of thoughts of me, my mentor and of course my consultant we were able to carve out the plan of action into a work breakdown structure. 
In the beginning the base was build up understanding the structure of segy file, proceeded by the steps involved in the refining of the raw unstructured data, then identification of the vital parameters, extraction of these parameters, analysis with this parameters and finally the presentation. Significant research work has been has been done in the proposed field of interpretation and analysis of the seismic data by Mr. Daniel Patel, Mr. Christopher Giertsen, Mr. John Thurmond, Mr. John Gjelberg, and Mr. M. Eduard Groller, ¨ Member, IEEE in their research paper, “The Seismic Analyzer: Interpreting and Illustrating 2D Seismic Data”. Similar efforts have been made by Mr. Kulbhushan and Miss Monalisa Mitra in their research paper, “Basics of Land Seismic Data Interpretation”. My attempts are not two cross roads with their individual researches or to extend their individual findings but to build my own idea with my own thoughts, findings, and comprehension. To provide a new sphere, a new outlook to seismic data interpretation and processing.
  • 12. Seismic Data Interpretation Page | XII of Seismic Data Interpretation, Submitted by Honey Sharma of UPES Flowchart – (Fig-I) Study of the structure of the seismic data from the field/ seismic surveys List out the steps involved in the processing of the raw unstructured data Comprehending the application of these steps on the raw seismic data Listing out the necessary steps and the workarounds if required If the steps are necessary Refining of raw seismic data Extraction of these parameters If found critical Analysis of Parameters Presentation of the achieved relation through graph YES NO YES YES NO YES
  • 13. Seismic Data Interpretation Page | XIII of Seismic Data Interpretation, Submitted by Honey Sharma of UPES II. Literature Review To successfully complete the prototype building process the first and foremost requirement was to apprehend the seismic data processing and to understand the chronological steps to be followed so that these steps can be inculcated into the prototype functioning as well. To apprehend this I referred various research papers and text books (digital format) of the renowned authors such as Jon F. Claerbout’s, “Fundamental of Geophysical Data processing”, Ozdogan Yilmaz’, “Seismic data Processing”. While exploring this reference materials for the knowledge of seismic data processing, I embarked upon the fact that amplitude and frequency are the two most important attributes that shape the information stored in the seismic data. As these are the characteristics features which change considerably mapping the under the earth topography. Hence the detail understanding of these two attributes was required to map back the changes brought about in the seismic waves received at the receiver geophones. For this purpose of the project I referred to the following research papers and text books. Hongliu Zeng’s, “Frequency- Dependent Seismic Stratigraphy for High-Resolution Interpretation of Depositional Sequences”, Steve Henry’s, "Understanding Seismic Amplitudes”. The last but not the least was the decoding of the seismic interpretation process and understanding of the steps involved in the process. This is to bring to the notice of the readers of this report that interpreting the changes in the seismic data is a very general term as it is understood and practiced at will amongst the geologists and geophysicists. There are no standard procedures for this process. I referred some text books to provide my readers a general chronology of the steps that could be followed to interpret the changes in the seismic data. 
These steps are not standard and may be company specific or G&G team specific. I provide you these steps to have a general understanding of the process. To derive this chronology if referred Laurence R. Lines and Rachel T. Newrick’s, “Fundamental of Geophysical Interpretation”.
  • 14. Seismic Data Interpretation Page | XIV of Seismic Data Interpretation, Submitted by Honey Sharma of UPES III. Background of the study and Objectives The oil industry, with its hikes and falls is suffering from the deepest downturn since the 1990’s if not earlier. Price of crude oil though has recovered from hitting the low of US$26.21 in February this year to US$47 for Brent and US$46 for WTI, but the prices are still low as to what is required of to drill a profitable well. Most of the consumer industries have enjoyed the dispensation period of the low crude oil price, but several petroleum industries have suffered huge losses. These losses have the rippling effects which further harm the economy of the country. Oil and gas giants are the ones who have taken up worst of the hits due to involvement of the sophisticated machinery in drilling and extraction of oil (at fixed price) and selling that same crude at lower prices (market determined) and thus incurring heavy losses. Share price of some of the integrated oil and gas giants have tumbled due to this recession with Exxon Mobil share falling 8.2%, Chevron falling 13.63%, British Petroleum falling 12.21%, Total 17.08%, Philips falling 8.57% to name a few. And even with the crude oil market getting stable, price of oil and gas exploration would remain the same. This stands out as one of the biggest business problems in today’s oil and gas industry i.e. How to reduce the cost of seismic data processing in the times of falling crude oil prices and also reduce the risk in hydrocarbon exploration and production? With the introduction of seismic data interpretation prototype solution this problem can be answered. Seismic Data Interpretation does not follow the conventional protocol of processing the raw seismic data from the survey and then projecting the refined seismic image to the team of geologists and geophysicist. 
My objective is, “to provide a cheap and more reliable solution as compared to the conventional seismic data interpretation process”. This milestone is achieved by tweaking the existing seismic data processing and interpretation process in such a fashion that the resources needed to be deployed to interpret the processed image are drastically reduced, processing cost is plummeted, and huge volume of data is processed quickly and with more reliability to delineate a hydrocarbon reserve. Input was taken in the form of raw, unstructured data which was fed into the open source seismic interpretation software (OpendTect 6.0), thus saving the cost incurred from buying a licensed software. These processed files defined some critical parameters which were then pushed and stored into cloud and then integrated with analytical software (R 3.3.1) for analysis and the relation plotted into a graph and presented to the user, who can then take a decision to drill or not to drill. This process thus saves the cost paid to a geologist or geophysicist. Further the time required for analysis is reduced by storing the huge amount of data to cloud and the prediction of the co-ordinates to be drilled in order to find the hydrocarbon reserve is more reliable and involves less risk of failure.
  • 15. Seismic Data Interpretation Page | XV of Seismic Data Interpretation, Submitted by Honey Sharma of UPES In the designing and execution of this proposed solution, I embarked upon various hurdles which were either resolved or a work around was introduced to keep the progress of the solution building within the time limits. My research began with the study and understanding of the seismic data in the form of segy file. To understand what are the components of the segy file which can be exploited for stored geological information. Then I had to comprehend what possible noises or unwanted information might have crept into this presented geological information which might lead to the erroneous prediction of the co-ordinates of drilling. This study was accompanied by the study of seismic survey and study of the seismic exploration, seismic processing and seismic interpretation in parallel. Further to this objective I studies the steps involved in processing and cleansing of this data. And finally how is interpretation done of the processed seismic file to the geologists and geophysicist. Comprehending the commands and operation of tools was followed later on, after we began our operations on the input data as per the guidance of technical architecture. Achieving this arduous objective of developing a prototype to cater to the defined business problem was not an easy task altogether. And it required the assistance of my mentor and my consultant for the concerned project. Path to this destination was full of research problems but, if I had to define the most problem amongst the lot that would be, “to define the hierarchy of steps to be followed in processing of the raw unstructured data”. Conception of the seismic processing laid the foundation on which the functional matrix was structured and then the functional architecture later on. Hence this was the most critical problem for my research and prototype building.
IV. Research Methodology

Sr. No. | Traditional Research Methodology | 5-P Model
1 | Research Design | Proof of concept
2 | Hypothesis Formulation | Point of view
3 | Methods of data collection | NA
4 | Analytical tools used | Prototype
5 | Scope of study | Point of view
6 | Testing of Hypothesis | Pilot
7 | Limitations of the study | NA
8 | -- | Package

I adopted applied research, and the entire effort was broken down into the 5-P model. The five phases are as follows:

• Point of View/Analysis – In the first stage of prototype development, the functionality of the subject was defined and the hierarchy of steps to be followed was outlined. A functionality design was established to achieve the objective of the prototype solution.
• Proof of Concept/Design – In the second stage, once the foundation was laid by the functional matrix, a functional architecture was built on it to define the activity flow.
• Prototype/Develop – With the functional matrix and functional architecture in place, a prototype was developed: software that could process a test seismic file and produce the claimed results, while also defining new avenues for the future development of the solution.
• Pilot/Testing – This stage was marked by testing and refining the prototype to produce a more scalable software solution.
• Package/Release – Release of the software to the market with a predefined business model, and earning revenue from it. This was the last stage of prototype development.

A). Point of View/Analysis – A rigorous study was done to understand the structure of the seismic data obtained from the field trace or observer's log. The study comprised the various operations to be performed on the raw unstructured data to extract the valuable information.
These steps include editing of the raw data, removal of noise, geometric corrections, velocity analysis, and enhancement of seismic resolution by boosting frequency content.
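As an illustration of the noise-removal step, the following minimal Python sketch (not part of the prototype, which used OpendTect for processing) smooths a synthetic trace with a moving-average filter; the trace values, noise level, and window size are made-up assumptions for demonstration only.

```python
import math
import random

def moving_average(trace, window=5):
    """Smooth a trace (list of floats) with a centered moving average."""
    half = window // 2
    smoothed = []
    for i in range(len(trace)):
        lo, hi = max(0, i - half), min(len(trace), i + half + 1)
        smoothed.append(sum(trace[lo:hi]) / (hi - lo))
    return smoothed

# Synthetic trace: a low-frequency signal plus random Gaussian noise.
random.seed(42)
signal = [math.sin(2 * math.pi * t / 100) for t in range(500)]
noisy = [s + random.gauss(0, 0.3) for s in signal]
clean = moving_average(noisy, window=9)

# The smoothed trace should sit closer to the underlying signal.
err_noisy = sum((a - b) ** 2 for a, b in zip(noisy, signal))
err_clean = sum((a - b) ** 2 for a, b in zip(clean, signal))
print(err_clean < err_noisy)
```

Real processing chains (deconvolution, NMO and DMO corrections, migration) are far more involved; this only shows the principle of suppressing random noise while preserving the coherent signal.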
These steps were then jotted down as a flowchart, with the process described in detail on the adjoining sheet; the steps involved in interpreting the seismic data were treated similarly, with their own flowchart and process description on the next sheet. This analysis was done to comprehend which operations must be performed before the data is actually presented to geologists and geophysicists for interpretation, so that the same functions could be incorporated into the prototype and the software could deliver them with precision.
(Fig-II)
(Fig-III)
(Fig-IV)

B). Proof of Concept/Design – Based on the functional matrix, the hierarchy of steps was decided and a plan of action for how each step should be delivered was framed. The design was divided into four layers: the infrastructure layer, the database layer, the application layer, and the presentation layer. Seismic data in raw form resides in storage; it is processed by the tool in the application layer, converted into the required format, exported into an analysis tool, and finally presented in the presentation layer. The technology associated with the design is presented in the technical architecture below, which specifies the technology paired with each layer in our prototype. Note, however, that it is the functionality that remains permanent; the technology can be changed according to affordability and availability.
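The four-layer design above can be written out as a simple mapping from layer to responsibility. The technology notes are those used in this prototype and, as stated, are replaceable; the exact wording of each role is illustrative.

```python
# Four-layer design of the prototype: only the functionality is fixed,
# the technology attached to each layer may be swapped.
architecture = {
    "infrastructure": "hosting and compute (IBM Bluemix cloud)",
    "database": "storage of raw and processed seismic data (dashDB)",
    "application": "processing and analysis (OpendTect 6.0, R 3.3.1)",
    "presentation": "graphs and reports delivered to the end user",
}

for layer, role in architecture.items():
    print(f"{layer:>15}: {role}")
```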
(Fig-V)

C). Prototype/Development – Development of the full-fledged prototype began with an idea: to decrease the cost of processing seismic data so that overall profitability could be enhanced. As economics teaches, profit earned by reducing the cost of production becomes the soul of a business architecture that no rival can copy. This business idea then took the shape of a "scope of work," and the skeleton of a definite solution to a definite business problem was fabricated. It outlined the tools to be used at different stages and the milestones to be achieved as the internship progressed, and the objective was clearly defined. With the scope of work outlined, the entire work to be done over the span of two months was broken down into a work breakdown structure. With time spans allotted to the different deliverables and time limits set, a pace was established for prototype development. Times for review meetings were also set, to check that progress remained synchronized with the outlined objective. Following approval of the work breakdown structure, a functional matrix defining the functionality of the prototype was designed and approved, with each step explained in detail and agreed upon by mutual discussion among the mentee, mentor, and consultant. On the foundation of the functional matrix, a functional architecture was designed to show how the functionality defined in the matrix was to be achieved. The technical architecture defined the technology to be used at every step in conformance with the functional architecture. Workarounds were decided in advance in case a particular technology did not prove viable over the two-month span. Operations were performed as defined by the functional matrix
and guided by the functional architecture, with the assistance of the tools mentioned in the technical architecture. A great deal of time, brainstorming, intellect, and sweat went into the project, and with dedicated hard work a prototype was developed over the span of two months. It took great determination for a three-member team, the mentee, the mentor, and the consultant, to achieve this difficult task.

D). Pilot/Testing – The pilot phase is the presentation of the software to the customer and the conditioning of its functions according to the user's requirements. Some customers may possess their own seismic interpretation and processing tool and would like to integrate the process with it. Similarly, a different user may not want to see the graph and would instead prefer a processed seismic image presented to their team of geologists and geophysicists. These customizations can be incorporated within the value network, and modifications can be made to the fundamental design architecture of the prototype. Charges can be broken down into a processing fee, customization fee, installation fee, and utilities such as storage and presentation. Charges will apply per GB of data input and per unit of processing time consumed by a user. Charges could also apply to every upload and download of data.

E). Packaging – This prototype could be sold as a complete business process, or Business Process as a Service (BPaaS), with an integrated seismic data processing and analytical tool, along with added services of storage and presentation for different customers.
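The usage-based charging model described in the pilot phase above (per-GB, per processing unit, per transfer) can be sketched as a small billing function. All rates below are hypothetical placeholders, not quoted prices.

```python
# Illustrative sketch of the usage-based pricing model; every rate here
# is an assumed placeholder for demonstration.
def invoice(gb_uploaded, processing_units, transfers,
            rate_per_gb=10.0, rate_per_unit=2.5, rate_per_transfer=0.5):
    """Return an itemized charge for one billing period."""
    charges = {
        "storage": gb_uploaded * rate_per_gb,          # per GB of data input
        "processing": processing_units * rate_per_unit,  # per unit of time
        "transfers": transfers * rate_per_transfer,      # per upload/download
    }
    charges["total"] = sum(charges.values())
    return charges

bill = invoice(gb_uploaded=120, processing_units=40, transfers=8)
print(bill["total"])  # 120*10.0 + 40*2.5 + 8*0.5 = 1304.0
```

Customization and installation fees from the text would be one-off line items added on top of this recurring usage charge.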
V. Data Analysis

The essence of my prototype solution lies in the analysis of the parameters extracted from the processed seismic files and the prediction, from that analysis, of the most promising coordinates to drill. Until this boundary is crossed, seismic data interpretation delivers nothing different from what other software packages on the market already offer; it is this special ability that makes the prototype stand out. Data analysis begins with data acquisition and storage for further analysis. Processed seismic files from OpendTect 6.0 are exported in ASCII format and then manually converted into Excel files in XLS format. These files are then loaded into a dashDB account on IBM Bluemix (cloud). The steps for creating an online dashDB account on the cloud are as follows:

• First, log in to your IBM Bluemix account with your username and password credentials. (Fig-VI) You will be guided to an interface window as shown below, which introduces a user to the services offered by IBM Bluemix as a cloud.
(Fig-VII)

• From there, go to the catalog and select the dashDB icon from the available options. (Fig-VIII)

• The next window will prompt you to enter details such as the space, app, service name, credentials name, and the entry plan. Choose as you like and click "Create".
(Fig-IX)

• A dashDB instance will be created in your account, where you can perform various operations such as storage and analysis. You need to push data into the cloud because the seismic data from a single survey runs into petabytes, and if you have to process data for multiple clients the volume is larger still; the cloud is therefore the only practical option for fast results. To load data into the cloud, launch dashDB and load your data into it. The data to be loaded must be in XLS, XLSX, or CSV format. (Fig-X)
The options available for loading your data are highlighted in the screenshot above. In my research I loaded the data from the desktop, but as the solution is scaled up, data can be loaded from one cloud to another.

• After you specify the location of the data to be loaded, the next window will prompt you to select the file from your local storage location, set the row characteristics, and select Preview. (Fig-XI)

• The next window will show a preview of the loaded table, covering the first 10 rows. Carefully examine the preview for any discrepancy in the data and click Next. (Fig-XII)
• The next window will prompt you to select the target where the data is to be loaded, i.e., whether you want your data stored in an existing table or in a new table. Choose the option according to your requirement; in my research I had to create a new table for the data. (Fig-XIII)

• As soon as you hit Next, the following window will show a preview of the columns and the name of the table to be created. This is the cue to change the column names or the table name as you like, or as required for further analysis. (Fig-XIV)

• Once the table name, column names, and serial number are fixed, click Finish to complete the process. More data tables can be loaded for analysis in the same fashion. This finishes one phase of the analysis; the second phase is connecting dashDB with R and carrying out the analysis itself.
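The manual ASCII-to-spreadsheet conversion mentioned at the start of this section could equally be scripted. Since dashDB also accepts CSV, a hedged Python sketch is shown below; the two-column layout (frequency, power) and the sample values are assumptions, and a real OpendTect export's header would need to be checked before loading.

```python
import csv
import io

# An OpendTect-style ASCII export: whitespace-separated columns.
# The (frequency, power) layout and values are assumed for illustration.
ascii_export = """\
10.0  0.82
20.0  0.95
30.0  0.61
"""

# Rewrite the export as CSV, the simplest format dashDB will ingest.
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["FREQUENCY", "POWER"])
for line in ascii_export.splitlines():
    if line.strip():  # skip blank lines in the export
        writer.writerow(line.split())

csv_text = out.getvalue()
print(csv_text)
```

Automating this step would remove the manual XLS conversion from the pipeline entirely.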
To set up a connection in R, the steps are as follows:

• Open RStudio and install the ibmdbR package from the Packages console so that IBM Bluemix can be integrated with R and analysis can be done on the data in real time. (Fig-XV)

• You need to create objects in R into which data is pulled from the cloud in real time, and the analysis is then carried out; the code is shown below. For my analysis I merged two data sets into one common data set so that a graph could be plotted, the change of the parameters (power1, power2) examined against a common parameter (Frequency), and a comparative analysis done. (Fig-XVI)

• Finally, a scatter-plot graph was generated, after first installing the scatter-plot package. (Fig-XVII)
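The merge step performed in R (Fig-XVI) can be illustrated with a standard-library Python analogue: two processed files each map Frequency to a power value, and the rows sharing a Frequency are joined so that power1 and power2 can be compared side by side. The values below are invented for demonstration.

```python
# Stdlib analogue of the R merge: join two Frequency -> Power mappings
# on their common Frequency values. Values are illustrative only.
file1 = {10.0: 0.82, 20.0: 0.95, 30.0: 0.61}  # (Frequency, power1)
file2 = {10.0: 0.78, 20.0: 0.99, 40.0: 0.40}  # (Frequency, power2)

common = sorted(set(file1) & set(file2))
merged = [(f, file1[f], file2[f]) for f in common]  # (Frequency, power1, power2)

for freq, p1, p2 in merged:
    print(freq, p1, p2)
```

In the prototype the merged table is what feeds the scatter plot of power1 and power2 against Frequency.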
This will present a graph in the viewer that can be interpreted as a comparative study of two parameters from two different seismic files against a common parameter. Note that a petroleum engineer who is familiar with seismic waves and their behavior below the surface of the earth can interpret the graph so formed; a geologist or geophysicist is not a necessity. (Fig-XVIII) (Fig-XIX)
VI. Conclusion

"Don't pray for tasks equal to your powers; pray for powers equal to your tasks." – Phillips Brooks. This was the quote I thought of when I concluded my research on seismic data interpretation. The future of seismic data interpretation looked dull and hazy in the light of falling crude oil prices, which had dipped from a peak of about US$115 to US$35 per barrel. This collapse led to a drop in investment in the oil and gas industry, from roughly US$700 billion in 2014 to US$550 billion in 2015, and the sharp decline in related sectors also contributed to slower overall economic growth. Meanwhile, the price of seismic data processing rose by about 1.5% annually over the three years to 2015. With the demand for energy rising, especially in developing economies, oil and gas exploration is a necessary burden the industry must carry. From the study of scholarly articles, the conclusion was drawn that the exploration industry is in desperate need of a solution that can provide fast and accurate processing of seismic data at comparatively low cost. My study was therefore guided toward enhancing the seismic data processing method, for which one must thoroughly understand the seismic data processing workflow. The scholarly articles I followed educated me about the operations needed to improve the quality of the raw data, and about integrating conventional data processing activities with analytical tools so as to enhance the probability of hitting a hydrocarbon-rich rock structure.
The limitations of the study, owing to constraints of time and expertise, were that only some of the many parameters affecting seismic data processing could be extracted and analyzed; the more parameters extracted and analyzed, the higher the accuracy and reliability of the analysis in the analytical tool. Guidelines explaining the generated graph of the analyzed parameters were also missing; these could have familiarized geologists and geophysicists with the output and facilitated interpretation. In future, this solution might take shape as an integration of an open-source seismic data processing tool with an analytical tool. Its scope could also be enhanced to predict the quantum of the hydrocarbon reservoir beneath the surface and the sweet spot to drill for maximum hydrocarbon recovery.
VII. Bibliography

1. Hongliu Zeng, "Frequency-Dependent Seismic Stratigraphy for High-Resolution Interpretation of Depositional Sequences," AAPG Datapages. Key learning: the effect of frequency on seismic resolution; changes in the frequency of a wave when it encounters a hydrocarbon reserve.
2. Jon F. Claerbout, "Fundamentals of Geophysical Data Processing" (PDF). Key learning: comprehension of the various data processing steps; application of processes such as deconvolution, NMO correction, and DMO correction.
3. Laurence R. Lines and Rachel T. Newrick, "Fundamentals of Geophysical Interpretation," www.academia.edu. Key learning: the steps involved in data interpretation; understanding discontinuities.
4. Ozdogan Yilmaz, "Seismic Data Processing," Google Drive. Key learning: flowchart and hierarchy of the steps involved in seismic data processing.
5. Steve Henry, "Understanding Seismic Amplitudes," AAPG Datapages. Key learning: factors affecting the amplitude of seismic waves; subsurface phenomena that change seismic amplitude.
6. Friedrich Nietzche, "Common Techniques for Quantitative Seismic Interpretation," www.academia.edu. Key learning: processes and common pitfalls in conventional interpretation.
VIII. Appendices

In seismology, waveform cross correlation has been used for years to produce high-precision hypocenter locations and to build sensitive detectors. Because correlated seismograms are generally found only at small hypocenter separation distances, correlation detectors have historically been reserved for spotlight purposes. However, many regions have been found to produce large numbers of correlated seismograms, and there is growing interest in building next-generation pipelines that employ correlation as a core part of their operation. In an effort to better understand the distribution and behavior of correlated seismic events, we have cross correlated a global dataset consisting of over 300 million seismograms. This was done using a conventional distributed cluster and required 42 days. In anticipation of processing much larger datasets, we re-architected the system to run as a series of MapReduce jobs on a Hadoop cluster, achieving a factor of 19 performance increase on a test dataset. We found that fundamental algorithmic transformations were required to achieve the maximum performance increase. Whereas the original IO-bound implementation went to great lengths to minimize IO, in the Hadoop implementation, where IO is cheap, we were able to greatly increase the parallelism of our algorithms by performing a tiered series of very fine-grained (highly parallelizable) transformations on the data. Each of these MapReduce jobs required reading and writing large amounts of data, but because IO is very fast, and because the fine-grained computations could be handled extremely quickly by the mappers, the net result was a large performance gain.
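The core operation in the pipeline above, waveform cross correlation, can be sketched in a few lines: slide one trace over another and report the lag with the highest normalized correlation. The traces below are invented; a production system (as the appendix notes) distributes this computation across millions of seismogram pairs.

```python
import math

def normalized_xcorr(a, b, lag):
    """Correlation of a[i] against b[i + lag] over the overlapping samples."""
    pairs = [(a[i], b[i + lag]) for i in range(len(a)) if 0 <= i + lag < len(b)]
    if not pairs:
        return 0.0
    num = sum(x * y for x, y in pairs)
    den = math.sqrt(sum(x * x for x, _ in pairs) * sum(y * y for _, y in pairs))
    return num / den if den else 0.0

def best_lag(a, b, max_lag):
    """Lag within [-max_lag, max_lag] at which the two traces align best."""
    return max(range(-max_lag, max_lag + 1),
               key=lambda lag: normalized_xcorr(a, b, lag))

# b is a copy of a delayed by 3 samples, so correlation peaks at lag 3.
a = [0, 1, 4, 2, -1, -3, 0, 2, 1, 0, -1, 0]
b = [0, 0, 0, 0, 1, 4, 2, -1, -3, 0, 2, 1]
print(best_lag(a, b, 5))  # -> 3
```

In a correlation detector, a peak correlation above some threshold flags the two seismograms as belonging to nearly co-located events.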