The Digital Oilfield
Excerpt from a white paper presented by Bob Molnar, Global Sales Manager for Offshore
Software Solutions, at the Digital Oilfields Symposium, September 2014, London, UK
Traditional monitoring systems in the Digital Oilfield are giving way to a more
globalized, customer-centric solution portfolio. Such innovative thinking has brought a
more holistic approach to managing every available area of digitization: electronic
whiteboards replace erasable markers and sticky notes, and portable tablet computers
carry entire databases in lieu of clipboards and armloads of drawings and user manuals.
Communication is no longer limited to voice and paper, but extends to CCTV, RFID
tracking systems, webcams for "on the move" activities and over-the-shoulder remote
assistance via Wi-Fi from thousands of miles away, all designed to help the crew and
maintenance staff repair equipment and manage systems more effectively and quickly.
Today, with ultra-high-resolution photography and databases duplicated thousands of
miles apart, live CCTV becomes far less necessary. We can now view the same images of a
piece of equipment on a remote offshore drillship and simultaneously in a control room ten
thousand miles away, and resolve 30 minutes of dialogue in only a few seconds with a
20-kilobyte message, leaving the live feed from a remote CCTV camera for emergencies only.
We can exchange hand-drawn images and sketches, view notes and update checklists without
exceeding a few hundred kilobytes in data transmission. We can update drawings,
schematics and BOMs live on the system from either end of the network, and synchronize
the changes overnight without increasing peak-time bandwidth or network traffic, and can
replenish parts, exchange ideas, keep detailed logs, and replay a recent activity for review and analysis.
All of this comes with a selectable preferred language at every operator station to improve
communication and comprehension amongst your crews. See an instant snapshot of fit-for-
purpose system views based on personally selected criteria, giving a real feel for the
condition of equipment, processes or functions in a key stakeholder view. See a system state
as red or green, both locally and remotely, and choose to view the underlying system details
instantly for only the most immediate issues, without the need to interpret first. Minimize
security issues, because data is transmitted only to another node of its own unique
designation and database frameworks are never exchanged. View any product or system
through a single global interface, no matter who the manufacturer, and control access to your
system by any third-party service and support provider by isolating their access to their own
equipment. Conduct three-way service calls between support engineering, the manufacturer's
representative and the ship's crew, all potentially in different parts of the globe, and make
more informed decisions faster. Such solutions exist today.
With market demands forcing the oil and gas industry to go into far more remote and
hostile environments, “new oil” supports the drive behind development of solutions to help
lower the increasing daily cost of exploration, production and processing in the oil field. The
digital oilfield concept drives innovation and optimization across the entire market, but most
profoundly in the offshore sector. As our ability to gather data in huge volumes, from a
variety of sources and in many logically different forms, increases, we face challenges that
demand not only a clearly defined view of systems and processes but also, and more usually
slighted, an understanding of how to interface with key stakeholders, including finance,
engineering, operations and local crews, in more effective and meaningful ways. Perhaps this
comprehensive viewpoint has only now been identified as significant; it is not as well
understood as it needs to be, and it is here that we must refocus.
With better and faster wireless technologies readily available to the industry,
companies are asking themselves how they can identify tangible value from new emerging
BIG DATA management technology, not just in a remote setting but distributed effectively
around the globe, no matter when, where, or what it might be.
Across the complexity of integrated systems monitoring, we encounter numerous
stakeholder-driven KPIs founded on a mix of fundamental yet vastly different underlying
motivators. To be precise, we can break these down into some essential subsets:
• Availability of a remote asset
• Support of those assets
• Sustained value of the asset
• Performance of the asset
• Safety of the asset
• Cost of the asset
• Integration of the asset within the larger operation
There are certainly many more, but these capture the core drivers for technology
development and solution application.
The digital oilfield concept has long been of interest to all the major oil companies, and
certainly to those smaller businesses who have in the past tried to compete with the big
nationals on a more regional level, where any means of productivity improvement was
considered valuable in terms of scale, because a compact, smaller operation is highly visible
and its KPIs are of utmost importance to the fundamental survival of the business.
On the other hand, costs have historically been prohibitive, and communications
technologies are only now catching up with our ability to sense, gather and share large
amounts of data.
Many products developed over the years are still being touted as solutions for
today's fast-moving and hugely expensive oil and gas extraction process, and they have
created a lingering cause for questioning the validity of, and hesitating to approach,
modern solutions. Unfortunately, we have grown used to the shortcomings of these outdated
products, such as their inability to show Key Performance Indicators (KPIs) in any real,
measurable form to these mega-companies. Take, for example, an average deepwater
drillship chartered by a national oil company at an operational cost of $500,000 per day. A
1% reduction in Non-Productive Time (NPT) over a KPI measurement period of one year
recovers approximately $2 million annually for each percent of NPT turned back into
productive (drilling) time. With measurable NPT of anywhere from 2% to 20% across any
given fleet during any one measurement period, and with some 85% of that NPT attributable
to equipment failure, the largest proportion of it sub-sea equipment, we can see an emerging
direction in the recovery challenge: a significant cost reduction and performance enhancement
program that can be made financially tangible to key stakeholders.
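The arithmetic behind those figures can be checked directly. The sketch below uses only the numbers quoted above (the day rate, the 1% NPT recovery, and the 85% equipment-failure share); the 10% fleet NPT chosen for the second calculation is just one point in the 2% to 20% range given:

```python
# Worked example: value of recovering non-productive time (NPT)
# on a deepwater drillship, using the figures quoted in the text.

DAY_RATE = 500_000        # operational cost, USD per day
DAYS_PER_YEAR = 365

# Value recovered per percentage point of NPT turned into drilling time.
value_per_percent_npt = DAY_RATE * DAYS_PER_YEAR * 0.01
print(f"${value_per_percent_npt:,.0f} per 1% of NPT recovered per year")
# 500,000 * 365 * 0.01 = 1,825,000, i.e. roughly the $2 million quoted.

# With 85% of NPT attributed to equipment failure, the addressable share
# at an illustrative 10% fleet NPT would be:
npt_fraction = 0.10
equipment_share = 0.85
addressable = DAY_RATE * DAYS_PER_YEAR * npt_fraction * equipment_share
print(f"${addressable:,.0f} per asset per year addressable via reliability")
```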
The digital oilfield concept remains a widely varied and somewhat vague focus,
piggybacking on modern IT automation solutions in which systems and equipment
monitoring promises to enhance all aspects of oil and gas operations, from exploration and
production to environmental and workplace safety.
The underlying theme, however, is to get the big picture across an entire globally distributed
operation by replacing old-school methods with modern, proven, high-tech systems, and thus
optimize productivity while providing the kind of data needed to show recovery of
non-productive time as a comparable value across individual assets and across the digital
oilfield as a whole.
Many justifications have been raised in the industry for this hesitation to accept
collaborative monitoring systems, but the most evident remains the lack of clearly defined
outcomes in systems designed, unfortunately, not for the user but by and for those still
motivated by antiquated conventions of corporate design, on the logic that "if it works for
our equipment, then we have a viable product to offer the industry".
There may be some limited truth to this, but it entirely lacks the key involvement of the user
in the design. And if the solution is in fact for the user, why is the user not defining the
solution?
The historic costs associated with products for monitoring, assessing and archiving
data are now being offset by OEM designers integrating full-performance packaged
solutions like the GE Insight Management System, a suite of high-tech software and
hardware that traces its roots to military applications in naval damage control and critical
systems monitoring. Because of these origins, it adopted the key concepts of simplicity and
user visualization, rather than forms of presentation requiring interpretation, under extreme
conditions of a critical nature, providing an "eyes on" extension of the operator's own
senses. With machines carrying ever more sophisticated on-board logic in their fundamental
construction, the idea of extracting outcomes, as opposed to raw data, contributes to the
simplification of monitoring systems in general.
We must not forget the reason for equipment and process monitoring in the first
place: to give us the ability to streamline any activity for the sake of increased safety,
performance, speed, cost reduction and greater understanding. In maintaining the
appropriate balance between man and machine automation, there is therefore a need to be
very careful not to overstep the control we allow machines to exercise without human
intervention. Even a system shutting down to preserve itself could prove costly or
disastrous, depending on all the other integrated systems in the affected process; the correct
decision, after all, might very well include emotion or a "gut feeling" in its outcome, and
often a collaboration with other individuals. Certainly we need to retain and improve our
ability to make decisions, and humans will therefore always be part of the equation; after
all, it is called the man/machine interface, not machine decision making.
One example of this technology is the Insight Management System (IMS), designed
and built within General Electric's Energy Management Power Conversion Marine division
and first released in 2011. The IMS represents a comprehensive, fleet-wide management
solution for the entire digital oilfield, which includes:
• a Primary Asset Management (fleet) module
• a Local (on-board) Asset Management system, including consumables and parts management
• an Alarm Response & Visualization system
• an Emergency Response System
• a Condition-Based Monitoring module, including third-party systems
• Equipment Visualization for remote repair support
• an advanced Productivity Improvement Toolkit containing an EHS-focused Work Process Manager
This system sits on top of all other existing monitors and sensors, providing visualization,
manipulation and interfacing on multiple levels, with filtering capability based on each
individual's needs to evoke, where possible, the simplest form of indication: red or green.
Essential delta data (signals that have changed since a previous measurement) is transmitted
in real time around the asset and, if necessary, likewise to the remote nodes.
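The delta principle, transmitting only signals whose value has changed since the last measurement, can be sketched in a few lines. The signal names and the tolerance value below are illustrative assumptions, not taken from the IMS:

```python
# Sketch of delta transmission: given a previous and a current snapshot of
# sensor readings, emit only the signals that changed beyond a tolerance.

def delta(previous: dict, current: dict, tolerance: float = 0.0) -> dict:
    """Return only the signals whose value changed since the last snapshot."""
    changed = {}
    for name, value in current.items():
        old = previous.get(name)
        if old is None or abs(value - old) > tolerance:
            changed[name] = value
    return changed

prev = {"pump_rpm": 1480.0, "oil_temp_c": 62.5, "vibration_mm_s": 1.1}
curr = {"pump_rpm": 1480.0, "oil_temp_c": 64.0, "vibration_mm_s": 1.1}

payload = delta(prev, curr, tolerance=0.5)
print(payload)  # only the changed signal is transmitted
```

Only the oil temperature crossed the tolerance here, so a three-signal snapshot shrinks to a one-signal payload; across thousands of mostly static points the saving compounds accordingly.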
However, given continuing bandwidth limitations, coverage issues and communications
latencies, it is worth noting that the need for real-time data remote from an asset is actually
quite rare. In almost all cases, real-time data is valuable only at the location where it was
collected, and waiting a hundred or so seconds for remote data to arrive for analysis
elsewhere is fine; if a process is time-critical (instantaneous), there will be no time for
long-winded evaluation and interpretation before local intervention becomes necessary to
prevent an undesirable outcome. A simplified local display of "good or bad" state therefore
needs to be provided to the crew, for quick focus on and understanding of the state of a
critical process.
For the sake of simplification, the Insight system incorporates two interesting functions:
local and remote libraries, and local and remote data historians. These disparate, connected,
asynchronous systems allow for bidirectional "lazy loading" of data that is not real-time
critical, using whatever variable bandwidth is available when they synchronize. Another
feature is, of course, the ability to tunnel a connection to a remote asset or assets to acquire
data as it is generated in real time. This achieves a saving in communication costs and
network congestion when the final and perhaps most important module is supplied: the
machine-agnostic predictivity suite known as PREDIX™.
PREDIX™ is a well-proven solution developed by GE Aviation to provide remote online
monitoring of jet engines, in flight and on the ground, 24/7/365. It provides accurate
real-time machine condition monitoring based on the optimal operational characteristics
of a machine or system, built from multiple sensor signals taken both directly from and
around the machine, giving a histogram of correct operation under all operating
circumstances. It is against this "baseline" set of data that ongoing operation is constantly
compared and evaluated for deviations suggesting possible incorrect operation, ultimately
leading to failure avoidance.
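The baseline comparison described above can be illustrated with a minimal sketch. PREDIX itself is far richer; here each signal's healthy operation is summarized simply as a mean and standard deviation, and a live reading is flagged beyond a three-sigma deviation. The signal names, values and threshold are all assumptions for illustration:

```python
# Minimal sketch of baseline-versus-live condition comparison: healthy
# operating history is condensed into per-signal statistics, and live
# readings are flagged when they deviate strongly from that baseline.
import statistics

def build_baseline(history: dict) -> dict:
    """Summarize healthy operating data per signal as (mean, stdev)."""
    return {name: (statistics.mean(vals), statistics.stdev(vals))
            for name, vals in history.items()}

def deviations(baseline: dict, reading: dict, threshold: float = 3.0) -> list:
    """Return the signals whose live value deviates beyond the threshold."""
    flagged = []
    for name, value in reading.items():
        mean, stdev = baseline[name]
        if stdev > 0 and abs(value - mean) / stdev > threshold:
            flagged.append(name)
    return flagged

history = {"bearing_temp_c": [70.1, 69.8, 70.4, 70.0, 69.9],
           "motor_current_a": [41.8, 42.1, 42.0, 41.9, 42.2]}
baseline = build_baseline(history)
# An abnormally hot bearing is flagged; the motor current is not.
print(deviations(baseline, {"bearing_temp_c": 78.0, "motor_current_a": 42.0}))
```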
GE's ability to provide these advanced PREDIX™ services removes the need to schedule
unnecessary maintenance or time-based parts renewal, replacing it with more accurate
real-state evaluation and spare parts management, providing a measurable reduction in
maintenance overheads and increased availability of critical assets.
At the heart of the digital oilfield is wireless networking, which provides the high-
speed communications backbone that makes the majority of recent innovations possible. In
many quarters, the introduction of maritime wireless systems is considered an enhancement
to the conventional satellite very-small-aperture terminal (VSAT) used for ship-to-shore
communications; VSAT represents a major investment across existing fleets and must
therefore be utilized better, not replaced at huge expense. Increased wireless connectivity in
the form of offshore asset Wi-Fi brings the benefit of remote monitoring and control right
into the hands of the operator and his support teams. Because portable devices work both
on- and off-line, the enhanced functionality of a portable device used, for example, topside
or in the engine room during a maintenance repair can prove more valuable in a single use
than the entire cost of deploying the ship-wide Wi-Fi that supports it.
There was a recent misconception that wireless solutions needed to replace
traditional VSAT, which was deemed slow, expensive and lacking the bandwidth needed to
run intensive software programs between assets and shoreside control centers. In fact,
innovations in software design have led to the ability to extract data differently and more
efficiently: we do not actually need to replace the technology, we need to work with it
right from the conception and development stages. The problem of data-intensive
communications really comes down to the fact that software designers are not
communications experts, so the resulting massive data exchange is traditionally addressed
only after the fact.
From a pure production perspective, increased wireless connectivity in the offshore
industry brings the benefit of remote monitoring and control, with onshore specialists able to
continuously supervise key production processes, help local crews troubleshoot potential
problems at an earlier stage, and deliver more accurate resource management.
It is a popular misconception that data from deployed assets should provide a real-time (or
near real-time) link between platforms and onshore engineers. Again, most analytical data
can be somewhat latent in its arrival ashore for detailed historical operational analysis. When
necessary, a real-time "snapshot" can be taken from the remote operation and compared with
the historical data to give a proper, up-to-date picture of machine condition. As we are now
able to manage multiple applications from a single screen, locally or thousands of miles from
the offshore assets, the way we see data is of primary importance. With de-manning, an
ageing workforce leaving less-qualified resources, and increasingly high-technology
solutions, it is even more important to provide offshore crews with invaluable in-hand
support tools. Contextual help, videos, images, diagrams and even the location of the tools
necessary for a job are carried with the crew in portable, Wi-Fi-connected, ultra-rugged
computers of the kind traditionally used in the military.
Local work and expertise are then in hand. When greater expertise is needed, on-board
webcams, VoIP, email and free-drawing tools can be used to improve communications with
a remote location "live", and thus increase productivity and reduce the all-important
non-productive time and corresponding cost overruns.
As GE Intelligent Platforms global industry manager for oil and gas Mayank Mehta
put it in a November 2012 article for E&P: "With the decrease in costs of servers and storage
equipment, it is easy to fall into the trap of data and information overload. A balance between
'not enough' and 'too much' is crucial for ensuring that decision support systems have the
relevant data." As such, implementing the digital oilfield eco-system is a formidable task of
integration and optimization that requires the top-down reorganization of workflows. That
some oil and gas companies are intimidated by the process is understandable, but with the
deployment of the right technology and data management techniques, information overload is
something that can be tackled.
At the platform itself, important information can be cherry-picked from data streams
using pre-processing and data reduction techniques. These algorithmic techniques pick
actionable information out of the data scrum so that only relevant material makes it back to
remote control centers, while also saving costs on data storage facilities.
"The sheer breadth of data can do as much to paralyze decision-making as liberate it."
"Developing successful data reduction methods that significantly reduce data volumes by
pinpointing pertinent data, but that do not throw away information that might be of value, is
complex and costly."
Offshore rigs, support vessels and their ancillary operations produce reams of data
every day. The digital oil field at its core is about mining big data and sharing it to drive
innovation and improve performance across all operations. But extracting the data itself is not
enough; the real revolution is in how this information is presented. GE Power Conversion has
developed a software solution for "whole fleet" monitoring which, whilst gathering large
numbers of discrete data points (what is now coined "big data"), resolves many interpretive
algorithms locally, where they are needed, and sends only the necessary delta outcomes to
remote monitoring locations globally. This reduces the amount of data transmitted over
limited bandwidth and unreliable connections by an order of magnitude. Further savings are
realized by filtering data locally and sending in real time only data which is time-sensitive,
leaving the bulk of monitored data to "lazy load" over time for later evaluation and
archiving. Since modifications and changes are inevitable, and usually daily, propagating
those changes into a monitoring database becomes labor intensive unless it is managed in a
distributed manner. The GE IMS will synchronize all changed system data bidirectionally
between a local historian on a deployed asset (rig or ship) and a shoreside central server
historian or a distributed cloud server archive. In this way, changes to drawings, BOMs,
workflows, performance, inventories, equipment status and general data can be made to the
system from the less costly location, usually the shoreside server, and then "lazy loaded"
back to the deployed asset.
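The bidirectional synchronization described above can be sketched with a simple last-writer-wins merge between two historians. The record structure (an id key plus a modification counter and revision tag) is a hypothetical illustration, not the IMS schema:

```python
# Sketch of bidirectional "lazy load" synchronization between a local
# historian on a deployed asset and a shoreside historian. Records are
# keyed by id and carry a last-modified counter; the newer copy wins.

def synchronize(local: dict, remote: dict) -> None:
    """Merge two historians in place so both hold the newest version."""
    for key in set(local) | set(remote):
        a, b = local.get(key), remote.get(key)
        if a is None or (b is not None and b["modified"] > a["modified"]):
            local[key] = b        # remote copy is newer (or record is new)
        elif b is None or a["modified"] > b["modified"]:
            remote[key] = a       # local copy is newer (or record is new)

local = {"bom-104": {"modified": 5, "rev": "C"}}          # changed on board
remote = {"bom-104": {"modified": 2, "rev": "B"},         # stale copy ashore
          "drawing-7": {"modified": 9, "rev": "A"}}       # added ashore

synchronize(local, remote)
print(local["drawing-7"]["rev"], remote["bom-104"]["rev"])
```

After the merge, the asset has received the new drawing and the shore has received the on-board BOM change, each side paying only for the records that actually changed.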
Oil companies constantly strive to improve safety for oilfield workers, boost well
performance and respond more immediately to the market, but it all ultimately depends on
data and how it is used: clearly communicating it to the right people and giving them the
tools to quickly act on that information.
Insight uses a series of modules within the system architecture which address all of the
key stakeholders' needs: a product made expressly for the client that categorically considers
all the day-to-day needs and systems integration requirements right across the field. In the
digital oilfield, there are three proprietary phases to consider refining and simplifying:
1) It should provide an immediate and completely refined visual output from hundreds,
even thousands, of discrete and analogue sensing points, rendered as simply as possible, for
example as RED or GREEN. In the eyes of the crew and support staff, what else matters
first and foremost? An indication provided to the user as a generalized good-or-bad
rendition can be interpreted immediately, and for the sake of simplicity and fast reaction to
unfolding situations, it removes everything green from needing interpretation: green is
good, so why would we look at it when resolving an alarm or out-of-spec operation? That
way, half of everything requiring attention is divided away to expose only the "bad", which
can then be "drilled down" into the underlying alarms causing the red indication.
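The rollup and drill-down in point 1 can be sketched directly: many underlying points collapse to one good/bad state, and only the out-of-spec points surface as the drill-down. The point names and limits below are illustrative assumptions:

```python
# Sketch of the red/green rollup: underlying sensing points collapse to a
# single good/bad state, and drill-down exposes only the out-of-spec points.

def rollup(points: dict) -> tuple:
    """Return ('GREEN', []) if every point is in spec, else ('RED', causes)."""
    causes = [name for name, (value, low, high) in points.items()
              if not (low <= value <= high)]
    return ("GREEN", []) if not causes else ("RED", causes)

thruster_points = {
    "winding_temp_c": (92.0, 0.0, 120.0),   # (value, low limit, high limit)
    "cooling_flow_lpm": (4.2, 10.0, 60.0),  # below minimum flow
    "supply_voltage_v": (690.0, 650.0, 720.0),
}
state, causes = rollup(thruster_points)
print(state, causes)  # drill-down shows only the out-of-spec point
```

The crew sees a single RED indication; only on drilling down do they see that low cooling flow is the cause, with no need to interpret the points that are green.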
2) With typically multiple alarm states and activations, this myriad of information
also needs to be divided into critical and non-critical buckets, so that a simple "weighting"
of these alarms can be provided, giving the individually tailored user views a "gut feeling"
of either degrading or improving condition, and thus supporting an action plan with a more
clearly defined set of underlying circumstances.
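The weighting in point 2 can be sketched as a severity score compared across time: alarms fall into critical and non-critical buckets, and the direction of the weighted score gives the degrading/improving "gut feeling". The weights and alarm names are illustrative assumptions:

```python
# Sketch of alarm weighting: active alarms are split into critical and
# non-critical buckets, and comparing the weighted score over time
# indicates a degrading or improving condition.

WEIGHTS = {"critical": 10, "non_critical": 1}   # illustrative weights

def score(alarms: list) -> int:
    """Weighted severity score for a set of active (name, severity) alarms."""
    return sum(WEIGHTS[severity] for _, severity in alarms)

def trend(previous_score: int, current_score: int) -> str:
    if current_score > previous_score:
        return "degrading"
    if current_score < previous_score:
        return "improving"
    return "steady"

earlier = [("lube oil filter dP high", "non_critical")]
now = [("lube oil filter dP high", "non_critical"),
       ("bearing temperature high", "critical")]

print(score(earlier), score(now), trend(score(earlier), score(now)))
```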
3) Connecting machines and people into networks is a big problem today, given the
attempted movement of big data across not only distance but also dissimilar and varied
databases. Failing to order data before it is moved is a massive design error; most
companies collecting terabytes of data find it useless at the ultimate destination, as
repositories fill with unsorted bits and bytes from many different sources, with no
embedded way of collating, organizing and ultimately viewing the data, let alone
understanding its meaning.
Raw data collection is rather like drawing a map: if you draw a bunch of lines on
paper indicating roads, you have no order, no scale, no page numbers, not even a direction
or point of reference, and therefore no ability to interpret the map. All of these things are
necessary to draw a useful map, and we all appreciate the value of a well-drawn map when
we are looking for some unknown destination! Data is the same; before it is useful, it
requires order, scale, direction, reference, destination and the ability to search it for a
desired outcome.
As an example of just how big this problem of data mismanagement really is, numerous
companies are popping up everywhere today offering data organization and ordering
services for big businesses that have collected lots of data with no rhyme or reason in its
source collection, filtering or ordering in their storage facilities. The data as it sits is
currently useless, and decidedly expensive to store and maintain, yet no one wants to throw
it away for fear it may contain value to someone at some point. There are problems with
this approach: the collected data is often critical and proprietary, and therefore vulnerable
to exposure, inadvertent or otherwise, and the benefits need to be weighed against the
potential years of financial and security commitments which must be engaged. Perhaps the
first question should be why the data was being collected in the first place, and only then
should the benefits of keeping and sorting it be evaluated. It may well be that in this case a
fresh start in data collection and management is the better solution, as any old data
currently collected and stored inadequately remains of little or no use anyway!
Security remains another concern, as hackers and would-be industry spies today will
pry into networks and communications systems for the sheer fun of accomplishing a breach
of a system purported to be secure; such is the fame of the underworld of unethical activity.
Cyber security then becomes an issue depending on the methods used for transmitting data,
whether encryption, firewalls, multi-sync/multi-channel comms networks, or just plain
physical memory devices hand-carried to their destination. All of this demands
consideration based on the need for, and intended purpose of, the data being collected in
the first place. Remember:
• Resolve data locally to reduce quantities
• Provide only what outcome is necessary
• Send it only where it is specifically needed
• In the time it can actually be used
• Exactly for its intended purpose
All data collection activities should be reviewed both internally, by any company
departments intending to use the data, and by an appointed expert service, such as the GE
SSI data management system team, in order to provide a value/cost proposition for data
collection.
Travelling through bandwidth-limited satellite systems, security-sensitive internet
clouds and intranet systems limited in their accessibility, or manually packed and shipped
by email or on a storage device, data has become the greatest baggage to transport, in spite
of its physically small size. As we move this data around and deliver outcomes, we must
remember that it has to be done in a timely and understandable fashion if the data is
actually to increase in value as it moves around an infrastructure. Until now we seem to
have been moving data around quite inefficiently for its purpose, in fact to the point where
it can actually lose value in transit.
We need to rethink the need for and impetus behind data collection now that the buzz
around big data is settling down, evoking startling realizations that even a decade ago were
not foreseen. "Big data" is perhaps a misnomer in the digital oilfield today, as the term was
originally intended to reference the massive numerical demographic and genealogical
statistics collected for forensic evaluation and predictive association in large populations.
In the world of the digital oilfield, the term often refers simply to the use of predictive
analytics or certain other advanced methods to extract value from data, and seldom to a
particular size of data set. So indeed we need to differentiate between the volume of data
and the complexity of data, its manipulation and its transit.
Perhaps it is time to rethink the entire relationship between data and man, and to realize
that we must provide a better way for our machines to tell us how they "feel", and then
order that information at the source to provide a simple means of evaluation, comparison
and conclusion, quickly and effectively. We could start by improving how we "hear"
machines and then understanding machine symptoms better, leading to diagnosis: not only
so that we can predict degradation and failure points, but so that we can better learn how to
provide the support and maintenance tools necessary to avoid costly downtime being
scheduled out of blind fear of machine status, caused by a lack of information, or of the
right information reaching the right person at the right time.
We have a responsibility, and a need, to help the emerging, less experienced
workforce of crews manning today's digital oilfield, now and in the future; our
contributions in the present are only fulfilled if they carry some value into the future. It
would be tragic to waste a generation of personal experience and old-school knowledge,
which will continue to diminish as we retire from these vocations, unless it is preserved and
replicated in our intelligent machines. We have the capability today to cost-effectively
integrate modest intelligence into our machines that will tell us not only how they feel, but
also what is hurt or broken, and, with remote live technical support, finally how to fix them,
no matter where they are in the world or where we might be at any time, without the need
to search disparate systems and old closets for information long forgotten.
Bob Molnar
Sales Manager, Offshore Software Solutions
Concept Designer & Senior Development Lead
Insight Management System
DIRAD Naval damage control system