This document discusses the operational and economic benefits of remote production monitoring compared to traditional hand readings or field data capture. It outlines a spectrum of production data gathering from hand readings to SCADA systems. Remote monitoring provides benefits like increased data frequency, integration with other systems, alarms, and productivity gains. The economics of remote monitoring are evaluated based on potential increases in production and reductions in downtime and spill costs. Case studies show examples of potential returns on investment from remote monitoring systems.
The IP4000 is an affordable, cost-effective way to solve problems from the operations center, or even from a smartphone. The primary feature of the device is remote power control of two AC devices using either a web browser or MSN Messenger. This device is a perfect tool for IT personnel supporting routers and network equipment at remote sites. The IP4000 provides the ability to control power remotely over the internet either manually, or automatically via an internet monitoring feature that pings up to six IP addresses.
Google Vs. Exxon: Who Will Win? - Energy Digital Summit 2014 - Jennifer Nguyen
This presentation was written by Kirk Coburn, Founder & Managing Director for SURGE Ventures. Kirk Coburn was invited to present as a keynote speaker for the inaugural Energy Digital Summit in June 2014. He presented on the controversial subject of social media, innovation in energy, and the disruptive behavior of the shale revolution.
This is the first edition of the Deloitte Outlook for oilfield services. The forward-looking report is based on in-depth interviews with 12 executives of oilfield services companies. Its purpose is to obtain companies’ views of their current business environment and where they think the market is heading, both in the short and long term.
Don Pearson and Travis Cox from Inductive Automation, Arlen Nipper, the president/CTO of Cirrus Link Solutions and co-inventor of MQTT, and Gregory Tink, managing owner of The Streamline Group, discuss improvements in data access that help solve business challenges, and explore the digital oilfield.
“The Digital Oilfield”: Using IoT to reduce costs in an era of decreasing oi... - Karthikeyan Rajamanickam
Executive Summary:
• We decided to create this point of view after seeing many abstract presentations and esoteric concepts on Digital Oilfield, IoT, Big Data and Analytics.
• This is our attempt to bring a practical implementation view to IoT by combining Digital Oilfield and IoT.
• Here, we also envisage sharing our IoT experience and lessons learnt in implementing Digital Oilfield solutions around IoT.
• The following comprise our fundamental business case for Finance:
• PRODUCTION FORECAST
• FAULT COMPARTMENTS
• WELL LOCATION OPTIMIZATION
"Intelligent systems" are the new generation of embedded systems which, building on the robustness and determinism of their predecessors, connect to the cloud to enrich the user experience, whether for businesses (collecting data or monitoring systems, for example), individuals (at home, in a medical context, or in the car), or other machines (in the case of large-scale automated systems). The cloud, and Windows Azure in particular, provides the communication channels and the means to store and process massive amounts of data, offloading local installations and thus making these systems simpler to deploy. This session, rich in concrete examples, will present Microsoft's strategy for the future of embedded systems and their connection to the cloud, along with the technologies and partnerships put in place to accelerate these intelligent-system deployments, with an example that will speak to everyone: the future of the car, with Windows Embedded Automotive!
This presentation was given as part of the April 21, 2010 Northwest Clean Energy Resource Team meeting on Smart Grid Technology in Northwest Minnesota.
The DAN or Data Access Network is a newly emerging “best practice” for passive monitoring of mission critical networks that solves real access problems, improves network performance and uptime, and saves capital, operation and maintenance costs. A DAN is a combination of out-of-band data access switching plus passive monitoring instrumentation to enable required security, compliance, forensics review, application performance, VoIP QoS, uptime and other network management tasks. Data is acquired from multiple SPAN ports or taps and multicast to multiple tools, aggregated to a few consolidated tools, and filtered or divided across many instances of the same tools. The DAN may be thought of as a “data socket” providing immediate access for ad hoc tool deployment without impact to the production network and outside of the scope of configuration management policies. Data Access Networking is a concept whose time has come due to a recent confluence of factors including enhanced fiduciary responsibilities, heightened threats to network security, real convergence of voice, video and data networks, plus greater economic dependency on network uptime and performance. This Podcast recommends the DAN as a solution to those who suffer real problems like too many tools and not enough span ports, too many links to monitor and not enough money to deploy distributed tools, or too much traffic that threatens to overflow even the highest capacity tool. For more details, visit http://www.gigamon.com.
NetFlow Auditor Anomaly Detection Plus Forensics, February 2010 - 08NetFlowAuditor
NetFlow Auditor software uses NetFlow and sFlow to detect anomalies and analyze full network traffic forensics. The objective of our software is to provide easy-to-use, full-featured anomaly detection and analysis of flows to quickly identify who is doing what, where, when, with whom, and for how long on a network, and to provide alerts, scheduled reports, SNMP traps, and/or filter lists. It allows organizations to quickly identify and alert on network anomalies to help resolve performance problems and manage network security and compliance across business services and applications, dramatically reducing the risk of potential downtime.
Data Access Network for Monitoring and Troubleshooting - Grant Swanson
The Data Access Network is a critical network infrastructure element for network monitoring and troubleshooting. Gigamon, the leading provider of intelligent data access solutions, ensures network integrity including performance, security and compliance by enabling your monitoring tools to operate at maximum efficiency.
The realm of mobile computing is composed of various types of mobile devices and their underlying software. Enabling or writing new software for mobile phones or portable devices has become a new vertical in software development and testing. Smartphones are getting more user-friendly, and day to day new apps are being released to satisfy daily user needs. More and more user-friendly apps enable greater user interaction using stylus, touch-based gestures, multi-touch gestures, motion gestures, etc. These introduce a lot of challenges in development and testing. This document details the approach for mobile testing and the key focus areas for testing.
Similar to Benefits Of Remote Monitoring Mid Con Digital Oilfield Conf August 15 2012 (20)
Benefits Of Remote Monitoring Mid Con Digital Oilfield Conf August 15 2012
1. The Operational and Economic Benefits of Remote Production Monitoring
Mid Continent Digital Oilfield Conference
August 15, 2012
Tulsa, Oklahoma
Jim Taylor
Wellkeeper, Inc.
2. Production Data Gathering Spectrum: Hand Readings

Pumper Hand Entry
  Data Frequency: 1 per day
  Incremental Benefit: Status quo
  Communication: Call in/fax
  Cost: Base personnel cost
3. Production Data Gathering Spectrum: Field Data Capture

Pumper Hand Entry
  Data Frequency: 1 per day
  Incremental Benefit: Status quo
  Communication: Call in/fax
  Cost: Base personnel cost

Field Data Capture
  Data Frequency: 1 per day
  Incremental Benefit: Reduce transcription errors; integration with other systems
  Communication: Upload
  Cost: ~$10-15 per month
4. Production Data Gathering Spectrum: Remote Monitoring

Pumper Hand Entry
  Data Frequency: 1 per day
  Incremental Benefit: Status quo
  Communication: Call in/fax
  Cost: Base personnel cost

Field Data Capture
  Data Frequency: 1 per day
  Incremental Benefit: Reduce transcription errors; integration with other systems
  Communication: Upload
  Cost: ~$10-15 per month

Remote Monitoring
  Data Frequency: 24 to ~300 per day
  Incremental Benefit: Reduce transcription errors; integration with other systems; alarms/alerts; internet access; history/trends; some remote control; field productivity
  Communication: Digital cell/satellite/radio
  Cost: Sensor install + approx. $100/month (comm & software)
5. Production Data Gathering Spectrum: SCADA

Pumper Hand Entry
  Data Frequency: 1 per day
  Incremental Benefit: Status quo
  Communication: Call in/fax
  Cost: Base personnel cost

Field Data Capture
  Data Frequency: 1 per day
  Incremental Benefit: Reduce transcription errors; integration with other systems
  Communication: Upload
  Cost: ~$10-15 per month

Remote Monitoring
  Data Frequency: 12 to ~300 per day
  Incremental Benefit: Reduce transcription errors; integration with other systems; alarms/alerts; internet access; history/trends; some remote control; field productivity
  Communication: Digital cell/satellite/radio
  Cost: Sensor install + approx. $100/month (comm & software)

SCADA
  Data Frequency: Near continuous
  Incremental Benefit: Reduce transcription errors; integration with other systems; alarms/alerts; internet access; history/trends; full remote control; field staff reduction
  Communication: Dedicated radio
  Cost: Long range radio systems, high end software
7. Benefits of Remote Monitoring

• Reduce downtime duration; increase production
• Reduce spill frequency and liability through monitoring and alarms
• Reduce safety liability (H2S exposure, tank climbing)
• Optimize field operations
8. What can you see?

Process Variables
• Tank levels
• Pressures (tubing, casing, etc.)
• Flow rates (gas, oil, water)
• Temperatures (heater treater, amine, etc.)
• H2S concentration

Production Equipment
• Pump off controllers
• ESPs
• EFMs
• Plunger lift
• Pumps
• LACT units
• Compressors

“If it can be measured, it can be monitored”
9. Remote Monitoring Components

• Sensors at the tank battery and wellheads
• HUB™ RTU
• Long range communication: cell, sat, radio
• Database and software
• Web interface and file transfer
11. RTU, Long Range Communication, Database

HUB™ RTU
• Gather and package data for transmission to database

Communication
• Digital cellular
• Satellite/hybrids
• Radio
• Wireless broadband

Database
• Co-lo facility
• Cloud
13. Pumpers/Field Personnel

“Forewarned is Forearmed”
• Plan day and route based on data
• Pump by exception
• Alarms/callouts
• Contract pumpers
• Reduce mileage
• Reduce safety exposure
14. Engineers

• Easy access to trends
• Identify growing problems (e.g. paraffin)
• Monitor production equipment (e.g. rod pumps, ESPs, compressors)
16. Economics of Remote Monitoring

The ideal remote monitoring site combines:
• Isolated or remote location
• Important: high producer
• High fluid-to-tankage ratio
• History of problems
• High exposure: H2S or environmental
17. Quantify the Benefits

• Benefits are probabilistic (if you knew which one site would go down, you’d just monitor that one!)
• Apply across enough sites to get the broader benefits: company-wide deployment
• Use your actual downtime and spill history to estimate improvements
22. Thank You!
Jim Taylor
Wellkeeper, Inc.
jim.taylor@wellkeeper.com
www.wellkeeper.com
888-935-5533
Editor's Notes
Good morning. I’d like to start with an overview of the remote production data gathering landscape. First, and still very popular with the small to mid-sized independent, is hand gathering by pumpers on a daily basis. It is the tried and true method, with data phoned or faxed in daily after the pumper makes his rounds. The home office then enters the data into their own systems for reporting.
Field data capture, through any number of handheld or portable devices, allows for daily data to be entered on the device and then transmitted. These systems reduce transcription errors and can integrate into the company’s systems directly.
Remote Monitoring replaces the once-a-day hand entry with multiple readings taken by sensors in the field. That data can be transmitted over 300 times per day, with the frequency determined by the communication method that is used. The readings can be used to generate alarms and callouts throughout the day, they can be stored for easy access to history and trends, and they can generally be accessed from the internet. The extra information helps increase field staff effectiveness and productivity. Remote monitoring systems can provide some degree of remote control, but are limited by the digital cell or satellite communication that is used. Costs increase with the need to install sensors and communication devices, and to pay for the communication system that is used.
Full SCADA systems require secure and near continuous communication to allow for remote control and confirmation of control changes. Most of the majors and many large independents have installed full SCADA on their production operations, allowing for restructuring of their field staff and maximizing field coverage per person.
A couple of cute ways to differentiate Remote Monitoring from SCADA: Remote Monitoring is SCADA without the “C”. Also, I’m indebted to Fox News for their very applicable tag line: with Remote Monitoring, “We Report, You Decide”.
The frequent and easily accessible readings make several benefits possible. Downtime duration can be reduced, since you know when something goes down rather than finding out the next day when the pumper returns. Spill frequency can be reduced, since levels are read and callouts can be triggered before the tank overflows. You can minimize the time your field people spend on top of tanks, reducing their safety exposure. And with “morning report” data available, field operations can plan their day based on priorities rather than a set route.
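The spill-prevention benefit described above comes down to simple threshold logic on tank level readings. A minimal sketch of that logic follows; the function name and the setpoints are entirely hypothetical, and no vendor's actual alarm API is implied:

```python
# Hypothetical alarm-classification sketch: compare a tank level reading
# against high and high-high setpoints, so a callout can be triggered
# before the tank overflows. Thresholds here are illustrative only.

def check_tank_alarm(level_ft, high_alarm_ft=18.0, high_high_ft=19.5):
    """Classify a tank level reading against alarm setpoints."""
    if level_ft >= high_high_ft:
        return "HIGH-HIGH: trigger callout, risk of spill"
    if level_ft >= high_alarm_ft:
        return "HIGH: alert pumper to schedule a haul"
    return "OK"

# A day's worth of readings from a filling tank
readings = [14.2, 15.1, 16.0, 17.3, 18.4, 19.6]
alerts = [check_tank_alarm(r) for r in readings]
```

In practice the setpoints would be configured per tank, since tank heights and haul logistics vary from battery to battery.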
The process variables and production equipment that are visible in a remote monitoring system are really just limited by what you want to see and what you think is important. As long as there is a sensor made to measure what you want, or a Modbus register available to read it, you can have 24/7 access to what your remote operations are doing.
The components of a typical remote monitoring system are pretty consistent: field sensors and local communication to get the data to an RTU; an RTU to package the readings and transmit data packets to a database for storage and retrieval; and then a way to access the data and transfer it to other downstream systems as needed.
There is no one best choice of sensors, with some companies using proprietary designs and others using industry-manufactured sensors for their readings. Modern production equipment also uses Modbus protocols to allow for direct access to digital data. The choice between short range radio and hardwired connections from the sensors to the RTU is also an issue for optimum economics, depending upon the distances involved.
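Where equipment exposes its data through Modbus registers, the monitoring system typically converts a raw 16-bit register count into engineering units with a linear scaling. A minimal sketch, assuming a hypothetical 0-65535 register range mapped to a 0-500 psi pressure input; every real device documents its own register map and scaling:

```python
# Illustrative linear scaling from a raw Modbus holding-register count to
# engineering units. The register range and pressure span are hypothetical
# placeholders, not taken from any particular device's register map.

def scale_register(raw, raw_min=0, raw_max=65535, eng_min=0.0, eng_max=500.0):
    """Linearly map a raw register count to an engineering-unit reading."""
    span = raw_max - raw_min
    return eng_min + (raw - raw_min) * (eng_max - eng_min) / span

# A mid-scale raw count on an assumed 0-500 psi casing-pressure input
psi = scale_register(32768)
```

The same mapping applies whether the value arrives over a hardwired serial run or a short range radio link; only the transport changes.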
The RTU, long range communication, and database subsystems complete the delivery and storage of data for later access. Long range communication choices are a balance of cost and communication requirements, and the trend toward higher bandwidth at lower cost is enabling the gathering of ever more data at an acceptable price. Finally, the location of the database can range from a local single server to co-location facilities, and many remote monitoring companies are migrating to the cloud.
The effective and intuitive presentation of data to the user is equally important for long term customer satisfaction. Some systems use a client/server structure while others use the local browser. There are issues of user rights administration and the ability to see information in a format that is most useful and exportable to other systems.
Remote monitoring appeals to at least three groups of employees: the field, the engineers, and the bosses. Giving your field organization information to start their day means that they know what they are getting into before they ever go out. A pumper may not even have to go to a particular site every day, and the system can reduce mileage and safety exposure without losing access to critical information about the operations.
Engineers really value the easy access to historical trends and the ability to easily monitor production equipment. They can foresee impending problems and work to optimize and maximize production across an entire field with ease.
Management and marketing functions can see the “big picture” and have easy access to production and allocation data. Direct entry of data into downstream systems can reduce clerical errors and minimize administrative costs.
So what would be the ideal remote monitoring site? Out in the boonies, an important high producer, in an environmentally sensitive location, with a history of problems and a high ratio of fluid production to tankage, just waiting to spill. So why not monitor just those sites?
If you think about it for a second, many of the benefits are probabilistic in nature. You don’t know when or if a single site will have a downtime event. So, like most probabilistic things, the best strategy is to build a portfolio of monitored sites. You want to have enough sites monitored so the economic benefit of the portfolio is likely to occur, even if one site never has a problem. The actual benefits are directly related to the particulars of your operation, and you can estimate the results based on your own company’s history.
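The portfolio argument can be made concrete with a one-line expected-value calculation. All figures below (per-site event probability, avoided loss per event) are hypothetical placeholders; the point is only that the expected benefit becomes predictable as sites are added, while any single site's outcome stays uncertain:

```python
# Sketch of the portfolio logic: downtime events are probabilistic per site,
# so the aggregate expected benefit of monitoring many sites is what an
# operator should plan around. Inputs are hypothetical placeholders.

def expected_annual_benefit(n_sites, p_event_per_site, avoided_loss_per_event):
    """Expected avoided loss per year across a monitored portfolio."""
    return n_sites * p_event_per_site * avoided_loss_per_event

# 40 monitored sites, each with a 25% chance per year of a downtime event
# whose impact monitoring would cut by an assumed $4,000
portfolio = expected_annual_benefit(40, 0.25, 4000)
single = expected_annual_benefit(1, 0.25, 4000)  # same expectation, far higher variance
```

An operator would substitute probabilities and loss figures derived from its own downtime and spill history, as the slide suggests.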
This chart represents the spread of production increase experienced by one of our customers, illustrating the wide range of outcomes. Anywhere from 0% to well over 5 to 10% downtime reduction is possible with a remote monitoring system.
Based on an individual operator’s history and judgment, the benefits of remote monitoring can be quantified, and the investment economics can be calculated. In this particular instance, with a $100/month fee, $70/bbl oil, $2.50/MMBTU gas, and a $100/bbl spill clean-up cost, this 50 BPD tank battery had a very attractive IRR and a payout in just over one year. Economics like these can be run for each site in the portfolio, with an output ready to attach to an AFE.
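The payback arithmetic behind such a site-level case can be sketched as follows. The $100/month fee, $70/bbl oil price, and 50 BPD rate come from the example above; the install cost and the fraction of deferred production recovered are hypothetical assumptions chosen only to illustrate the calculation, not the actual inputs behind the slide's IRR:

```python
# Back-of-the-envelope payback sketch for a monitored tank battery.
# Fee, oil price, and rate match the example in the text; the install cost
# and recovered-production fraction are hypothetical placeholders.

def simple_payback_months(install_cost, monthly_fee, bpd, oil_price,
                          recovered_fraction, days_per_month=30):
    """Months to recover the install cost from extra production, net of fees."""
    monthly_revenue_gain = bpd * recovered_fraction * oil_price * days_per_month
    net_monthly = monthly_revenue_gain - monthly_fee
    return install_cost / net_monthly

# 50 BPD battery, assumed $12,000 install, recovering an assumed 1% of
# deferred production at $70/bbl against the $100/month fee
months = simple_payback_months(12000, 100, 50, 70.0, 0.01)
```

With these placeholder inputs the payout lands in the same "just over a year" range the slide reports; a full AFE-ready analysis would also credit avoided spill clean-up costs and gas sales.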
Since the capital investment for remote monitoring is pretty low, it can be economically attractive even to as low as 15 BOPD through a tank battery.The advantages that come with the ability to see your remote operations in near real time should really be explored to improve your production operations.
I’ll close with one final analogy. When my wife and I raised our boys, we kept an eye on them when we could and everything turned out just fine. But in the years since then, the cost of baby monitors has come way down. Nowadays, there is hardly a new parent couple that does not buy and use a monitor so they know right away if something is wrong. If something is important to you, like your baby or the far-flung revenue stream that is your production operations, you really should keep a close eye on it! Thank you.