This document discusses lessons learned in how not to implement drilling automation. It begins by defining drilling automation and providing examples of automated processes like maintaining downhole weight on bit and optimizing mechanical specific energy. It then outlines common mistakes made in automation projects like not including drillers, having insufficient data quality, and giving drillers too many or too few controls. The key lessons are that automation must improve performance, drillers must be central to the design and implementation, reliable data and controls are essential, and human factors like training and complacency must be addressed. Critical success factors for automation include deciding what to automate and implementing with consideration of both technical and people issues.
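The summary above mentions optimizing mechanical specific energy (MSE). As a hedged illustration (not the deck's own method), Teale's classic MSE formula can be computed from surface measurements; all operating values below are hypothetical:

```python
import math

def mechanical_specific_energy(wob_n, torque_nm, rpm, rop_m_per_hr, bit_diameter_m):
    """Teale's mechanical specific energy (Pa): energy spent per unit
    volume of rock drilled. Lower MSE at a given ROP indicates more
    efficient drilling."""
    area = math.pi * (bit_diameter_m / 2) ** 2   # bit face area, m^2
    rop = rop_m_per_hr / 3600.0                  # rate of penetration, m/s
    rev_per_s = rpm / 60.0
    # thrust (weight-on-bit) term + rotary (torque) term
    return wob_n / area + (2 * math.pi * rev_per_s * torque_nm) / (area * rop)

# Hypothetical operating point: 50 kN WOB, 8 kN*m torque, 120 RPM,
# 30 m/h ROP, 8.5-inch (0.216 m) bit.
mse = mechanical_specific_energy(50e3, 8e3, 120, 30.0, 0.216)
print(f"MSE = {mse / 1e6:.0f} MPa")
```

An MSE-optimizing controller would adjust WOB and RPM to keep this value near the rock's confined compressive strength; the formula itself is standard, but the numbers here are illustrative only.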
How I failed to build a runbook automation system (TimothyBonci)
How I tried and failed to build a runbook automation system and what I learned.
Our intentions can be good and the technical ability and time may be there and we’re going to build the thing to make work easier and more productive, allowing everyone to apply their labor to only the most valuable tasks – yet sometimes it’s still not enough. This is a post-mortem.
Lean Maintenance is gaining traction as a sound strategy to keep equipment running and productivity humming. The hardest part is getting started. On Thursday, March 20 at 1 p.m. CDT, Plant Engineering will present a Webcast that looks at the steps needed to implement a sound Lean Maintenance strategy on your plant floor and to begin to reap the benefits.
Learning objectives:
-The value of Lean Maintenance as a plant-floor strategy and the history of lean
-The steps and tools needed to get started down the road to Lean
-Getting plant-floor buy-in from line workers
-Incorporating technology into Lean maintenance
The document provides an agenda and overview for a two-day workshop on reliability and system risk analysis techniques. Day one focuses on classical qualitative techniques including Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis, Event Tree Analysis, and HAZOP. Day two will cover systems-theoretic techniques and human factors. The workshop introduces concepts of reliability, risk, failure, and safety and discusses how accidents can result from both component failures and unsafe interactions. Traditional techniques like FMEA have limitations and may not capture issues related to design, requirements, or organizational factors.
Modern business drivers are continually pushing to reduce the time it takes to get a product or service to market, reduce the risk and cost associated with that, and to improve quality.
In laboratories, delivering an analytical result that's 'right first time' (RFT) is the answer: no reprocessing of data, no re-running of injections, and no out-of-specification (OOS) results or reporting/calculation errors.
Using chromatography data system tools for RFT analysis gives high-quality results and confidence in them, lower cost of analysis, improved lab efficiency, and faster release to market and return on investment (ROI).
Goal Driven Performance Optimization, Peter Zaitsev (Fuenteovejuna)
The document discusses goal driven performance optimization. It emphasizes setting clear performance goals based on metrics like response time and throughput. Goals should be set for different types of requests and measured regularly. Instrumentation of the system is important to identify bottlenecks and queries that are causing slowdowns. The key is to prioritize optimization efforts on the most important user interactions that are not meeting goals. Taking a goal-driven approach focuses work on the most significant performance issues.
This document discusses setting performance goals to optimize existing applications. It recommends defining goals like 95th percentile response times for different types of requests and measuring these goals over short intervals like every 5 minutes. The goals should focus on important user interactions and prioritize the most critical performance problems first. Instrumenting production systems to collect response time data can help understand where to optimize and ensure the goals are being met for all users.
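The goal-checking approach described above (95th-percentile response-time targets measured over 5-minute windows) can be sketched as follows; the request types and thresholds are made-up examples, not values from the original deck:

```python
from collections import defaultdict

WINDOW_S = 300  # 5-minute measurement interval, as the summary suggests

def p95(samples):
    # Nearest-rank 95th percentile of a list of response times.
    s = sorted(samples)
    rank = max(1, int(round(0.95 * len(s))))
    return s[rank - 1]

def check_goals(events, goals):
    """events: (timestamp_s, request_type, response_time_s) tuples.
    goals: request_type -> p95 target in seconds (hypothetical values).
    Returns violations as (window_start_s, request_type, observed_p95)."""
    buckets = defaultdict(list)
    for ts, rtype, rt in events:
        buckets[(ts // WINDOW_S, rtype)].append(rt)
    violations = []
    for (win, rtype), samples in sorted(buckets.items()):
        observed = p95(samples)
        if observed > goals.get(rtype, float("inf")):
            violations.append((win * WINDOW_S, rtype, observed))
    return violations

# Hypothetical data: 19 fast searches plus two slow outliers in one window.
events = [(i, "search", 0.2) for i in range(19)] + \
         [(19, "search", 2.0), (20, "search", 2.0)]
print(check_goals(events, {"search": 0.5}))
```

In production the events would come from instrumentation rather than a literal list, and each violation would feed the prioritization process the summary describes.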
This document discusses anomaly detection using the Cortical Learning Algorithm (CLA). It defines anomalies and describes how NuPIC/CLA computes anomaly scores for streaming data to detect spatial, temporal, and other types of anomalies. Sample code is provided to demonstrate anomaly detection on CPU usage, heater temperature, and randomness change examples. The document also discusses how anomaly likelihood is computed in Grok and presents several use cases. It concludes by discussing future work including a benchmark for streaming anomaly detection.
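As a rough illustration of the anomaly-likelihood idea described above (explicitly not NuPIC's or Grok's actual implementation), one can model the recent distribution of raw anomaly scores as a Gaussian and report how improbable the latest short-term average is; the window sizes here are arbitrary:

```python
import math
from collections import deque

class AnomalyLikelihood:
    """Generic sketch of the anomaly-likelihood idea: fit a Gaussian to
    recent raw anomaly scores and return the probability that the latest
    short-term average is unusually high. Window sizes are arbitrary."""

    def __init__(self, history=100, short_term=10):
        self.history = deque(maxlen=history)
        self.short = deque(maxlen=short_term)

    def update(self, raw_score):
        self.history.append(raw_score)
        self.short.append(raw_score)
        if len(self.history) < 10:
            return 0.5  # not enough data to say anything yet
        mean = sum(self.history) / len(self.history)
        var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
        std = math.sqrt(var) or 1e-6  # guard against zero variance
        recent = sum(self.short) / len(self.short)
        z = (recent - mean) / std
        # Gaussian upper-tail probability via the complementary error function.
        tail = 0.5 * math.erfc(z / math.sqrt(2))
        return 1.0 - tail  # high when recent scores are unusually large

al = AnomalyLikelihood()
for _ in range(50):
    al.update(0.05)        # steady stream of low raw anomaly scores
spike = al.update(0.9)     # a sudden spike raises the likelihood
print(spike)
```

The real algorithm adds details (separate estimation and averaging windows, log-scaled output), but the shape of the computation is the same: likelihood reflects how surprising recent scores are relative to their own history.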
Performance modeling provides important insights for capacity planning and system sizing without costly full-scale testing. While sophisticated mathematical modeling was common in the past, today's complex systems are difficult to model formally and existing tools are outdated. However, minimal modeling with common-sense approximations using metrics like resource usage per transaction and hardware capacity can still be useful. Keeping even informal models in mind helps performance engineers understand systems, but complex systems benefit from documenting models. Reviving the art of performance modeling can add value to modern continuous performance testing approaches.
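The "minimal modeling with common-sense approximations" the summary describes can be as small as one function relating per-transaction resource cost to hardware capacity; the core count, CPU cost, and utilization target below are all hypothetical:

```python
def max_throughput(cpu_cores, cpu_per_txn_ms, utilization_target=0.75):
    """Back-of-envelope capacity model: if each transaction consumes
    cpu_per_txn_ms of CPU time, a host with cpu_cores cores driven to
    utilization_target can sustain roughly this many transactions per
    second. All inputs here are hypothetical."""
    capacity_ms_per_s = cpu_cores * 1000 * utilization_target
    return capacity_ms_per_s / cpu_per_txn_ms

# Hypothetical: 8 cores, 5 ms CPU per transaction, 75% target utilization.
print(max_throughput(8, 5))  # -> 1200.0 transactions/second
```

Even a model this crude gives a sanity check for sizing decisions and a baseline against which measured throughput can be compared.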
The document provides an overview of reliability centered maintenance (RCM) including:
1. RCM is a process used to determine necessary maintenance to ensure assets perform their intended functions by mitigating failure consequences.
2. An RCM analysis involves a multifunctional team answering seven questions about asset functions, failures, failure causes, effects, importance, and predictive/preventive maintenance techniques.
3. Implementing RCM principles like condition-based maintenance improves reliability by focusing maintenance on asset condition rather than rigid schedules and reducing unnecessary tasks.
This document contains a summary of a presentation on best practices in maintenance and reliability by Ricky Smith. It discusses key topics like reliability definitions, failure patterns, predictive maintenance, FRACAS systems, and reliability metrics. It emphasizes that most equipment failures are self-induced due to issues like improper installation, maintenance, or lubrication. It also outlines steps for improving reliability like prioritizing assets, identifying maintenance strategies, and using failure data for continuous improvement. The goal is to move from reactive to proactive maintenance through practices like condition monitoring and root cause analysis.
Drilling systems automation is the real-time reliance on digital technology in creating a wellbore. It encompasses downhole tools and systems, surface drilling equipment, remote monitoring and the use of models and simulations while drilling. While its scope is large, its potential benefits are impressive, among them: fewer workers exposed to rig-floor hazards, the ability to realize repeatable performance drilling, and lower drilling risk. While drilling systems automation includes new drilling technology, it is most importantly a collaborative infrastructure for performance drilling. In 2008, a small group of engineers and scientists attending an SPE conference noted that automation was becoming a key topic in drilling and they formed a technical section to investigate it further. By 2015, the group had reached a membership of sixteen hundred as the technology rapidly gained acceptance. Why so much interest? The benefits and promises of an automated approach to drilling address the safety and fundamental economics of drilling. What will it take? Among the answers are an open collaborative digital environment at the wellsite, an openness of mind to digital technologies, and modified or new business practices. What are the barriers? The primary barrier is a lack of understanding and a fear of automation. When will it happen? It is happening now. Digital technologies are transforming the infrastructure of the drilling industry. Drilling systems automation uses this infrastructure to deliver safety and performance, and address cost.
The Fine Art of Combining Capacity Management with Machine Learning (Precisely)
Today, capacity management within the enterprise continues to evolve. In the past the focus was on hardware; now it is on services. Meanwhile, the amount of data available has increased significantly and has become difficult for individuals to sort through.
It is apparent that, to be successful in this discipline, we need machines to do more of the heavy lifting: automatically creating reports, calling out anomalies, and producing forecasts. Human intuition nonetheless remains imperative to success.
View this webinar on-demand where we discuss:
• The strengths and weaknesses of capacity management with and without machine learning
• What machine learning can provide throughout the process
• The benefits of using capacity management and machine learning within your organization
The goal of Lean Warehouse 101 is to equip distributors to be more competitive in their respective markets. Through understanding how to implement Lean Principles, participants can make changes in their facility that will eliminate waste, maximize productivity and increase profits. The class will yield immediate results as students return to their workplace with an understanding of waste and how to begin eliminating it from the process.
A great class for distributors, warehouses, logistics companies or any company that has warehousing operations.
Critical Performance Metrics for DDR4-based Systems (Barbara Aichinger)
Servers are critical to today's Cloud Computing, and DDR memory is at the heart of all Cloud Computing servers. Presented at DesignCon 2015, this presentation outlines new measurable performance metrics for DDR4 memory subsystems.
Simulation involves developing a model of a real-world system over time to analyze its behavior and performance. The key aspects covered in this document include defining simulation as modeling the operation of a system over time through artificial history generation and observation. Simulation models can be used as analysis and design tools to predict the effects of changes to a system before actual implementation. Discrete event simulation is discussed as a common technique that models systems with state changes occurring at discrete points in time. The document also outlines the steps in a typical simulation study including problem formulation, model conceptualization, experimentation and analysis.
This document discusses the need for improved training opportunities in semiconductor equipment maintenance. It outlines that maintenance currently involves extensive down time due to issues like replacing parts without understanding failures, inadequate collection of failure symptoms, and poor record keeping. It provides examples of specific maintenance errors and proposes delivering on-site training to maintenance managers, technicians, and trainers to teach core competencies and troubleshooting skills using their own equipment. This is intended to improve maintenance performance and reduce down time industry-wide.
This document provides an overview of precision laser shaft alignment using the RotAlign Touch tool. It discusses the benefits of precision alignment, including increased uptime and reduced energy consumption. It then describes shaft alignment principles and how laser alignment works. The document reviews the key components and functionality of the Fluke shaft alignment tool, outlining a step-by-step process for setting up the machine, taking measurements, diagnosing faults, and making corrections to the alignment. Additional topics covered include soft foot checking and what to look for in a shaft alignment tool.
This document outlines an introductory training on the concept of poka-yoke, or mistake proofing. It is divided into 12 sessions that cover topics such as the paradigm shift to zero errors, introductions to poka-yoke principles and examples, process waste management, zero defect quality systems, the three qualifiers of poka-yoke (simple/inexpensive, 100% inspection, immediate feedback), examples of poka-yoke from daily life, poka-yoke systems, methods of implementing poka-yoke, and types of poka-yoke and human mistakes. The overall aim is to teach participants how to utilize mistake proofing approaches to prevent errors and reduce defects.
This document outlines an introductory training on the concept of poka-yoke, or mistake proofing. It is divided into 12 sessions that cover topics such as the paradigm shift to zero errors, introductions to poka-yoke principles and examples, process waste management, zero defect quality systems, the three qualifiers of poka-yoke (simple/inexpensive, 100% inspection, immediate feedback), examples of poka-yoke from daily life, poka-yoke systems, methods of implementing poka-yoke, and types of human mistakes. The overall aim is to teach participants how to utilize mistake proofing approaches to prevent errors and reduce defects.
This document outlines an introductory training on the concept of Poka-Yoke, which is a Japanese term meaning "mistake-proofing". The training contains 12 sessions that cover topics such as: the paradigm shift for achieving zero errors; introductions to Poka-Yoke concepts; examples of Poka-Yoke from daily life; the three qualifiers of Poka-Yoke being simple/inexpensive, 100% inspection, and immediate feedback; methods and types of Poka-Yoke implementation; principles of Poka-Yoke; and the 100-1000-10000 rule regarding escalating costs of defects. The overall aim is to teach participants how to utilize Poka-Yoke methods to prevent errors and reduce defects.
This document outlines an introductory training on the concept of Poka-Yoke, which is a Japanese term meaning "mistake-proofing". The training contains 12 sessions that cover topics such as: the paradigm shift towards zero errors, introductions to Poka-Yoke concepts and examples, process waste management, zero defect quality systems, the three qualifiers of Poka-Yoke (simple/inexpensive, 100% inspection, immediate feedback), methods and types of Poka-Yoke, principles of Poka-Yoke, and the 100-1000-10000 rule regarding increasing costs of errors. The overall aim is to teach participants how to utilize mistake-proofing approaches to prevent errors and reduce defects.
This document provides an introduction to computer simulation. It begins with defining key concepts like systems, models, simulation, and discrete event simulation. It discusses how simulation is used to imitate the operations of a system by developing a model and evaluating it numerically. The document then covers topics like the process of developing a simulation model, different types of simulation models, components and organization of discrete event simulation models, and time advance mechanisms used in simulation. Finally, it provides an example of simulating a single server queueing system to estimate performance measures like average delay in queue.
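The single-server queueing example the summary cites can be sketched as a minimal next-event discrete-event simulation; parameter values are hypothetical, and the closing comment assumes the standard M/M/1 result for comparison:

```python
import heapq
import random

def simulate_single_server(arrival_rate, service_rate, num_customers, seed=1):
    """Minimal next-event discrete-event simulation of a single-server
    queue with exponential interarrival and service times, estimating
    the average delay in queue over all customers served."""
    random.seed(seed)
    events = []  # min-heap of (time, kind, id): the future event list
    heapq.heappush(events, (random.expovariate(arrival_rate), "arrive", 0))
    queue = []           # arrival times of customers waiting for service
    server_busy = False
    arrived = served = 0
    total_delay = 0.0
    while served < num_customers:
        t, kind, _ = heapq.heappop(events)  # time-advance to next event
        if kind == "arrive":
            arrived += 1
            if arrived < num_customers:     # schedule the next arrival
                heapq.heappush(events,
                    (t + random.expovariate(arrival_rate), "arrive", arrived))
            if server_busy:
                queue.append(t)             # join the FIFO queue
            else:
                server_busy = True          # start service immediately
                heapq.heappush(events,
                    (t + random.expovariate(service_rate), "depart", 0))
        else:  # departure: free the server or pull the next customer
            served += 1
            if queue:
                total_delay += t - queue.pop(0)
                heapq.heappush(events,
                    (t + random.expovariate(service_rate), "depart", 0))
            else:
                server_busy = False
    return total_delay / served

# Hypothetical run: lambda = 0.8, mu = 1.0. The theoretical mean delay in
# queue for an M/M/1 system is lambda/(mu*(mu-lambda)) = 4.0; a finite
# run only approximates it.
avg_wait = simulate_single_server(0.8, 1.0, 20000)
print(avg_wait)
```

The event list, time-advance loop, and state variables (`queue`, `server_busy`) correspond directly to the components of a discrete-event model that both simulation summaries describe.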
Unitization is the process of developing an oil or gas field that spans multiple license or international boundaries as a single unit. It ensures optimal resource recovery and maximizes value for the involved parties and states. Historically, the "rule of capture" led to inefficient development as individual operators sought to quickly extract resources. Modern unitization agreements establish initial participation shares and include provisions for later redeterminations based on new technical data. They aim to facilitate cooperative development while equitably allocating costs and production among stakeholders.
The document provides information about a lecture on compositional simulation given by Dr. Russell T. Johns. It discusses:
1) Current compositional simulators use averaged properties and phase labels which can lead to discontinuities and inaccurate simulations.
2) A new approach is presented to model relative permeability as a state function dependent on saturation, connectivity, capillary number, and wettability without using phase labels.
3) Examples show this new approach improves simulation robustness, speed, and accuracy, and can provide more reliable recovery estimates compared to current compositional and black-oil simulators.
Similar to Lessons Learned: How NOT to Do Drilling Automation
Similar to Lessons Learned: How NOT to Do Drilling Automation (20)
Unitization is the process of developing an oil or gas field that spans multiple license or international boundaries as a single unit. It ensures optimal resource recovery and maximizes value for the involved parties and states. Historically, the "rule of capture" led to inefficient development as individual operators sought to quickly extract resources. Modern unitization agreements establish initial participation shares and include provisions for later redeterminations based on new technical data. They aim to facilitate cooperative development while equitably allocating costs and production among stakeholders.
The document provides information about a lecture on compositional simulation given by Dr. Russell T. Johns. It discusses:
1) Current compositional simulators use averaged properties and phase labels which can lead to discontinuities and inaccurate simulations.
2) A new approach is presented to model relative permeability as a state function dependent on saturation, connectivity, capillary number, and wettability without using phase labels.
3) Examples show this new approach improves simulation robustness, speed, and accuracy, and can provide more reliable recovery estimates compared to current compositional and black-oil simulators.
This document summarizes a presentation about transitioning from a competency-based training approach to a performance-based training approach for developing upstream oil and gas professionals. It discusses defining competencies through competency mapping, then shifting to identify key work processes, outcomes of top performers, and aligning learning with job roles and business goals. It provides a case study of implementing a performance-based program across multiple disciplines at an oil company, including partnerships, technologies, evaluations, and measurements of impact. The presentation emphasizes that a performance-based approach can reduce time to competency and burden on operations while engaging employees.
The primary funding for the Society of Petroleum Engineers Distinguished Lecturer Program is provided by member donations to The SPE Foundation and a contribution from Offshore Europe. The program also receives support from companies that allow their employees to serve as lecturers and from AIME. The January 2020 tour lecture focuses on thriving in a lower oil price environment, including topics such as market dynamics, keys to success, technology impacts, and takeaway points.
The Distinguished Lecturer Program is primarily funded by donations to the SPE Foundation and contributions from Offshore Europe. Additional support is provided by AIME. The program allows industry professionals to serve as lecturers. Martin Rylance will give a presentation called "The Fracts of Life" covering key aspects of geomechanics, formation permeability, fracturing, QA/QC, and the transition from vertical to horizontal wells.
The document discusses injectivity decline in water injectors. It provides an overview of the main mechanisms of impairment, including solids deposition, water quality issues, and reservoir/well factors. It also discusses options for monitoring injector health, such as pressure-transient analysis, and interventions like back-flushing or re-fracturing to restore injectivity. The key messages are that impairment is complex with multiple causes, but also predictable; mitigation strategies exist but may not always be economically viable; and proper planning, surveillance and considering multiple factors are important for project success.
This document discusses the Society of Petroleum Engineers Distinguished Lecturer Program. It provides the following key details in 3 sentences:
The SPE Distinguished Lecturer Program is funded primarily by the SPE Foundation through member donations and Offshore Europe. It allows industry professionals to serve as lecturers on topics like CO2 storage and CO2-EOR. Additional support is provided by AIME to further the program's educational mission.
The document summarizes funding sources and support for the Society of Petroleum Engineers Distinguished Lecturer Program, which is primarily funded by member donations to The SPE Foundation and a contribution from Offshore Europe. Additional support comes from companies that allow employees to serve as lecturers and from AIME. The document then outlines the topics to be covered in a presentation on 4D seismic history matching.
This document discusses developing the next generation of completion engineers through advanced engineering training. It defines the need for such training by highlighting workforce gaps, global expansion of unconventionals, and the multidisciplinary knowledge required. Training options presented include internally-focused engineering programs and using industry resources from SPE. Companies that focus on advanced training will have a more competent workforce.
The Society of Petroleum Engineers Distinguished Lecturer Program provides funding through member donations and industry support to bring expert lecturers to discuss emerging topics. This lecture discusses how big data analytics can help petroleum engineers and geoscientists reduce costs, improve productivity and efficiency by analyzing large datasets to find patterns and relationships. Case studies demonstrate applications in reservoir modeling, production optimization, and predictive maintenance.
The document discusses coiled tubing telemetry (CTT) technology. It provides an overview of CTT, including its description and benefits. It also presents four case histories that demonstrate how CTT improved coiled tubing operations by enabling real-time downhole data acquisition. CTT allowed operations to be completed more efficiently and safely by mitigating uncertainties in unknown downhole conditions. The case histories show that CTT can reduce operational time and costs for applications like logging, milling, perforating and camera runs. The document concludes that CTT will become commonly used for coiled tubing operations to make them less people intensive and more automated.
The document summarizes a presentation on the past, present, and future of oil prices. It explains that oil prices rose extraordinarily since 1970 due to above-ground hurdles limiting supply expansion. Recent price declines are attributed to slowing global growth and rising shale oil production. Technological advances may allow shale and other sources to continue growing, keeping supply abundant and prices in the range of $40-60 per barrel long-term.
The SPE Foundation and member donations primarily fund the SPE Distinguished Lecturer Program. Companies also support the program by allowing employees to serve as lecturers. Additional support comes from AIME. The program provides 30 minute presentations on reservoir topics. Robert Hawkes will present on hydraulic fracture flowback dynamics, discussing load fluid recovery and its implications for long term production. His presentation will cover laboratory observations, field data, and diagnostic tools to understand flowback mechanisms and estimate ultimate load fluid recovery.
This document summarizes a presentation on solving the mystery of low rates of penetration in deep wells. It discusses how early researchers thought rock failure at atmospheric pressure simulated downhole conditions, but testing found lower ROPs downhole. The Mohr-Coulomb model was the first suspect considered to explain rock strengthening with pressure, but was found to only explain part of the reduction in ROP observed. Additional factors beyond simple rock failure criteria were discovered to influence ROP at depth.
Primary funding for the Society of Petroleum Engineers Distinguished Lecturer Program is provided through member donations to the SPE Foundation and a contribution from Offshore Europe. Additional support comes from AIME. The program offers lectures from industry professionals on various topics, and is grateful to companies that allow their employees to participate as lecturers.
Geochemical logging provides quantitative estimates of formation mineralogy through measurements of elemental abundances. This allows for improved evaluation of complex reservoirs containing multiple minerals. Case studies demonstrated how geochemical logs aided in characterizing carbonate, sandstone, and shale gas formations through mineral identification, matrix density calculation, and porosity/saturation determinations. Core-log integration can be challenging due to differences in sampling volumes, but geochemical logs provide valuable mineralogical context for formation evaluation.
The Distinguished Lecturer Program provides concise summaries of technical documents on facilities sand management. This summary covers a two-day course on the topic presented by Dr. Hank Rawlins, who has over 25 years of industry experience. The course covers the five key steps to managing sand in production facilities: separation, collection, cleaning, dewatering, and transport. It emphasizes understanding sand issues in facilities rather than focusing on specific equipment.
The document discusses the Distinguished Lecturer Program run by the Society of Petroleum Engineers (SPE). It is primarily funded by member donations and industry support. The program brings in expert lecturers to discuss topics like global warming, fossil fuels, and the linkage between human activity and climate change. The document outlines some of the key debates in this area between those who believe human activity is the primary driver of climate change and those who are more skeptical of this view.
The document summarizes a presentation on using wireline formation testing (WFT) to characterize reservoirs and reduce uncertainties. It discusses how WFT can be used to measure pressures, sample and analyze downhole fluids, conduct transient tests, and test in-situ stresses. The results from these WFT analyses can be integrated into reservoir modeling workflows and help understand properties like permeability, fluid contacts, and the safe drilling window. Advanced sensors and improved transient testing capabilities in new generation WFT tools are providing more downhole data to reduce risks in reservoir evaluation.
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
The growing significance of portable systems to limit power consumption in ultra-large-scale-integration chips of very high density, has recently led to rapid and inventive progresses in low-power design. The most effective technique is adiabatic logic circuit design in energy-efficient hardware. This paper presents two adiabatic approaches for the design of low power circuits, modified positive feedback adiabatic logic (modified PFAL) and the other is direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design. By improving the performance of basic gates, one can improvise the whole system performance. In this paper proposed circuit design of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches and their results are analyzed for powerdissipation, delay, power-delay-product and rise time and compared with the other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with DC-DB PFAL technique outperform with the percentage improvement of 65% for NOR gate and 7% for NAND gate and 34% for XNOR gate over the modified PFAL techniques at 10 MHz respectively.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
International Conference on NLP, Artificial Intelligence, Machine Learning an...
Lessons Learned: How NOT to Do Drilling Automation
1. Primary funding is provided by The SPE Foundation through member donations and a contribution from Offshore Europe.
The Society is grateful to those companies that allow their professionals to serve as lecturers.
Additional support provided by AIME.
Society of Petroleum Engineers
Distinguished Lecturer Program
www.spe.org/dl
2. Dr. William L. Koederitz, SPE, PE
Lessons Learned: How NOT to Do Drilling Automation
3. Outline
• What is drilling automation?
– Examples
– Pros and Cons
• How NOT to do drilling automation
– A positive side will also be shown!
• Conclusions
4. Drilling Automation
• The technique of operating or controlling a process by highly automatic means, reducing human intervention to a minimum.
• Mechanization refers to the replacement of human power with mechanical power of some form.
5. The 10 Stages of Automation
Level 10 – The computer decides everything, acts autonomously, ignoring the human.
Level 9 – Informs the human only if it, the computer, decides to.
Level 8 – Informs the human only if asked, or
Level 7 – Executes automatically, then necessarily informs the human, and
Level 6 – Allows the human a restricted time to veto before automatic execution, or
Level 5 – Executes that suggestion if the human approves, or
Level 4 – Suggests one alternative,
Level 3 – Narrows the selection down to a few, or
Level 2 – The computer offers a complete set of decision/action alternatives, or
Level 1 – The computer offers no assistance: human must take all decisions and actions.
Source: IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, Vol. 30, No. 3, May 2000
6. Example – DWOB Control
• DWOB = “Downhole Weight on Bit”
• SWOB = “Surface Weight on Bit”
• DWOB ≠ SWOB
• Constant DWOB provides better results
– Higher Rate of Penetration
– Better directional control
(Diagram: surface weight, weight on bit, and normal force acting on the drillstring)
7. Manual DWOB Control
• Control process by driller
– Read slow-speed DWOB
– Compare to desired DWOB
– Adjust SWOB setpoint in autodriller
• Holds DWOB “close” to desired
• Requires constant monitoring, adjusting
• If downhole conditions change, must react rapidly
8. Automated DWOB Control
• Driller sets bounds on DWOB, SWOB
• Automated optimization process
– Analyze high-speed surface and downhole drilling data
– Compute change in SWOB
– New SWOB sent directly to rig
• Driller now only has to monitor
• Holds DWOB very close to desired
• Reacts quickly to changes downhole
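The automated loop described above can be sketched in a few lines. This is a hypothetical illustration, not the actual controller: the function name, the simple proportional correction, and the gain value are all assumptions, and a real autodriller interface would differ.

```python
# Hypothetical sketch of one step of the automated DWOB loop: read the
# downhole WOB, compare it to the target, nudge the surface-WOB setpoint,
# and clamp the result to the driller's bounds. Names and the gain value
# are illustrative assumptions, not an actual rig controller.
def next_swob_setpoint(swob_now, dwob_measured, dwob_target,
                       swob_min, swob_max, gain=0.5):
    """Return the next surface-WOB setpoint (same units throughout)."""
    error = dwob_target - dwob_measured        # positive -> bit needs more weight
    proposed = swob_now + gain * error         # proportional correction
    return min(max(proposed, swob_min), swob_max)  # respect driller's bounds
```

Because the bounds are enforced on every step, the automation can never push the setpoint outside the envelope the driller defined, which matches the driller-sets-bounds design on this slide.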
9. Example – MSE Optimization
• MSE = “Mechanical Specific Energy”
• MSE = energy in / volume of rock drilled
• Lower MSE → more efficient drilling
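The definition above (energy in per volume of rock drilled) is commonly evaluated with Teale's 1965 specific-energy formula. The sketch below assumes oilfield units (lbf, inches, rev/min, ft·lbf, ft/hr, result in psi); the function name is illustrative.

```python
import math

# Teale's (1965) mechanical specific energy: thrust term plus rotary term.
# Assumed field units: WOB in lbf, bit diameter in inches, torque in ft*lbf,
# ROP in ft/hr; the result is in psi. The 120*pi constant comes from
# converting rotary work per minute (2*pi*N*T) to work per hour.
def mse_psi(wob_lbf, bit_diameter_in, rpm, torque_ftlbf, rop_ft_per_hr):
    area_in2 = math.pi * (bit_diameter_in / 2.0) ** 2
    thrust_term = wob_lbf / area_in2
    rotary_term = (120.0 * math.pi * rpm * torque_ftlbf) / (area_in2 * rop_ft_per_hr)
    return thrust_term + rotary_term
```

In typical drilling the rotary term dominates, which is why adjusting bit weight and RPM moves MSE so strongly.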
10. Manual MSE Optimization
• Optimization process by driller
– Change Bit Weight and/or RPM
– MSE response dictates next change
• Performance improvement
– More as driller gains experience
• Requires constant monitoring, adjusting
11. Automated MSE Optimization
• Driller sets bounds on Bit Weight, RPM
• Automated optimization process
– Analyze recent drilling & MSE data
– Search technique selects Bit Weight, RPM
– New Bit Weight, RPM sent directly to rig
• Driller now only has to monitor
• Performance improved in most cases
– Can’t compete with a dedicated expert driller
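One way to picture the search step on this slide is a bounded local search: perturb the current bit weight and RPM, score each candidate with the recent-data MSE estimate, and move to the cheapest feasible point. Everything here (names, step sizes, the grid-of-neighbors strategy) is an assumed illustration, not any vendor's actual algorithm.

```python
# Illustrative bounded local search for the (Bit Weight, RPM) setpoint.
# `measure_mse` stands in for an MSE estimate built from recent drilling
# data; `bounds` are the limits the driller set. All names are assumptions.
def next_setpoint(wob, rpm, bounds, measure_mse, step_wob=1.0, step_rpm=5.0):
    wob_lo, wob_hi, rpm_lo, rpm_hi = bounds
    # Nine candidates: current point plus one step in each direction.
    candidates = [(wob + dw, rpm + dr)
                  for dw in (-step_wob, 0.0, step_wob)
                  for dr in (-step_rpm, 0.0, step_rpm)]
    # Keep only candidates inside the driller's bounds.
    feasible = [(w, r) for w, r in candidates
                if wob_lo <= w <= wob_hi and rpm_lo <= r <= rpm_hi]
    return min(feasible, key=lambda p: measure_mse(*p))  # cheapest neighbor
```

Repeating this step as new data arrives walks the setpoint toward lower MSE while never leaving the envelope the driller defined.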
12. Why Automate?
• Efficiency
– Tasks that are repetitive and require continuous monitoring can be done more consistently with automation
– Free up rig crew for other tasks
• Enhance Crew Capability
– Shortage of experienced individuals at the rig
• Improved Performance
– Do things that people can’t do (non-stop)
• Safety
13. Risks of Automation
• Complacency
• Loss of ownership
• Dependent on data & control quality
• Maximum performance limited by “smartness” of automation logic
– In the specific situation
• Automation cannot innovate
– Only motivated people can do that
14. When & What to Automate
• Selection Methods
– Look for good automation applications
– Look for performance improvement opportunities
• Define automated and non-automated options
• Decide based on your criteria
– Return on Investment
– Safety
15. Drilling Automation in SPE
• SPE DSATS
– Drilling Systems Automation Technical Section
– Purpose is to accelerate automation in drilling
– On SPE website, workshops, forums, …
– SPE/IADC-173010-MS “Drilling Systems Roadmap – The Means to Accelerate Adoption”
• IADC ART
– Advanced Rig Technology Committee
– Focused on safety and efficiency of automation
16. How Not to …
“The office saw value and wanted it, so the rig will too.”
• Performance-motivated rig
• Office often out of touch with actual rig operations
– Rig crew sees the negatives and focuses on them
• Solution
– Include driller from the start
– Change how people work
17. How Not to …
“The office saw value and wanted it, so the rig will too.”
• NOT a performance-motivated rig
• Solution
– Change to performance-motivated rig!
– If not willing to do that:
◦ Acceptance will be an issue
◦ Design in value that has meaning at rigsite
– Make their life easier
18. How Not to …
“Driller is no longer needed.”
• Driller is the core of rig activity
• If he feels left out, automation will not work
– Even if no action is required on his part
• Solution
– Design system with driller at center and in control
– Treat driller as most-critical automation enabler
19. How Not to …
“That rig’s data was good enough for drilling, so it’ll be fine for automation.”
• Typical rig data is never good enough
– Often already insufficient (if you really look)
• Reliable, high-quality data is a must-have
• Solution
– Investigate rig data quality, upgrade as needed
– Continuous monitoring of data quality
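“Continuous monitoring of data quality” can start very simply. The sketch below, with assumed names and thresholds, flags two of the most common channel problems: readings outside a plausible range and a “stuck” sensor that keeps repeating the same value.

```python
# Minimal data-quality screen for one rig-data channel (illustrative only).
# A sample is flagged "bad" when it falls outside the plausible range
# [lo, hi], or when the last `stuck_window` samples are all identical,
# suggesting a frozen sensor. Names and thresholds are assumptions.
def quality_flags(samples, lo, hi, stuck_window=10):
    flags = []
    for i, v in enumerate(samples):
        out_of_range = not (lo <= v <= hi)
        window = samples[max(0, i - stuck_window + 1): i + 1]
        stuck = len(window) == stuck_window and len(set(window)) == 1
        flags.append("bad" if (out_of_range or stuck) else "ok")
    return flags
```

Running checks like this continuously, rather than once at commissioning, is what catches the sensor drift and calibration decay that silently undermine automation.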
20. How Not to …
“That rig’s controls were good enough for drilling, so they’ll be fine for automation.”
• Reliable, sufficiently precise control of rig equipment is a must-have
• Typical rig control is often not precise enough or is not readily accessible
• Solution
– Evaluate rig control capability, resolve issues
– Continuous monitoring of control quality
21. How Not to …
“Since it’s automated, driller only needs to turn it on, not understand how it works.”
• This reduces effective use (loss of value)
– Worst case, destroys rigsite acceptance
• Optimum use by rig → maximum value
• Solution
– Design so driller is well informed of how it works
– Enhance comfort level (simulator exercises a plus)
22. How Not to …
“This rig is a sister rig to the last one we automated, so we are ready to go.”
• Every rig has some unique aspects
• Office records often aren’t perfect
• Solution
– Do a detailed rig survey
– Build configuration specific to rig
– Pre-test configuration in lab
23. How Not to …
“It’s a highly-automated system, so there shouldn’t be any maintenance for the rig to do.”
• Maintenance needed for optimum, safe performance
• Changes in rig, sensors, drilling, …
• Solution
– Design for easy, minimal maintenance
◦ Automated diagnostics or remote monitoring
24. How Not to …
“Their only choice is on or off.”
“Let’s let them adjust everything.”
• There is an optimum level of interaction for each driller and situation
• But too many levels are confusing
• Solution
– Analyze drillers, identify group(s)
– Design for some variation in drillers
◦ Basic vs advanced
25. How Not to …
“Automation seems to be going well, so driller must be paying close attention.”
• Complacency is a risk
– The “better” the automation does its job, the higher the risk
– A tough problem to solve
• Solution
– Human factors engineering, in some form
26. How Not to …
“Let’s make the system do everything (we think) they need. They’ll sort it out.”
• The driller is over-loaded by this, resulting in misuse or non-use
• Solution
– Design the system as a suite of tools
◦ Driller picks the right tool for the right job
– Key decision criteria are simplicity, modularity, benefit/cost ratio
27. Conclusions
• Automation is a tool to improve performance
– Pros and cons, per application
• Critical success factors
– Deciding if and what to automate
– Design and implementation
◦ People issues often > technical issues
◦ Do not leave the driller out!
28. Society of Petroleum Engineers
Distinguished Lecturer Program
www.spe.org/dl
Your Feedback is Important
Enter your section in the DL Evaluation Contest by completing the evaluation form for this presentation
Visit SPE.org/dl