This was presented in a series of webinars organized virtually in Malaysia by SPE Asia Pacific University and SPE Northern Emirates. In this webinar, I gave a motivational presentation for students and young professionals on learning to program for the petroleum engineering and geoscience domains, and showcased some of my open-source work written in Python.
Industry studies show that mature fields currently account for over 70% of the world’s oil and gas production. Increasing production rates and ultimate recovery in these fields in order to maintain profitable operations, without increasing costs, is a common challenge.
This lecture addresses techniques to extract maximum value from historical production data using quick workflows based on common sense. Extensive in-depth reservoir studies are obviously very valuable, but not all situations require these, particularly in the case of brown fields where the cost of the study may outweigh the benefits of the resulting recommendations.
This lecture presents workflows based on Continuous Improvement/LEAN methodology which are flexible enough to apply to any mature asset for short- and long-term planning. A well-published, low-permeability brown oil field was selected to retroactively demonstrate the workflows, as it had an evident workover campaign in late 2010 with a subsequent production increase. Using data as of mid-2010, approximately 40 wells were identified as under-performing due to formation damage or water production problems, based on three days of analyses. The actual performance of the field three years later was then revealed along with the actual interventions performed. The selection of wells is compared to the selection suggested by the workflow, and the results of the interventions are shown. The field's projected recovery factor was increased by 5%, representing a gain of 1.4 million barrels of oil.
Production decline analysis is a traditional means of identifying well production problems and predicting well performance and life based on real production data. It uses empirical decline models that have little fundamental justification. These models include:
• Exponential decline (constant fractional decline)
• Harmonic decline
• Hyperbolic decline
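As a quick illustration (a sketch of my own, not from the lecture), the three Arps rate-time relations can be written in a few lines of Python; the qi, di, and b values below are assumed example parameters:

```python
import numpy as np

def arps_rate(t, qi, di, b=0.0):
    """Arps decline-curve rate q(t).

    b = 0     -> exponential decline (constant fractional decline)
    b = 1     -> harmonic decline
    0 < b < 1 -> hyperbolic decline
    """
    if b == 0.0:
        return qi * np.exp(-di * t)                  # exponential
    return qi / (1.0 + b * di * t) ** (1.0 / b)      # harmonic / hyperbolic

t = np.linspace(0, 10, 100)                    # time, years
q_exp = arps_rate(t, qi=1000, di=0.3)          # exponential
q_har = arps_rate(t, qi=1000, di=0.3, b=1.0)   # harmonic
q_hyp = arps_rate(t, qi=1000, di=0.3, b=0.5)   # hyperbolic
```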
The drill stem test (DST) is one of the most widely used on-site well tests, unveiling critical reservoir and fluid properties such as reservoir pressure, average permeability, skin factor, and the well's potential productivity index. It is a relatively inexpensive on-site test performed prior to well completion, and the well completion decision is usually based on the DST results.
Prediction of original oil in place using material balance simulation. It is also useful for predicting future reservoir performance and ultimate hydrocarbon recovery under various types of primary drive mechanisms.
Overview of Reservoir Simulation by Prem Dayal Saini
Reservoir simulation is the study of how fluids flow in a hydrocarbon reservoir when put under production conditions. The purpose is usually to predict the behavior of a reservoir to different production scenarios, or to increase the understanding of its geological properties by comparing known behavior to a simulation using different geological representations.
Artificial intelligence has caught the attention of the oil and gas industry. This presentation sets out four use cases that were live in oil and gas as of October 2017. The speaker notes include specific solution names and vendors, including Stream Systems, Veerum, Osprey Analytics, and IBM.
Reservoir engineers cannot capture full value from waterflood projects on their own. Cross-functional participation from earth sciences, production, drilling, completions, facility engineering, and operations groups is required to get full value from waterfloods. Waterflood design and operational case histories of cross-functional collaboration are provided that have improved life cycle costs and increased recovery for onshore and offshore waterfloods. The roles that water quality, surveillance, reservoir processing rates, and layered reservoir management play in waterflood oil recovery and life cycle costs will be clarified. Techniques to get better performance out of your waterflood will be shared.
New Developments in H2O: April 2017 Edition by Sri Ambati
H2O presentation at Trevor Hastie and Rob Tibshirani's Short Course on Statistical Learning & Data Mining IV: http://web.stanford.edu/~hastie/sldm.html
PDF and Keynote version of the presentation available here: https://github.com/h2oai/h2o-meetups/tree/master/2017_04_06_SLDM4_H2O_New_Developments
I am Shubham Sharma, a Computer Science and Engineering graduate of the Acropolis Institute of Technology. I have spent around two years in the field of machine learning and currently work as a Data Scientist at Reliance Industries Private Limited, Mumbai, focusing mainly on problems related to data handling, data analysis, modeling, forecasting, statistics, machine learning, deep learning, computer vision, and natural language processing. My areas of interest are data analytics, machine learning, time series forecasting, web information retrieval, algorithms, data structures, design patterns, and OOAD.
Streaming data presents new challenges for statistics and machine learning on extremely large data sets. Tools such as Apache Storm, a stream processing framework, can power a range of data analytics but lack advanced statistical capabilities. These slides are from the ApacheCon talk, which discussed developing streaming algorithms with the flexibility of both Storm and R, a statistical programming language.
In the talk I discussed why and how to use Storm and R to develop streaming algorithms; in particular I focused on:
• Streaming algorithms
• Online machine learning algorithms
• Use cases showing how to process hundreds of millions of events a day in (near) real time
See: https://apacheconna2015.sched.org/event/09f5a1cc372860b008bce09e15a034c4#.VUf7wxOUd5o
An introductory talk on scientific computing in Python. Statistics, probability, and linear algebra are important aspects of computing and computer modeling, and they are covered here.
Travis Oliphant, "Python for Speed, Scale, and Science" (Fwdays)
Python is sometimes discounted as slow because of its dynamic typing and interpreted nature, and as unsuitable for scale because of the GIL. But in this talk, I will show how, with the help of talented open-source contributors around the world, we have been able to build systems in Python that are fast and scalable to many machines, and how this has helped Python take over science.
Constrained Optimization with Genetic Algorithms and Project Bonsai by Ivo Andreev
Traditional machine learning requires volumes of labelled data that can be time-consuming and expensive to produce. Machine teaching leverages the human capability to decompose and explain concepts to steer the training of machine learning models: the correct answer is taught not by showing the data for it, but by having a person demonstrate the answer.
Project Bonsai is a low-code platform for intelligent solutions, but with a different perspective on data it allows a completely new approach to tasks, especially when the physical world is involved. Under the hood it combines machine teaching, calibration, and optimization to create intelligent control systems using simulations. The teaching curriculum is written in a new language concept, "Inkling", and training a model is easy and interactive.
This presentation contains slides from a talk I gave at the annual Experimental Biology meeting in 2015 on our curriculum for Big Data Analytics in the Inland Empire.
Ease of use and high performance in the open-source Python ecosystem
Psydac is an open-source library for isogeometric analysis (IGA), that is, a finite element method which uses the same basis functions as a CAD model (B-splines and NURBS).
It has been developed at the Max Planck Institute for Plasma Physics, with the goal of exploring advanced numerical methods for electromagnetism, magneto-hydro-dynamics, and plasma kinetics. It is completely written in Python, uses only open-source libraries, and can run large parallel simulations on high-performance computing facilities. It employs a domain specific language, automatic code generation, a transpiler, MPI communication, OpenMP multithreading, and parallel I/O. In this talk we explore the library architecture and its overall design philosophy, which can be applied to other domains.
A Production Quality Sketching Library for the Analysis of Big Data by Databricks
In the analysis of big data there are often problem queries that don’t scale because they require huge compute resources to generate exact results, or don’t parallelize well.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ... by Sérgio Sacani
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4–0.9 µm) and novel JWST images with 14 filters spanning 0.8–5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at > 2.3 µm to construct an ultradeep image, reaching as deep as ≈ 31.4 AB mag in the stack and 30.3–31.0 AB mag (5σ, r = 0.1″ circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5–15. These objects show compact half-light radii of R_1/2 ∼ 50–200 pc, stellar masses of M⋆ ∼ 10^7–10^8 M⊙, and star-formation rates of SFR ∼ 0.1–1 M⊙ yr⁻¹. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward modeling approach to infer the properties of the evolving luminosity function without binning in redshift or luminosity that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ∼ 2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for evolution of the dark matter halo mass function.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a... by Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and quality to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization. To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep... by University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
Richard's adventures in two entangled wonderlands by Richard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
ISI 2024: Application Form (Extended), Exam Date (Out), Eligibility by SciAstra
The Indian Statistical Institute (ISI) has extended its application deadline for 2024 admissions to April 2. Known for its excellence in statistics and related fields, ISI offers a range of programs from Bachelor's to Junior Research Fellowships. The admission test is scheduled for May 12, 2024. Eligibility varies by program, generally requiring a background in Mathematics and English for undergraduate courses and specific degrees for postgraduate and research positions. Application fees are ₹1500 for male general category applicants and ₹1000 for females. Applications are open to Indian and OCI candidates.
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr... by Travis Hills MN
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
Nutraceutical market, scope and growth: Herbal drug technology by Lokesh Patil
As consumer awareness of health and wellness rises, the nutraceutical market, which includes goods like functional foods, drinks, and dietary supplements that provide health advantages beyond basic nutrition, is growing significantly. The industry is expanding quickly as healthcare expenses rise, the population ages, and people increasingly seek natural and preventative health solutions. Product formulation innovations and the use of cutting-edge technology for customized nutrition further drive market expansion. With its worldwide reach, the nutraceutical industry is expected to keep growing and to provide significant opportunities for research and investment across a number of categories, including vitamins, minerals, probiotics, and herbal supplements.
Seminar on U.V. Spectroscopy by Samir Panda
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that measures the amount of light absorbed by the analyte.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige... by University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
Salas, V. (2024) "John of St. Thomas (Poinsot) on the Science of Sacred Theol... by Studia Poinsotiana
I Introduction
II Subalternation and Theology
III Theology and Dogmatic Declarations
IV The Mixed Principles of Theology
V Virtual Revelation: The Unity of Theology
VI Theology as a Natural Science
VII Theology’s Certitude
VIII Conclusion
Notes
Bibliography
All the contents are fully attributable to the author, Doctor Victor Salas. Should you wish to republish this text, get in touch with the author or the editorial committee of the Studia Poinsotiana. Insofar as possible, we will be happy to broker your contact.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk at the Journées Nationales du GDR GPL 2024
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V... by Wasswaderrick3
In this book, we use conservation-of-energy techniques on a fluid element to derive the Modified Bernoulli equation of flow with viscous or friction effects. We derive the general equation of flow/velocity, and from this we derive the Poiseuille flow equation, the transition flow equation, and the turbulent flow equation. In situations where there are no viscous effects, the equation reduces to the Bernoulli equation. From experimental results, we are able to include other terms in the Bernoulli equation. We also look at cases where pressure gradients exist. We use the Modified Bernoulli equation to derive flow-rate equations for pipes of different cross-sectional areas connected together. We also extend our energy-conservation techniques to a sphere falling in a viscous medium under the effect of gravity. We demonstrate Stokes' equation of terminal velocity and the turbulent flow equation. We look at a way of calculating the time taken for a body to fall in a viscous medium, and at the general equation of terminal velocity.
Python Awareness for Exploration and Production Students and Professionals
1. Python Awareness for Exploration and
Production Students and Professionals
Yohanes Nuwara
2. Outline
• Why do we need to program and why Python?
• Python & oil and gas industries in-action
• Python as scientific programming language
• Open datasets and cloud computing with Python
• Python “lifehacks” you should know (for oil and gas)
• How to start learning and pitch ideas
• Demo, yeay!
3. Reasons Why Engineers Are Not Starting
• Engineers love working on spreadsheets – But is it effective enough if we need to automate?
• Engineers think programming is horrible (back in college when they learned C) – But do we want to learn if in fact programming is much simpler?
• Engineers depend on software – But are you sure you know what’s behind the black box?
• Engineers think that most problems are easy (1+1=2, so no need to code) – You’re kidding! Engineering is not that simple.
• Engineers don’t need to be programmers because there are already so many out there – Nope! They don’t solve problems for you; they solve for themselves.
4. Why Python?
• Python is open-source
• Python is easy: it is built on human language (it feels like we’re “having a conversation” with the program)
• Python is fast and efficient: using “list comprehensions” instead of “for-loops” takes far fewer lines of code (see the sketch after this list)
• Python is flexible: we can build and deploy programs almost anywhere (on local PC systems and in the cloud)
• Python is abundant: 2,000,000+ libraries and packages are available to use
• Python is scientific: for scientific tasks, Python is as capable as scientific programming languages like MATLAB, or even better
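A minimal sketch of the list-comprehension point above, on a toy example (squaring the even numbers from 0 to 9):

```python
# For-loop version: four lines of bookkeeping
squares = []
for x in range(10):
    if x % 2 == 0:
        squares.append(x ** 2)

# List-comprehension version: the same result in one line
squares = [x ** 2 for x in range(10) if x % 2 == 0]
print(squares)  # [0, 4, 16, 36, 64]
```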
5. The use of Python in Oil and Gas Industries
• CGG GeoSoftware develops a Python plugin that connects machine learning to their interpretation software
• Schlumberger develops a Python editor that connects petrophysical calculations to their Techlog software
• Equinor actively works on Python libraries such as segyio for loading seismic data, libecl for loading reservoir simulation data, dlisio, etc.
• Tech companies help oil and gas companies deploy Python in their cloud services, such as AWS
• The OPM open-source reservoir simulator runs on Python
AWS and CMG simulation architecture
7. • Acoustic/elastic wave simulation based on various methods (finite-difference, pseudo-spectral, spectral) for computational seismology (Prof. Heiner Igel from LMU in Germany)
• FEniCS high-performance computing for computational fluid dynamics (KTH in Sweden)
• Gekko numerical solvers and optimization (Prof. John Hedengren from BYU in the USA)
• PyLops package for (mathematical) inversion of seismic data (Matteo Ravasi from Equinor)
• Fatiando a Terra package for computation in potential-field methods in geophysics (Leonardo Uieda from the University of Liverpool, UK)
Navier-Stokes simulation in FEniCS
Gravity processing in Fatiando
Bayesian inversion in PyLops
Python as a Scientific Programming Language
9. Numpy “lifehacks”
• Manipulating multidimensional arrays (transposing, etc.) – seismic data is “4D” data consisting of inlines, crosslines, time, and amplitude
• Data cleansing (searching for NaNs, replacing values) – data quality control (QC)
• Loading and reading files with text formats – parsing multiple files (e.g. ECLIPSE simulator files)
• Operations in Numpy (matrix linear algebra, FFT, etc.) – signal processing in geophysics, inverse problems
• Generating random numbers – synthetic data generation when data is not available (!), Monte Carlo simulation of a stochastic analysis in oil and gas
• Writing files – outputting results from a program to a file
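A minimal sketch of a few of these Numpy lifehacks, using synthetic (assumed) data rather than real seismic or well files:

```python
import numpy as np

# Multidimensional array manipulation: a toy "seismic cube"
# (inline, crossline, time) filled with random amplitudes
cube = np.random.randn(10, 20, 100)
reordered = cube.transpose(2, 0, 1)      # reorder axes to (time, inline, crossline)

# Data cleansing (QC): find NaNs in a log and replace them with the mean
logs = np.array([2.3, np.nan, 2.5, 2.4, np.nan])
logs[np.isnan(logs)] = np.nanmean(logs)

# Monte Carlo: 10,000 porosity samples from an assumed normal distribution
phi = np.random.normal(loc=0.18, scale=0.03, size=10_000)

# Write results from the program to a file
np.savetxt("porosity_samples.txt", phi)
```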
10. Matplotlib “lifehacks”
• Creating subplots through automation – displaying multiple well logs and composites (RHOB, NPHI, resistivity, GR, PEF, etc.) during formation evaluation
• Scatter plots – facies typing from petrophysical crossplots
• Logarithmic plots – semilog plots for well-test analysis
• Displaying histograms – evaluating the statistical distribution of petrophysical variables
• Displaying boxplots – identifying outliers in well logs or production data
• Creating contour plots – spatial distribution of porosity, permeability, net pay, sand thickness, etc.
• Displaying 3D plots – wellbore trajectory visualization
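A minimal sketch of subplot automation for well-log display; the depth range and log values below are synthetic stand-ins for real curves:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic depth track and three stand-in "well logs"
depth = np.linspace(1000, 1500, 500)
logs = {"GR": np.random.uniform(20, 120, 500),
        "RHOB": np.random.uniform(2.2, 2.7, 500),
        "NPHI": np.random.uniform(0.05, 0.35, 500)}

# One track per log, created automatically in a loop
fig, axes = plt.subplots(1, len(logs), figsize=(9, 8), sharey=True)
for ax, (name, values) in zip(axes, logs.items()):
    ax.plot(values, depth)
    ax.set_title(name)
axes[0].invert_yaxis()          # depth increases downward; shared y flips all tracks
axes[0].set_ylabel("Depth (m)")
plt.show()
```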
11. Pandas “lifehacks”
• Reading spreadsheets
• Dataframe manipulation (slicing or creating subsets of dataframes, merging, etc.)
• Datetime conversion (the datetime format in Pandas is YYYY-MM-DD; if our input has a different format, we need to convert it)
• Summary statistics (mean, interquartiles, standard deviation, min, max)
• Data cleansing (searching for NaNs, removing unwanted rows or columns)
• Regex (string contains)
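A minimal sketch of these Pandas lifehacks; the small production table below is invented for illustration (in practice it would come from pd.read_excel or pd.read_csv):

```python
import pandas as pd

# A tiny production table standing in for a real spreadsheet
df = pd.DataFrame({"date": ["01/31/2019", "02/28/2019", "03/31/2019"],
                   "oil_rate": [1200.0, None, 1100.0],
                   "well": ["A-1", "A-1", "A-1"]})

# Datetime conversion: parse MM/DD/YYYY into Pandas' YYYY-MM-DD format
df["date"] = pd.to_datetime(df["date"], format="%m/%d/%Y")

# Data cleansing: drop rows with missing rates
df = df.dropna(subset=["oil_rate"])

# Summary statistics and string filtering (contains)
print(df["oil_rate"].describe())
print(df[df["well"].str.contains("A-")])
```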
12. Scipy “lifehacks”
• Gridding data – gridding from a reservoir model (porosity, permeability, net pay in 2D/3D)
• Interpolation – mapping porosity, permeability, net pay using different methods (spline, cubic, or kriging)
• Solving systems of equations and roots – solving PVT problems (e.g. the Peng-Robinson cubic EOS)
• Using optimization methods – linear regression or curve fitting in well-test analysis and Arps’ decline curve analysis
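A minimal sketch of the last point, fitting an Arps hyperbolic decline with scipy.optimize.curve_fit on synthetic (assumed) rate data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hyperbolic Arps model, the classic decline-curve form
def arps_hyperbolic(t, qi, di, b):
    return qi / (1.0 + b * di * t) ** (1.0 / b)

# Synthetic noisy rate history standing in for real production data
t = np.linspace(0, 8, 50)                                 # years
q_obs = arps_hyperbolic(t, 1000, 0.4, 0.6) * np.random.normal(1.0, 0.02, t.size)

# Fit qi, di, b with bounds to keep the parameters physical (0 <= b <= 1)
popt, _ = curve_fit(arps_hyperbolic, t, q_obs,
                    p0=[900, 0.5, 0.5], bounds=([0, 0, 0], [5000, 5, 1]))
print("qi = %.0f, di = %.2f, b = %.2f" % tuple(popt))
```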
13. Open datasets and cloud computing with Python
• Many open datasets are available: SEG Wiki, Geothermal Data Repository (GDR) OpenEI, KGS, MSEEL, and many more
• We have listed open datasets at: https://github.com/yohanesnuwara/open-geoscience-repository
• Python allows interfacing with data stored on websites (cloud computing in Google Colab)
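A minimal sketch of interfacing with web-hosted data; the URL below is a placeholder, not a real dataset, so substitute a raw file link from one of the repositories above:

```python
import pandas as pd

# Read a CSV hosted on the web straight into a dataframe
# (placeholder URL for illustration only)
url = "https://raw.githubusercontent.com/some-org/some-repo/master/wells.csv"
wells = pd.read_csv(url)
print(wells.head())
```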
16. Pitching Ideas: Volumetrics with Python
• Producing contour maps of petrophysical properties
• Calculating OOIP and OGIP using Green’s theorem and geometric rules: trapezoidal, pyramidal, and Simpson’s 1/3
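A minimal sketch of an OOIP calculation by numerical integration of contour areas; the areas, contour interval, and rock properties below are assumed example values:

```python
import numpy as np
from scipy.integrate import simpson

# Areas (acres) enclosed by successive net-pay contours, top to base,
# at a uniform contour interval h (ft) -- illustrative numbers only
areas = np.array([640.0, 480.0, 320.0, 160.0, 40.0])
h = 20.0
z = np.arange(len(areas)) * h

# Bulk rock volume by the trapezoidal and Simpson's 1/3 rules (acre-ft)
vb_trap = np.trapz(areas, z)
vb_simp = simpson(areas, x=z)
print(f"Vb: trapezoidal = {vb_trap:.0f}, Simpson = {vb_simp:.0f} acre-ft")

# OOIP in STB: 7758 converts acre-ft to barrels; phi, Sw, Bo are assumed
phi, sw, bo = 0.20, 0.30, 1.2
ooip = 7758 * vb_trap * phi * (1 - sw) / bo
print(f"OOIP ~ {ooip / 1e6:.1f} MMSTB")
```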
17. Pitching Ideas: PVT Calculator with Python
• Creating a library of PVT correlations for reservoir gas, oil, and water
• Calculating PVT properties such as gas z-factor, formation volume factor (FVF), solution gas-oil ratio, isothermal compressibility, and viscosity
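A minimal sketch of one entry such a PVT library might contain: the gas formation volume factor from the standard field-units relation Bg = 0.02827 zT/p (z would come from a correlation or chart; the example inputs are assumed):

```python
def gas_fvf(z, temp_rankine, pressure_psia):
    """Gas formation volume factor Bg in rcf/scf (field units)."""
    return 0.02827 * z * temp_rankine / pressure_psia

# Example: T = 180 degF converted to Rankine, p = 3000 psia, z = 0.85
t_rankine = 180.0 + 459.67
print(gas_fvf(z=0.85, temp_rankine=t_rankine, pressure_psia=3000.0))
```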
18. Pitching Ideas: Material Balance with Python
• Calculating original gas (and condensate) in place for dry-gas and gas-condensate reservoirs
• Calculating original oil and gas in place for undersaturated and saturated oil reservoirs, volatile and non-volatile
• Calculating aquifer influx and reservoir drive indices
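A minimal sketch of the dry-gas case: the p/z straight-line material balance, with invented pressure and production data:

```python
import numpy as np

# p/z material balance for a volumetric dry-gas reservoir:
# p/z = (pi/zi) * (1 - Gp/G), so a straight-line fit of p/z vs Gp
# extrapolated to p/z = 0 gives the original gas in place G.
gp = np.array([0.0, 10.0, 25.0, 40.0])               # cumulative gas, Bscf
p_over_z = np.array([5000.0, 4600.0, 4000.0, 3400.0])  # psia (illustrative)

slope, intercept = np.polyfit(gp, p_over_z, 1)
G = -intercept / slope                               # Gp where p/z reaches zero
print(f"OGIP ~ {G:.0f} Bscf")
```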
19. Pitching Ideas: Well Testing with Python
• Modeling well rate and pressure transient response from simulated multi-rate pressure and rate tests; series of drawdowns and shut-ins
• Analyzing drawdown, buildup, and constant-pressure tests
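A minimal sketch of semilog drawdown analysis, using the standard field-units relation k = 162.6 qBµ/(mh) on synthetic drawdown data:

```python
import numpy as np

# On the semilog straight line, pwf falls by m psi per log cycle of time,
# and permeability follows from k = 162.6 * q * B * mu / (m * h).
t = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])               # hours
pwf = np.array([2900.0, 2878.0, 2848.0, 2826.0, 2804.0, 2774.0])  # psia (synthetic)

m = -np.polyfit(np.log10(t), pwf, 1)[0]     # slope magnitude, psi/log cycle
q, B, mu, h = 500.0, 1.2, 1.0, 30.0         # STB/d, rb/STB, cp, ft (assumed)
k = 162.6 * q * B * mu / (m * h)
print(f"m ~ {m:.1f} psi/cycle, k ~ {k:.0f} md")
```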
20. Pitching Ideas: Geomechanics with Python
• Pore pressure estimation from geophysical logs (sonic and density)
• Stress bounds (Sv, SHmax, Shmin) estimation using a stress diagram
• Mohr-Coulomb analysis of fault data (dip and azimuth) from image logs
• Measuring the probability of induced seismicity from hydraulic fracturing operations
Taken from my project reservoir-geomechanics on GitHub
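A minimal sketch of the usual first step, overburden stress from a density log, computed as Sv(z) = ∫ ρg dz with an assumed linear density trend (this is a toy illustration, not code from the reservoir-geomechanics project):

```python
import numpy as np

# Overburden (vertical) stress by integrating bulk density over depth
depth = np.linspace(0, 3000, 301)          # m
rhob = 1800 + 0.3 * depth                  # bulk density, kg/m3 (toy trend)
g = 9.81

# Cumulative trapezoidal integration gives Sv at every depth, in MPa
sv = np.concatenate(([0.0], np.cumsum(
    0.5 * (rhob[1:] + rhob[:-1]) * np.diff(depth) * g))) / 1e6
print(f"Sv at {depth[-1]:.0f} m ~ {sv[-1]:.1f} MPa")
```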
21. Pitching Ideas: Reservoir Simulation with Python
• Building a simple reservoir simulator with Python
• Solving single-phase simulation (incompressible, slightly compressible, and compressible) and multi-phase simulation
Taken from my project PyReSim on GitHub
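A minimal sketch of the incompressible single-phase case: at steady state the 1D pressure equation reduces to a small linear system (a toy illustration, not code from PyReSim):

```python
import numpy as np

# 1-D, single-phase, incompressible, homogeneous reservoir with fixed
# pressures at both ends: finite differences give A p = b.
n = 10                                # grid blocks
p_left, p_right = 3000.0, 1000.0      # boundary pressures, psia

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):
    A[i, i] = -2.0                    # p_{i-1} - 2 p_i + p_{i+1} = 0 inside
    if i > 0:
        A[i, i - 1] = 1.0
    if i < n - 1:
        A[i, i + 1] = 1.0
b[0] -= p_left                        # boundary pressures enter the RHS
b[-1] -= p_right

p = np.linalg.solve(A, b)
print(np.round(p, 1))                 # linear profile between the boundaries
```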
22. Pitching Ideas: Formation Evaluation with Python
• Well-log visualization
• Exploratory data analysis (EDA) of well logs
• Petrophysical computations for total porosity, shale volume, water saturation, and permeability
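A minimal sketch of two of these computations, a linear gamma-ray shale volume and Archie water saturation, with assumed clean/shale baselines and Archie parameters:

```python
import numpy as np

# Shale volume from gamma ray (linear index)
gr = np.array([30.0, 75.0, 110.0])          # API
gr_clean, gr_shale = 20.0, 120.0
vsh = (gr - gr_clean) / (gr_shale - gr_clean)

# Water saturation from the Archie equation: Sw^n = a*Rw / (phi^m * Rt)
phi = np.array([0.25, 0.18, 0.10])          # total porosity, v/v
rt = np.array([20.0, 10.0, 5.0])            # deep resistivity, ohm-m
a, m, n_exp, rw = 1.0, 2.0, 2.0, 0.05       # assumed Archie parameters
sw = ((a * rw) / (phi ** m * rt)) ** (1.0 / n_exp)

print(np.round(vsh, 2), np.round(sw, 2))
```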
23. Pitching Ideas: Seismic Attribute Computation
• Computing various seismic attributes (RMS, envelope, phase envelope, instantaneous frequency, instantaneous phase, apparent polarity)
• Utilizing memory-efficient computation that does not exhaust RAM
Taken from my project computational-geophysics on GitHub
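A minimal sketch of instantaneous attributes via the analytic signal (scipy.signal.hilbert) plus a memory-light sliding RMS amplitude; the trace is synthetic, and this is not code from the computational-geophysics project:

```python
import numpy as np
from scipy.signal import hilbert

# Instantaneous attributes from the analytic signal:
# envelope = |H(x)|, instantaneous phase = angle(H(x)).
dt = 0.004                                            # 4 ms sampling
t = np.arange(0, 1, dt)
trace = np.sin(2 * np.pi * 30 * t) * np.exp(-3 * t)   # toy 30 Hz decaying trace

analytic = hilbert(trace)
envelope = np.abs(analytic)                           # reflection strength
inst_phase = np.angle(analytic)                       # instantaneous phase, rad
inst_freq = np.diff(np.unwrap(inst_phase)) / (2 * np.pi * dt)  # Hz

# Sliding RMS amplitude in one pass with a cumulative sum (memory-light)
win = 25
csum = np.cumsum(np.r_[0.0, trace ** 2])
rms = np.sqrt((csum[win:] - csum[:-win]) / win)
print(envelope.max(), inst_freq.mean(), rms.max())
```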