The Geomodeling Network Newsletter – January 2009
Welcome to the third installment of the Geomodeling Network newsletter, and a very Happy New Year to you all. The more eagle-eyed of you will have realized that there has been a bit of a gap since our last release – everyone was busy in Q4 2008, so it is only now that I have had the time to chase up and collate the articles. Anyway, we are now back on track with the shiny January 2009 newsletter.

One of the major changes since our last newsletter has been a LinkedIn technology upgrade. This upgrade has given our group members the opportunity to post discussions or questions on the site (which a number of you have already done). I have posted a couple of example threads in this newsletter to give those that have not seen or used these discussion boards a feel for what is going on. We also have the ability to upload presentations. For those of you that have not received our backdated presentations, you will find them by using the SlideShare feature on LinkedIn. It is also a very convenient tool if you want to avoid opening up the newsletter on your email system, which in some cases can take a while to download.

At the Geomodeling Network we are trying to be as ambitious as we possibly can, and when one of our members proposed that we should think about running a Geomodeling conference for one or two days I was of course very interested. The merits of this were discussed very loosely over a few pints, and it is still on the table for Q4 2009. The basic idea was to come up with an event primarily made up of Geomodeling case studies, workarounds, success stories or glorious failures, which would hopefully be of interest to our members as well as a wider audience. If this is the kind of event that appeals to you, please let me know, and whether you may be interested in attending or indeed presenting.

Other than that, I hope you all enjoy reading this newsletter and are looking forward to an exciting and prosperous 2009!

"Lang may yer lum reek, Wi' ither folks coal!"

Mitch Sutherland
mitch.sutherland@blueback-reservoir.com
Table of Contents

Member Articles, Reviews & Questions

1. Probabilistic geological modelling: the "gun control" issue of geomodeling.
   Dan O'Meara, Chief Advisor, Reserves at Landmark and Owner, O'Meara Consulting, Inc
   (Thread taken from the Geomodeling Network discussion page on LinkedIn)

2. "Whilst quantifying subsurface uncertainty is recognized as important in field development planning, it is often not directly accounted for in the 3D model" – this article outlines an approach and discusses the advantages of dealing with your uncertainties directly in the 3D model.
   Alister MacDonald, Technical Advisor at Roxar

3. Effective porosity vs. total porosity in 3D models?
   Laurence Bellenfant, Lead Geologist at Senergy Ltd
   (Thread taken from the Geomodeling Network discussion page on LinkedIn)

Career Networking
   Subsurface Global
   Blueback Reservoir

Requests for newsletter No. 4
Member Articles, Reviews & Questions

1. Probabilistic geological modelling: the "gun control" issue of geomodeling.

Dan O'Meara
Chief Advisor, Reserves at Landmark and Owner, O'Meara Consulting, Inc
(Thread taken from the Geomodeling Network discussion page on LinkedIn)

"Red: In 1966, Andy Dufresne escaped from Shawshank prison. All they found of him was a muddy set of prison clothes, a bar of soap, and an old rock hammer, damn near worn down to the nub. I used to think it would take six-hundred years to tunnel under the wall with it. Old Andy did it in less than twenty. Oh, Andy loved geology. I guess it appealed to his meticulous nature. An ice age here, a million years of mountain building there. Geology is the study of pressure and time. That's all it takes really, pressure, and time. That, and a big god-damned poster." – Narrated by Red, The Shawshank Redemption

If ever you are attending a party in Houston, bring up the issue of gun control and step back to enjoy the fireworks. You are sure to hear vociferous proponents on both sides of the issue. Well, probabilistic geological modeling is a "gun control" issue for us. In a related discussion, Mike Hardwicke has stated a commonly held perception that 3D modelling software "...seems to have gained favour as the definitive technology for capturing the range of uncertainty in static models". Let's have some fun by opening a discussion that challenges this sort of thinking.

From my experience, the image that I have of probabilistic geological modelling is of people playing dice in a small room. Too often, the resulting models ignore high impact possibilities that do not fit into the boxes that the software providers have provided for you to play in. Consequently, those who play only inside the box are deluded into thinking they understand uncertainties related to their reservoir, when all they have done is to understand uncertainties related to the box that they have chosen to play in.

Let me give you an example of what I mean. On one field I worked on recently, we could construct a model that was in the box. It had multiple rock types and a single reservoir compartment. This was the type of model that was expected, ready-made for running multiple realizations. On the other hand, we constructed another model that had fewer rock types but that had as many as fifteen reservoir compartments. Both models were entirely consistent with all of the known data. The second model was definitely out of the box, but it fit the data just as well as the first model. You can think of the second model as actually a number of models, consisting of consistent interpretations of anything between two and fifteen compartments. Now, if you were planning a waterflood
on this field, the second model of a compartmentalized reservoir would have high impact. What probability would I assign to the second model? I don't know. In fact, I think it's very difficult to assign probabilities. Indeed, discussions of Gaussian probability distributions seem completely irrelevant to the problem at hand. However, it's very important to know that the second model is both reasonable and consistent with all of the data. In this case, possibility is more important than probability. Identifying such a possibility is certainly important for understanding uncertainty in your field. But the job of identifying such possibilities seems to be relegated to a secondary role, because it's a lot easier to play dice in the box than it is to imagine high impact possibilities.

You might suggest that an Ockham's razor approach would argue for the first model to be the one worth spending time on, because it is the simplest model that explains all the data. Well, it depends on who is defining "simple". For me, it is simpler to think of compartments than it is to think of rock types. And, in the field under discussion, the geological model had thirty-five faults that seemed to cry out for a simple explanation of compartmentalization based on fault blocks.

Robert Smallshire
Geoscience Software Development and Structural Geology

I'm much inclined to agree with Dan that the uncertainty modelling techniques available out-of-the-box fail to capture the full range of uncertainty present in our understanding of reservoir properties and predicted performance. This is because of the difficulty of programmatically simulating conceptual uncertainties – current software systems are good at providing sparsely sampled distributions of realizations within a single concept (playing dice in a small room), varying the positions of channels or other sedimentary features, but cannot begin to generate alternative concepts which also fit the available data (compartmentalized versus continuous, thrust faults versus inverted extension). Concept generation is left to us humans, who are dogmatic, often insufficiently well informed of the possibilities, or simply too busy to invest significant time in creating alternatives. What is more, we tend to believe in our current concept, in which we have a large emotional investment, even beyond the point of it being untenable in the face of the data: witness the discussion over whether the Silverpit structure in the North Sea is impact or salt related [1].

I'm aware of some work that has been done into analysing concept uncertainty and its impact on seismic interpretation [2], but none of that addresses the topic in relation to the specifics of static or dynamic geomodels. It was once said by my colleague Dave Hardy that uncertainty isn't commonly considered part of the 3D modeling process, even though everyone knows they should be doing something about it [3]. This is hopefully changing, but
uncertainty modelling continues to be applied largely within the confines of what the software tools easily allow – within a single geological concept. I predict that until software can impartially explore concept-space for us, multiple scenario assessment with a worthwhile number of scenarios won't be commonly done, simply because it's too expensive [4].

[1] UK impact crater debate heats up. http://news.bbc.co.uk/2/hi/science/nature/6503543.stm
[2] Bond, C.E., Gibbs, A.D., Shipton, Z.K. and Jones, S. (2007) What do you think this is? "Conceptual uncertainty" in geoscience interpretation. GSA Today 17. http://www.gsajournals.org/archive/1052-5173/17/11/pdf/i1052-5173-17-11-4.pdf
[3] Top Down or Bottoms Up? Getting a Grip On Reservoirs. http://www.aapg.org/explorer/2006/10oct/uncertainty.cfm
[4] I've been wrong before.

Yannick Boisseau
Senior production geologist – Chief Reservoir Modeller

Stochastic realisations only sample a space of uncertainty as defined by the geologist or geomodeller. A common misconception is that by generating multiple realisations, one gets to know what the uncertainties are and what the true P50 is. There is no true P50: there is reality on one side and models on the other. Depending on the concepts you apply during your modelling and the uncertainties you assume on input parameters, you get a range of possible outcomes. The main bias (but maybe not the main risk) is probably the choice of a concept, as Dan presents in his case study. The same rules apply with probabilistic modelling as would apply with deterministic modelling: trash in, trash out – what you don't put in, you don't get out.

I also often hear the sentence "let's keep it simple initially", or "simple is better". I tend to approach this a different way: first try to estimate how complicated it might be, and then simplify to what really matters for your case. Dealing with heavy oil or gas will make a huge difference to how the same rock type will behave to flow and what net pay you might want to consider.

The main benefit for me in uncertainty modelling tools is that, assuming a particular concept for the reservoir architecture, you can assess how your pay criteria, STOIIP or recoverable oil are sensitive to different ranges of input parameters (different porosity means and standard deviations, rock type proportions, and so on). This should be applied to as many concepts as you see relevant (compartmentalisation or not, as an example). This helps you decide where your risk lies and where you need to get more data.
Mark Whelan
Development Geologist

Concept design as expressed here is what Design of Experiments is about. I think the modelling community really needs to do a better job of convincing senior management and peers that a set of concepts or experiments does a much better job at capturing ranges of uncertainty than a host of multiple realizations does. Start off by defining the realistic end members of the scenario/concept and work towards generating that all-elusive P50, and the rest will follow. The dependent variable in an experimental design, whether it be an in-place or recoverable variable, is arrived at by making realistic assumptions about the constituent variables, and that should not be forgotten; so geological knowledge and empirical values/analogues are a great way to start and can always be defended.

Gun control? Let's have more of it!

"A man goes into a restaurant, sits down and starts reading the menu. The menu says: Broiled Accountant $5.95 per plate; Fried Engineer $7.95 per plate; Toasted Teacher $7.95 per plate; Grilled Geologist $25.95 per plate. The man calls a waiter over and asks, 'Hey, why does the Grilled Geologist cost so much more?' The waiter says, 'Are you kidding? Do you know how hard it is to clean one of them?!' ...I'll get my coat!"

Dan O'Meara
Chief Advisor, Reserves at Landmark and Owner, O'Meara Consulting, Inc

What is the best way to expend money and effort in assessing uncertainty in our reservoir models? Isn't this the crucial question? For those of us who are mathematically inclined, we find solace in probability theory, geostatistics, experimental design, and other such constructions. But at the risk of appearing a troglodyte, I would like to explore what they really buy us in terms of addressing true uncertainties. As Yannick points out, our current tools help us to assess the uncertainties related to varying relevant parameters for a given conceptual model. But that's what I mean by "playing dice in a small room". The real action seems to be in conceptual models. So, isn't money better spent on exploring a wide range of conceptual models than on exhaustively quantifying uncertainties associated with a "small room"?

Ah, but who would be responsible for exploring these concepts? Let's imagine that you are attending a conference on uncertainty in reservoir models. Who would you expect to see in the room? Who would be giving the presentations? Can you see their faces? Do you have their names in your head? Okay, well how many of them are mathematical types – well known geostatisticians, experimental designers, or simulation experts? I imagine most of them.

Now imagine that you are put in charge of understanding uncertainties on a mega-development project that can make or break your company. What kinds
of people would you want on your team? From my experience with real reservoirs, I tend to think of marshalling together an interdisciplinary team where the team members knock around lots of concepts about what might explain all that is known about the reservoir of interest. The uncertainties come from analyzing various conceptual models that honor both physics and data, with special attention to outliers. Instead of investigating multiple, equiprobable realizations, I think of the teams as investigating multiple scenarios that span, as Mark says, "realistic end members". In other words, I don't think of a team of uncertainty experts, but a team that stimulates right-brained activity aimed at generating a range of concepts.

Now, once you think of teams and batting around concepts, you've got to think of the psychology of it all, as Robert mentions. When I think of the purported uncertainty experts, I think of left-brained thinkers who are highly analytical and highly structured. But when I think of the folks that I'd like on a team that is looking at uncertainty, I'd like to see a balance of out-of-the-box thinkers as well. Mark says that the modelling community needs to do a better job of convincing senior management of the benefits of a "set of concepts" rather than running multiple realizations. He is right. But I doubt that he will make much headway with a frontal assault. Think for a moment of the managers you know. Would you characterize them as highly analytical and structured, or as out-of-the-box thinkers? I suggest that most middle managers are the former. So, when middle managers get together with mathematical folks, it is a marriage made in heaven. Any talk of a trial separation or divorce is likely to get very messy. Fortunately, studies have shown that as you go up the management tree, you will find more right-brained thinking. Please join the discussion and tell me whether I am missing the boat on this. Because if I am not, then one has to wonder whether the resources expended on addressing uncertainty issues are misplaced.

Dan O'Meara
Chief Advisor, Reserves at Landmark and Owner, O'Meara Consulting, Inc

Let's go back to the concrete example that I raised earlier. I postulated a very reasonable model that is consistent with all data but that has fifteen reservoir compartments as opposed to one. Experimental design would certainly help us to narrow down the number of variations of models that need to be studied in order to get a handle on the response surfaces. But we hear the term "probabilistic reservoir modeling". Well, help me out. How do we assign probabilities? In understatement, Mark refers to "that all elusive P50". Yannick is more direct when he says, "There is no true P50". Can we assign a P50 in this case? Are the various possibilities of anywhere between one and fifteen compartments equally probable? Do we want to exclude the fifteen compartment case from consideration?
If there really is a consensus amongst practicing geomodelers that "there is no true P50", then management does not seem to be hearing it.

Steve Flew
Technical Advisor at Schlumberger

To add some 'customer' perspective to this too: as a reservoir engineer with a geomodelling background, it's galling to see the lengths subsurface teams go to in force-fitting observations to a single model, deluding themselves that 'this is the one'. There is often an apparent breakdown of inter-discipline communication when it comes to possible scenarios, and often this is made worse in fields with production history, somewhat ironically.

Dan's comments about right- and left-brainisms match my experience. If you get an enlightened asset manager, they both understand and support this type of data gathering – try justifying drilling a well off structure or downdip in many companies and you'll be met with a brick wall. However, find a way to explain the true range of OIP/reserves and the impact that observation will have, through some form of VOI exercise. This requires a few things to be in place:

- Realistic input ranges (and no, those points aren't outliers to be discarded)
- Realistic alternative structural/depositional scenarios
- Acceptance of (some) production/pressure trends that don't fit the 'conventional' model
- An ability of the geoscientist/team to convey these issues in an unbiased and straightforward manner for the 'in-the-box' thinkers, highlighting the impacts these could have on any development planning decision.

Sadly, whilst we can bemoan in-the-box managers, I suspect much of the issue lies with our own disciplines – constrained by what some computer scientist has coded up in software, or by our own background, limited experience or organisation.

So, to comment on Dan's P50 consensus statement: in my experience, even if we portray a P50, there are others who MISUNDERSTAND what that actually means, thinking that this is actually a far higher confidence number. I've started to present ranges in meetings, focusing on the high confidence number, but have found more often than not that the number people (and not just management) remember is the mythical P50, despite insistence that it means half the time we won't achieve this! Be interested to hear others' experiences!

"I think there's a world market for about 5 computers." – Thomas Watson (founder of IBM)

Guillaume Caumon
Associate Prof at the School of Geology, Nancy Université
We have to be very careful with some software vendors and some so-called geostatisticians who claim uncertainty is assessed just by running a bunch of stochastic simulations. In that respect, yes, we need to go beyond this simple dice-rolling in a small room. However, the point is not to choose between some objective, automatic uncertainty assessment and some scenario-based, subjective uncertainty assessment. Uncertainty is all about the lack of information, so trying to assess subsurface uncertainties just amounts to telling what you don't know from what you know. Therefore, uncertainty assessment is ALWAYS subjective, and this subjectivity may originate from the scenario, from the underlying assumptions of some mathematical/statistical framework, from the particular software/algorithms you are using, or from all of these. In one case the subjectivity is yours; in another case it comes from others (e.g. data independence, multivariate normal assumptions or i.i.d. in stats methods, two-point statistics in classical geostats, etc.). The problem with software is that the underlying assumptions are not always explicitly stated by software vendors, and are not always properly understood by software users, hence the impression that just experience and one's own subjectivity is preferable, and easier to discuss (or defend) in an asset team.

Still, software and modeling have proved extremely valuable at integrating data in a consistent manner from a deterministic standpoint, and are an invaluable companion of the human brain when it comes to processing large amounts of information. Then why should we drop software when we want to consider uncertainty? Or should we just make one deterministic model for each scenario? I don't think so. The problem of dimensionality and the cost of exploring the space of uncertainty (Robert's point) can hardly be addressed by just a few scenarios. Huge biases may appear in such a method. This holds especially if the model is simplified from the start – see Yannick's point (e.g., if we have reduced the number of faults to generate a grid more easily).

In my opinion, the only approach to sample this space is, as for deterministic modeling, to use software, and to understand the underlying parameters and the uncertainty modeling rationale behind them: the oil and gas industry is high tech and relies on skilled engineers and geoscientists; hopefully they can spend some time learning new technology. For now, we can already benefit from a mixture of discrete deterministic models and automatic perturbations of these models to sample the space of uncertainty and make relevant decisions from these samples; see for instance refs 1-6. For tomorrow:

- Researchers and software vendors should urgently explore new ways to stochastically generate realistic models other than by perturbing some reference model. Ideally, these models should help answer the question at hand and
depend on the subsurface complexity, and should be updated in some inversion loops (e.g., history matching).
- We will not avoid biiiig supercomputers to tackle this, because our subsurface modeling problem is ill-posed, and a huge number of models should be considered.

Guillaume Caumon
Associate Prof at the School of Geology, Nancy Université

Refs:
1. Massonnat, G.J. (2000) Can we sample the complete geological uncertainty space in reservoir-modeling uncertainty estimates? Paper SPE 59801, SPE Journal 5(1):46-59.
2. Hollund, K., Mostad, P., Nielsen, B.F., Holden, L., Gjerde, J., Contursi, M.G., McCann, A.J., Townsend, C. and Sverdrup, E. (2002) Havana – a fault modeling tool. In: A.G. Koestler and R. Hunsdale (eds.), Hydrocarbon Seal Quantification, Norwegian Petroleum Society Conference, vol. 11, NPF Spec. Pub., Elsevier Science, Amsterdam.
3. Caumon, G., Strebelle, S., Caers, J.K. and Journel, A.G. (2004) Assessment of Global Uncertainty for Early Appraisal of Hydrocarbon Fields. SPE Annual Technical Conference and Exhibition (Houston), SPE 89943.
4. Caumon, G., Tertois, A.-L. and Zhang, L. (2007) Elements for Stochastic Structural Perturbation of Stratigraphic Models. Proc. Petroleum Geostatistics 2007, EAGE, A02, 4p.
5. Suzuki, S., Caumon, G. and Caers, J. (2008) Dynamic data integration for structural modeling: model screening approach using a distance-based model parameterization. Computational Geosciences 12(1):105-119.
6. Scheidt, C. and Caers, J. (2008) Representing Spatial Uncertainty Using Distances and Kernels. Math. Geosciences, in press.

Dan O'Meara
Chief Advisor, Reserves at Landmark and Owner, O'Meara Consulting, Inc

I appreciate Guillaume Caumon's remarks, especially about the subjectivity of uncertainty analysis. Let me begin by exploring two of his statements: his concern about the "huge biases" that are inherent in a scenario approach, and his hope for
the future, that the industry will learn to "stochastically generate realistic models" that are not merely perturbations on a "reference model".

No doubt you've heard (especially in this election year) people complain about the biases of the media. Well, I am always surprised to think that anyone would expect to find unbiased reporting anywhere. That's not a complaint. It's just a matter of recognizing that human beings are inherently biased. We bring our prejudices to the analysis of politics, religion, and (indeed) reservoir modeling – three topics, by the way, that a gentleman or lady would never deign to bring up in polite company over dinner. That's why I began this discussion by saying that probabilistic geological modeling is an issue like "gun control". Discussions of it can become quite heated, especially when basic tenets of the faith are questioned.

Have you ever talked with a conspiracy theorist? There are people who believe that the world is controlled by a vast right-wing or left-wing conspiracy. Talking with them is like talking to a guy who thinks he is the King of England. No doubt, you can come up with a lot of counterarguments. But, ultimately, all that you say will be used as proof of the great international conspiracy that exists to prevent the poor fellow from taking his rightful place on the throne. Well, I find that discussions about geostatistics can have this flavor about them. There are those who see the world through Gaussian distribution functions, variograms, and equiprobable realizations. And there are those of us, the great unwashed, who do not share the faith, who are not believers in the one true religion.

Biases are part and parcel of the human condition. Just as with television or the newspapers, I expect biases and, in fact, embrace them. When it comes to understanding uncertainty in reservoirs, I admit that I am biased. I am biased towards accepting recommendations of teams that have a balance of right-brained as well as left-brained members, who think "out of the box" when it comes to understanding uncertainty in reservoirs.

Now, I would argue that people who put a lot of faith in stochastic models also betray "huge biases", even though many of them are reluctant to think they do so. So, when I read Steve Flew's comment that "it's galling to see the lengths subsurface teams go to force fitting observations to a single model", I thought, "Amen, brother". I think of multiple, equiprobable realizations on the same geological model as being "a single model". Guillaume decries the use of "just a few scenarios". Well, I would argue that staying in the typical geostatistical modeling box constitutes little more than one scenario.

Go back to the situation that I posed where we have two reservoir models that are consistent with all existing data, with one model having fifteen compartments and the other having one. As I've argued before, it's not a matter of probability but possibility that is important. If the fifteen compartment case is possible (it fits the data and is physically realistic), then it ought to be considered within an uncertainty analysis of the reservoir. And, if its possibility has costly
consequences, then we should be driven in the direction of seeking information, such as well test data or fault seal analysis, that might foreclose or make highly unlikely its possibility. How would you stochastically deal with these models in order to assign probabilities? How would you come up with that "all elusive P50"? How would you "stochastically generate realistic models" like this?

2. 3D Reservoir Uncertainty Modeling – workflows, products & benefits

In recent years the quantification, understanding and management of subsurface uncertainties has become increasingly important for oil and gas companies as they strive to optimise reserve portfolios, make better field development decisions and improve day-to-day technical operations such as well planning.

Although the use of realistic 3D models in reservoir management is becoming standard practice, 3D modelling is seldom used directly for uncertainty management. This is partly related to a lack of procedures for the implementation of such studies, and partly related to a lack of high quality software to support the work processes necessary to model and explore subsurface uncertainty in 3D. The uncertainty module in Roxar's IRAP RMS reservoir modelling software has been developed to fill this gap and allow the application of 3D modelling tools in uncertainty quantification and management.
Figure 1: Seed variation and parameter variation. Realisations of channel architecture generated using a stochastic (object) model. The "Seed variation" realisations have the same geometrical input parameters. The "Parameter variation" realisations have variable channel volume fractions, channel widths and azimuths.

The central concept in the IRAP RMS uncertainty module is to provide users with a tool to set up 3D modelling workflows and analyse the results, where the input parameter values to the component modelling jobs (operations) are varied in a controlled manner. This is in contrast to working with multiple realisations, where input parameters are kept constant and random seeds are changed to generate a variety of different reservoir models.

This difference is illustrated in Figure 1, using a sand-filled fluvial channel facies model. When the geometrical input parameters are kept constant and only the random seed is varied, the resultant realisations are locally different, but are all characterized by a similar overall architecture. From well data and general paleogeographic knowledge it is not possible to know precisely the true channel facies volume fraction, average channel
sandbody width, or average azimuth. It is therefore important to include this lack of precise knowledge in the uncertainty analysis by varying the input parameters. This produces a much wider range of geometries, which more closely represents the true uncertainty in the channel architecture. It is important to incorporate this lack of precise knowledge throughout the modelling process, from velocity modelling to flow simulation, to realistically estimate uncertainty in reservoir volumes and reserves.

3D Uncertainty Modelling Workflow

Figure 2 shows the 3D uncertainty modelling workflow. It is a standard "structure to simulation" 3D modelling workflow, but includes uncertainty distributions for the most important input parameters.
The workflow involves:

1. Setting up input distributions for the 3D model components
2. Generation of multiple realisations of 3D models based on sampled values from the input distributions
3. Volumetric and other analysis based on the multiple 3D models

The uncertainty workflow includes three main stages. The GRV uncertainty is controlled by the structure and the contacts. The HCPV uncertainty is controlled by the internal rock properties (facies and porosity) and fluid properties (SW, Bo, Bg). Reserves uncertainty is defined by connectivity and the dynamic behaviour of the reservoir. (A minimal sampling sketch of this loop follows below.)
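To make the three-step loop concrete, here is a minimal Monte Carlo sketch in Python. This is not the IRAP RMS API: the distributions, the `gross_rock_volume` stand-in and every number are illustrative assumptions, and the full 3D structure and property rebuild is collapsed into a single volumetric function.

```python
import numpy as np

rng = np.random.default_rng(42)

def gross_rock_volume(velocity_gradient, contact_depth):
    """Stand-in for the real structure-to-volume step: flatter structures
    (low velocity gradients) and deeper contacts give larger GRV."""
    base_grv = 500e6  # m3, illustrative
    return base_grv * (1.2 - velocity_gradient) * (contact_depth / 2500.0)

stoiip_samples = []
for _ in range(1000):
    # 1. Sample each input from its uncertainty distribution (all illustrative)
    velocity_gradient = rng.triangular(0.1, 0.2, 0.4)
    contact_depth = rng.uniform(2400.0, 2600.0)   # m TVDSS
    ntg = rng.triangular(0.4, 0.6, 0.8)
    porosity = rng.normal(0.22, 0.02)
    sw = rng.triangular(0.2, 0.3, 0.5)
    bo = rng.uniform(1.2, 1.4)

    # 2. Build the realisation (here reduced to a volume calculation; a 3D
    #    workflow would rebuild structure, grid and properties per sample)
    grv = gross_rock_volume(velocity_gradient, contact_depth)

    # 3. Record the volumetric outcome for this realisation
    stoiip_samples.append(grv * ntg * porosity * (1.0 - sw) / bo)

p10, p50, p90 = np.percentile(stoiip_samples, [10, 50, 90])
print(f"STOIIP P10/P50/P90: {p10:.3e} / {p50:.3e} / {p90:.3e} m3")
```

In a real 3D workflow, step 2 regenerates the structural model, grid and property realisation for each sample; the principle of jointly sampling every input, rather than varying only the random seed, is the same.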
GRV Uncertainty – Structure & Fluid Contacts

Gross rock volume is to a large extent controlled by the structure of the horizons and faults and by the fluid contacts. GRV uncertainty is often the most significant uncertainty for in-place hydrocarbon volumes, and the correct handling of the structure and contacts is often the key to a realistic uncertainty assessment and asset evaluation. An uncertainty model for velocities can be used to generate multiple realisations of the depth structure. Relatively low velocity gradients will produce flatter structures and high velocity gradients will produce steeper dips (Figure 3). This is important for capturing realistic uncertainty in the field volumes, as flat structures are generally associated with larger closures and higher volumes than steep structures.

The structural depth uncertainty also needs to be linked to the fault model. Uncertainty in velocities and depth conversion needs to be handled in a consistent manner for both the depth horizons and the fault surfaces. The modern fault modelling algorithms in IRAP RMS allow the fault model to be rebuilt automatically using velocity models with varying input parameters. The result is multiple realisations of the reservoir structure with consistent depth surfaces, isochores and faults. These consistent structure models are used to generate the 3D grids which are used for internal property modelling, volumetric calculations and flow simulations.

Figure 3: Four depth structure map realizations resulting from uncertainty parameters for velocity modeling. The cross section underneath shows multiple surface outcomes using the same well data, but different velocity gradients.

Uncertainty in fluid contact definition is often the key uncertainty in reserves estimation. Government organisations and oil companies use very specific definitions of contacts for reserves accounting, based on criteria such as Lowest Known Hydrocarbons (LKH), Oil Down To (ODT), Water Up To (WUT), half-way depths, etc. When working with realistic uncertainty analysis, distributions need to be defined for the contact (or Free Water Level, FWL) depths.
Figure 4 illustrates two examples of input distributions for contact uncertainty, with depth on the Y-axis (increasing downwards) and probability on the X-axis. Example (A) is from a structure with a single discovery well. A uniform probability distribution between the LKH and the structural spill point is used to define contact uncertainty. The second example (B) is from a structure with two wells: a discovery well with a full hydrocarbon column and a dry offset appraisal well. A triangular distribution has been used to define contact uncertainty. The minimum depth value is defined by the LKH depth in the discovery well, the maximum value by the WUT depth in the appraisal well, and the mode by the MDT pressure-derived contact using data from both wells.

Other distributions could be used for the examples outlined above. The key is for the asset team (geoscientists, petrophysicists and engineers) to define input distributions which realistically describe the uncertainty in the contact location based on the available data.

One of the main benefits of working in 3D is that intrinsic geological dependencies are incorporated in the uncertainty analysis. Structure and contacts need to be accounted for together. A combination of steep structures (high velocities) and shallow contacts will lead to low GRV, whereas flat structures (low velocities) and deep contacts will lead to high GRV.
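As an illustration, the two contact-uncertainty distributions described above can be sampled as follows in Python; the depths are invented for the example, and in practice every number would come from the asset team's own data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Example (A): single discovery well -- uniform between LKH and spill point
lkh_a, spill_a = 2450.0, 2550.0                   # m TVDSS, illustrative
contacts_a = rng.uniform(lkh_a, spill_a, n)

# Example (B): discovery well plus dry appraisal well -- triangular, with
# the mode at the MDT pressure-derived contact
lkh_b, mdt_contact_b, wut_b = 2450.0, 2490.0, 2530.0
contacts_b = rng.triangular(lkh_b, mdt_contact_b, wut_b, n)

for label, c in (("A (uniform)", contacts_a), ("B (triangular)", contacts_b)):
    p10, p90 = np.percentile(c, [10, 90])
    print(f"{label}: mean {c.mean():.1f} m, P10-P90 {p10:.1f}-{p90:.1f} m")
```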
HCPV Uncertainty – Rock & Fluid Properties

Rock and fluid property uncertainties control the HCPV, STOIIP and GIIP within the structural framework (Figure 2). The rock and fluid properties are many, and include: facies, porosity, permeability, SW, net cut-off, Bo and Bg.

Facies uncertainty includes the volume fraction, geometry and stacking relationships of the different facies types. Volume fractions are important for the in-place volumes, whereas the geometrical parameters are generally more important for connectivity and recoverable hydrocarbon volumes. The implications of the facies geometries can be analysed in 3D using static connectivity measures, streamline calculations or full flow simulation calculations.

Figure 6 illustrates an example of facies distribution uncertainty in a deltaic reservoir penetrated by three exploration wells. The regional paleogeography includes a continental high to the NE and a seaway to the SW. The facies distribution in the wells supports the general paleogeographic picture, with higher proportions of proximal facies to the NE and distal facies to the SW. The progradation direction, however, cannot be known precisely from this information, and a distribution is used to capture this uncertainty. This leads to a set of realisations with different facies architectures and anisotropy.

There are two important sources of petrophysical uncertainty. The first source is related to logging tool measurement, processing and interpretation. The second source is related to the sampling of the wells. With only a few wells it is difficult to know the "average petrophysical" values within the field precisely. This uncertainty is particularly significant if there are significant trends within the field.
Standard spreadsheet-based uncertainty evaluations use distributions for average petrophysical parameters as part of volumetric uncertainty quantification. It is, however, very difficult to know, for example, the average SW in a structure. Average water saturation in a reservoir is dependent on many other parameters, including the structure, the FWL depth and the reservoir properties (porosity and permeability). These dependencies are very difficult to estimate using petrophysical analysis alone, but are automatically incorporated in the uncertainty analysis when working in 3D.

Another main advantage of 3D modelling is that the uncertainties are defined at the individual input level rather than using some arbitrary amalgamated average. Water saturation is typically modelled using some form of saturation height function, where the key parameters are the SWirr values and the parameters of the height functions within transition zones, e.g. the "a" and "b" constants in a power function (Figure 7).

Figure 7: Saturation height functions in the 3D model.
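The exact functional form used in RMS is not given here, so purely as a sketch, one common power-law variant of such a saturation height function looks like the following; SWirr and the "a" and "b" constants are illustrative, not field values.

```python
import numpy as np

def sw_height(h, sw_irr, a, b):
    """One common power-law saturation height function:
    Sw(h) = SWirr + a * h**(-b), clipped to [SWirr, 1],
    where h is height above the free water level in metres."""
    h = np.maximum(h, 1e-6)          # avoid division by zero at the FWL
    sw = sw_irr + a * h ** (-b)
    return np.clip(sw, sw_irr, 1.0)

heights = np.array([1.0, 5.0, 20.0, 50.0])   # m above FWL
print(sw_height(heights, sw_irr=0.15, a=0.6, b=0.8))
# Sw falls from ~0.75 near the FWL towards SWirr high on the structure
```

In an uncertainty workflow, SWirr, a and b would each carry their own input distribution, so the transition-zone shape varies between realisations along with structure and contacts.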
Recovery Factors & Reserves

Conventional reserves accounting used "analogue" fields and basic general engineering considerations to estimate recovery factors. When working with 3D modelling, these considerations can be supplemented by more realistic flow simulation analysis. The IRAP RMS uncertainty module links the geomodel directly to a flow simulator (RMS flowsim) so that realistic flow simulations can be carried out on multiple geomodels. The 3D grids which have been generated from the structure model and used in the property modelling can be used directly in flow simulations, or can be upscaled and used in lower resolution simulations. The grids can be used for a whole variety of simulation purposes, including quantification of realistic recovery factors, design of robust development solutions, and identification of bypassed oil and infill well targets.

There are a number of parameters used in flow simulations which can have a significant impact on production profiles and recovery factors. These include typical "simulation" parameters such as the Corey exponents and the relative permeability end points. Additional parameters such as fault sealing and aquifer dimensions should also be evaluated in realistic reservoir simulation uncertainty evaluations.

Summary

Advances in 3D reservoir modelling technology allow uncertainty quantification workflows to be implemented in 3D instead of being based solely on the use of spreadsheets and direct sampling of input distributions. This brings a wide range of benefits, including:

- The input distributions are defined directly at the level of the modelling components (velocity model, saturation model) instead of amalgamated averages.
- The inputs and outputs are not restricted to functions and histograms.
- Dependencies between input parameters can be treated in a realistic manner.
- 3D grids are created which can be used directly in reservoir simulation and connectivity analyses.
- A wide variety of maps and 3D uncertainty cubes can be generated to quantify spatially varying uncertainty.
- The 3D models and their derivatives can be used directly for well planning and geosteering.

A full version of Alister MacDonald's white paper on uncertainty management can be downloaded from Roxar's website, where you will also find a range of other useful references and background material: http://www.roxar.com/reduceuncertainty
3. Effective porosity vs. total porosity in 3D models?

Laurence Bellenfant
Lead Geologist at Senergy Ltd

"Statistics: The only science that enables different experts using the same figures to draw different conclusions." – Evan Esar

I have only been building 3D models for the last 5 years, applying the same methodology for petrophysical property distribution. My experience is not as broad as most people's in this discussion group, so I thought I would take the opportunity to ask some questions.

I normally distribute effective porosity unless only total porosity is available, in which case I would apply a NTG cut-off. Only recently I realised that effective porosity can be calculated in different ways, which means it doesn't always represent the same thing. When we distribute petrophysical properties in a 3D model, we make assumptions about the petrophysical data we are using. I always thought effective porosity was the way to go, as I thought "effective" meant effective to flow. I have recently realised that the term "effective" means different things depending on your speciality. This is why some modellers prefer to distribute total porosity and apply NTG. By applying the cut-off at the end of the process, errors on what is really the effective porosity would be avoided. I guess the term "effective" will have different meanings whether you are a petrophysicist, a geologist or a reservoir engineer, but still, we pass on that information via the 3D model. My question is then: what should we apply as a methodology?

Mohit Khanna
Chief Geological Advisor at BG-India

Effective porosity, or PHIE as it is commonly known, is usually defined as total porosity minus the porosity due to the shale component in the rock. Now, the definition of shale varies between a petrophysicist and a geologist! That is why it is important to talk to the petrophysicists right upfront and challenge them on their approach, so that we know what we are modelling. If the PHIE is calculated as mentioned before, then there is no NTG used there; hence, some sort of NTG will be required for volumes etc., either as a poro-perm cut-off or a facies cut-off.

Therefore, have a chat with the guys and then proceed. You will realise that in
most cases what you have been doing, i.e. distributing PHIE, is best, and then use a NTG.

Simon Haworth
Geologist at Nexen Petroleum

Laurence! Ça va? It's a very good question. I'll tell you my workflow (purely petrophysical model, not facies based) so you can get even more confused. We use effective porosities at work – so called because 'effective porosity' excludes clay-bound water. Clay-bound water affects the neutron tool measurements and tends to over-predict porosity in shale layers when in actual fact it is very low. Effective porosity therefore has a correction applied during the porosity log calculation to take into account the underlying effects of shale. The 'effective' term therefore means rock which actually has porosity and which is connected. So you can have very low (0-10%) porosities in an effective porosity log, but these porosities may not necessarily flow, hence our requirement at this stage to apply a cut-off (NTG or facies based).

I am only really interested in modelling reservoir rock, so a cut-off helps me to discriminate between net and non-net facies. My engineer/petrophysicist will normally tell me the minimum permeability he expects for fluid flow (normally 1 mD), and this can be related back to a porosity value (roughly 10%, but highly dependent on facies). This can be validated via core analysis and a trip to the core store!

So in my models, I model effective 'net' porosities. I make all values less than my cut-off undefined at the log scale and then scale up, or block, my log to the resolution of my grid. The minimum porosity in my grid will be set to my cut-off (I have to set this manually). I also generate a net flag at the log scale (1,0) and then scale this up to the resolution of my grid, so that my NTG flag becomes a continuous parameter. I model NTG stochastically using trend maps etc. so that people can visualise the distribution of reservoir rock. It's not perfect and there are alternatives to this method. I'd go with whichever one you can get to grips with and, more importantly, explain to the people who eventually use your models. You're not alone!
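A minimal numeric sketch of the blocking Simon describes – flag net rock at log resolution, then average the flag into coarser cells so NTG becomes continuous – assuming a synthetic log, an illustrative cut-off, and a regular cell size for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic effective-porosity log at 15 cm sampling (illustrative)
phie_log = np.clip(rng.normal(0.12, 0.06, 2000), 0.0, 0.35)

poro_cutoff = 0.10                # porosity equivalent of ~1 mD; field-specific
net_flag = (phie_log >= poro_cutoff).astype(float)   # 1/0 at log scale

samples_per_cell = 20             # e.g. 3 m cells from a 15 cm log
ntg_cells = net_flag.reshape(-1, samples_per_cell).mean(axis=1)

# Net porosity upscaled with the net flag as weight, so the net pore
# volume per cell (phie_cells * ntg_cells * cell height) is preserved
phie_net = np.where(net_flag > 0, phie_log, 0.0)
net_count = np.maximum(net_flag.reshape(-1, samples_per_cell).sum(axis=1), 1.0)
phie_cells = phie_net.reshape(-1, samples_per_cell).sum(axis=1) / net_count

print("cell NTG range:", ntg_cells.min(), "-", ntg_cells.max())
```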
Ingrid Demaerschalk
Principal Geologist at BG Group

Not sure I can agree with Simon's method... I certainly wouldn't go around blanking curves. I try to leave the decision of what cut-off to use as late as possible in the modelling process – it saves a lot of work when you want to change the cut-off value afterwards or do some sensitivities. Ideally you want to model at a scale where a cell is either completely net or non-net (using effective porosity). You can then model porosity throughout the model (e.g. stochastically) and apply your net cut-off to the model, resulting in a net flag. This flag can then be upscaled for simulation (if necessary) and used as a bias to upscale net porosity, to maintain volumes correctly. Net to gross is a non-geological parameter that we impose, and in my opinion it should fall out of the model rather than being modelled itself. A trend applied to the porosity model will result in a N/G trend if done correctly.

Noelia Vera
Reservoir Engineer at TAQA Energy

Bonjour Laurence. I agree with Ingrid: NTG is a 2D deterministic concept that we are trying to use in 3D modelling, where it is not really needed. I prefer to consider a whole cell either net or non-net when using effective porosity (which is, by classical definition, a correction for the effect of shales on the neutron tool measurements). After that you can leave the cut-offs for later in the modelling, for upscaling, or to preserve the flow units in your simulation grid.

Simon Haworth
Geologist at Nexen Petroleum

Wow! That generated some feedback. I agree that NTG is deterministic, but we are constrained, in essence, by computing limitations. In theory it is possible to not use a cut-off at all in a geomodel (and therefore move away from the concept of NTG) and let the simulator decide what will flow and what will not. I'd suggest two things: 1) you read the newsletter that was sent out some time ago regarding modelling NTG; 2) you find me a simulator that can simulate a fine-scale multimillion cell model in a reasonable time frame. Cheers.

Simon Haworth
Geologist at Nexen Petroleum

To add a bit more: even when you think your fine-scale model is fine enough, you will never ever capture the true distribution of net vs non-net until your grid cells are at the same resolution as your logs. That's why NTG is used as a substitute at the scale of investigation we are working at.

Ingrid Demaerschalk
Principal Geologist at BG Group
I did start my sentence with "ideally", and of course we have to compromise in a lot of situations. But it comes down to getting the upscaling right (whether you do it in 2 steps, i.e. blocking your logs and then upscaling again to a simulation model, or in 1 step, i.e. blocking straight into a simulation model). Also, simulators can handle a lot more cells than your average RE will admit to, and in fact I am building some models at log scale (15 cm cells) which then get upscaled to a range of 1.5 to 5 m cells. It works, the whole team understands, upscaling is simple, and the models only get slow when trying to do sensitivity runs. Those need to be set running overnight. I have no problem with applying a net cut-off providing it's done correctly and the volume calculation is still consistent.

Ian Taggart
PE at RISC Pty Ltd

Nice simple question... without a simple answer. There is another dimension to this question not yet discussed, and that is the choice of saturation basis, SWE or SWT. For a hydrocarbon in-place and moveable hydrocarbon in-place viewpoint, you can make either a PHIE or a PHIT system work as long as you recognise that the saturation basis is different for each case. If you believe in dual water or simple Archie models then, in general, you are in a total porosity space. Expressed another way, combining PHIE with SWT, or PHIT with SWE, will result in inconsistencies (e.g. the Indonesia equation and PHIT).

There is another aspect that can sometimes generate debate, and that is: oven-dried helium porosity is (close to) PHIT, so if you want to calibrate to core then PHIT has some advantages. As noted, there is NO universal definition of PHIE. The definition of PHIT as pore space connected to helium after oven drying is fairly workable, and relates closely to PHID (density porosity). (Ian Juhasz from Shell was an early advocate of this basis.)

PHIT (vs the many PHIE alternatives) tends to have a smoother distribution and is less affected by the normal-score transforms that seek to correlate data behind the scenes in some geostat packages (e.g. cokriging perm from PHIT tends to behave a little better than perm from PHIE under such conditions).

Bottom line: if you are careful, you can make either system work (and should
get the same HC volumes in either method), but you can't choose your porosity basis independently of saturation, i.e.

PHIE × (1 − SWE) = PHIT × (1 − SWT)

Oh, and by the way, virtually all lab-measured Sw, krw and Pc's are done on a total porosity basis. (In good quality rock where Vcl → 0, the difference is moot.) Upscaling should work in either basis if done correctly. As noted by others, PHIT doesn't distinguish facies particularly well, so the use of facies and/or cut-offs is needed so that hydrocarbons are not placed in the wrong place. It just takes a little more care. Time to put on my flame-proof suit.
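Ian's consistency rule is easy to verify numerically. A minimal check in Python, assuming the standard relationship that clay-bound water is included in total porosity but excluded from effective porosity; all numbers are illustrative.

```python
# Worked check of the rule: PHIE*(1-SWE) == PHIT*(1-SWT).
# Assumed split: PHIE = PHIT - clay-bound water volume, and all
# clay-bound water counts as water on the total-porosity basis.

phit = 0.25           # total porosity
v_cbw = 0.05          # clay-bound water volume fraction (bulk), illustrative
phie = phit - v_cbw   # effective porosity = 0.20

swe = 0.30                         # water saturation on the effective basis
swt = (swe * phie + v_cbw) / phit  # equivalent total-basis saturation = 0.44

hc_effective = phie * (1.0 - swe)  # 0.14
hc_total = phit * (1.0 - swt)      # 0.14 -> same hydrocarbon volume
print(swt, hc_effective, hc_total)
```

Mixing the bases (e.g. PHIE with SWT) in this example would understate the hydrocarbon volume, which is exactly the inconsistency Ian warns about.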
Mark Whelan
Development Geologist

Blimey! Tricky subject... As Ingrid touched upon, it depends who the customer of your model is. If it's the nice RE in your organisation, he or she will take care of it with a N2G flag. I have been modelling PHIT for a while now and my engineers are still talking to me – which is nice! "Effective to flow" is nice to define so long as you have core and rel perm data to work with. The perfect scenario would be having core and PHIT/Krw to work with, so that you could define a cut-off or N2G flag based on permeability, and in this case PHIT would be the way to go. If you don't have core then I would still use PHIT and use analogue data from a nearby field to help you decide on a permeability-based cut-off. Nice question, and one that will have as many answers as there are modellers in the world – or at least in this forum.

Roger Kimber
Senior Development Geologist at Centrica Energy

Good thing to come into work to – gets the grey cells working. I've tried different workflows, at the same time trying to take into account the views of the RE in particular, as he/she is the customer. For me, I have followed a workflow similar to Simon's, but it is also important to consider different cut-offs, as this is in itself a deterministic interpretation. It also brings into question the whole debate about calculating static volumes as gross and leaving the facies model to take out the non-net. Personally, I take a pragmatic view and like to use a NTG parameter, as people understand the concept (or at least we assume they do), particularly managers. The petrophysicist will compute a net:gross, it is a standard parameter for 2D models, and I don't see why we should not model it. At least it can give us some flexibility when we present static volumes, as we can then measure the impact of non-net vs net and the influence of facies.

I also agree that the net cut-off should be permeability based. And this comes to the debate about core – we don't have enough of it, and too often we make decisions based on a log evaluation derived from a suite of logs which may or may not have taken the core fully into account. A perfect example is my current project, where I have the luxury of >3000 ft of core and it is very evident that if you actually touch the core you end up with a totally different mind-set – the logs tell you that it is net, but in reality the heterogeneity is on a fine scale, with much argillaceous content introduced by burrowing.

"The world was created in 4004 BC." – James Ussher (1581-1656, Archbishop of Armagh, who worked out a long-accepted chronology of scripture)

Laurence Bellenfant
Lead Geologist at Senergy Ltd

Thank you very much for your very useful answers. The NTG is one part of the problem/solution, but understanding the origin of my petrophysical data is going to help a lot. Thank you particularly to Ian Taggart for his detailed comment. I actually went to see the cores of my field this week and I feel more confident about the way to tackle the model. I hope you enjoyed the discussion and that it helped... at least it helped me!

Laurence
PS: Salut Noelia et Simon, hope all is well.

Lawrence Itakpe
Geoscientist – Horizon Energy Partners

That's a very important question and issue for discussion regarding reservoir modelling. Ideally I think both PhiT and PhiE are important information/parameters in models; which is more important depends on what you're looking to achieve, but the best practice is to model PhiE. PhiE would define the amount of effective, connected pore space in the matrix,
while the total porosity, PhiT, would define the porosity of the total pore space – connected pores plus isolated pores in the grain matrix. As a result, if you model PhiT you might just be overestimating volumes and hydrocarbon pore space in the rock, and also perm if you apply a log perm vs. poro transform to model perm. Most times both PhiT and PhiE are generated by the petrophysicists, who would always provide both to the geologist. Ideally one would want to normalize one's data and understand their relationship in space with each other before modelling at a 3D scale/resolution – data analysis. But if the data are just modelled and cut-offs applied later, I don't think the data set modelled would have been normalized.

My views about net to gross: I see it as an important parameter in a 3D reservoir model. Net to gross shows the fraction of the reservoir that contains hydrocarbon – net pay sand, net sand – and these, if shown in the form of a map, provide an idea about sand distribution, development and source into the reservoir/basin. This is important in reservoir development and could be applied the same way seismic amplitude attributes are applied. Net to gross is also an important parameter in volumetric computations, because the volumes considered are the volumes in the net sand of the reservoir and not the total sand reservoir package. I agree with Roger Kimber regarding the importance of net to gross.

Ian Taggart raised yet another important issue about water saturation – SWE or SWT, similar to PhiT or PhiE; I think in either case the effective is the important one, but keeping the total in mind. Yes, Ian Taggart, you are right about trying to model saturation using a porosity vs. water saturation transform – this is workable if there is a transform law linking porosity and water saturation, but most times the result using this transform has oil saturation below the contact or in the aquifer. The question is how this could be possible. I think the best practice for saturation modelling is a depth or thickness vs. water saturation transform. This I think would be more realistic and consistent.
In terms of PhiE or PhiT modelling methodology, clients will always prefer their in-house methodology. But in ideal practice I think modelling PhiE is best practice, keeping PhiT in mind as well. Good luck.

Career Networking

Subsurface Global

2008/2009 – 2008 has certainly been a year of change in the oil and gas industry, with oil prices almost reaching $150/barrel before falling back to close to $40. The demand for energy has not, and will not, go away. Subsurface Global's plans for 2009 are to build upon our relationships to provide excellence in delivering ideal candidates to organisations and the ultimate jobs for candidates. If you find yourself wanting to take on a new challenge or considering a move across the world, please contact one of our specialists (future@subsurfaceglobal.com). Our expertise lies in Geosciences, Petroleum and Senior E&P appointments. May I take this time to wish you all a Happy Christmas and a prosperous 2009.
Blueback Reservoir – www.blueback-reservoir.com
Requests for newsletter No. 4

The next newsletter is planned for a March 2009 release, so please send any articles for inclusion to me at the following email address: mitch.sutherland@blueback-reservoir.com. Also, please take advantage of the Geomodeling Network discussion board on LinkedIn to initiate comments on any Geomodeling subject of interest to you – all I ask is that you respect other people's opinions, even if you think they are talking mince!

Finally: ever wonder why you studied Geology? Click below to find out:
http://s65.photobucket.com/albums/h223/fishmato/?action=view&current=AmericanDad.flv

Fin