  1. 1. Well Log Data Inversion Using Radial Basis Function Network<br />Kou-Yuan Huang, Li-Sheng Weng<br />Department of Computer Science<br />National Chiao Tung University<br />Hsinchu, Taiwan<br />kyhuang@cs.nctu.edu.tw<br />and<br />Liang-Chi Shen<br />Department of Electrical & Computer Engineering<br />University of Houston<br />Houston, TX<br />
  2. 2. Outline<br /><ul><li>Introduction
  3. 3. Proposed Methods
  4. 4. Modification of two-layer RBF
  5. 5. Proposed three-layer RBF
  6. 6. Experiments
  7. 7. Simulation using two-layer RBF
  8. 8. Simulation using three-layer RBF
  9. 9. Application to real well log data inversion
  10. 10. Conclusions and Discussion</li></li></ul><li>Real well log data: Apparent conductivity vs. depth<br />
  11. 11. Inversion to get the true layer effect?<br />
  12. 12. Review of well log data inversion<br />Lin, Gianzero, and Strickland used the least squares technique, 1984.<br />Dyos used maximum entropy, 1987.<br />Martin, Chen, Hagiwara, Strickland, Gianzero, and Hagan used a 2-layer neural network, 2001. <br />Goswami, Mydur, Wu, and Heliot used a robust technique, 2004.<br />Huang, Shen, and Chen used a higher order perceptron, IEEE IGARSS, 2008.<br />
  13. 13. Review of RBF<br /><ul><li>Powell, 1985, proposed RBF for multivariate interpolation.
  14. 14. Hush and Horne, 1993, used an RBF network for functional approximation.
  15. 15. Haykin, 2009, summarized RBF in his Neural Networks book.</li></li></ul><li>Conventional two-layer RBF (Hush and Horne, 1993)<br />
  16. 16. Training in conventional two-layer RBF<br />
  17. 17. Properties of RBF<br />RBF is a supervised training model.<br />The 1st layer used the K-means clustering algorithm to determine the K nodes.<br />The activation function of the 2nd layer was linear: f(s) = s, f'(s) = 1.<br />The 2nd layer used the Widrow-Hoff learning rule. <br />
  18. 18. Output of the 1st layer of RBF<br /><ul><li>Get mean & variance of each cluster from K-means clustering algorithm.
  19. 19. Cluster number K is pre-assigned.
  20. 20. Variance
  21. 21. Output of the 1st layer: response of Gaussian basis function</li></ul>o_i = exp( −(x − m_i)^T (x − m_i) / (2σ_i^2) )<br /><br />
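As a concrete illustration, the first-layer responses can be computed directly from the K-means statistics. This is a minimal NumPy sketch, not the authors' code; the function and argument names (`rbf_layer1`, `means`, `sigmas`) are our own.

```python
import numpy as np

def rbf_layer1(X, means, sigmas):
    """First-layer outputs: Gaussian response of each input to each cluster.

    X: (n, d) input patterns; means: (K, d) cluster means from K-means;
    sigmas: (K,) per-cluster standard deviations.
    Implements o_i = exp(-(x - m_i)^T (x - m_i) / (2 sigma_i^2)).
    """
    # Squared distance of every input to every cluster mean, shape (n, K)
    d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigmas[None, :] ** 2))
```

At x = m_i the response is exactly 1 and it decays with distance at a rate set by σ_i.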
  22. 22. Training in the 2nd layer<br />Widrow-Hoff learning rule.<br />Error function:<br />E = (1/2) Σ_{j=1}^{J} (d_j − o_j)^2<br />Use the gradient descent method to adjust the weights:<br />Δw_ji(t) = w_ji(t+1) − w_ji(t) = −η ∂E/∂w_ji<br />= η (d_j − o_j) f_j'(s_j) o_i = η (d_j − o_j) o_i,<br />since f(s) = s and f'(s) = 1.<br /><br />
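The Widrow-Hoff update above can be sketched in a few lines. This is an illustrative sketch under our own naming (`widrow_hoff_step`), not the presentation's implementation.

```python
import numpy as np

def widrow_hoff_step(W, o_in, d, eta=0.6):
    """One Widrow-Hoff step for a linear output layer.

    W: (J, I) weights; o_in: (I,) first-layer outputs o_i;
    d: (J,) desired outputs. With f(s) = s we have f'(s) = 1,
    so delta w_ji = eta * (d_j - o_j) * o_i.
    """
    o_out = W @ o_in                        # linear activation: o_j = s_j
    err = d - o_out                         # (d_j - o_j)
    W_new = W + eta * np.outer(err, o_in)   # gradient descent update
    return W_new, o_out
```

Repeating this step drives the squared error E toward a minimum for a fixed training pattern.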
  23. 23. Outline<br /><ul><li>Introduction
  24. 24. Proposed Methods
  25. 25. Modification of two-layer RBF
  26. 26. Proposed three-layer RBF
  27. 27. Experiments
  28. 28. Simulation using two-layer RBF
  29. 29. Simulation using three-layer RBF
  30. 30. Application to real well log data inversion
  31. 31. Conclusions and Discussion</li></li></ul><li>Modification of two-layer RBF<br />
  32. 32. Training in modified two-layer RBF<br />
  33. 33. Optimal number of nodes in the 1st layer<br /><ul><li>We use K-means clustering algorithm & Pseudo F-Statistics (PFS) (Vogel and Wong, 1979) to determine the optimal number of nodes in the 1st layer.
  34. 34. PFS:<br />PFS = [tr S_B / (K−1)] / [tr S_W / (n−K)] = [tr S_B · (n−K)] / [tr S_W · (K−1)]</li></ul>n is the pattern number; K is the cluster number; S_B and S_W are the between-cluster and within-cluster scatter matrices.<br /><ul><li>Select the K at which PFS is maximum. That K becomes the node number in the 1st layer.</li></ul><br />
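A minimal sketch of the PFS computation for one clustering, assuming labels have already been produced by K-means; the function name `pseudo_f` is ours. The scatter traces are computed from squared distances, which equal tr S_B and tr S_W.

```python
import numpy as np

def pseudo_f(X, labels):
    """Pseudo F-Statistic: [tr S_B / (K-1)] / [tr S_W / (n-K)].

    X: (n, d) patterns; labels: (n,) cluster index per pattern.
    """
    n = len(X)
    ks = np.unique(labels)
    K = len(ks)
    gmean = X.mean(axis=0)
    # tr S_B: between-cluster scatter (cluster means vs. grand mean)
    tr_sb = sum((labels == k).sum() *
                ((X[labels == k].mean(axis=0) - gmean) ** 2).sum()
                for k in ks)
    # tr S_W: within-cluster scatter (patterns vs. their cluster mean)
    tr_sw = sum(((X[labels == k] - X[labels == k].mean(axis=0)) ** 2).sum()
                for k in ks)
    return (tr_sb / (K - 1)) / (tr_sw / (n - K))
```

Running K-means for a range of K and keeping the K with the largest PFS gives the node number for the 1st layer.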
  35. 35. Perceptron training in the 2nd layer<br />Activation function at the 2nd layer: sigmoidal<br />o_j = f(s_j) = 1 / (1 + e^{−s_j})<br />Error function:<br />E = (1/2) Σ_{j=1}^{J} (d_j − o_j)^2<br />Delta learning rule (Rumelhart, Hinton, and Williams, 1986): use the gradient descent method to adjust the weights<br />Δw_ji(t) = w_ji(t+1) − w_ji(t) = −η ∂E/∂w_ji = η (d_j − o_j) f_j'(s_j) o_i<br /><br />
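With the sigmoid, f'(s_j) = o_j (1 − o_j), so the delta-rule step can be written compactly. An illustrative sketch (our naming), not the authors' code:

```python
import numpy as np

def sigmoid_delta_step(W, o_in, d, eta=0.6):
    """One delta-rule step with a sigmoid output layer.

    f(s) = 1/(1 + e^{-s}) gives f'(s_j) = o_j (1 - o_j), so
    delta w_ji = eta * (d_j - o_j) * f'(s_j) * o_i.
    """
    s = W @ o_in
    o = 1.0 / (1.0 + np.exp(-s))
    delta = (d - o) * o * (1.0 - o)        # (d_j - o_j) f'_j(s_j)
    return W + eta * np.outer(delta, o_in), o
```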
  36. 36. Outline<br /><ul><li>Introduction
  37. 37. Proposed Methods
  38. 38. Modification of two-layer RBF
  39. 39. Proposed three-layer RBF
  40. 40. Experiments
  41. 41. Simulation using two-layer RBF
  42. 42. Simulation using three-layer RBF
  43. 43. Application to real well log data inversion
  44. 44. Conclusions and Discussion</li></li></ul><li>Proposed three-layer RBF<br />
  45. 45. Training in proposed three-layer RBF<br />
  46. 46. Generalized delta learning rule (Rumelhart, Hinton, and Williams, 1986)<br />Adjust the weights between the 2nd layer and the 3rd layer:<br />w_kj(t+1) = w_kj(t) + Δw_kj(t)<br />Δw_kj(t) = η (d_k − o_k) f_k'(s_k) o_j = η δ_k o_j<br />δ_k = (d_k − o_k) f_k'(s_k)<br />Adjust the weights between the 1st layer and the 2nd layer:<br />Δw_ji(t) = η [Σ_{k=1}^{K} δ_k w_kj] f_j'(s_j) o_i = η δ_j o_i<br />δ_j = [Σ_{k=1}^{K} δ_k w_kj] f_j'(s_j)<br />Adjust the weights with a momentum term:<br />Δw_kj(t) = η δ_k(t) o_j(t) + β Δw_kj(t−1)<br />Δw_ji(t) = η δ_j(t) o_i(t) + β Δw_ji(t−1)<br /><br />
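The generalized delta rule with momentum, for one training pattern, can be sketched as follows. This is a minimal sketch assuming sigmoid activations in both trainable layers; names (`backprop_momentum_step`, `W2`, `W3`) are ours.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def backprop_momentum_step(W2, W3, o1, d, dW2_prev, dW3_prev,
                           eta=0.6, beta=0.4):
    """One generalized-delta step for the two trainable layers of a
    three-layer RBF (o1 are first-layer Gaussian outputs).

    Returns updated W3, W2, the weight changes (for the next momentum
    term), and the network output o3.
    """
    o2 = sigmoid(W2 @ o1)                       # 2nd-layer outputs o_j
    o3 = sigmoid(W3 @ o2)                       # 3rd-layer outputs o_k
    delta3 = (d - o3) * o3 * (1 - o3)           # delta_k
    delta2 = (W3.T @ delta3) * o2 * (1 - o2)    # delta_j (back-propagated)
    dW3 = eta * np.outer(delta3, o2) + beta * dW3_prev
    dW2 = eta * np.outer(delta2, o1) + beta * dW2_prev
    return W3 + dW3, W2 + dW2, dW3, dW2, o3
```

Iterating this step with η = 0.6 and β = 0.4 (the parameters used later in the experiments) reduces the output error on a fixed pattern.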
  47. 47. Outline<br /><ul><li>Introduction
  48. 48. Proposed Methods
  49. 49. Modification of two-layer RBF
  50. 50. Proposed three-layer RBF
  51. 51. Experiments
  52. 52. Simulation using two-layer RBF
  53. 53. Simulation using three-layer RBF
  54. 54. Application to real well log data inversion
  55. 55. Conclusions and Discussion</li></li></ul><li>Experiments: System flow in simulation<br />True formation resistivity (Rt)<br />Apparent resistivity (Ra)<br />Apparent conductivity (Ca)<br />Re-scale Ct' to Ct<br />Radial basis function network (RBF)<br />Scale Ca to 0~1 (Ca')<br />True formation conductivity (Ct')<br />Desired true formation conductivity (Ct'')<br />
  56. 56. Experiments: on simulated well log data<br />In the simulation, there are 31 well logs.<br />Professor Shen at the University of Houston performed the theoretical calculation.<br />Each well log has the apparent conductivity (Ca) as the input and the true formation conductivity (Ct) as the desired output.<br /> Well logs #1~#25 are for training.<br /> Well logs #26~#31 are for testing.<br />
  57. 57. Simulated well log data: examples <br />Simulated well log data #7 <br />
  58. 58. Simulated well log data #13 <br />
  59. 59. Simulated well log data #26<br />
  60. 60. What is the input data length? Output length?<br /><ul><li>200 records on each well log. 25 well logs for training. 6 well logs for testing.
  61. 61. How many inputs to the RBF are best?</li></ul> Cut the 200 records into segments of 1, 2, 4, 5, 10, 20, 40, 50, 100,<br /> and 200 data, segment by segment, to test the best<br /> input data length for the RBF model.<br /><ul><li>For inversion, the output data length is equal to the input data length in the RBF model.
  62. 62. In testing, input n data to the RBF model to get n output data, then input the n data of the next segment to get the next n output data, repeatedly. </li></li></ul><li>Example of input data length at well log #13<br />If each segment (pattern vector) has 10 data, the 200 records of each well log are cut into 20 segments (pattern vectors).<br />
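The segmentation described above is straightforward to express in code. A minimal sketch with our own function name (`segment_log`):

```python
def segment_log(records, n):
    """Cut one well log into non-overlapping segments of length n.

    records: the samples of one well log (200 here); n must divide
    len(records). Each segment becomes one pattern vector for the RBF.
    """
    assert len(records) % n == 0, "segment length must divide the log length"
    return [records[i:i + n] for i in range(0, len(records), n)]
```

For n = 10, a 200-record log yields 20 pattern vectors, matching the example at well log #13.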
  63. 63. Input data length and # of training patterns from 25 training well logs<br />
  64. 64. Optimal cluster number of training patterns. Example: for input data length 10<br />PFS vs. K. For input N=10, the optimal cluster number K is 27.<br />
  65. 65. Optimal cluster number of training patterns in 10 cases<br />Set up 10 two-layer RBF models. <br />Compare the testing errors of 10 models to select the optimal RBF model.<br />
  66. 66. Experiment: Training in modified two-layer RBF<br />
  67. 67. Parameter setting in the experiment<br />Parameters in RBF training<br /> Learning rate η: 0.6<br /> Momentum coefficient β: 0.4 (in 3-layer RBF)<br /> Maximum iterations: 20,000<br /> Error threshold: 0.002<br />Define the mean absolute error (MAE):<br />MAE = (1/(P·K)) Σ_{p=1}^{P} Σ_{k=1}^{K} |d_pk − o_pk|<br />P is the number of patterns; K is the number of output nodes.<br /><br />
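The MAE above is the average absolute difference over all patterns and output nodes; a one-line NumPy sketch (our naming):

```python
import numpy as np

def mean_absolute_error(D, O):
    """MAE = (1/(P*K)) * sum_p sum_k |d_pk - o_pk|.

    D, O: (P, K) arrays of desired and actual outputs over
    P patterns and K output nodes.
    """
    D, O = np.asarray(D), np.asarray(O)
    return np.abs(D - O).mean()
```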
  68. 68. Testing errors at 2-layer RBF models in simulation<br /><ul><li>The 10-27-10 RBF model gets the smallest error in testing.</li></li></ul><li>Training result: error vs. iteration using 10-27-10 two-layer RBF<br />
  69. 69. Inversion testing using 10-27-10 two-layer RBF<br />Inverted Ct of log #26 by network 10-27-10 (MAE= 0.051753).<br />Inverted Ct of log #27 by network 10-27-10 (MAE= 0.055537).<br />
  70. 70. Inverted Ct of log #28 by network 10-27-10 (MAE= 0.041952).<br />Inverted Ct of log #29 by network 10-27-10 (MAE= 0.040859).<br />
  71. 71. Inverted Ct of log #31 by network 10-27-10 (MAE= 0.050294).<br />Inverted Ct of log #30 by network 10-27-10 (MAE= 0.047587).<br />
  72. 72. Outline<br /><ul><li>Introduction
  73. 73. Proposed Methods
  74. 74. Modification of two-layer RBF
  75. 75. Proposed three-layer RBF
  76. 76. Experiments
  77. 77. Simulation using two-layer RBF
  78. 78. Simulation using three-layer RBF
  79. 79. Application to real well log data inversion
  80. 80. Conclusions and Discussion</li></li></ul><li>Experiment: Training in modified three-layer RBF. Hidden node number?<br />
  81. 81. Determine the number of hidden nodes in the 2-layer perceptron<br /><ul><li>On hidden nodes for neural nets (Mirchandani and Cao, 1989)</li></ul> With H hidden nodes and a d-dimensional input space, the network divides the space into at most M regions:<br />M(H, d) = Σ_{k=0}^{d} C(H, k) = C(H,0) + C(H,1) + … + C(H,d),  where C(H, k) = 0 if H < k<br /> T: number of training patterns.<br /> Each pattern is in one region.<br /> From T ≈ M, we can determine the number of hidden nodes H. <br />
  82. 82. Hidden node number and optimal 3-layer RBF<br /><ul><li>10-27-10 2-layer RBF gets the smallest error in testing. We extend it to 10-27-H-10 in the 3-layer RBF. H=?
  83. 83. For original 10 inputs, the number of training patterns is 500. T=500.
  84. 84. For a 27-H-10 two-layer perceptron, the number of input nodes is 27. </li></ul>When d = 27 and H = 9,<br />M(H, d) = M(9, 27) = C(9,0) + C(9,1) + … + C(9,9)<br />= 2^9 = 512 maximum regions.<br />M = 512 ≈ T (= 500), so we select hidden node number H = 9.<br /><ul><li>Finally, we get 10-27-9-10 as the optimal 3-layer RBF model.</li></ul><br />
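The region-count formula and the T ≈ M selection rule can be checked directly. A sketch with our own function names (`max_regions`, `pick_hidden_nodes`):

```python
from math import comb

def max_regions(H, d):
    """M(H, d) = sum_{k=0}^{d} C(H, k), with C(H, k) = 0 for k > H."""
    return sum(comb(H, k) for k in range(0, d + 1) if k <= H)

def pick_hidden_nodes(T, d, H_max=64):
    """Smallest H whose maximum region count reaches the number of
    training patterns T (the T ~ M heuristic)."""
    for H in range(1, H_max + 1):
        if max_regions(H, d) >= T:
            return H
    return H_max
```

For d = 27 and H = 9, all terms up to C(9, 9) survive, so M = 2^9 = 512, and with T = 500 the rule selects H = 9, as in the slide.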
  85. 85. Training result: error vs. iteration using 10-27-9-10 three-layer RBF<br />
  86. 86. Inversion testing using 10-27-9-10 three-layer RBF<br />Inverted Ct of log 27 by network 10-27-9-10 (MAE= 0.059158)<br />Inverted Ct of log 26 by network 10-27-9-10 (MAE= 0.041526)<br />
  87. 87. Inverted Ct of log 28 by network 10-27-9-10 (MAE= 0.046744)<br />Inverted Ct of log 29 by network 10-27-9-10 (MAE= 0.043017)<br />
  88. 88. Inverted Ct of log 30 by network 10-27-9-10 (MAE= 0.046546)<br />Inverted Ct of log 31 by network 10-27-9-10 (MAE= 0.042763)<br />
  89. 89. Testing error of each well log using 10-27-9-10 three-layer RBF model<br /> Average error: 0.046625<br />
  90. 90. Average testing error of each three-layer RBF model in simulation<br />Experiments using RBFs with different numbers of hidden nodes. <br />10-27-9-10 gets the smallest average error in testing, so it is selected for the real-data application.<br />
  91. 91. Outline<br /><ul><li>Introduction
  92. 92. Proposed Methods
  93. 93. Modification of two-layer RBF
  94. 94. Proposed three-layer RBF
  95. 95. Experiments
  96. 96. Simulation using two-layer RBF
  97. 97. Simulation using three-layer RBF
  98. 98. Application to real well log data inversion
  99. 99. Conclusions and Discussion</li></li></ul><li>Real well log data: Apparent conductivity vs. depth<br />
  100. 100. Application to real well log data inversion<br />Real well log data:<br /><ul><li>Depth from 5,577.5 to 6,772 feet.
  101. 101. Sampling interval 0.5 feet.
  102. 102. In total, 2,290 data points in one well log.
  103. 103. Select the optimal 10-27-9-10 RBF model for real data inversion.</li></ul> After convergence in training, input 10 real data to the RBF model to get 10 output data, then input the 10 data of the next segment to get the next 10 output data, repeatedly. <br />
  104. 104. Inversion of real well log data: Inverted Ct vs. depth<br />
  105. 105. Outline<br /><ul><li>Introduction
  106. 106. Proposed Methods
  107. 107. Modification of two-layer RBF
  108. 108. Proposed three-layer RBF
  109. 109. Experiments
  110. 110. Simulation using two-layer RBF
  111. 111. Simulation using three-layer RBF
  112. 112. Application to real well log data inversion
  113. 113. Conclusions and Discussion</li></li></ul><li>Conclusions and Discussion<br /><ul><li>We modified the 2-layer RBF and proposed a 3-layer RBF for well log data inversion.
  114. 114. The 3-layer RBF gives better inversion than the 2-layer RBF because more layers can perform more nonlinear mapping.</li></ul>In the simulation, the optimal 3-layer model is 10-27-9-10. It gets the smallest average mean absolute error in testing.<br />The trained 10-27-9-10 RBF model is applied to the real well log data inversion. The result is good, showing that the RBF model can work on well log data inversion.<br />Errors differ across experiments because the initial weights of the network differ, but the order or percentage of the errors can be used to compare RBF performance. <br />
  115. 115. Thank you for your attention.<br />
