
Probability Studies of Nuclear Accidents are Flawed - here's why.


Now that the nuclear meltdowns at Fukushima, Japan have occurred, it appears that nuclear accidents happen more frequently than previously estimated. This short report shows examples of previous erroneous estimates and identifies four common flaws in projections of nuclear accident frequency.


  1. Scott D. Portzline, Three Mile Island Alert, Harrisburg, PA. September 2013
  2. If the Nuclear Regulatory Commission used its faulty accident probabilities to run a Las Vegas-style casino, it would have been bankrupt in the first year.
  3. Simply stated: the NRC's probabilities are estimates of an event's likelihood of occurring over a given period of time.
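
To make that concrete, here is a minimal sketch (in Python, with illustrative placeholder numbers, not values from any NRC study) of the basic arithmetic behind "likelihood over a period of time": a fixed per-reactor-year probability compounded across many reactor-years.

```python
# Minimal sketch: compounding a per-reactor-year accident probability
# over a period of operation. All figures are illustrative placeholders,
# not values from any NRC study.

p_per_reactor_year = 1e-4   # hypothetical annual accident probability for one reactor
reactors = 100              # hypothetical fleet size
years = 40                  # hypothetical operating period

reactor_years = reactors * years
# P(at least one accident) = 1 - P(no accident in any reactor-year),
# assuming independence between reactor-years.
p_at_least_one = 1 - (1 - p_per_reactor_year) ** reactor_years
print(f"P(at least one accident over {reactor_years:,} reactor-years) = {p_at_least_one:.2f}")
```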
  4. Las Vegas can bank on its accuracy because there is far more certainty in the mathematics of systems that do not involve breakdowns, human errors, electrical failures, mismanagement, and so on. The NRC, of course, has been unable to calculate those factors with accuracy.
  5. Governments need to know what risks are associated with nuclear power. Therefore, the nuclear industry creates favorable probabilistic risk analyses in order to get the governmental "green light." Here is an example…
  6. The NRC's first probabilistic risk study, the Rasmussen Report, was used to persuade Congress to extend the insurance policy for the nuclear power industry.
  7. The Rasmussen Report gave the odds as 1 chance in 1 million per reactor per year.
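
To see the scale of that claim, a rough sketch: under 1-in-1,000,000 per reactor-year, the world's accumulated operating experience should have produced far less than one severe accident. The worldwide reactor-year figure below is approximate and used only to illustrate the order-of-magnitude gap.

```python
# Rough comparison of the Rasmussen figure against operating history.
# The worldwide reactor-year total is approximate; it is used only to
# illustrate the order-of-magnitude gap, not as a precise statistic.

rasmussen_rate = 1 / 1_000_000   # severe accidents per reactor-year (Rasmussen Report)
world_reactor_years = 15_000     # rough worldwide commercial experience through 2011

expected = rasmussen_rate * world_reactor_years
observed = 5                     # TMI-2, Chernobyl Unit 4, Fukushima Units 1-3

print(f"Expected severe accidents under Rasmussen: {expected:.3f}")
print(f"Observed core-damage accidents:            {observed}")
print(f"Observed-to-expected ratio:                {observed / expected:,.0f}x")
```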
  8. A mathematician will use a probabilistic risk study to say something like, "You are more likely to be struck by a meteor than to have a nuclear accident." (We were told this at Three Mile Island.)
  9. March 28, 1979
  10. Just two months prior to the TMI accident, the NRC was forced to disavow its accreditation of the Rasmussen Report.
  11. However, the NRC could not say whether the odds of an accident were greater or less than Rasmussen had predicted. This amounted to an admission that it did not understand its own study or which side of the fence to fall on.
  12. The biggest problem was that the technical data did not support the conclusions touted in the executive summary. This caused one NRC Commissioner to state that he thought the Rasmussen Report was used as propaganda to persuade Congress.
  13. Then another analysis after TMI concluded: "…about 20 factors contributed to the damaging results at Three Mile Island. If any one of these factors had been different -- in a way which is common in other plants -- there would have been no core damage and no release of radioactivity." (Electric Power Research Institute, March 1980)
  14. In other words, all 20 factors came up against all odds. Vegas would never let that happen.
  15. With long shots like that, Vegas would be "broke" many times over.
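
The long-shot arithmetic can be sketched directly. If the 20 factors cited by EPRI were truly independent, each with even a generous 1-in-10 chance (an illustrative figure, not from the EPRI analysis), their joint probability would be 10^-20. That the accident happened anyway suggests the factors were correlated through common causes, which is exactly what independence assumptions in a PRA miss.

```python
# If the 20 contributing factors at TMI had been independent events,
# each with an illustrative 1-in-10 chance, the joint probability of
# all of them occurring would be astronomically small. The per-factor
# probability here is a placeholder, not from the EPRI analysis.

p_factor = 0.1     # illustrative per-factor probability
n_factors = 20     # factors cited by EPRI for TMI

p_all_if_independent = p_factor ** n_factors
print(f"Joint probability if independent: {p_all_if_independent:.0e}")   # 1e-20

# A casino facing odds like these would never expect to see the event;
# that it occurred points to common-cause correlation among the factors.
```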
  16. After the TMI accident, still another analysis showed that the odds as presented in the Rasmussen Report were accurately predictive of the occurrence of the TMI accident. Only now the odds were described as 1 chance in 7.7 of an accident occurring. (Report of the Technical Assessment Task Force, Vol. 1, Kemeny Commission)
  17. Interpretation of the Rasmussen Report before TMI vs. afterwards
  18. What government would green-light a project with a 13% (1 chance in 7.7) risk of a severe nuclear accident?
  19. As you can see, interpretations of the probabilities for the same incident are "all over the map."
  20. WANTED: MATHEMATICIAN. An aptitude test was given to three applicants at a public relations firm. Question: What does 1/3 mean to you? The purist answered, "0.33333." The theorist answered, "one third." The statistician answered, "What do you want it to say?"
  21. PRAs do yield numerical estimates and are thus "quantitative." Some scientists believe the estimates are so imprecise and subject to manipulation as to be virtually useless in decision making. (Human Factors Reliability Benchmark Exercise, Commission of the European Communities, 1989)
  22. Estimates for the occurrence of "human error" have varied by a factor of ten thousand. (Human Factors Reliability Benchmark Exercise, Commission of the European Communities, 1989) That means that older NRC accident PRAs could be off by four orders of magnitude on the human aspects alone.
  23. A more recent study, the International HRA Empirical Study sponsored by the NRC and completed in 2011, found that, twenty years later, large variability remains. (Science-Based Simulation Model of Human Performance for Human Reliability Analysis, Idaho National Laboratory)
  24. The human contribution to the risk of operating complex technological systems is significant, with typical estimates lying in the range of 60-85%. (Case Study Report on Loss of Safety System Function Events, AEOD/C504, Washington, DC: U.S. Nuclear Regulatory Commission, 1985) Therefore PRAs cannot model this contribution accurately, since human behavior varies greatly, even for the same event.
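
A sketch of why that matters to the bottom line: in a toy model where an accident sequence requires both an equipment failure and an unrecovered human error, sweeping the human error probability (HEP) across the four-orders-of-magnitude spread reported above moves the final estimate by the same four orders. All numbers are illustrative placeholders.

```python
# Toy model: accident sequence = equipment failure AND unrecovered human
# error, assumed independent for simplicity. Sweeping the human error
# probability (HEP) across four orders of magnitude -- the spread reported
# in the benchmark exercise -- swings the bottom line by the same factor.
# All numbers are illustrative placeholders, not from any NRC PRA.

p_equipment = 1e-3                      # hypothetical equipment failure probability
hep_estimates = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1]

for hep in hep_estimates:
    p_sequence = p_equipment * hep
    print(f"HEP = {hep:.0e}  ->  P(accident sequence) = {p_sequence:.0e}")
```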
  25. The conclusions are not always supported by the data.
  26. Some data is cherry-picked while other data is ignored.
  27. The data is applied to faulty modeling.
  28. The processing is incomplete.
  29. All four of these deficits are found in the NRC's latest severe accident study, called SOARCA. They are described in detail, with examples, in my report (http://www.efmr.org/files/2012/SOARCA-2-22-2012.pdf). SOARCA is another example of a probability study gone awry.
  30. The NRC provides false conclusions to Congress. The NRC Commissioners testified that a Fukushima-type accident cannot happen in the US. Flooding by tsunami triggered those accidents. However, one of the NRC's own PRAs shows that there is a 100% chance of a meltdown at a specific US plant (unnamed here for security reasons) if a nearby upstream dam were to fail.
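
In conditional-probability terms, that finding puts a hard floor under the plant's risk, whatever headline number is quoted. A minimal sketch, with a placeholder dam-failure rate:

```python
# If P(meltdown | dam failure) = 1, as the PRA cited above found, then the
# plant's annual meltdown probability is at least the dam's annual failure
# probability. The dam-failure rate below is a placeholder, not a real figure.

p_dam_failure_per_year = 1e-4   # placeholder annual probability of dam failure
p_meltdown_given_dam = 1.0      # per the PRA cited in the slide

floor = p_dam_failure_per_year * p_meltdown_given_dam
print(f"Lower bound on annual meltdown probability: {floor:.0e}")
```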
  31. The confidence of the nuclear industry is falsely bolstered by exaggerated probabilities. Furthermore, this unwarranted confidence morphs into an attitude that an accident is possible, but it won't happen "here" or "any time soon." Mantra: "defense in depth."
  32. These attitudes created the conditions that led to Three Mile Island and Fukushima. Specifically: at TMI, the industry banked on numerous backup systems, which were not available, did not work properly, or were disabled prior to and during the accident. "Defense in depth" did not succeed (see slide 13). At Fukushima, the industry and the government banked on the probabilities of tsunami height. Once the wave exceeded that estimate, the entire chain of defenses beyond it was defeated.
  33. There is a long chain of safety systems with many variables -- too many to quantify accurately in a probabilistic risk analysis. The NRC is able to understand and quantify certain (but not all) shorter segments of the chain. However, the NRC cannot quantify the risks for the entire chain.
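
A sketch of why the whole chain resists quantification even when segments do not: under a defense-in-depth model, an accident requires every barrier in the chain to fail, so the chain estimate is a product of per-barrier failure probabilities. If each barrier's probability is known only to within a factor of 10, the optimistic and pessimistic chain estimates diverge by a factor of 10^n for n barriers. Barrier counts and probabilities below are illustrative.

```python
# Defense-in-depth model: an accident requires every barrier to fail, so
# P(accident) is the product of per-barrier failure probabilities (assuming
# independence). Per-barrier uncertainty compounds multiplicatively across
# the chain. All figures below are illustrative placeholders.

n_barriers = 6                             # hypothetical number of barriers in the chain
p_optimistic, p_pessimistic = 1e-4, 1e-3   # per-barrier estimate, uncertain by 10x

chain_optimistic = p_optimistic ** n_barriers
chain_pessimistic = p_pessimistic ** n_barriers
print(f"Optimistic chain estimate:  {chain_optimistic:.0e}")
print(f"Pessimistic chain estimate: {chain_pessimistic:.0e}")
print(f"Spread: {chain_pessimistic / chain_optimistic:,.0f}x")   # 1,000,000x
```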
  34. Insurance companies and Las Vegas are very good at probabilities. Neither one is in the nuclear game, and for good reason. The NRC should not place heavy reliance upon PRAs in its licensing and safety analyses and processes. Likewise, the NRC should not use PRAs in testimony to governmental bodies until PRAs have a far better track record. (end)
