Con artistry in predictive science
The credibility of mathematical models is no longer an esoteric issue for the few. Rather, it is
important for many of us in modern life, since models provide the only means of predicting the
future, given prior knowledge deduced from historical observations and the current state.
Weather forecasting is one well-known example, addressed below. As our understanding will never
be perfect, errors in predictive science are unavoidable. The most relevant question is thus not
how errors can be extinguished, since they cannot, but how they are quantified in terms of prediction
uncertainty, which is nothing but a supposedly conservative prediction of the magnitude of the
prediction errors. Uncertainty is the 'safety belt of scientific deduction', which turns
unwarranted predictions of the future into plausible, credible ones by adding reasonable doubt to
otherwise precise expectations.
Why bother about uncertainty?
When travelling in harsh environments like oceans and polar regions, the accuracy and precision of
weather forecasting may literally be a question of life and death. Knowing the expected
temperature or wind speed, or the expected core temperature in a nuclear power reactor, is not nearly
enough to succeed, or even to survive. What is often most essential is a credible estimate
of the worst possible scenario. A fair guess of the worst is conventionally found by adding a
conservative amount of uncertainty to the expected value.
A fatal consequence of in-credible uncertainty
Reading about the 1979 Fastnet sailing race disaster (course indicated by the green oval): ”A worse-
than-expected storm on the third day of the race wreaked havoc on over 306 yachts taking part in
the biennial race, resulting in 18 fatalities (15 yachtsmen and 3 rescuers)...became the largest ever
rescue operation in peace-time. This involved some 4,000 people including the entire Irish Naval
Service's fleet, lifeboats, commercial boats, and helicopters.”
With an imperfect weather forecasting model (like most models), the occurrence of the
unexpected is nothing extraordinary, comparable to encountering a black swan, which indeed
exists. The intriguing questions are rather: How much worse, and could such an extreme outcome
have been anticipated from what was known and observed prior to the disaster, utilizing an
appropriate method of uncertainty quantification? How credible is current practice?
Most importantly, how is prior information represented and propagated into a numerical
uncertainty for the prediction of interest, like wind speed or temperature? Saying that the wind
speed tomorrow is expected to be 15 m/s but not to exceed 20 m/s is utterly different from an
expected wind speed of 15 m/s not exceeding 40 m/s, since the latter, but not the former,
would likely have cancelled the race. The forecast uncertainty was apparently set too low:
the Meteorological Office assessed the maximum winds as force 10 on the Beaufort scale; many
race competitors believed the winds to have reached force 11. An alternative forecast of a
maximum wind speed of Beaufort 12 could have prevented the large-scale disaster, unprecedented
of its kind and a game changer for offshore safety. Could such differences possibly be traced to
how the available information is processed, or even made complete by supplementary assumptions?
Conventional art of modeling
Models are normally proposed with mathematical and logical insight based on physical
understanding gained from past observations, following a scientifically well-orchestrated
'learning' process (featured image, left). The model 'prediction' (featured image, right) is
subsequently validated by comparison to future observations, similar to the comparisons of
forecasted and experienced weather we all make on a daily basis. Credible modeling practice also
prescribes evaluating the uncertainty of the inferred model, due to the uncertainty of the
observations used for learning, shown as observation bounds (green). This uncertainty of the best
model results in uncertainty of its prediction, or prediction bounds (red). The reduction from
observation uncertainty to prediction uncertainty, here labeled uncertainty quenching, is often
large.
Model predictions are faulty, or inconsistent with observations, whenever the model error (black
arrows) is large enough for the prediction to fall outside the observation bounds (green, not
displayed above for the prediction). Consistent, or valid, models agree with the uncertain
observations used for learning, but it is by no means guaranteed that their prediction
bounds (red) embrace (black arrows) the hidden truth (blue). Consistent models may thus be con
artists: they are consistent with the observations but may not include the truth as a possibility.
Another peculiarity is that the uncertainty of the prediction is lower than that of the observations
used for learning, i.e. ”the student surpasses the master”. Additional information must have been
supplemented in the learning process to make this happen, but how, and can it be justified?
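The quenching effect can be reproduced in a minimal numerical sketch. This is not the author's method, only a standard ordinary least-squares fit under the very 'white' assumption criticized later in the text: when the observation errors are treated as independent with known standard deviation, the uncertainty of the fitted prediction comes out much narrower than that of the observations themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 'learning' data: a linear truth observed with noise of
# standard deviation sigma, treated as independent ('white') in the fit.
n, sigma = 50, 1.0
x = np.linspace(0.0, 10.0, n)
y = 2.0 + 0.5 * x + rng.normal(0.0, sigma, n)

# Ordinary least squares: y ~ A @ theta with A = [1, x]
A = np.column_stack([np.ones(n), x])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Parameter covariance under the independence assumption:
# cov(theta) = sigma^2 * (A^T A)^(-1)
cov_theta = sigma**2 * np.linalg.inv(A.T @ A)

# Standard deviation of the fitted mean prediction at each x
pred_std = np.sqrt(np.einsum('ij,jk,ik->i', A, cov_theta, A))

# 'Uncertainty quenching': the prediction band is far narrower than
# the observation band -- the student appears to surpass the master.
print(pred_std.max(), sigma)
```

With 50 observations and 2 parameters, the maximum prediction standard deviation is only a fraction of sigma; the independence assumption alone supplies the extra information that makes this shrinkage possible.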
The color of observations
The crucial piece is the information the observations entail. It is clearly important to distinguish
what is known from what is guessed. It is far more complex to appreciate and understand all the
relevant nuggets building up the uncertainty of the observed than its expected or most plausible
outcome. Even more difficult is to assess the statistical dependencies between possible errors, i.e.
how errors at various points may combine statistically. A forecasted uncertainty resulting in, e.g.,
a finite Beaufort 8-12 wind speed interval emanates from the uncertainty of the underlying
weather forecasting model (its governing equations and their parameters), gained in its learning
process. To a considerable extent it results from the stated dependencies of past observations.
Dependencies within data sets, like weather radar observations at different locations, are often
not merely ignored; they are in fact often willfully claimed to be independent. It might appear
harmless to state that observations are independent in the absence of knowledge. However, a
corresponding statement about color in everyday life would be far more astonishing: imagine a
husband telling his spouse after having seen a car for sale, ”I took a test drive of the car
but it was too dark to see its color. I can tell you it is definitely white, with complete certainty,
since I don't know”. The analogy is better than it may seem: signals composed of statistically
independent values are commonly referred to as ”white noise”. A statement of independence, or
white observations, is practical since it simplifies the analysis of the model prediction error and
closes the irritating gap of ambiguity that plagues virtually all fields of science. Nevertheless, it
is the main cause of excessive uncertainty quenching. It leads to overestimated confidence in
modeling accuracy, potentially with consequences as fatal as the Fastnet race disaster or the
Fukushima nuclear power accident (tsunami models). Severe incidents of this kind usually require
several concurrent failures, which is just another way of expressing dependencies.
The importance of color...
Dependencies providing the color of observation errors are often far more important for model
prediction errors than the character of the individual error magnitudes. That is easily
understood by counting the numbers used in mathematical statistics to represent variations and
dependencies. For 10 uncertain quantities there are 10 variances but 10*9/2 = 45 different
pairwise dependencies, a more than four-fold excess. For 100 uncertain quantities, the pairwise
dependencies are in close to 50-fold excess. Dependencies among three sources add no fewer
than another 120 terms for 10 quantities and about 160 000(!) for 100 quantities. This sheer
abundance of dependencies among observation errors usually makes them the dominating
contribution to the predicted uncertainty. Returning to the Fastnet race disaster, differences in
describing dependencies, i.e. the color of the observation errors, may have set the forecasted
maximum wind speed to either Beaufort 10 or 12. To an offshore sailor, the assumptions of this
'color' made by the Meteorological Office may spell the difference between a safe voyage and
wrecking the ship, falling helplessly into a freezing cold ocean.
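The counting argument above is simple combinatorics and can be checked directly. A short sketch, using the binomial coefficient to tally variances, pairwise dependencies, and triple dependencies:

```python
from math import comb

# For n uncertain quantities there are n variances, comb(n, 2) pairwise
# dependencies (covariances) and comb(n, 3) triple dependencies.
for n in (10, 100):
    pairs, triples = comb(n, 2), comb(n, 3)
    print(n, pairs, pairs / n, triples)
# n = 10:  45 pairwise (4.5x the 10 variances), 120 triples
# n = 100: 4950 pairwise (~50x the variances), 161700 triples
```

The dependency terms thus swamp the variance terms as soon as more than a handful of quantities are involved.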
Uncertainty Quenching
Motivated by lack of knowledge, the errors of different numbers, like the temperature at different
positions, are often claimed to be independent. In Bayesian terminology this is said to reflect 'our
state of knowledge' rather than any physically observable 'truth'. The consequences of such claims
are nevertheless absurd and deceiving: if two points of observation are separated by one kilometer
and the predicted wind speed is wrong by 5 m/s at one point, statistically this should have no
definite relation to the error at the adjacent location. That is a truly remarkable statement, since
it entails the claim that typical weather systems are less than 1 km in diameter! On the contrary,
satellite images of weather systems 100 km or larger in size are presented many times a day in
virtually every country around the world. Lack of knowledge does not make an assumption of
independence more plausible than partial or even complete dependence; the number zero
(representing independence) is just as precise a statement as any other number. For this type of
observation it requires substantial mental stamina to ignore all prior knowledge of the typical size
of weather systems, which could conveniently be cast into so-called correlation lengths.
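A correlation length can be made concrete with a small sketch. The numbers here (1 km station spacing, a 100 km correlation length, an exponential correlation model) are illustrative assumptions, not values from the text, but they show how far the 'white' identity matrix sits from any physically informed alternative:

```python
import numpy as np

# Observation points 1 km apart; errors given an assumed exponential
# correlation corr(d) = exp(-d / L), with correlation length L = 100 km
# (a weather-system-like scale -- an illustrative assumption).
positions = np.arange(0.0, 500.0, 1.0)              # km
d = np.abs(positions[:, None] - positions[None, :])  # pairwise distances
L = 100.0
corr = np.exp(-d / L)

# Errors 1 km apart are then almost fully dependent, not independent:
print(corr[0, 1])  # exp(-1/100), about 0.99

# The 'white' assumption instead replaces this matrix with the identity:
white = np.eye(len(positions))
```

Claiming whiteness amounts to asserting, with complete precision, that every off-diagonal entry of this matrix is exactly zero.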
In the learning process, model results are compared to observations. Models fundamentally
exist(!) to describe observed dependencies. Independent observation errors stand in utter contrast
to what models are capable of describing. This dichotomy between the dependencies expressed by
models and the common assumptions about observations is the main cause of excessive uncertainty
quenching, as can be explained by how the different pieces of information, the model structure
and the observations, are fused.
The innovation of the model
The model equations constitute deterministic prior information, comparable to, but different
from, the statistical prior information utilized in Bayesian methods. How much the model structure
reduces, or quenches, the uncertainty from observations to predictions is determined by the
innovation of the model. It reflects the dissimilarity between what the model equations are capable
of and the actual observations used for learning. Models are devised to describe contexts and
patterns, as they originate from observations of such. They cannot describe, or represent,
independent short-range variations. Believing in any model while claiming independence of the
observation errors is therefore inconsistent: it maximizes the innovation of the model, resulting in
the largest possible quenching of uncertainty. The sole reason the quenching is not infinite(!), i.e.
zero prediction uncertainty, is the sampling variance of the observations in excess of the modeling
degrees of freedom. Combining such extraordinarily disparate pieces of information is indeed a
very powerful method for finding a model superior to its observations. However, if something
appears too good to be true, it probably is.
Several of the claims leading to almost perfect models with excessive uncertainty quenching are
indeed troublesome. Statements of independence reflect willful ignorance, not lack of knowledge.
Truly ignoring knowledge because of its intractable complexity would respect, instead of
exterminate, the ambiguity, and thus qualify as justifiable ignorance. Utilizing only one specific set
of model equations likewise implies complete certainty about the principal behavior, until proven
wrong. Can such strong statements ever be validated? A humbler alternative would be to use as
many of the available models simultaneously as possible. That would add structural uncertainty
to the parametric uncertainty of each model discussed so far. All these suggested adjustments of
current practice lower the innovation of the model, with the intent to enhance the credibility of its
predictions. It is quite plausible that the innovation of most models relying upon whiteness of
observations is boldly exaggerated, leading to excessive uncertainty quenching and consequently
failing predictions. Models may therefore be rejected as a consequence of faulty assumptions, not
necessarily because they are incapable of making credible, useful predictions.
The confidence trick
Thus, the ubiquitous confidence trick of predictive science is as follows: motivated by a
deceivingly honest declaration of 'lack or state of knowledge', it is claimed that the possible
observation errors are statistically independent, or 'white'. That maximizes the innovation of any
model and consequently leads to excessive uncertainty quenching, well beyond reason in most
situations; ergo, predictions always hit the 'bullseye' of an imaginary truth, with virtually no
spread (uncertainty). In retrospect, however, predictions almost surely turn out to be faulty when
compared to sufficiently accurate new observations, since the model has been found consistent
with a limited set of past observations but is likely not consistent with the unknown truth (see
featured image).
Uncertain Models
This discussion highlights that it is not enough to just evaluate the uncertainty; the method, as
well as all the claims made in quantifying it, must be mutually consistent and credible. Targeting a
certain model and assessing its uncertainty according to the current state of the art is to a large
extent based on hypothetical assumptions about the truth and blunt claims about the nature, or
color, of the observations.
It may be better to do the opposite, i.e. pragmatically target an uncertain model and express as
much certainty about its constitution as our knowledge allows, leaving a residual ambiguity. The
uncertain model is then utilized as the best possible representation of the available information,
rather than as a description of any 'truth' calling for justification. The uncertain observations then
'correspond to an uncertain model', instead of 'implying uncertainty of a certain model'. The
residual ambiguity should be respected and represented by the uncertain model to the extent
possible: not knowing does not imply anything specific whatsoever.
The novel approach of deterministic sampling is a powerful enabler of the concept of uncertain
models, which elegantly manages ambiguity. A manuscript tentatively entitled 'Uncertain models'
is currently being prepared, targeting publication in a scientific journal dedicated to uncertainty
quantification. It will include a definition of the concept of uncertain models, suggest applicable
methods and practical tools of deterministic sampling, and contain comparisons to conventional
modeling practice together with a simple example.
For more information and to provide feedback, please contact peter@kapernicus.com.
Illustrations
Images and graphs are provided by Kapernicus Inc. (featured image) and Wikimedia Commons:
Weather map – Koninklijk Nederlands Meteorologisch Instituut (KNMI, Royal Netherlands
Meteorological Institute) ([1] via Weerkaarten archief Europa), with inset – Ian Kirk from
Broadstone, Dorset, UK (2013 Rolex Fastnet Race ESP 123 Tales II) [CC BY 2.0
(http://creativecommons.org/licenses/by/2.0)], and blue 'bullseye' – www.accurateairmo.com.

More Related Content

Similar to Con artistry in predictive science

Summary of Chapter 5In chapter five of The Signal and the No
Summary of Chapter 5In chapter five of The Signal and the NoSummary of Chapter 5In chapter five of The Signal and the No
Summary of Chapter 5In chapter five of The Signal and the Nodaniatrappit
 
Are catastrophe models able to capture extreme events?
Are catastrophe models able to capture extreme events?Are catastrophe models able to capture extreme events?
Are catastrophe models able to capture extreme events?Barbara Bisanti
 
Probability ppt by Shivansh J.
Probability ppt by Shivansh J.Probability ppt by Shivansh J.
Probability ppt by Shivansh J.shivujagga
 
Traveling Essay. Travelling Experience Essay Example 300 Words - PHDessay.com
Traveling Essay. Travelling Experience Essay Example 300 Words - PHDessay.comTraveling Essay. Travelling Experience Essay Example 300 Words - PHDessay.com
Traveling Essay. Travelling Experience Essay Example 300 Words - PHDessay.comElizabeth Montes
 
https://www.scoop.it/topic/soft-computin/p/4142616393/2023/04/12/internationa...
https://www.scoop.it/topic/soft-computin/p/4142616393/2023/04/12/internationa...https://www.scoop.it/topic/soft-computin/p/4142616393/2023/04/12/internationa...
https://www.scoop.it/topic/soft-computin/p/4142616393/2023/04/12/internationa...ijscai
 
APPLICATION OF MATRIX PROFILE TECHNIQUES TO DETECT INSIGHTFUL DISCORDS IN CLI...
APPLICATION OF MATRIX PROFILE TECHNIQUES TO DETECT INSIGHTFUL DISCORDS IN CLI...APPLICATION OF MATRIX PROFILE TECHNIQUES TO DETECT INSIGHTFUL DISCORDS IN CLI...
APPLICATION OF MATRIX PROFILE TECHNIQUES TO DETECT INSIGHTFUL DISCORDS IN CLI...ijscai
 
Electromagnetic foundations of solar radiation collection
Electromagnetic foundations of solar radiation collectionElectromagnetic foundations of solar radiation collection
Electromagnetic foundations of solar radiation collectionFernanda Valerio
 
Confounding and Directed Acyclic Graphs
Confounding and Directed Acyclic GraphsConfounding and Directed Acyclic Graphs
Confounding and Directed Acyclic GraphsDarren L Dahly PhD
 
Evolution terriskmod woo_journalre
Evolution terriskmod woo_journalreEvolution terriskmod woo_journalre
Evolution terriskmod woo_journalredacooil
 
Evolution terriskmod woo_journalre
Evolution terriskmod woo_journalreEvolution terriskmod woo_journalre
Evolution terriskmod woo_journalredacooil
 
Scanned by CamScannerS.docx
Scanned by CamScannerS.docxScanned by CamScannerS.docx
Scanned by CamScannerS.docxanhlodge
 
modelinginterpretingeconomics.pdf
modelinginterpretingeconomics.pdfmodelinginterpretingeconomics.pdf
modelinginterpretingeconomics.pdfTamash Majumdar
 
stouffl_hyo13rapport
stouffl_hyo13rapportstouffl_hyo13rapport
stouffl_hyo13rapportLoïc Stouff
 
rophecy,_reality_and_uncertainty_in_distributed_hydrological_modelling.pdf
rophecy,_reality_and_uncertainty_in_distributed_hydrological_modelling.pdfrophecy,_reality_and_uncertainty_in_distributed_hydrological_modelling.pdf
rophecy,_reality_and_uncertainty_in_distributed_hydrological_modelling.pdfDiego Lopez
 
Class 12th Physics Investigatory Project for CBSE on ERRORS
Class 12th Physics Investigatory Project for CBSE on ERRORSClass 12th Physics Investigatory Project for CBSE on ERRORS
Class 12th Physics Investigatory Project for CBSE on ERRORSRanjan Lohia
 
INVESTIGATION AND EVALUATION OF SCINTILLATION PREDICTION MODELS AT OTA
INVESTIGATION AND EVALUATION OF SCINTILLATION PREDICTION MODELS AT OTAINVESTIGATION AND EVALUATION OF SCINTILLATION PREDICTION MODELS AT OTA
INVESTIGATION AND EVALUATION OF SCINTILLATION PREDICTION MODELS AT OTAIAEME Publication
 

Similar to Con artistry in predictive science (20)

Summary of Chapter 5In chapter five of The Signal and the No
Summary of Chapter 5In chapter five of The Signal and the NoSummary of Chapter 5In chapter five of The Signal and the No
Summary of Chapter 5In chapter five of The Signal and the No
 
Are catastrophe models able to capture extreme events?
Are catastrophe models able to capture extreme events?Are catastrophe models able to capture extreme events?
Are catastrophe models able to capture extreme events?
 
Probability ppt by Shivansh J.
Probability ppt by Shivansh J.Probability ppt by Shivansh J.
Probability ppt by Shivansh J.
 
H7-SafetyHFE.pdf
H7-SafetyHFE.pdfH7-SafetyHFE.pdf
H7-SafetyHFE.pdf
 
Traveling Essay. Travelling Experience Essay Example 300 Words - PHDessay.com
Traveling Essay. Travelling Experience Essay Example 300 Words - PHDessay.comTraveling Essay. Travelling Experience Essay Example 300 Words - PHDessay.com
Traveling Essay. Travelling Experience Essay Example 300 Words - PHDessay.com
 
https://www.scoop.it/topic/soft-computin/p/4142616393/2023/04/12/internationa...
https://www.scoop.it/topic/soft-computin/p/4142616393/2023/04/12/internationa...https://www.scoop.it/topic/soft-computin/p/4142616393/2023/04/12/internationa...
https://www.scoop.it/topic/soft-computin/p/4142616393/2023/04/12/internationa...
 
APPLICATION OF MATRIX PROFILE TECHNIQUES TO DETECT INSIGHTFUL DISCORDS IN CLI...
APPLICATION OF MATRIX PROFILE TECHNIQUES TO DETECT INSIGHTFUL DISCORDS IN CLI...APPLICATION OF MATRIX PROFILE TECHNIQUES TO DETECT INSIGHTFUL DISCORDS IN CLI...
APPLICATION OF MATRIX PROFILE TECHNIQUES TO DETECT INSIGHTFUL DISCORDS IN CLI...
 
Electromagnetic foundations of solar radiation collection
Electromagnetic foundations of solar radiation collectionElectromagnetic foundations of solar radiation collection
Electromagnetic foundations of solar radiation collection
 
Confounding and Directed Acyclic Graphs
Confounding and Directed Acyclic GraphsConfounding and Directed Acyclic Graphs
Confounding and Directed Acyclic Graphs
 
Evolution terriskmod woo_journalre
Evolution terriskmod woo_journalreEvolution terriskmod woo_journalre
Evolution terriskmod woo_journalre
 
Evolution terriskmod woo_journalre
Evolution terriskmod woo_journalreEvolution terriskmod woo_journalre
Evolution terriskmod woo_journalre
 
Scanned by CamScannerS.docx
Scanned by CamScannerS.docxScanned by CamScannerS.docx
Scanned by CamScannerS.docx
 
modelinginterpretingeconomics.pdf
modelinginterpretingeconomics.pdfmodelinginterpretingeconomics.pdf
modelinginterpretingeconomics.pdf
 
stouffl_hyo13rapport
stouffl_hyo13rapportstouffl_hyo13rapport
stouffl_hyo13rapport
 
rophecy,_reality_and_uncertainty_in_distributed_hydrological_modelling.pdf
rophecy,_reality_and_uncertainty_in_distributed_hydrological_modelling.pdfrophecy,_reality_and_uncertainty_in_distributed_hydrological_modelling.pdf
rophecy,_reality_and_uncertainty_in_distributed_hydrological_modelling.pdf
 
Butterfly paper
Butterfly paperButterfly paper
Butterfly paper
 
Class 12th Physics Investigatory Project for CBSE on ERRORS
Class 12th Physics Investigatory Project for CBSE on ERRORSClass 12th Physics Investigatory Project for CBSE on ERRORS
Class 12th Physics Investigatory Project for CBSE on ERRORS
 
INVESTIGATION AND EVALUATION OF SCINTILLATION PREDICTION MODELS AT OTA
INVESTIGATION AND EVALUATION OF SCINTILLATION PREDICTION MODELS AT OTAINVESTIGATION AND EVALUATION OF SCINTILLATION PREDICTION MODELS AT OTA
INVESTIGATION AND EVALUATION OF SCINTILLATION PREDICTION MODELS AT OTA
 
Errors
ErrorsErrors
Errors
 
Essays On Nuclear Power
Essays On Nuclear PowerEssays On Nuclear Power
Essays On Nuclear Power
 

Recently uploaded

Biogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune Waterworlds
Biogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune WaterworldsBiogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune Waterworlds
Biogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune WaterworldsSérgio Sacani
 
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCRStunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCRDelhi Call girls
 
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...Sérgio Sacani
 
Seismic Method Estimate velocity from seismic data.pptx
Seismic Method Estimate velocity from seismic  data.pptxSeismic Method Estimate velocity from seismic  data.pptx
Seismic Method Estimate velocity from seismic data.pptxAlMamun560346
 
❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.
❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.
❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.Nitya salvi
 
Pulmonary drug delivery system M.pharm -2nd sem P'ceutics
Pulmonary drug delivery system M.pharm -2nd sem P'ceuticsPulmonary drug delivery system M.pharm -2nd sem P'ceutics
Pulmonary drug delivery system M.pharm -2nd sem P'ceuticssakshisoni2385
 
Biopesticide (2).pptx .This slides helps to know the different types of biop...
Biopesticide (2).pptx  .This slides helps to know the different types of biop...Biopesticide (2).pptx  .This slides helps to know the different types of biop...
Biopesticide (2).pptx .This slides helps to know the different types of biop...RohitNehra6
 
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 bAsymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 bSérgio Sacani
 
Animal Communication- Auditory and Visual.pptx
Animal Communication- Auditory and Visual.pptxAnimal Communication- Auditory and Visual.pptx
Animal Communication- Auditory and Visual.pptxUmerFayaz5
 
Hubble Asteroid Hunter III. Physical properties of newly found asteroids
Hubble Asteroid Hunter III. Physical properties of newly found asteroidsHubble Asteroid Hunter III. Physical properties of newly found asteroids
Hubble Asteroid Hunter III. Physical properties of newly found asteroidsSérgio Sacani
 
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...Sérgio Sacani
 
Bacterial Identification and Classifications
Bacterial Identification and ClassificationsBacterial Identification and Classifications
Bacterial Identification and ClassificationsAreesha Ahmad
 
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43bNightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43bSérgio Sacani
 
Hire 💕 9907093804 Hooghly Call Girls Service Call Girls Agency
Hire 💕 9907093804 Hooghly Call Girls Service Call Girls AgencyHire 💕 9907093804 Hooghly Call Girls Service Call Girls Agency
Hire 💕 9907093804 Hooghly Call Girls Service Call Girls AgencySheetal Arora
 
Botany krishna series 2nd semester Only Mcq type questions
Botany krishna series 2nd semester Only Mcq type questionsBotany krishna series 2nd semester Only Mcq type questions
Botany krishna series 2nd semester Only Mcq type questionsSumit Kumar yadav
 
Formation of low mass protostars and their circumstellar disks
Formation of low mass protostars and their circumstellar disksFormation of low mass protostars and their circumstellar disks
Formation of low mass protostars and their circumstellar disksSérgio Sacani
 
Biological Classification BioHack (3).pdf
Biological Classification BioHack (3).pdfBiological Classification BioHack (3).pdf
Biological Classification BioHack (3).pdfmuntazimhurra
 
Pests of mustard_Identification_Management_Dr.UPR.pdf
Pests of mustard_Identification_Management_Dr.UPR.pdfPests of mustard_Identification_Management_Dr.UPR.pdf
Pests of mustard_Identification_Management_Dr.UPR.pdfPirithiRaju
 
Chemical Tests; flame test, positive and negative ions test Edexcel Internati...
Chemical Tests; flame test, positive and negative ions test Edexcel Internati...Chemical Tests; flame test, positive and negative ions test Edexcel Internati...
Chemical Tests; flame test, positive and negative ions test Edexcel Internati...ssuser79fe74
 
High Class Escorts in Hyderabad ₹7.5k Pick Up & Drop With Cash Payment 969456...
High Class Escorts in Hyderabad ₹7.5k Pick Up & Drop With Cash Payment 969456...High Class Escorts in Hyderabad ₹7.5k Pick Up & Drop With Cash Payment 969456...
High Class Escorts in Hyderabad ₹7.5k Pick Up & Drop With Cash Payment 969456...chandars293
 

Recently uploaded (20)

Biogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune Waterworlds
Biogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune WaterworldsBiogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune Waterworlds
Biogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune Waterworlds
 
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCRStunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
 
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
 
Seismic Method Estimate velocity from seismic data.pptx
Seismic Method Estimate velocity from seismic  data.pptxSeismic Method Estimate velocity from seismic  data.pptx
Seismic Method Estimate velocity from seismic data.pptx
 
❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.
❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.
❤Jammu Kashmir Call Girls 8617697112 Personal Whatsapp Number 💦✅.
 
Pulmonary drug delivery system M.pharm -2nd sem P'ceutics
Con artistry in predictive science

The credibility of mathematical models is no longer an esoteric issue for the few. Rather, it matters to many of us in modern life, since models provide the only means of predicting the future, given prior knowledge deduced from historical observations and the current state. Weather forecasting is one well-known example, addressed below. As our understanding will never be perfect, errors in predictive science are unavoidable. The most relevant question is thus not how errors can be eliminated, since they cannot, but how they are quantified in terms of prediction uncertainty, which is nothing but a supposedly conservative prediction of the magnitude of prediction errors. Uncertainty is the 'safety belt of scientific deduction': it transforms unwarranted expectations into plausible, credible predictions of the future by adding reasonable doubt to otherwise overly precise claims.

Why bother about uncertainty?

When travelling in harsh environments like oceans and polar regions, the accuracy and precision of weather forecasting may literally be a question of life and death. Knowing the expected temperature or wind speed, or the expected core temperature of a nuclear power reactor, is not nearly enough to succeed, or even survive. What is often most essential is a credible estimate of the worst possible scenario. A fair guess of the worst is conventionally found by adding a conservative amount of uncertainty to the expected value.
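The conventional recipe of adding a conservative margin to the expectation can be sketched in a few lines of Python. All numbers below (expected wind speed, standard uncertainty, coverage factor) are hypothetical illustration values chosen only to mirror the forecasting example, not real forecast data:

```python
# Conservative worst-case estimate: expectation plus k standard uncertainties.
# All numbers are hypothetical illustration values, not real forecast data.
expected = 15.0  # expected wind speed (m/s)
sigma = 2.5      # standard uncertainty of the forecast (m/s)
k = 2.0          # coverage factor; a larger k gives a more conservative bound

worst_case = expected + k * sigma
print(f"worst-case wind speed: {worst_case} m/s")  # 20.0 m/s
```

The entire question discussed in this article is how credible the chosen sigma (and implicitly k) is, since the worst-case bound is only as good as the uncertainty fed into it.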
A fatal consequence of in-credible uncertainty

Reading about the 1979 Fastnet sailing race disaster (course indicated by the green oval): "A worse-than-expected storm on the third day of the race wreaked havoc on over 306 yachts taking part in the biennial race, resulting in 18 fatalities (15 yachtsmen and 3 rescuers)... [it] became the largest ever rescue operation in peace-time. This involved some 4,000 people including the entire Irish Naval Service's fleet, lifeboats, commercial boats, and helicopters."

With an imperfect weather forecasting model (like most models), the occurrence of the unexpected is nothing extraordinary; it is comparable to encountering a black swan, which does indeed exist. The intriguing questions are rather: How much worse could it get, and could such an extreme outcome have been anticipated from what was known and observed prior to the disaster, using an appropriate method of uncertainty quantification? How credible is current practice? Most importantly, how is prior information represented and propagated to a number expressing the uncertainty of our prediction of interest, like wind speed or temperature? Saying that the wind speed tomorrow is expected to be 15 m/s but not exceeding 20 m/s is utterly different from an expected wind speed of 15 m/s but not exceeding 40 m/s, since the latter but not the former would likely have cancelled the race. The weather forecast uncertainty was apparently set too low: the Meteorological Office assessed the maximum winds as force 10 on the Beaufort scale; many race competitors believed the winds to have reached force 11. An alternative forecast of a maximum wind speed of Beaufort 12 could have prevented the large-scale disaster, unprecedented in its kind and a game changer for offshore safety. Could such differences possibly be traced to how available information is processed, or even made complete by supplementary assumptions?
Conventional art of modeling

Models are normally proposed with mathematical and logical insight based on physical understanding gained from past observations, following a rather well scientifically orchestrated 'learning' (featured image, left) process. The model 'prediction' (featured image, right) is subsequently validated by comparison to future observations, similar to the comparisons of forecasted and experienced weather we all make on a daily basis. Credible modeling practice also prescribes evaluation of the uncertainty of the inferred model, due to the uncertainty of the observations used for learning, shown as observation bounds (green). This uncertainty of the best model results in uncertainty of its prediction, or prediction bounds (red). The reduction from observation uncertainty to prediction uncertainty, here labeled uncertainty quenching, is often large.

Model predictions are faulty, or inconsistent with observations, whenever the model error (black arrows) is large enough for the prediction to fall outside the observation bounds (green, not displayed above for the prediction). Consistent, or valid, models agree with the uncertain observations used for learning, but by no means is it guaranteed that their prediction bounds (red) embrace (black arrows) the hidden truth (blue). Consistent models may thus be con artists: they are consistent with observations but may not include the truth as a possibility. Another peculiarity is that the uncertainty of the prediction is lower than that of the observations used for learning, i.e. "the student excels his/her master". Additional information must have been supplemented in the learning process to make this happen, but how, and can it be justified?

The color of observations

The crucial piece is the information observations entail. It is clearly important to distinguish what is known from what is guessed. It is far more complex to appreciate and understand all relevant nuggets building the uncertainty of the observed than the expected or most plausible outcome. Even more difficult is to assign statistical dependencies between possible errors, i.e. how errors at various points may combine statistically. A forecasted uncertainty resulting in, e.g., a finite Beaufort 8-12 wind speed interval emanates from the uncertainty of the underlying weather forecasting model (governing equations and their parameters) gained in its learning process. To a considerable extent it results from stated dependencies of past observations. Dependencies within data sets like weather radar observations at different locations are often not merely ignored; they are in fact often willfully claimed independent. It might appear harmless to state that observations are independent in the absence of knowledge. However, a corresponding statement of color in everyday life would be much more astonishing. Imagine a husband telling his spouse after having seen a car for sale: "I took a test drive of the car but it was too dark to see its color. I can tell you it is definitely white, with complete certainty, since I don't know." The analogy is better than it may seem: signals composed of statistically independent values are commonly referred to as "white noise".
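The "student excels the master" effect is easy to reproduce. In this minimal sketch (hypothetical numbers), a one-parameter model, a constant level, is learned from n independent observations; the textbook result under the independence assumption is that the uncertainty of the learned level is the observation uncertainty divided by the square root of n:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_obs = 2.0  # standard uncertainty of each observation
n = 100          # number of observations used for learning

# 'Learning': fit a one-parameter model (a constant level) to white-noise data.
observations = 10.0 + rng.normal(0.0, sigma_obs, size=n)
level = observations.mean()

# Under the independence ('white') assumption, the uncertainty of the learned
# level is quenched by a factor sqrt(n) relative to a single observation.
sigma_pred = sigma_obs / np.sqrt(n)
print(sigma_obs, sigma_pred)  # 2.0 0.2
```

With 100 observations the prediction uncertainty drops to a tenth of the observation uncertainty; as argued below, this dramatic quench hinges entirely on the whiteness claim.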
A statement of independence, or white observations, is practical since it simplifies analysis of the model prediction error and closes the irritating gap of ambiguity defied in virtually all fields of science. Nevertheless, it is the main cause of excessive uncertainty quenching. It leads to overestimated confidence in modeling accuracy, potentially with consequences as fatal as the Fastnet race disaster and the Fukushima nuclear power accident (tsunami models). Severe incidents of this kind usually require several concurrent failures, which is just another way to express dependencies.

The importance of color...

Dependencies providing the color of observation errors are often far more important for model prediction errors than the character of individual variations of the error magnitude. That is easily understood by counting the numbers used in mathematical statistics to represent variations and dependencies. For 10 uncertain quantities there are 10 variations and 10*9/2 = 45 different pairwise dependencies, which thus outnumber the variations more than 4-fold. For 100 uncertain quantities, pairwise dependencies (4,950) are in close to 50-fold excess. For dependencies between three sources, the counts add up to no less than 120 and about 160,000(!) for 10 and 100 quantities, respectively. The abundance of dependencies of observation errors usually makes them the dominating contribution to the predicted uncertainty. Returning to the Fastnet race disaster, differences in describing dependencies, or the color of observation errors, may have set the forecasted maximum wind speed to either Beaufort 10 or 12. To an offshore sailor, assumptions about this 'color' made by the Meteorological Office may spell the difference between a safe voyage and wrecking the ship and falling helplessly into a freezing cold ocean.

Uncertainty Quenching

Motivated by lack of knowledge, the errors of different numbers, like temperature at different positions, are often claimed independent.
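The dependency counts quoted under 'The importance of color' are plain binomial coefficients and can be checked directly with the Python standard library:

```python
from math import comb

# Count pairwise and three-way dependencies among n uncertain quantities.
for n in (10, 100):
    pairs = comb(n, 2)    # number of distinct pairs: n*(n-1)/2
    triples = comb(n, 3)  # number of distinct triples: n*(n-1)*(n-2)/6
    print(n, pairs, triples)
# 10  -> 45 pairwise, 120 three-way
# 100 -> 4950 pairwise, 161700 three-way
```

The dependency terms thus grow combinatorially with the number of quantities, while the individual variations grow only linearly, which is the point of the excess argument.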
In Bayesian terminology this is said to reflect 'our state of knowledge', rather than any physically observable 'truth'. The consequences of such claims are nevertheless ridiculous and deceiving: if two points of observation are separated by one kilometer and the predicted wind speed is wrong by 5 m/s at one point, statistically this should have no definite relation to the associated error at the adjacent location. That is a truly amazing statement, since it entails knowledge that average weather systems are less than 1 km in diameter! On the contrary, satellite images of weather systems of size 100 km or larger are presented many times a day in virtually all countries around the world. Lack of knowledge does not make assumptions of independence more plausible than partial or even complete dependence. The number zero (representing independence) is just as precise as any other number. For this type of observation it requires substantial mental stamina to ignore all prior knowledge of the typical size of weather systems, which conveniently could be cast into so-called correlation lengths.

In the learning process, model results are compared to observations. Models fundamentally exist(!) to describe observed dependencies. Independent observation errors stand in utter contrast to what models are capable of. This dichotomy between the dependency expressed by models and common assumptions about observations is the main cause of excessive uncertainty quenching. It can be explained by how the different pieces of information, the model structure and the observations, are fused.

The innovation of the model

The model equations constitute deterministic prior information, comparable but not identical to the statistical prior information utilized in Bayesian methods. How much the model structure reduces, or quenches, the uncertainty from observations to predictions is determined by the innovation of the model. It reflects the dissimilarity between what the model equations are capable of and the actual observations used for learning. Models are devised to describe contexts and patterns, as they originate from observations of such.
They cannot describe, or represent, independent short-range variations. Believing in any model while claiming independence of observation errors is therefore inconsistent. It maximizes the innovation of the model, resulting in the largest possible quench of uncertainty. The sole reason why the quenching is not infinite(!), i.e. zero prediction uncertainty, is the sampling variance of observations in excess of the modeling degrees of freedom. Combining such extraordinarily disparate pieces of information is indeed a very powerful method to find a model superior to the observations. However, if something appears too amazing to be true, it probably is not. Several claims leading to almost perfect models with excessive uncertainty quenching are indeed troublesome. Statements of independence reflect willful ignorance, not lack of knowledge. Truly ignoring knowledge due to its intractable complexity would respect, instead of exterminate, ambiguity, and thus qualify as justifiable ignorance. Utilizing only one specific set of model equations also implies complete certainty about the principal behavior, until proven wrong. Can such strong statements ever be validated? A more humble alternative would be to use as many different available models simultaneously as possible. That would add structural uncertainty to the parametric uncertainty of each model discussed so far. All these suggested adjustments of current practice lower the innovation of the model, with the intent to enhance the credibility of its predictions. It is quite plausible that the innovation of most models relying upon whiteness of observations is boldly exaggerated, leading to excessive uncertainty quenching and consequently failing predictions. Models may therefore be rejected as a consequence of faulty assumptions, not necessarily because they are incapable of making credible, useful predictions.
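How color limits quenching can be seen in the classic formula for the uncertainty of a mean of n equally correlated observations: Var(mean) = sigma^2 * (1 + (n-1)*rho) / n, where rho is the pairwise correlation. With rho = 0 (white errors) the variance is quenched n-fold; with rho > 0 the quench largely disappears. A short sketch with hypothetical numbers:

```python
def sigma_of_mean(sigma, n, rho):
    """Standard uncertainty of the mean of n observations, each with standard
    uncertainty sigma and pairwise correlation rho (equicorrelated model)."""
    return sigma * ((1.0 + (n - 1) * rho) / n) ** 0.5

sigma, n = 2.0, 100
print(sigma_of_mean(sigma, n, 0.0))  # white errors: 0.2 -- strong quenching
print(sigma_of_mean(sigma, n, 0.5))  # colored errors: ~1.42 -- quenching mostly gone
```

A whiteness claim (rho = 0) thus yields a ten-fold reduction in this example, while a modest correlation of 0.5 leaves the uncertainty of the mean close to that of a single observation.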
The confidence trick

Thus, the ubiquitous confidence trick of predictive science is as follows: motivated by a deceivingly honest declaration of 'lack or state of knowledge', it is claimed that possible observation errors are statistically independent, or 'white'. That maximizes the innovation of any model and consequently leads to excessive uncertainty quenching, well beyond reason in most situations; ergo, predictions always hit the 'bullseye' of an imaginary truth, with virtually no spread (uncertainty). In retrospect, however, predictions almost surely turn out to be faulty when compared to sufficiently accurate new observations, since the model has been found consistent with a limited set of past observations but is likely not consistent with the unknown truth (see featured image).

Uncertain Models

This discussion highlights that it is not enough to just evaluate the uncertainty; the method as well as all claims used for quantifying this uncertainty must be mutually consistent and credible. Targeting a certain model and assessing its uncertainty according to the current state of the art is to a large extent based on hypothetical assumptions about the truth and blunt claims about the nature, or color, of observations. It may be better to do the opposite, i.e. pragmatically target an uncertain model and express as much certainty about its constitution as our knowledge allows, leaving a residual ambiguity. This suggests the uncertain model is utilized as the best possible representation of available information, rather than describing any 'truth' calling for its justification. The uncertain observations then 'correspond to an uncertain model', instead of 'implying uncertainty of a certain model'. The residual ambiguity should be respected and represented by the uncertain model to the extent possible: not knowing does not imply anything specific whatsoever.
The novel approach of deterministic sampling is a powerful enabler of the unprecedented concept of uncertain models, which elegantly manages ambiguity. A manuscript tentatively entitled 'Uncertain models' is currently being prepared, targeting scientific publication in a journal dedicated to uncertainty quantification. It will include a definition of the concept of uncertain models, suggest applicable methods and practical tools of deterministic sampling, and contain comparisons to conventional modeling practice as well as a simple example. For more information and to provide feedback, please contact peter@kapernicus.com.

Illustrations

Images and graphs are provided by Kapernicus Inc. (featured image) and Wikimedia Commons: weather map – Koninklijk Nederlands Meteorologisch Instituut (KNMI, Royal Netherlands Meteorological Institute) ([1] via Weerkaarten archief Europa), with inset – Ian Kirk from Broadstone, Dorset, UK (2013 Rolex Fastnet Race ESP 123 Tales II) [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], and blue 'bullseye' – www.accurateairmo.com.