This document discusses cognitive biases that can affect project decision making and aims to subject these biases to scientific investigation. Biases discussed include ignoring regression to the mean, confirmation bias, the gambler's fallacy, experimenter bias, the framing effect, knowledge bias, normalcy bias, outcome bias, and hindsight bias. False positives, conditional probability, and the inability to predict rare events are also covered. The conclusion is that randomness can only be understood after events occur, not before.
This document discusses various types of perceptual biases and cognitive biases that can influence how people perceive and make attributions about others and themselves. It defines perceptual bias as systematic errors in perceiving others based on sensory inputs. It then outlines several specific biases, including fundamental attribution error, culture bias, actor-observer difference, dispositional attributions, self-serving bias, defensive attribution hypothesis, belief bias, hindsight bias, and selective perception. It emphasizes the importance of being aware of biases, fact-checking assumptions, avoiding stereotypes, and making evaluations based on objective factors in order to overcome biases.
The document discusses how perception affects business communications. It begins by defining perception as using the senses to become aware of something, and communication as conveying information. There are different types of perception including self-perception, environmental perception, learned perception, physical perception, and cultural perception, which are shaped by factors like personality, culture, experiences, and physical senses. Perception affects communication through things like dress, eye contact, tone of voice, and past experiences. Differences in perception based on age, gender, culture, or experiences can negatively impact communication if not properly understood. Overall, the document examines how our perceptions are formed and how varying perceptions between individuals can introduce barriers in effective business communication.
The document discusses perception and personality in organizations. It defines perception as how individuals interpret and make sense of sensory information from their environment. The perceptual process involves selecting stimuli, organizing it, and interpreting it based on attitudes and expectations. When observing behaviors, people use attribution theory to determine whether the cause is internal or personal to the individual, versus external factors. There are also several errors people can make in perception and attribution. The document also defines personality as relatively stable behavioral patterns and internal states that influence how a person interacts with others. It discusses several theories for determining personality types.
Perception is the process of receiving and interpreting sensory information. It involves selecting stimuli, organizing that information, and attributing meaning based on existing knowledge and biases. Perception is influenced by factors within the perceiver like attitudes, motives, and expectations, factors within the target like novelty and size, and situational factors like time and social context. People use perceptual shortcuts like selective perception, halo effects, stereotyping, and projection to make judgments about others.
The document discusses various cognitive biases and heuristics that influence human decision-making, such as the planning fallacy in which people underestimate costs and overestimate benefits, and optimism bias which can motivate action but also lead to false beliefs. It also examines loss aversion bias and how optimism can help protect against the paralyzing effects of fearing losses more than valuing gains. A number of heuristics are explored, including the affect heuristic where emotional reactions can drive behavior over cognitive risk assessments.
Several myths underpin our understanding of decision making. We expose five of these myths and identify ways to improve decisions under conditions of uncertainty.
Cognitive biases are systematic patterns of deviation from rational judgment that occur in human decision making. There are over 188 known types of cognitive biases that fall under categories like decision making biases, social biases, and memory errors. Some examples of cognitive biases include anchoring bias, where people rely too heavily on the first piece of information offered; bandwagon effect, where people do something because others are doing it regardless of their own beliefs; and confirmation bias, where people interpret information in a way that confirms their preexisting beliefs. Understanding cognitive biases is important because they influence how people think and make judgments in ways that can lead to irrational decisions.
This document summarizes key points from Nate Silver's book "The Signal and the Noise" regarding predictive modeling and forecasting. It discusses how Bayesian reasoning using priors can improve predictions by explicitly incorporating uncertainty. Examples are given where rating agencies, economists, and pundits made poor predictions by failing to account for factors outside their models. The document advocates applying Bayesian methods to statistical models to estimate parameters and improve predictions.
This document discusses various logical fallacies, including the "full moon fallacy" which is the tendency to attribute increased accidents, crime rates, etc. to the full moon despite a lack of evidence. It provides examples of arguments committing fallacious reasoning by making unjustified causal links between the full moon and accidents or behaviors. The document warns readers not to fall prey to fallacious thinking and assumptions, but rather to conduct root cause analyses of accidents logically using tools like cause-and-effect diagrams to identify actual causal factors, not assume blame, and ultimately prevent future accidents.
Disasters and Humans (DEMS3706 SU2020, Dr. Eric Kennedy)
AP/DEMS3706 Note Share
Hello everyone! Think of this space as a crowdsourced notebook . . . everyone is welcome to take and share DEMS3706 lecture and reading notes here.
Module One - Rational, Irrational, or Something Else?
Cognitive Biases - Definitions
Bounded Rationality (Tversky, Kahneman)
Representativeness
Availability Bias
Adjustment and Anchoring
Cultural Cognition (Kahan, Braman)
DEMS3706 Lecture #1
DEMS3706 Lecture #2 (Cultural Cognition)
Module Two - Uncertainty & Prediction
Prediction, Cognition and the Brain (Bubic, von Cramon, Schubotz)
“A 30% Chance of Rain Tomorrow”: How Does the Public Understand Probabilistic Weather Forecasts? (Gigerenzer et al.)
Don’t Believe the COVID-19 Models (Tufekci)
Lecture #1
Lecture #2
Module Three - Fear, Anxiety, and All Things Scary
Lecture #1
Module Four - Decision-making Under Pressure
Lecture #1
Module Five - Expertise & Thinking as an Institution
Lecture #1
Module Six - PTSD & Mental Health
Module One - Rational, Irrational, or Something Else?
Cognitive Biases - Definitions
Here are two images covering the cognitive biases required by the reading guide. The examples are simple and easy to follow:
12 Cognitive Biases That Can Impact Search Committee Decisions
https://www.visualcapitalist.com/50-cognitive-biases-in-the-modern-world/
Bias | Definition | Bias in Action (how this bias applies to disasters)
Anchoring
This bias describes individuals relying on an initial piece of information when making decisions. Comment by Eric Kennedy: Nice! Think of the example I gave during tutorial: students first were asked to think of the last two digits of their student number, then guess the number of countries in Africa. The lower the student #, the lower the guess. The higher the student #, the higher the guess. They got /anchored/ towards their initial number!
-During a large-scale disaster, a country may choose to proceed in a manner similar to a different country that went through the same experience, instead of searching for additional information to create the most successful plan. Comment by Eric Kennedy: Yes, these are good: early reactions to the pandemic will shape later ones... although this is also an example of priming.
If you wanted an example that's specific to anchoring, think about the magic "2 meter" number for physical distancing in lines. That number being introduced so early has powerfully affected what we see as "reasonable" physical distancing amounts... if it had started at 5m, we would be in a very different world of assumptions!
-This could also have been observed in how different countries proceeded with closures and containment during the pandemic.
Authority bias
This is defined as the tendency for people to rely more heavily on the opinion of someone perceived ...
The document discusses the emergence of behavioral finance as an alternative to traditional finance models. Traditional finance assumes rational decision-making, while behavioral finance recognizes psychological and emotional factors that can lead to irrational behavior. Key differences include traditional finance assuming perfect processing of information versus behavioral finance recognizing cognitive biases. Additionally, traditional finance sees framing as inconsequential while behavioral finance finds perceptions influenced by framing. The document then examines specific cognitive biases like representativeness, overconfidence, anchoring, ambiguity aversion, and innumeracy that impact decisions. It also discusses the concepts of prospect theory and mental accounting in relation to framing dependence.
Lesson Eight: Moral Development and Moral Intensity
Lesson Seven discussed the different codifications of moral precepts over the course of human history which have attempted to simplify moral prescriptions. Lesson Eight will introduce the various stages of moral development within individuals, as well as the way moral intensity is rationalized on a case-by-case basis.
Moral Development
As we have discussed in previous lessons, ethics rely on morality and a reasoned analysis of the factors that affect human well-being (Kohlberg & Hersh, 1977). However, at this juncture it is important to note that not all individuals are capable of the same level of moral reasoning. Some of the differences in reasoning ability are attributable to age; the more mature that one is, the more likely they are to reach the higher levels of moral development. However, adulthood is not a guarantee that an individual will achieve the most sophisticated levels of moral reasoning. Some will never get there, and this is a significant obstacle to any hope of universally accepted objective morality.
1. Preconventional Reasoning: The preconventional level of moral reasoning is the most primitive. At the preconventional level, choices are assessed based only on personal consequences. In other words, the actor makes choices that render rewards, and refrains from choices that render punishments (Graham, 1995). Preconventional reasoning is as much as non-human animal reasoning typically allows. Granted, it is not uncommon for some mammals to act self-sacrificially to preserve their offspring, and there have been reports of pets putting themselves in harm’s way to protect their human owners, but these are limited contexts. In almost every other situation, animals are driven first and foremost by self-preservation, and secondly, self-optimization. Preconventional reasoning is also the first strategy learned in the sequence of human development. Children typically think about their own consequences when deciding upon behavior. If doing chores is rewarded with an allowance, and coloring on the walls will result in grounding, children are likely to embrace the former and avoid the latter, all other things being equal. Although the vast majority of humans graduate from this level, it is important to note that many adults still regularly make choices that are based predominantly on preconventional reasoning. This is to say, selfish acts are frighteningly common.
2. Conventional Reasoning: The second level of moral reasoning is that of conventional reasoning. One step removed from pure selfishness, the conventional level of reasoning looks not simply to personal consequences (although this is still a factor), but also to social expectations in a societal context (Logsdon & Yuthas, 1997). Instances of conventional moral reasoning can be found almost anywhere one looks. For example, it is generally considered rude to cut other people in a line, so although one’s assessment of persona ...
This document lists several cognitive and social biases that affect human decision-making. Some of the key biases mentioned include: anchoring bias, where people rely too heavily on one piece of information; confirmation bias, the tendency to search for or interpret information in a way that confirms one's preconceptions; and fundamental attribution error, the tendency to over-emphasize personality-based explanations for behaviors observed in others while under-emphasizing situational influences. The document categorizes biases as relating to decision-making, behavioral tendencies, and social attribution. Over 40 biases are defined that can influence beliefs, business decisions, research, and social judgments between individuals and groups.
This document discusses various cognitive biases that can affect human judgment, decision-making, and memory. It describes biases in four main areas: cognitive biases in general, decision-making and behavioral biases, biases in probability and belief, and social biases. Many biases result from mental shortcuts or rules of thumb, while others stem from motivational factors or natural limitations in human information processing. Understanding cognitive biases can help improve judgment and decision-making.
This document discusses various cognitive biases that can affect human judgment, decision-making, and memory. It describes biases in four main areas: cognitive biases in general, decision-making and behavioral biases, biases in probability and belief, and social biases. Many biases result from mental shortcuts or rules of thumb, while others stem from motivational factors or natural limitations in human information processing. Understanding cognitive biases can help improve judgment and decision-making.
The document analyzes the cognitive and behavioral factors behind the 2008 financial crisis. It discusses how herd behavior, the availability heuristic, and groupthink led to ill-advised decisions that propagated the crisis. Herd behavior explained the housing bubble as investors followed each other into the overheated market. It also contributed to the boom and bust of the mortgage-backed securities market. The availability heuristic and groupthink affected decision making at financial institutions and government agencies. Letting Lehman Brothers fail unexpectedly triggered a global recession due to a loss of confidence propagating through herd behavior.
The document discusses the concept of "black swans", which are rare and unpredictable events with large consequences. It describes how Nassim Taleb introduced the term and argues that humans have difficulty predicting such events due to cognitive biases. Major historical events like wars and financial crises are given as examples of black swans that were unexpected but influenced the course of history.
The document discusses various factors that can reduce the validity of clinical interpretations and predictions. It describes common cognitive biases clinicians may fall prey to, such as oversimplifying complex patients, over-interpreting limited data, relying on stereotypes, and discounting evidence that does not conform to preconceived notions. It emphasizes the importance of considering all available data, including strengths, validating interpretations with others involved in the patient's care, using structured assessment methods, and avoiding vague concepts in clinical analysis and decision-making.
A Pyramid of Decision Approaches (Paul J.H. Schoemaker)
There are four general approaches to decision making outlined in the pyramid model, ranging from intuitive to highly analytical. Intuition, while sometimes effective, is prone to random inconsistency and systematic distortion. Rules are more structured but can also be distorted and fail to adapt to changes. Case-based reasoning examines prior similar situations but risks poor analogies. The most analytical approach is quantitative modeling, which minimizes biases but requires significant data and analysis. Overall the pyramid suggests combining approaches based on the situation, with more analytical methods used for high-stake or complex decisions.
This document discusses deception detection and the challenges of determining when someone is lying. It makes three key points:
1) Humans are generally poor at detecting deception, performing only slightly better than chance. We rely on incorrect cues and overestimate our own abilities.
2) While some think looking someone in the eye helps determine lies, research shows liars can control facial expressions better while truth-tellers are less aware, so eye contact reduces accuracy.
3) Asking direct questions about lying, like "are you telling the truth," may increase transparency compared to indirect questions, as it puts more pressure on liars to maintain deception while honest people have natural emotional responses. However, it also risks liars
Many decisions are based on beliefs concerning the likelihood ...
Many decisions are based on beliefs concerning the likelihood of uncertain events such as the outcome of an election, the guilt of a defendant, or the future value of the dollar. These beliefs are usually expressed in statements such as "I think that ...," "chances are ...," "it is unlikely that ...," and so forth. Occasionally, beliefs concerning uncertain events are expressed in numerical form as odds or subjective probabilities. What determines such beliefs? How do people assess the probability of an uncertain event or the value of an uncertain quantity? This article shows that people rely on a limited number of heuristic principles which reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations. In general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors.
The subjective assessment of probability resembles the subjective assessment of physical quantities such as distance or size. These judgments are all based on data of limited validity, which are processed according to heuristic rules. For example, the apparent distance of an object is determined in part by its clarity. The more sharply the object is seen, the closer it appears to be. This rule has some validity, because in any given scene the more distant objects are seen less sharply than nearer objects. However, the reliance on this rule leads to systematic errors in the estimation of distance. Specifically, distances are often overestimated when visibility is poor because the contours of objects are blurred. On the other hand, distances are often underestimated when visibility is good because the objects are seen sharply. Thus, the reliance on clarity as an indication of distance leads to common biases. Such biases are also found in the intuitive judgment of probability. This article describes three heuristics that are employed to assess probabilities and to predict values. Biases to which these heuristics lead are enumerated, and the applied and theoretical implications of these observations are discussed.
Representativeness
Many of the probabilistic questions with which people are concerned belong to one of the following types: What is the probability that object A belongs to class B? What is the probability that event A originates from process B? What is the probability that process B will generate event A? In answering such questions, people typically rely on the representativeness heuristic, in which probabilities are evaluated by the degree to which A is representative of B, that is, by the degree to which A resembles B. For example, when A is highly representative of B, the probability that A originates from B is judged to be high. On the other hand, if A is not similar to B, the probability that A originates from B is judged to be low. For an illustration of judgment b...
This document outlines 13 principles of behavioral economics, including anchoring, availability, chunking, commitment, framing, goal dilution, loss aversion, overweighting small probabilities, hyperbolic discounting, reciprocation, status quo bias, and social proofing. For each principle, a brief theory is provided along with a short example to illustrate how it applies to decision making. The document serves to explain key concepts in behavioral economics through concise definitions and relatable scenarios.
This document discusses low probability, high impact events that are difficult to predict due to complexity but can have severe consequences. It argues that focusing on reducing vulnerability to rare events is important, and that historical predictions often fail because unprecedented phenomena may occur. Insurance is suggested as one way to remedy risks from low probability events. The document also notes that facts and circumstances change over time, and that accurately predicting the magnitude or duration of events is challenging.
This document discusses low probability, high impact events that are difficult to predict due to complexity but can have severe consequences. It advocates focusing on reducing vulnerability to rare events through measures like insurance, and acknowledging that historical data may fail to predict novel phenomena. Standard statistical models based on past distributions are susceptible to surprises from destructive outliers. Large losses can potentially occur from socioeconomic or statistical drivers like hostile policies or departures from textbook examples.
The Psychology of Thinking About the Past and Future (Chris Martin)
The document discusses recent research on how people think about the past and future. It covers immune neglect and affective forecasting, which are people's inability to accurately predict how much an event will affect them emotionally over time. It also discusses the planning fallacy, which is people's tendency to underestimate how long tasks will take them to complete. The presentation includes discussion questions on these topics.
Week 3 - Instructor Guidance
Week 3: Inductive Reasoning
This week’s guidance will cover the following topics:
1. The Nature of Inductive Reasoning
2. Appeals to Authority
3. Inductive Generalizations
4. Statistical Syllogisms
5. Arguments from Analogy
6. Inferences to the Best Explanation
7. Causal Reasoning
8. Things to Do This Week
The Nature of Inductive Reasoning
Will the sun rise tomorrow morning? Of course it will, but how do you know? The reasoning seems to go as follows:
Premise 1: The sun has risen every morning throughout known history
Conclusion: Therefore, the sun will rise tomorrow
Deductively, this argument is invalid, for it is logically possible that the earth could stop spinning tonight. Does that mean that the argument is no good? Of course not. In fact, its premise makes the conclusion virtually certain. This is an example of a very good argument that is not intended to be deductively valid. That is because it is actually an inductive argument.
An argument is inductive if it does not attempt to be valid, but intends to give strong evidence for the truth of its conclusion.
Many might see inductive reasoning as inferior to deductive reasoning, but that is not generally the case. In fact, inductive arguments often provide much better arguments for the truths of their conclusions than deductive ones. The deductively valid version of our argument about the sun, for example, goes:
Premise 1: The sun will always rise in the morning
Conclusion: Therefore the sun will rise tomorrow morning
This second argument, while valid, actually gives less evidence for the conclusion because its premise is false (the sun will eventually expand to engulf the earth and then collapse). Therefore the deductive argument is unsound and so offers little evidence for the conclusion, whereas the original inductive argument made the conclusion virtually certain. In other words, inductive reasoning can be even better than deductive reasoning in many cases; the trick is to determine which inductive arguments are good and which ones are not so good.
Strength versus Weakness
Just as it is the goal of deductive reasoning to be valid, it is the goal of inductive reasoning to be strong. An inductive argument is strong in case its premises, if true, would make the conclusion very likely to be true as well. The above argument about the sun rising is very strong. Most inductive arguments are less strong, falling along a spectrum between strength and weakness. Here are three with varying degrees of inductive strength:
Weak:
Premise 1: John is tall and in college.
Conclusion: Therefore, he probably plays on the basketball team.
Moderate:
Premise 1: The Lions are a 14 point favorite.
Conclusion: So they will probably win.
Strong:
Premise 1: All of the TV meteorologists report a 99% chance of rain tomorrow.
Conclusion: So it will probably rain tomorrow.
Note that the degree of strength of an inductive argument is independent of whether the.
5th International Disaster and Risk Conference IDRC 2014 Integrative Risk Management - The role of science, technology & practice 24-28 August 2014 in Davos, Switzerland
2. Aim
Cognitive biases of project managers can lead to perceptual distortion and inaccurate judgment, affecting business and economic decisions and human behavior in general.
The aim of this presentation is to subject these biases to scientific investigation and independently verifiable facts.
3. Ignoring Regression to the Mean
The tendency to expect extreme events to be followed by similarly extreme events.
In reality, an extreme event is most likely to be followed by a more average one.
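The following short simulation is an illustrative sketch of my own, not part of the original slides. It assumes each "performance" is an independent draw from the same normal distribution and shows that the draws immediately following extreme ones are, on average, close to the mean, i.e. regression to the mean with no causal mechanism at all:

```python
import random

random.seed(0)

# Assumed model: each "performance" is an independent draw from the same
# normal distribution (mean 0, standard deviation 1); no skill or feedback.
scores = [random.gauss(0, 1) for _ in range(100_000)]

# Scores that immediately follow an extreme score (> 2 standard deviations).
followers = [scores[i + 1] for i in range(len(scores) - 1) if scores[i] > 2]
extremes = [s for s in scores if s > 2]

print(f"mean of extreme scores (> 2 sd):     {sum(extremes) / len(extremes):.2f}")
print(f"mean of the scores that follow them: {sum(followers) / len(followers):.2f}")  # ~0
```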
4. Anecdote
Daniel Kahneman was awarded the Nobel Prize in Economic Sciences in 2002.
The notable thing about this Nobel Prize was that Daniel Kahneman was not an economist but a psychologist.
5. Confirmation Bias
The tendency to search for or interpret information in a way that confirms one's preconceptions.
6. Gambler's Fallacy
The tendency to think that future probabilities are altered by past events.
This bias results from an erroneous conceptualization of the law of large numbers (the trial size needed for the expected frequency, as given by Bernoulli's theorem).
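As a minimal sketch of what the law of large numbers actually says (my own illustration, assuming a fair coin): the running fraction of heads converges to 0.5 because new tosses dilute any early imbalance, not because the coin compensates for it; even immediately after a streak of heads, the chance of heads is still about 0.5:

```python
import random

random.seed(1)

n = 100_000
tosses = [random.random() < 0.5 for _ in range(n)]  # True = heads, fair coin

# Outcomes that immediately follow a run of three heads.
after_streak = [tosses[i] for i in range(3, n)
                if tosses[i - 3] and tosses[i - 2] and tosses[i - 1]]

print(f"overall fraction of heads:              {sum(tosses) / n:.3f}")
print(f"fraction of heads right after 3 heads:  "
      f"{sum(after_streak) / len(after_streak):.3f}")  # also ~0.5
```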
7. Experimenter's or Expectation Bias
The tendency for experimenters to believe in data that agree with their expectations for the outcome, and to disbelieve or downgrade the data that appear to conflict with those expectations.
8. Framing Effect
Drawing different conclusions from the same information, depending on how or by whom that information is presented.
9. Knowledge Bias
The tendency of people to choose the option they know best rather than the best option.
10. Normalcy Bias
The failure to plan for a disaster which has never happened before.
11. Outcome Bias
The tendency to judge a decision by its eventual outcome instead of the quality of the decision at the time it was made.
This bias manifests in the review of project decisions.
12. Well-Travelled Road Effect
Underestimation of the time taken to traverse oft-travelled routes and overestimation of the time taken to traverse less familiar routes.
14. False Positives
The probability that ‘A’ will occur if ‘B’ occurs will generally differ from the probability that ‘B’ will occur if ‘A’ occurs.
The probability that an event will occur given that another event has occurred is called conditional probability.
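In symbols (my own restatement of the slide's point), the definition of conditional probability and Bayes' rule make explicit why the two directions differ unless P(A) = P(B):

P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad
P(B \mid A) = \frac{P(A \cap B)}{P(A)}, \qquad
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}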
15. Hindsight Bias
The tendency to see past events as having been predictable at the time those events happened.
In day-to-day life, the past often seems obvious even when we could not have predicted it.
This bias manifests in the review of project decisions.
16. Anecdote
Army Chief of Staff General George Marshall was faulted by a U.S. Congress committee for having missed all the “signs” of the coming attack on Pearl Harbor.
19. “False Positive” - tested positive but HIV negative.
“True Positive” - tested positive and HIV positive.
“True Negative” - tested negative and HIV negative.
“False Negative” - tested negative but HIV positive.
20. As per the rule of compounding probabilities, no finite number of partial proofs will ever add up to a certainty.
21. Doctor’s Confusion
Confusing the chance that “he would test positive if he were not HIV infected” with the chance that “he would not be HIV infected if he tested positive”.
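A worked example of that confusion, using hypothetical numbers of my own choosing (a prevalence of 1 in 1,000, 99% sensitivity, and a 2% false-positive rate) rather than figures from the presentation. Bayes' rule shows that the two conditional probabilities are very different, and that a positive test is far from certainty of infection:

```python
# Hypothetical numbers, chosen only for illustration.
prevalence = 0.001           # P(infected): 1 person in 1,000
sensitivity = 0.99           # P(test positive | infected)
false_positive_rate = 0.02   # P(test positive | not infected)

# Total probability of a positive test (law of total probability).
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

# Bayes' rule: P(infected | test positive).
p_infected_given_positive = sensitivity * prevalence / p_positive

print(f"P(positive | not infected) = {false_positive_rate:.3f}")            # 0.020
print(f"P(not infected | positive) = {1 - p_infected_given_positive:.3f}")  # ~0.953
# The doctor's confusion is to treat these two numbers as the same thing;
# even after a positive result, infection is far from certain in this setup.
```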
22. The study of randomness tells us that the crystal-ball view of events is possible, unfortunately, only after they happen.
23. If you get 45 heads in the first 100 tosses, the coin will not develop a bias towards tails for the rest of the tosses in order to catch up!
Editor's Notes
Introduction
I became interested in this subject when, over the course of various assignments, I observed varied interpretations of events associated with accomplishments and failures. Failed projects, missions, and engineering systems were subjected to enquiries and investigations, yet I felt that investigating successful missions or an operating system would yield the same or more information if preventing future failures were the aim.
I was amused when I read in the newspaper recently that an expert team was subjecting the burnt-down bogie of the Tamil Nadu Express (from an accident) to serious examination. It occurred to me that, to prevent train bogies from catching fire in future, one could investigate any other randomly selected bogies of a good sample size, not necessarily the one that caught fire.
I compiled some information and interesting facts on judgments based on our perception and intuition, known as cognitive biases and prevalent amongst us, so that we can recognize them when we practice or encounter them.
Coming to the brief presentation I’ve prepared
There are two distinct views in project management practices:
The rational view focuses on management techniques; the behavioral view focuses on what people actually do. The difference between the two is significant: one looks at how projects should be managed, the other at what actually happens on projects. The gap between the two can sometimes spell the difference between project success and failure. In many failed projects, the failure can be traced back to poor decisions, and the decisions themselves traced back to cognitive biases, i.e. errors in judgment based on perceptions.
I would like to explain the issue with an anecdote
Daniel Kahneman, a psychologist, worked on the psychology of judgment and decision-making, behavioral economics and hedonic psychology. He was awarded the Nobel Prize in Economic Sciences in 2002 for his work on prospect theory, a behavioral economic theory.
His interest in the subject was the result of a curious incident in his life:
In 1960, Kahneman, then a junior psychology professor at Hebrew University, lectured to a group of Israeli air force flight instructors on the conventional wisdom of behavior modification and its application to the psychology of flight training. Kahneman drove home the point that rewarding positive behavior works but punishing mistakes does not.
One of the flight instructors said that his experience contradicted this: "When I praise people for a well executed maneuver, the next time they always do worse. And I've screamed at people for badly executed maneuvers, and by and large the next time they improve. Don't tell me that reward works and punishment doesn't work. My experience contradicts it." The other flight instructors agreed. Kahneman pondered the apparent paradox. The answer lies in a phenomenon called regression towards the mean: in a series of random events, an extraordinary event is most likely to be followed by a more ordinary one, purely by chance.
Any exceptionally good or poor performance was thus mostly a matter of luck. To the instructors it appeared that their actions contributed to the pilots' performance; in reality they made no difference.
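To make this concrete, here is a minimal sketch (my own illustration, not part of the original presentation) that simulates pilot performance as purely random scores with no feedback effect built in; the mean of 70, the thresholds of 80 and 60, and the sample size are arbitrary choices.

import random

# Pilot performance simulated as independent random scores; praise and
# criticism have no effect by construction, yet regression to the mean appears.
random.seed(1)
scores = [random.gauss(70, 10) for _ in range(100_000)]

after_praised = []     # score following an above-average ("praised") attempt
after_criticized = []  # score following a below-average ("criticized") attempt
for prev, nxt in zip(scores, scores[1:]):
    if prev > 80:
        after_praised.append(nxt)
    elif prev < 60:
        after_criticized.append(nxt)

def mean(xs):
    return sum(xs) / len(xs)

print(f"after a praised attempt:    {mean(after_praised):.1f}")
print(f"after a criticized attempt: {mean(after_criticized):.1f}")
# Both averages come out near 70: extremes are followed by more ordinary
# results purely by chance, which the instructors read as an effect of feedback.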
This error in intuition spurred Kahneman's thinking on people's misjudgments when faced with uncertainty. His research over the next thirty years showed that even amongst sophisticated subjects, whether in the military, business or medical professions, people's intuition very often fails them when it comes to random processes.
The inconsistency between the logic of probability and people's assessments of uncertain events became the subject of Kahneman's study.
We do have the tendency to interpret information and preferentially seek evidence to confirm our own preconceived notions and, worse, to interpret ambiguous evidence in favor of our ideas.
It has been observed that if a set of research or experimental data without evidence for a conclusive outcome is provided to two groups for analysis, each group invariably interprets the pattern as compelling evidence for its own preconceived notions.
We therefore should learn to spend as much time looking for evidence that we are wrong as we spend searching for reasons we are correct.
Another mistaken notion connected with the law of large numbers is the idea that an event is more or less likely to occur because it has or has not happened recently. The idea that the odds of an event with a fixed probability increase or decrease depending on recent occurrences of the event is called the "gambler's fallacy", the root of such ideas as "his luck has run out" and "he is due".
For example, if you get 45 heads in the first 100 tosses, the coin would not develop a bias towards heads for the rest of the tosses to catch up!
People, however, expect good luck to follow bad luck, or worry that bad will follow good.
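As a quick check (again my own sketch, not from the slides), the simulation below conditions on exactly 45 heads in the first 100 tosses of a fair coin and then counts heads in the next 100 tosses; the number of trials is an arbitrary choice.

import random

# Gambler's fallacy check: keep only runs with exactly 45 heads in the
# first 100 tosses, then see whether the next 100 tosses "catch up".
random.seed(2)
next_100_heads = []
while len(next_100_heads) < 1_000:
    first_100 = sum(random.randint(0, 1) for _ in range(100))
    if first_100 != 45:                 # only runs matching the example
        continue
    next_100_heads.append(sum(random.randint(0, 1) for _ in range(100)))

average = sum(next_100_heads) / len(next_100_heads)
print(f"average heads in the next 100 tosses: {average:.1f}")
# Comes out near 50, not 55: the coin has no memory and does not compensate.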
It is one of those contradictions of life that, although measurement always carries uncertainty, the uncertainty is rarely disclosed when measurements are quoted. I had personally quoted fine measurements of the order of 0.01 mm in many bore alignments without indicating the tolerances and the associated uncertainties.
Kahneman demonstrated systematic reversals of preference when the same problem is presented in different ways.
A self-evident bias we are all victims of, but rarely acknowledge.
A case in point: our national disaster management team visited the Andaman and Nicobar Islands and sensitized the local population on measures to safeguard themselves during disasters, without covering tsunamis. It was a year before the tsunami struck. We do it all the time in our own projects.
If a decision results in a negative outcome, this does not mean that the decision was wrong. A decision has to be weighed for its value when it is taken, not later when the results are out!
When you examine a PERT chart, you should know where to apply moderation based on who prepared it, if you know him.
I heard that a weather forecast indicated the probability of rain on a Saturday to be 50% and that of the next day, Sunday, again 50%; and someone concluded that it would surely rain that weekend.
The ancient Romans, in their system of justice, employed a concept of "half proofs", which applied to evidence and testimony for which there was no compelling reason to believe or disbelieve. In their law, two half proofs constituted a complete proof for conviction. This might sound reasonable to a mind unaccustomed to quantitative thought.
But the correct application of the laws of probability is this: the chance of two independent half proofs both being wrong is 1 in 4, so two half proofs constitute one minus one-fourth, i.e. three-fourths of a proof, not a whole proof. The Romans added where they should have multiplied.
Similarly, the correct probability of rain during the weekend is one minus the probability of it not raining on both days (25%), which gives the answer as 75%, not a certainty.
Therefore, as per the rule of compounding probabilities, no finite number of partial proofs will ever add up to a certainty.
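The arithmetic can be laid out explicitly. This is a minimal sketch of my own for illustration; the function name is hypothetical, and the inputs are the figures from the two examples above.

# Compounding independent partial proofs: multiply the chances that each
# piece is wrong, then subtract from 1.
def combined_probability(partials):
    p_all_wrong = 1.0
    for p in partials:
        p_all_wrong *= (1.0 - p)       # chance that this piece fails to hold
    return 1.0 - p_all_wrong

# Two Roman "half proofs", or 50% rain on Saturday and 50% on Sunday:
# 1 - 0.5 * 0.5 = 0.75, i.e. three-fourths, not a whole proof or a certainty.
print(combined_probability([0.5, 0.5]))    # 0.75

# No finite number of partial proofs ever adds up to certainty:
print(combined_probability([0.5] * 20))    # 0.99999904..., still below 1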
The probability that ‘A’ will occur if ‘B’ occurs will generally differ from the probability that ‘B’ will occur if ‘A’ occurs. This conditional probability may appear evident and commonsensical; however, applying it in a real-life situation is rather tricky. To explain the concept I will use a real-life incident I have read about.
Leonard Mlodinow, a theoretical physicist, tested positive for HIV during a routine blood test for life insurance. On his querying the odds, his doctor informed him that the statistical inaccuracy of the test was 1 in 1,000, i.e. that the chance of his being HIV infected was 99.9%.
After overcoming the initial shock, Leonard Mlodinow reasoned that the doctor had confused the chances that "he would test positive if he was not HIV infected" with the chances that "he would not be HIV infected if he tested positive".
If this appears tricky, consider this: the probability that a randomly chosen English-speaking person is British is very different from the probability that a randomly chosen Briton speaks English. This example may make sense intuitively.
The physicist calculated that the actual chance of his being HIV infected was about 9%, in contrast to the doctor's assessment of 99.9%.
This is a case of a "False Positive", that is, tested positive but HIV negative. The other cases in conditional probability are "True Positive" (tested positive and HIV positive), "True Negative" (tested negative and HIV negative) and "False Negative" (tested negative but HIV positive).
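The 9% figure follows from Bayes' rule once a base rate is assumed. The sketch below is my own illustration: the prevalence of 1 in 10,000 and the perfect sensitivity are assumptions not stated in the anecdote; only the 1-in-1,000 error rate comes from the doctor's figure.

# Bayes' rule for the HIV test example.
prevalence = 1 / 10_000          # assumed base rate for the low-risk group
sensitivity = 1.0                # assume an infected person always tests positive
false_positive_rate = 1 / 1_000  # the doctor's 1-in-1,000 figure

p_positive = (prevalence * sensitivity
              + (1 - prevalence) * false_positive_rate)
p_infected_given_positive = prevalence * sensitivity / p_positive

print(f"P(infected | positive test) = {p_infected_given_positive:.1%}")
# Roughly 9%: when a condition is rare, a positive result is far more
# likely to be a false positive than a true one.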
Hindsight bias is also called the "I-knew-it-all-along" effect.
In any complex string of events in which each event unfolds with some element of uncertainty, there is a fundamental asymmetry between past and future.
An anecdote I have chosen to highlight this bias comes from an incident during WW-II.
In October 1941, the U.S. intercepted and decoded a message from Tokyo to a Japanese spy which sought that Pearl Harbor be divided into five areas and the concentration of U.S. warships in each be reported, with specific emphasis on destroyers and carriers.
A few days later, the U.S. lost track of radio communications from all Japanese carriers and therefore of their whereabouts.
In November and early December of the same year, the U.S. noticed that Japanese warships had changed their call signs for the second time in a month, whereas call signs were normally changed only every six months. This made knowing the whereabouts of Japanese ships even harder for the U.S.
Two days later, the U.S. intercepted and decoded a message from Tokyo to its diplomatic posts in many countries, including London and Washington, instructing them to destroy their codes and important documents immediately.
On 05 Dec, the FBI intercepted a telephone call from a cook at the Japanese consulate in Hawaii reporting with great excitement that the officials were burning documents.
On 07 Dec, Pearl Harbor was attacked, crippling the U.S. Navy.
Army Chief of Staff General George Marshall was severely faulted by a U.S. Congressional committee for having missed all the "signs" of the coming attack on Pearl Harbor.
General Marshall was no fool, but neither did he have a crystal ball. In addition to the reports quoted above, he was also privy to a huge volume of intelligence reports, each bringing alarming or mysterious messages that obscured a clear vision of the future.
The study of randomness tells us that the crystal ball view of events is possible, unfortunately, only after they happen.
Cognitive biases play a major role in our decision making.
Researchers have concluded that people have a very poor conception of randomness and uncertainty: they do not recognize it when they see it, they cannot produce it when they try, and, what is worse, we routinely misjudge the role of chance in our lives and make decisions that are demonstrably misaligned with our own best interests.
It is easy to believe that ideas that worked were good ideas, that plans that succeeded were well designed, and that ideas and plans that did not were ill conceived.
While ability does not guarantee achievement, nor is achievement proportional to ability, it is important to keep in mind the role of chance in our project management.
With all your wisdom, at times you may be required to pretend to go along with the bias, because:
I understand that you will find it hard to hold back from firing a project manager who has failed repeatedly by arguing that he just happened to be on the "wrong end of a Bernoulli series", nor will you earn friends if you term a project manager's repeated successes just a "random fluctuation".