Dynamic Quantum
Decision Models
Jennifer S. Trueblood
University of California, Irvine
Thursday, September 5, 13
Outline
1. Disjunction Effect
2. Comparing Quantum and Markov Models with Prisoner’s
Dilemma Game
Disjunction Effect
Savage’s Sure Thing Principle
• Suppose
• when S is the state of the world, you prefer action A over B
• when S̄ (not S) is the state of the world, you also prefer action A over B
• Therefore you should prefer A over B even when S is unknown
• People violate the Sure Thing Principle (Tversky & Shafir, 1992)
Disjunction Effect using Tversky & Shafir (1992)
Gambling Paradigm
• Chance to play the following gamble twice:
• Even chance to win $250 or lose $100
• Condition Win:
• Subjects told ‘Suppose you won the first play’
• Result: 69% choose to gamble
• Condition Lost:
• Subjects told ‘Suppose you lost the first play’
• Result: 59% choose to gamble
• Condition Unknown:
• Subjects told: ‘Don’t know if you won or lost’
• Result: 35% choose to gamble
Failure of a 2-D Markov Model
Law of Total Probability:
p(G|U) = p(W|U)p(G|W) + p(L|U)p(G|L)
Failure of a 2-D Markov Model
Law of Total Probability:
p(G|U) = p(W|U)p(G|W) + p(L|U)p(G|L)
Because p(G|U) is a mixture, the law requires p(G|W) = 0.69 ≥ p(G|U) ≥ p(G|L) = 0.59
But Tversky and Shafir (1992) found that
p(G|U) = 0.35 < p(G|L) = 0.59 < p(G|W) = 0.69
violating the law of total probability
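The pinch on the Markov account can be checked numerically. A minimal sketch, using the Tversky & Shafir probabilities; the sweep over mixture weights is illustrative:

```python
# Under any 2-D Markov (classical) account, p(G|U) is a convex mixture
# of p(G|W) and p(G|L), so it must lie between them.
p_G_given_W = 0.69
p_G_given_L = 0.59
p_G_given_U_observed = 0.35  # Tversky & Shafir (1992)

def total_probability(p_win_given_unknown):
    """Law of total probability: p(G|U) = p(W|U)p(G|W) + p(L|U)p(G|L)."""
    p = p_win_given_unknown
    return p * p_G_given_W + (1 - p) * p_G_given_L

# Whatever mixture weight we try, the prediction stays in [0.59, 0.69].
predictions = [total_probability(w / 10) for w in range(11)]
assert min(predictions) >= p_G_given_L
assert max(predictions) <= p_G_given_W

# The observed 0.35 lies well outside this interval: the law fails.
assert all(abs(p - p_G_given_U_observed) > 0.2 for p in predictions)
```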
2-D Quantum Model
Law of Total Amplitude:
p(G|U) = |⟨W|U⟩⟨G|W⟩ + ⟨L|U⟩⟨G|L⟩|²
where ⟨L|U⟩ is the amplitude for transitioning from the “unknown” state to the “lose” state
Quantum Model Account of the Violation of the Sure Thing Principle
p(G|U) = |⟨W|U⟩⟨G|W⟩ + ⟨L|U⟩⟨G|L⟩|²
       = |⟨W|U⟩|²|⟨G|W⟩|² + |⟨L|U⟩|²|⟨G|L⟩|² + Int
Int = 2 · Re[⟨W|U⟩⟨G|W⟩⟨L|U⟩*⟨G|L⟩*]
To account for Tversky and Shafir (1992) we require Int < 0
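A sketch of how a negative interference term recovers the observed 0.35. The parameterization below (real path magnitudes, one relative phase θ, equal-weight unknown state) is an illustrative assumption, not the fitted model:

```python
import math

# Equal-weight unknown state: <W|U> = <L|U> = sqrt(0.5);
# path magnitudes set so |<G|W>|^2 = 0.69 and |<G|L>|^2 = 0.59.
a = math.sqrt(0.5) * math.sqrt(0.69)   # magnitude of the "win" path
b = math.sqrt(0.5) * math.sqrt(0.59)   # magnitude of the "lose" path

# p(G|U) = |a + b e^{i theta}|^2 = a^2 + b^2 + 2ab cos(theta)
def p_G_given_U(theta):
    return a**2 + b**2 + 2 * a * b * math.cos(theta)

# With zero interference (theta = pi/2) we get the classical mixture 0.64.
assert abs(p_G_given_U(math.pi / 2) - 0.64) < 1e-9

# A sufficiently negative interference term reproduces the observed 0.35.
theta = math.acos((0.35 - a**2 - b**2) / (2 * a * b))
assert abs(p_G_given_U(theta) - 0.35) < 1e-9
```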
Tversky and Shafir’s Intuition?
• If you win on first play, you play again because you have extra
“house” money
• If you lose on first play, you play again because you need to
make up for your losses
• If you don’t know, these two reasons interfere, leaving you without any
reason coming to mind
Failure of 2-D Quantum Model!
• The quantum model must satisfy double stochasticity
• In particular
• |⟨G|W⟩|² + |⟨G|L⟩|² = 1
• But Tversky & Shafir found that
• p(G|W) = 0.69 and p(G|L) = 0.59, which sum to 1.28 ≠ 1
• Violates double stochasticity!
2-D Transition Matrix
General 2-D
transition matrix
•Columns of T must sum to 1
•Rows of T do not have to sum to 1
Markov Process
•Obeys law of total probability, but allows for general
transition matrix
Quantum Process
•Obeys law of total amplitude and not law of total
probability. But U must transform a unit length vector
Ψ(0) into another unit length vector Ψ(t)
•To preserve lengths, U must be unitary
⎡⟨N|S⟩⎤   ⎡⟨N|W⟩ ⟨N|L⟩⎤ ⎡⟨W|S⟩⎤   ⎡⟨N|W⟩·⟨W|S⟩ + ⟨N|L⟩·⟨L|S⟩⎤
⎣⟨G|S⟩⎦ = ⎣⟨G|W⟩ ⟨G|L⟩⎦·⎣⟨L|S⟩⎦ = ⎣⟨G|W⟩·⟨W|S⟩ + ⟨G|L⟩·⟨L|S⟩⎦
Quantum Unitary Matrix
Unitary Matrix
Transition Matrix
•T must be doubly stochastic: both rows and
columns of T must sum to unity
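The link between unitarity and double stochasticity can be sketched directly; the rotation angle below is an arbitrary illustrative choice:

```python
import math

# For any unitary U, the matrix T with T[i][j] = |U[i][j]|^2 is
# doubly stochastic: rows and columns both sum to 1.
phi = 0.7  # arbitrary rotation angle (assumption, for illustration)
U = [[math.cos(phi), -math.sin(phi)],
     [math.sin(phi),  math.cos(phi)]]   # a real 2-D unitary (rotation)

T = [[abs(U[i][j]) ** 2 for j in range(2)] for i in range(2)]

row_sums = [sum(T[i]) for i in range(2)]
col_sums = [sum(T[i][j] for i in range(2)) for j in range(2)]
assert all(abs(s - 1) < 1e-12 for s in row_sums + col_sums)
```

A general Markov transition matrix is only column-stochastic, which is why the Markov and quantum models make different commitments here.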
Disjunction Effect using the Prisoner’s Dilemma
Game (Shafir & Tversky, 1992)
• Condition 1:You know the other defected, and
now you must decide whether to defect or
cooperate
• Condition 2:You know the other cooperated,
and you must decide whether to defect or
cooperate
• Condition 3:You do not know, and you must
decide whether to defect or cooperate
Results from 4 Experiments
(Entries show % choosing to defect)

Study                     Known to defect   Known to cooperate   Unknown
Shafir & Tversky (1992)         97                 84               63
Croson (1999)                   67                 32               30
Li & Taplin (2002)              83                 66               60
Busemeyer et al. (2006)         91                 84               66

In every study the unknown condition violates the law of total
probability, and the known conditions violate the law of double
stochasticity
Another Failure: Both 2-D Models
fail to explain PD Game results
• The Markov model fails because the results
once again violate the law of total probability
• The quantum model fails because the results
once again violate the law of double
stochasticity
Compatible vs. Incompatible
Measures
• The failed QP model assumes beliefs and
actions are incompatible
• Previously we assumed that beliefs and actions
were represented by different bases within the
same 2-D vector space
• Now we need to switch to a compatible
representation which requires a 4-D space.
Inference-Action State Space
4-dimensional space
Classic Events
Suppose:
Observe start at t=0 in state I1A1
Do not observe during t=1
Observe end at t=2 in state I2A2
Classic Events:
I1A1➝ I1A1➝ I2A2 or
I1A1➝ I2A2➝ I2A2 or
I1A1➝ I2A1➝ I2A2 or
I1A1➝ I1A2➝ I2A2
These 4 are the only possibilities in 2 steps; we just
don’t know which one is true
Quantum Events
Suppose:
Observe start at t=0 in state I1A1
Do not observe during t=1
Observe end at t=2 in state I2A2
We cannot say there are only 4 possible ways to get
there;
At t=1, the state is a superposition of all four;
There is deeper uncertainty
Compare 4-D Markov and
Quantum Models for PD game
Markov Model Assumption 1
Four basis states: {|DD⟩, |DC⟩, |CD⟩, |CC⟩ }
e.g. |DC⟩ ➝ you infer that opponent will defect but you
decide to cooperate
e.g. Ψ_DC = initial probability that the
Markov system starts in state |DC⟩
The initial probabilities sum to one: Σᵢ Ψᵢ = 1
Initial inferences affected by
prior information (Markov)
Condition 1
Known Defect
Condition 2
Known Coop
Condition 3
Unknown
U = 0.5 D + 0.5 C
Quantum Model Assumption 1
Four basis states: {|DD⟩, |DC⟩, |CD⟩, |CC⟩ }
e.g. |DC⟩ ➝ you infer that opponent will defect but you
decide to cooperate
e.g. Ψ_DC = initial probability
amplitude that the quantum system
starts in state |DC⟩
Probability = |Ψ_DC|²
|Ψ|² = 1
Initial inferences affected by
prior information (Quantum)
Condition 1
Known Defect
Condition 2
Known Coop
Condition 3
Unknown
U = √0.5 D + √0.5 C
Markov Model Assumption 2
Strategy Selection
Strategies affected by game payoffs
and processing time
dΨ(t)/dt = K·Ψ(t) (Kolmogorov Forward Equation)
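A minimal sketch of solving the forward equation via the matrix exponential T(t) = exp(t·K), computed with a truncated Taylor series. The 2-state intensity matrix below is a toy assumption, not the fitted PD-game matrix:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(M, terms=30):
    """exp(M) by truncated Taylor series (fine for small-norm M)."""
    n = len(M)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mul(term, M)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# Toy intensity matrix: each column sums to 0, the defining property of K.
K = [[-1.0, 2.0],
     [1.0, -2.0]]
T = mat_exp([[0.5 * x for x in row] for row in K])   # T = exp(0.5*K)

psi0 = [0.7, 0.3]                                    # initial probabilities
psi = [sum(T[i][j] * psi0[j] for j in range(2)) for i in range(2)]

# exp(t*K) is column-stochastic, so total probability is conserved.
assert abs(sum(psi) - 1.0) < 1e-9
assert all(p >= 0 for p in psi)
```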
Intensity Matrix
K = K_A + K_B

K_A = ⎡K_Ad   0  ⎤        K_Ai = ⎡−1   µᵢ⎤
      ⎣  0  K_Ac⎦                ⎣+1  −µᵢ⎦

µᵢ depends on the pay-offs associated with different actions; K_A
transforms the state probabilities to favor either defection or
cooperation depending on the pay-offs

K_B = ⎡−1  0  +1   0⎤   ⎡0   0  0   0⎤
      ⎢ 0  0   0   0⎥ + ⎢0  −1  0  +1⎥
      ⎢+1  0  −1   0⎥   ⎢0   0  0   0⎥
      ⎣ 0  0   0   0⎦   ⎣0  +1  0  −1⎦

Cognitive dissonance: K_B changes beliefs to be
consistent with actions
Quantum Model Assumption 2
Strategies affected by Game
Payoffs and Processing Time
dΨ(t)/dt = −i·H·Ψ(t) (Schrödinger Equation)
The Hamiltonian
H = H_A + H_B

H_A = ⎡H_Ad   0  ⎤        H_Ai = (1/√(1+µᵢ²)) ⎡µᵢ   1⎤
      ⎣  0  H_Ac⎦                              ⎣ 1  −µᵢ⎦

µᵢ depends on the pay-offs associated with different actions; H_A
transforms the state probabilities to favor either defection or
cooperation depending on the pay-offs

H_B = (√2/2) · ⎛⎡+1  0  +1   0⎤   ⎡0   0  0   0⎤⎞
               ⎜⎢ 0  0   0   0⎥ + ⎢0  −1  0  +1⎥⎟
               ⎜⎢+1  0  −1   0⎥   ⎢0   0  0   0⎥⎟
               ⎝⎣ 0  0   0   0⎦   ⎣0  +1  0  +1⎦⎠

Cognitive dissonance: H_B changes beliefs to be
consistent with actions
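A sketch of building the unitary U = exp(−i·t·H) from one 2×2 Hamiltonian block of the form above; µ = 0.5 and t = π/2 are illustrative assumptions:

```python
import math

def cmat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def cmat_exp(M, terms=40):
    """exp(M) for a complex matrix, by truncated Taylor series."""
    n = len(M)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in cmat_mul(term, M)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

mu = 0.5                                  # payoff parameter (assumption)
c = 1 / math.sqrt(1 + mu**2)
H = [[c * mu, c], [c, -c * mu]]           # one H_Ai block
t = math.pi / 2
U = cmat_exp([[-1j * t * h for h in row] for row in H])

# U is unitary (U U† = I), so |U_ij|^2 gives a doubly stochastic T.
Udag = [[U[j][i].conjugate() for j in range(2)] for i in range(2)]
I2 = cmat_mul(U, Udag)
assert abs(I2[0][0] - 1) < 1e-9 and abs(I2[0][1]) < 1e-9
T = [[abs(U[i][j]) ** 2 for j in range(2)] for i in range(2)]
assert all(abs(sum(T[i][j] for i in range(2)) - 1) < 1e-9 for j in range(2))
```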
Markov Model Assumption 3
Output vector ϕ: e.g. ϕ_DC = final probability that
the Markov system ends in state |DC⟩

T·Ψ = ϕ = ⎡ϕ_DD⎤
          ⎢ϕ_DC⎥
          ⎢ϕ_CD⎥
          ⎣ϕ_CC⎦

L = measurement operator for the decision to defect
Probability of defecting = L·ϕ
Markov Prediction
If the opponent is known to defect:
If the opponent is known to cooperate:
Under the unknown condition:
L·ϕ_D = L·T·Ψ_D
L·ϕ_C = L·T·Ψ_C
L·ϕ_U = L·T·Ψ_U = L·T·(p·Ψ_D + q·Ψ_C)
      = p·L·T·Ψ_D + q·L·T·Ψ_C
      = p·L·ϕ_D + q·L·ϕ_C

                         Known to defect   Known to cooperate   Unknown
Busemeyer et al. (2006)        91                 84               66
Markov model                   91                 84           between 84 and 91
Quantum Model Assumption 3
Output vector ϕ: e.g. ϕ_DC = final probability
amplitude that the quantum system ends in state |DC⟩

U·Ψ = ϕ = ⎡ϕ_DD⎤
          ⎢ϕ_DC⎥
          ⎢ϕ_CD⎥
          ⎣ϕ_CC⎦

Probability = |ϕ_DC|²
M = measurement operator for the decision to defect
Probability of defecting = ‖M·ϕ‖²
Quantum Prediction
If the opponent is known to defect: probability of defecting = ‖M·U·Ψ_D‖²
If the opponent is known to cooperate: probability of defecting = ‖M·U·Ψ_C‖²
Under the unknown condition: probability of defecting = ‖M·U·(√0.5·Ψ_D + √0.5·Ψ_C)‖²,
which need not lie between the two known conditions

                         Known to defect   Known to cooperate   Unknown
Busemeyer et al. (2006)        91                 84               66
Quantum model                  91                 84               69
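The contrast between the two predictions can be sketched with toy 2×2 dynamics (the matrices below are illustrative assumptions, not the fitted PD-game models): the Markov model is pinned exactly to the mixture of the known conditions, while the quantum model deviates via interference.

```python
import math

T = [[0.9, 0.2],
     [0.1, 0.8]]                          # toy column-stochastic matrix
theta = 0.6
U = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]] # toy unitary

def markov_defect(psi):
    out = [sum(T[i][j] * psi[j] for j in range(2)) for i in range(2)]
    return out[0]                         # "defect" probability component

def quantum_defect(psi):
    out = [sum(U[i][j] * psi[j] for j in range(2)) for i in range(2)]
    return abs(out[0]) ** 2               # squared "defect" amplitude

# Known conditions start in a basis state; unknown is mixture/superposition.
m_D, m_C = markov_defect([1, 0]), markov_defect([0, 1])
m_U = markov_defect([0.5, 0.5])
assert abs(m_U - 0.5 * (m_D + m_C)) < 1e-12   # exactly the average

q_D, q_C = quantum_defect([1, 0]), quantum_defect([0, 1])
q_U = quantum_defect([math.sqrt(0.5), math.sqrt(0.5)])
interference = q_U - 0.5 * (q_D + q_C)
assert interference < 0                        # deviates below the average
```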
Quantum Prediction
The probability of defection under the
unknown condition minus the average for
the two known conditions. (Negative
values indicate an interference effect.)
Thank You
• Want to learn more...
Bayesian Analysis of Individual
Data
Model Complexity Issue
• Perhaps quantum probability succeeds where traditional
models fail because it is more complex
• Bayesian model comparison provides a coherent method for
comparing models with respect to both accuracy and
parsimony
Dynamic Consistency
• Dynamic consistency: Final decisions agree with planned decisions (Barkan
and Busemeyer, 2003)
• Two stage gamble
1. Forced to play stage one, but outcome remained unknown
2. Made a plan and final choice about stage two
• Plan:
• If you win, do you plan to gamble on stage two?
• If you lose, do you plan to gamble on stage two?
• Final decision
• After an actual win, do you gamble on stage two?
• After an actual loss, do you now choose to gamble on stage two?
Two Stage Decision Task
Barkan And Busemeyer (2003)
Results
Risk averse
after a win
Risk seeking
after a loss
Two Competing Models
1. Quantum Model
2. Markov model
• Reduction of the quantum model when one key
parameter is set to zero
Quantum Model
• Four outcomes: W = win first gamble, L = lose first gamble,
T = take second gamble, R = reject second gamble
• 4-D vector space corresponding to the four possible events: W ∧ T, W ∧ R,
L ∧ T, L ∧ R
• State of the decision maker:
1. Before first gamble: I
2. Before second gamble: F
• From first gamble to second gamble: F = U · I
Unitary Transformation
• From first gamble to second gamble:
F = U · I, with U = exp(−i · (π/2) · (H_A + H_B))
• H_A calculates the utilities for taking the
gamble using two free parameters (loss
aversion, b, and risk aversion, a)
• H_B allows for changes in beliefs
using one free parameter, γ
• The Markov model is a special case of the quantum model when γ = 0
Comparing Fits
• Fit both models to the dynamic consistency data:
1. Quantum
• Three parameters: a and b to determine the utilities, and γ
for changing beliefs to align with actions
• R² = .82
2. Markov (γ = 0)
• R² = .78
Hierarchical Bayesian Parameter
Estimation
• Used hierarchical Bayesian estimation to evaluate whether or
not H0: γ = 0 holds for the quantum model
• L(Dᵢ|θᵢ): likelihood of the data given the model parameters for person i
• q(θᵢ|π): prior probability of the parameters for person i, dependent
on the hierarchical parameters (binomial distribution)
• r(π): prior probability over the hierarchical parameters (uniform
distribution on [0, 1])
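A toy version of one layer of this setup: with a Uniform[0, 1] prior (which is Beta(1, 1)) and a binomial likelihood, the single-person posterior is Beta and can be sketched in closed form. The counts below are hypothetical:

```python
# Conjugate update: Beta(1, 1) prior + k successes in n binomial trials
# gives posterior Beta(1 + k, 1 + n - k), with mean (1 + k) / (2 + n).
def posterior_mean(k, n):
    return (1 + k) / (2 + n)

# e.g. a hypothetical person making 14 risk-averse choices in 20 trials:
assert abs(posterior_mean(14, 20) - 15 / 22) < 1e-12
```

The full hierarchical model couples people through the group-level parameters, so the actual estimation requires sampling rather than this closed form.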
Distributions
Estimates of Group Level
Parameters
The risk aversion hierarchical
parameter is located below 0.5,
indicating fairly strong risk
aversion
The loss aversion hierarchical
parameter is located above 0.5,
indicating higher sensitivity to
losses
Busemeyer, J. R., Wang, Z., & Trueblood, J. S. (2012).
Hierarchical Bayesian estimation of quantum decision
model parameters. In J. R. Busemeyer et al. (Eds.), QI
2012, LNCS 7620. Berlin: Springer-Verlag.
Estimate of the Quantum
Parameter
The hierarchical distribution of the
quantum parameter lies below 0.5
implying the mean value is below
zero
More Related Content

Viewers also liked

Ldb Convergenze Parallele_Mantovani_02
Ldb Convergenze Parallele_Mantovani_02Ldb Convergenze Parallele_Mantovani_02
Ldb Convergenze Parallele_Mantovani_02
laboratoridalbasso
 
Ldb Convergenze Parallele_Mantovani_03
Ldb Convergenze Parallele_Mantovani_03Ldb Convergenze Parallele_Mantovani_03
Ldb Convergenze Parallele_Mantovani_03
laboratoridalbasso
 
Ldb Convergenze Parallele_mazzara_01
Ldb Convergenze Parallele_mazzara_01Ldb Convergenze Parallele_mazzara_01
Ldb Convergenze Parallele_mazzara_01laboratoridalbasso
 
Ldb Convergenze Parallele_Colelli_01
Ldb Convergenze Parallele_Colelli_01Ldb Convergenze Parallele_Colelli_01
Ldb Convergenze Parallele_Colelli_01
laboratoridalbasso
 
Ldb Convergenze Parallele_stanziola_01
Ldb Convergenze Parallele_stanziola_01Ldb Convergenze Parallele_stanziola_01
Ldb Convergenze Parallele_stanziola_01laboratoridalbasso
 
Ldb Convergenze Parallele_caminiti_01
Ldb Convergenze Parallele_caminiti_01Ldb Convergenze Parallele_caminiti_01
Ldb Convergenze Parallele_caminiti_01laboratoridalbasso
 
Ldb Convergenze Parallele_11
Ldb Convergenze Parallele_11Ldb Convergenze Parallele_11
Ldb Convergenze Parallele_11
laboratoridalbasso
 
Ldb Convergenze Parallele_13
Ldb Convergenze Parallele_13Ldb Convergenze Parallele_13
Ldb Convergenze Parallele_13
laboratoridalbasso
 
Ldb Convergenze Parallele_unningham_01
Ldb Convergenze Parallele_unningham_01Ldb Convergenze Parallele_unningham_01
Ldb Convergenze Parallele_unningham_01laboratoridalbasso
 
Ldb Convergenze Parallele_trueblood_03
Ldb Convergenze Parallele_trueblood_03Ldb Convergenze Parallele_trueblood_03
Ldb Convergenze Parallele_trueblood_03
laboratoridalbasso
 
Ldb Convergenze Parallele_trueblood_02
Ldb Convergenze Parallele_trueblood_02Ldb Convergenze Parallele_trueblood_02
Ldb Convergenze Parallele_trueblood_02
laboratoridalbasso
 
Ldb Convergenze Parallele_Mantovani_01
Ldb Convergenze Parallele_Mantovani_01Ldb Convergenze Parallele_Mantovani_01
Ldb Convergenze Parallele_Mantovani_01
laboratoridalbasso
 
Ldb Convergenze Parallele_De barros_01
Ldb Convergenze Parallele_De barros_01Ldb Convergenze Parallele_De barros_01
Ldb Convergenze Parallele_De barros_01
laboratoridalbasso
 
Ldb Convergenze Parallele_sorba_01
Ldb Convergenze Parallele_sorba_01Ldb Convergenze Parallele_sorba_01
Ldb Convergenze Parallele_sorba_01
laboratoridalbasso
 
Ldb Convergenze Parallele_sozzolabbasso_01
Ldb Convergenze Parallele_sozzolabbasso_01Ldb Convergenze Parallele_sozzolabbasso_01
Ldb Convergenze Parallele_sozzolabbasso_01
laboratoridalbasso
 
Ldb Convergenze Parallele_16
Ldb Convergenze Parallele_16Ldb Convergenze Parallele_16
Ldb Convergenze Parallele_16
laboratoridalbasso
 
Ldb Convergenze Parallele_12
Ldb Convergenze Parallele_12Ldb Convergenze Parallele_12
Ldb Convergenze Parallele_12
laboratoridalbasso
 
Ldb Convergenze Parallele_11
Ldb Convergenze Parallele_11Ldb Convergenze Parallele_11
Ldb Convergenze Parallele_11
laboratoridalbasso
 

Viewers also liked (20)

Ldb Convergenze Parallele_Mantovani_02
Ldb Convergenze Parallele_Mantovani_02Ldb Convergenze Parallele_Mantovani_02
Ldb Convergenze Parallele_Mantovani_02
 
Ldb Convergenze Parallele_Mantovani_03
Ldb Convergenze Parallele_Mantovani_03Ldb Convergenze Parallele_Mantovani_03
Ldb Convergenze Parallele_Mantovani_03
 
Ldb Convergenze Parallele_mazzara_01
Ldb Convergenze Parallele_mazzara_01Ldb Convergenze Parallele_mazzara_01
Ldb Convergenze Parallele_mazzara_01
 
Ldb Convergenze Parallele_Colelli_01
Ldb Convergenze Parallele_Colelli_01Ldb Convergenze Parallele_Colelli_01
Ldb Convergenze Parallele_Colelli_01
 
Ldb Convergenze Parallele_stanziola_01
Ldb Convergenze Parallele_stanziola_01Ldb Convergenze Parallele_stanziola_01
Ldb Convergenze Parallele_stanziola_01
 
Ldb Convergenze Parallele_14
Ldb Convergenze Parallele_14Ldb Convergenze Parallele_14
Ldb Convergenze Parallele_14
 
Ldb Convergenze Parallele_caminiti_01
Ldb Convergenze Parallele_caminiti_01Ldb Convergenze Parallele_caminiti_01
Ldb Convergenze Parallele_caminiti_01
 
Ldb Convergenze Parallele_11
Ldb Convergenze Parallele_11Ldb Convergenze Parallele_11
Ldb Convergenze Parallele_11
 
Ldb Convergenze Parallele_13
Ldb Convergenze Parallele_13Ldb Convergenze Parallele_13
Ldb Convergenze Parallele_13
 
Ldb Convergenze Parallele_unningham_01
Ldb Convergenze Parallele_unningham_01Ldb Convergenze Parallele_unningham_01
Ldb Convergenze Parallele_unningham_01
 
Ldb Convergenze Parallele_trueblood_03
Ldb Convergenze Parallele_trueblood_03Ldb Convergenze Parallele_trueblood_03
Ldb Convergenze Parallele_trueblood_03
 
Ldb Convergenze Parallele_trueblood_02
Ldb Convergenze Parallele_trueblood_02Ldb Convergenze Parallele_trueblood_02
Ldb Convergenze Parallele_trueblood_02
 
Ldb Convergenze Parallele_Mantovani_01
Ldb Convergenze Parallele_Mantovani_01Ldb Convergenze Parallele_Mantovani_01
Ldb Convergenze Parallele_Mantovani_01
 
Ldb Convergenze Parallele_De barros_01
Ldb Convergenze Parallele_De barros_01Ldb Convergenze Parallele_De barros_01
Ldb Convergenze Parallele_De barros_01
 
Ldb Convergenze Parallele_sorba_01
Ldb Convergenze Parallele_sorba_01Ldb Convergenze Parallele_sorba_01
Ldb Convergenze Parallele_sorba_01
 
Ldb Convergenze Parallele_sozzolabbasso_01
Ldb Convergenze Parallele_sozzolabbasso_01Ldb Convergenze Parallele_sozzolabbasso_01
Ldb Convergenze Parallele_sozzolabbasso_01
 
Ldb Convergenze Parallele_16
Ldb Convergenze Parallele_16Ldb Convergenze Parallele_16
Ldb Convergenze Parallele_16
 
Ldb Convergenze Parallele_12
Ldb Convergenze Parallele_12Ldb Convergenze Parallele_12
Ldb Convergenze Parallele_12
 
Ldb Convergenze Parallele_11
Ldb Convergenze Parallele_11Ldb Convergenze Parallele_11
Ldb Convergenze Parallele_11
 
Ldb Convergenze Parallele_17
Ldb Convergenze Parallele_17Ldb Convergenze Parallele_17
Ldb Convergenze Parallele_17
 

Similar to Ldb Convergenze Parallele_trueblood_01

PPT8.ppt
PPT8.pptPPT8.ppt
probability-180324013552.pptx
probability-180324013552.pptxprobability-180324013552.pptx
probability-180324013552.pptx
Vukile Xhego
 
Probability (gr.11)
Probability (gr.11)Probability (gr.11)
Probability (gr.11)
Vukile Xhego
 
Topic 1 __basic_probability_concepts
Topic 1 __basic_probability_conceptsTopic 1 __basic_probability_concepts
Topic 1 __basic_probability_concepts
Maleakhi Agung Wijaya
 
Bayes in competition
Bayes in competitionBayes in competition
Bayes in competition
datasciencenl
 
Chapter 12 Probability and Statistics.ppt
Chapter 12 Probability and Statistics.pptChapter 12 Probability and Statistics.ppt
Chapter 12 Probability and Statistics.ppt
JoyceNolos
 

Similar to Ldb Convergenze Parallele_trueblood_01 (6)

PPT8.ppt
PPT8.pptPPT8.ppt
PPT8.ppt
 
probability-180324013552.pptx
probability-180324013552.pptxprobability-180324013552.pptx
probability-180324013552.pptx
 
Probability (gr.11)
Probability (gr.11)Probability (gr.11)
Probability (gr.11)
 
Topic 1 __basic_probability_concepts
Topic 1 __basic_probability_conceptsTopic 1 __basic_probability_concepts
Topic 1 __basic_probability_concepts
 
Bayes in competition
Bayes in competitionBayes in competition
Bayes in competition
 
Chapter 12 Probability and Statistics.ppt
Chapter 12 Probability and Statistics.pptChapter 12 Probability and Statistics.ppt
Chapter 12 Probability and Statistics.ppt
 

More from laboratoridalbasso

Ldb Rural in Action_CurandiKatz
Ldb Rural in Action_CurandiKatz Ldb Rural in Action_CurandiKatz
Ldb Rural in Action_CurandiKatz
laboratoridalbasso
 
Ldb Rural in Action_Coppola 01
Ldb Rural in Action_Coppola 01Ldb Rural in Action_Coppola 01
Ldb Rural in Action_Coppola 01
laboratoridalbasso
 
Ldb Rural in Action_Coppola 02
Ldb Rural in Action_Coppola 02Ldb Rural in Action_Coppola 02
Ldb Rural in Action_Coppola 02
laboratoridalbasso
 
Ldb neetneedeu panetta 08
Ldb neetneedeu panetta 08 Ldb neetneedeu panetta 08
Ldb neetneedeu panetta 08
laboratoridalbasso
 
Ldb neetneedeu panetta 07
Ldb neetneedeu panetta 07 Ldb neetneedeu panetta 07
Ldb neetneedeu panetta 07
laboratoridalbasso
 
Ldb neetneedeu panetta 06
Ldb neetneedeu panetta 06 Ldb neetneedeu panetta 06
Ldb neetneedeu panetta 06
laboratoridalbasso
 
Ldb neetneedeu panetta 05
Ldb neetneedeu panetta 05 Ldb neetneedeu panetta 05
Ldb neetneedeu panetta 05
laboratoridalbasso
 
Ldb neetneedeu panetta 04
Ldb neetneedeu panetta 04 Ldb neetneedeu panetta 04
Ldb neetneedeu panetta 04
laboratoridalbasso
 
Ldb neetneedeu panetta 03
Ldb neetneedeu panetta 03 Ldb neetneedeu panetta 03
Ldb neetneedeu panetta 03
laboratoridalbasso
 
Ldb neetneedeu cavalhro 01
Ldb neetneedeu cavalhro 01Ldb neetneedeu cavalhro 01
Ldb neetneedeu cavalhro 01
laboratoridalbasso
 
Ldb neetneedeu panetta 01
Ldb neetneedeu panetta 01 Ldb neetneedeu panetta 01
Ldb neetneedeu panetta 01
laboratoridalbasso
 
Ldb neetneedeu_mola 01
Ldb neetneedeu_mola 01Ldb neetneedeu_mola 01
Ldb neetneedeu_mola 01
laboratoridalbasso
 
Ldb neetneedeu panetta 02
Ldb neetneedeu panetta 02Ldb neetneedeu panetta 02
Ldb neetneedeu panetta 02
laboratoridalbasso
 
Ldb Asola, non Verba_Santanocito02
Ldb Asola, non Verba_Santanocito02Ldb Asola, non Verba_Santanocito02
Ldb Asola, non Verba_Santanocito02
laboratoridalbasso
 
Ldb Asola, non Verba_Santanocito01
Ldb Asola, non Verba_Santanocito01Ldb Asola, non Verba_Santanocito01
Ldb Asola, non Verba_Santanocito01
laboratoridalbasso
 
Ldb Asola Non Verba_Attanasio
Ldb Asola Non Verba_AttanasioLdb Asola Non Verba_Attanasio
Ldb Asola Non Verba_Attanasio
laboratoridalbasso
 
#LdbStorytelling_Rural in Action
#LdbStorytelling_Rural in Action#LdbStorytelling_Rural in Action
#LdbStorytelling_Rural in Action
laboratoridalbasso
 
Tre anni di Laboratori dal Basso
Tre anni di Laboratori dal BassoTre anni di Laboratori dal Basso
Tre anni di Laboratori dal Basso
laboratoridalbasso
 
Ldb valecoricerca_lentini_web
Ldb valecoricerca_lentini_webLdb valecoricerca_lentini_web
Ldb valecoricerca_lentini_web
laboratoridalbasso
 
Ldb valecoricerca_indolfi_brevetti_3
Ldb valecoricerca_indolfi_brevetti_3Ldb valecoricerca_indolfi_brevetti_3
Ldb valecoricerca_indolfi_brevetti_3
laboratoridalbasso
 

More from laboratoridalbasso (20)

Ldb Rural in Action_CurandiKatz
Ldb Rural in Action_CurandiKatz Ldb Rural in Action_CurandiKatz
Ldb Rural in Action_CurandiKatz
 
Ldb Rural in Action_Coppola 01
Ldb Rural in Action_Coppola 01Ldb Rural in Action_Coppola 01
Ldb Rural in Action_Coppola 01
 
Ldb Rural in Action_Coppola 02
Ldb Rural in Action_Coppola 02Ldb Rural in Action_Coppola 02
Ldb Rural in Action_Coppola 02
 
Ldb neetneedeu panetta 08
Ldb neetneedeu panetta 08 Ldb neetneedeu panetta 08
Ldb neetneedeu panetta 08
 
Ldb neetneedeu panetta 07
Ldb neetneedeu panetta 07 Ldb neetneedeu panetta 07
Ldb neetneedeu panetta 07
 
Ldb neetneedeu panetta 06
Ldb neetneedeu panetta 06 Ldb neetneedeu panetta 06
Ldb neetneedeu panetta 06
 
Ldb neetneedeu panetta 05
Ldb neetneedeu panetta 05 Ldb neetneedeu panetta 05
Ldb neetneedeu panetta 05
 
Ldb neetneedeu panetta 04
Ldb neetneedeu panetta 04 Ldb neetneedeu panetta 04
Ldb neetneedeu panetta 04
 
Ldb neetneedeu panetta 03
Ldb neetneedeu panetta 03 Ldb neetneedeu panetta 03
Ldb neetneedeu panetta 03
 
Ldb neetneedeu cavalhro 01
Ldb neetneedeu cavalhro 01Ldb neetneedeu cavalhro 01
Ldb neetneedeu cavalhro 01
 
Ldb neetneedeu panetta 01
Ldb neetneedeu panetta 01 Ldb neetneedeu panetta 01
Ldb neetneedeu panetta 01
 
Ldb neetneedeu_mola 01
Ldb neetneedeu_mola 01Ldb neetneedeu_mola 01
Ldb neetneedeu_mola 01
 
Ldb neetneedeu panetta 02
Ldb neetneedeu panetta 02Ldb neetneedeu panetta 02
Ldb neetneedeu panetta 02
 
Ldb Asola, non Verba_Santanocito02
Ldb Asola, non Verba_Santanocito02Ldb Asola, non Verba_Santanocito02
Ldb Asola, non Verba_Santanocito02
 
Ldb Asola, non Verba_Santanocito01
Ldb Asola, non Verba_Santanocito01Ldb Asola, non Verba_Santanocito01
Ldb Asola, non Verba_Santanocito01
 
Ldb Asola Non Verba_Attanasio
Ldb Asola Non Verba_AttanasioLdb Asola Non Verba_Attanasio
Ldb Asola Non Verba_Attanasio
 
#LdbStorytelling_Rural in Action
#LdbStorytelling_Rural in Action#LdbStorytelling_Rural in Action
#LdbStorytelling_Rural in Action
 
Tre anni di Laboratori dal Basso
Tre anni di Laboratori dal BassoTre anni di Laboratori dal Basso
Tre anni di Laboratori dal Basso
 
Ldb valecoricerca_lentini_web
Ldb valecoricerca_lentini_webLdb valecoricerca_lentini_web
Ldb valecoricerca_lentini_web
 
Ldb valecoricerca_indolfi_brevetti_3
Ldb valecoricerca_indolfi_brevetti_3Ldb valecoricerca_indolfi_brevetti_3
Ldb valecoricerca_indolfi_brevetti_3
 

Recently uploaded

PROMOTING GREEN ENTREPRENEURSHIP AND ECO INNOVATION FOR SUSTAINABLE GROWTH.docx
PROMOTING GREEN ENTREPRENEURSHIP AND ECO INNOVATION  FOR SUSTAINABLE GROWTH.docxPROMOTING GREEN ENTREPRENEURSHIP AND ECO INNOVATION  FOR SUSTAINABLE GROWTH.docx
PROMOTING GREEN ENTREPRENEURSHIP AND ECO INNOVATION FOR SUSTAINABLE GROWTH.docx
nehaneha293248
 
一比一原版(ud毕业证书)丹佛大学毕业证如何办理
一比一原版(ud毕业证书)丹佛大学毕业证如何办理一比一原版(ud毕业证书)丹佛大学毕业证如何办理
一比一原版(ud毕业证书)丹佛大学毕业证如何办理
degswa
 
Sanfilippo Paladino - From Manager to Leader - Developing Your Leadership Sty...
Sanfilippo Paladino - From Manager to Leader - Developing Your Leadership Sty...Sanfilippo Paladino - From Manager to Leader - Developing Your Leadership Sty...
Sanfilippo Paladino - From Manager to Leader - Developing Your Leadership Sty...
Sanfilippo Paladino
 
Entrepreneurial Skills Class 9 IT 402.pptx
Entrepreneurial Skills Class 9 IT 402.pptxEntrepreneurial Skills Class 9 IT 402.pptx
Entrepreneurial Skills Class 9 IT 402.pptx
SapnaPahwa
 
一比一原版(flinders毕业证书)澳洲弗林德斯大学毕业证如何办理
一比一原版(flinders毕业证书)澳洲弗林德斯大学毕业证如何办理一比一原版(flinders毕业证书)澳洲弗林德斯大学毕业证如何办理
一比一原版(flinders毕业证书)澳洲弗林德斯大学毕业证如何办理
ahexau
 
Ecofrico: Leading the Way in Sustainable Hemp Backpacks
Ecofrico: Leading the Way in Sustainable Hemp BackpacksEcofrico: Leading the Way in Sustainable Hemp Backpacks
Ecofrico: Leading the Way in Sustainable Hemp Backpacks
Ecofrico
 
一比一原版(BCU毕业证)伯明翰城市大学毕业证如何办理
一比一原版(BCU毕业证)伯明翰城市大学毕业证如何办理一比一原版(BCU毕业证)伯明翰城市大学毕业证如何办理
一比一原版(BCU毕业证)伯明翰城市大学毕业证如何办理
fecmz
 
一比一原版(ucsf毕业证书)加利福尼亚大学旧金山分校毕业证如何办理
一比一原版(ucsf毕业证书)加利福尼亚大学旧金山分校毕业证如何办理一比一原版(ucsf毕业证书)加利福尼亚大学旧金山分校毕业证如何办理
一比一原版(ucsf毕业证书)加利福尼亚大学旧金山分校毕业证如何办理
degswa
 

Recently uploaded (8)

PROMOTING GREEN ENTREPRENEURSHIP AND ECO INNOVATION FOR SUSTAINABLE GROWTH.docx
PROMOTING GREEN ENTREPRENEURSHIP AND ECO INNOVATION  FOR SUSTAINABLE GROWTH.docxPROMOTING GREEN ENTREPRENEURSHIP AND ECO INNOVATION  FOR SUSTAINABLE GROWTH.docx
PROMOTING GREEN ENTREPRENEURSHIP AND ECO INNOVATION FOR SUSTAINABLE GROWTH.docx
 
一比一原版(ud毕业证书)丹佛大学毕业证如何办理
一比一原版(ud毕业证书)丹佛大学毕业证如何办理一比一原版(ud毕业证书)丹佛大学毕业证如何办理
一比一原版(ud毕业证书)丹佛大学毕业证如何办理
 
Sanfilippo Paladino - From Manager to Leader - Developing Your Leadership Sty...
Sanfilippo Paladino - From Manager to Leader - Developing Your Leadership Sty...Sanfilippo Paladino - From Manager to Leader - Developing Your Leadership Sty...
Sanfilippo Paladino - From Manager to Leader - Developing Your Leadership Sty...
 
Entrepreneurial Skills Class 9 IT 402.pptx
Entrepreneurial Skills Class 9 IT 402.pptxEntrepreneurial Skills Class 9 IT 402.pptx
Entrepreneurial Skills Class 9 IT 402.pptx
 
一比一原版(flinders毕业证书)澳洲弗林德斯大学毕业证如何办理
一比一原版(flinders毕业证书)澳洲弗林德斯大学毕业证如何办理一比一原版(flinders毕业证书)澳洲弗林德斯大学毕业证如何办理
一比一原版(flinders毕业证书)澳洲弗林德斯大学毕业证如何办理
 
Ecofrico: Leading the Way in Sustainable Hemp Backpacks
Ecofrico: Leading the Way in Sustainable Hemp BackpacksEcofrico: Leading the Way in Sustainable Hemp Backpacks
Ecofrico: Leading the Way in Sustainable Hemp Backpacks
 
一比一原版(BCU毕业证)伯明翰城市大学毕业证如何办理
一比一原版(BCU毕业证)伯明翰城市大学毕业证如何办理一比一原版(BCU毕业证)伯明翰城市大学毕业证如何办理
一比一原版(BCU毕业证)伯明翰城市大学毕业证如何办理
 
一比一原版(ucsf毕业证书)加利福尼亚大学旧金山分校毕业证如何办理
一比一原版(ucsf毕业证书)加利福尼亚大学旧金山分校毕业证如何办理一比一原版(ucsf毕业证书)加利福尼亚大学旧金山分校毕业证如何办理
一比一原版(ucsf毕业证书)加利福尼亚大学旧金山分校毕业证如何办理
 

Ldb Convergenze Parallele_trueblood_01

  • 1. Dynamic Quantum Decision Models Jennifer S.Trueblood University of California, Irvine Thursday, September 5, 13
  • 2. Outline 1. Disjunction Effect 2. Comparing Quantum and Markov Models with Prisoner’s Dilemma Game Thursday, September 5, 13
  • 4. Savage’s Sure Thing Principle • Suppose • when is the state of the world, you prefer action A over B • when is the state of the world, you also prefer action A over B • Therefore you should prefer A over B even when S is unknown S ¯S • People violate the Sure Thing Principle (Tversky & Shafir, 1992) Thursday, September 5, 13
  • 5. Disjunction Effect using Tversky & Shafir (1992) Gambling Paradigm • Chance to play the following gamble twice: • Even chance to win $250 or lose $100 • Condition Win: • Subjects told ‘Suppose you won the first play’ • Result: 69% choose to gamble • Condition Lost: • Subjects told ‘Suppose you lost the first play’ • Result: 59% choose to gamble • Condition Unknown: • Subjects told:‘Don’t know if you won or lost’ • Result: 35% choose to gamble Thursday, September 5, 13
  • 6. Failure of a 2-D Markov Model Law of Total Probability: p(G|U) = p(W|U)p(G|W) + p(L|U)p(G|L) Thursday, September 5, 13
  • 7. Failure of a 2-D Markov Model Law of Total Probability: p(G|U) = p(W|U)p(G|W) + p(L|U)p(G|L) p(G|W) = 0.69 > p(G|U) > p(G|L) = 0.59 But Tversky and Shafir (1992) found that p(G|U) = .35 < p(G | L) = 0.59 < p(G |W) = 0.69 violating the law of total probability Thursday, September 5, 13
  • 8. 2-D Quantum Model Law of Total Amplitude: p(G|U) = || < W|U >< G|W > + < L|U >< G|L > ||2 amplitude for transitioning to the “lose” state from the “unknown” state Thursday, September 5, 13
  • 9. Quantum Model AccountViolation of Sure Thing Principle = || < W|U > ||2 || < G|W > ||2 + || < L|U > ||2 || < G|L > ||2 + Int p(G|U) = || < W|U >< G|W > + < L|U >< G|L > ||2 Int = 2 · Re[< W|U >< G|W >< L|U >< G|L >] To account for Tversky and Shafir (1992) we require Int < 0 Thursday, September 5, 13
  • 10. Tversky and Shafir’s Intuition? • If you win on first play, you play again because you have extra “house” money • If you lose on first play, you play again because you need to make up for your losses • If you don’t know, these two reasons interfere and leaving you without any reason coming to mind Thursday, September 5, 13
  • 11. Failure of 2-D Quantum Model! • Quantum Model must satisfy Double stochasticity • In particular • ||<G | W>||2 + ||<G|L>||2 = 1 • But Tversky & Shafir found that • p(G | W) = 0.69 and p(G|L) = 0.59 • Violates double stochasticity! Thursday, September 5, 13
  • 12. 2-D Transition Matrix General 2-D transition matrix •Columns of T must sum to1 •Rows of T do not have to sum to 1 Thursday, September 5, 13
  • 13. Markov Process •Obeys law of total probability, but allows for general transition matrix Thursday, September 5, 13
  • 14. Quantum Process •Obeys law of total amplitude and not law of total probability. But U must transform a unit length vector Ψ(0) into another unit length vector Ψ(t) •To preserve lengths, U must be unitary  hN|Si hG|Si =  hN|Wi hN|Li hG|Wi hG|Li ·  hW|Si hL|Si =  hN|Wi · hW|Si + hN|Li · hL|Si hG|Wi · hW|Si + hG|Li · hL|Si Thursday, September 5, 13
  • 15. Quantum Unitary Matrix Unitary Matrix Transition Matrix •T must be Doubly stochastic: Both rows and columns of T must sum to unity Thursday, September 5, 13
  • 16. Disjunction Effect using Prisoner Dilemma Game (Shafir & Tversky, 1992) Thursday, September 5, 13
  • 17. • Condition 1:You know the other defected, and now you must decide whether to defect or cooperate • Condition 2:You know the other cooperated, and you must decide whether to defect or cooperate • Condition 3:You do not know, and you must decide whether to defect or cooperate Disjunction Effect using Prisoner Dilemma Game (Shafir & Tversky, 1992) Thursday, September 5, 13
  • 18. Results from 4 Experiments (Entries show % to defect) Study Known to defect Known to cooperate Unknown Shafir & Tversky (1992) 97 84 63 Croson (1999) 67 32 30 Li & Taplan (2002) 83 66 60 Busemeyer et al. (2006) 91 84 66 Violates the law of total probability Violates the law of double stochasticity Thursday, September 5, 13
  • 19. Another Failure: Both 2-D Models fail to explain the PD Game results • The Markov model fails because the results once again violate the law of total probability • The quantum model fails because the results once again violate double stochasticity
  • 20. Compatible vs. Incompatible Measures • The failed QP model assumed beliefs and actions are incompatible • Previously, beliefs and actions were represented by different bases within the same 2-D vector space • Now we switch to a compatible representation, which requires a 4-D space
  • 21. Inference-Action State Space • 4-dimensional space
  • 22. Classic Events • Suppose: observe the start at t=0 in state I1A1; do not observe during t=1; observe the end at t=2 in state I2A2 • Classic events: I1A1 ➝ I1A1 ➝ I2A2, or I1A1 ➝ I2A2 ➝ I2A2, or I1A1 ➝ I2A1 ➝ I2A2, or I1A1 ➝ I1A2 ➝ I2A2 • These 4 paths are the only possibilities in 2 steps; we just don’t know which is true
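The classical picture above is the Chapman–Kolmogorov equation: the two-step transition probability is the sum over the four possible intermediate states at t=1. A sketch with an arbitrary column-stochastic one-step matrix (the random matrix is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 4))
T = A / A.sum(axis=0)        # random column-stochastic one-step matrix

start, end = 0, 3            # e.g. I1A1 at t=0, I2A2 at t=2
# Sum over the 4 possible intermediate states at t=1
total = sum(T[end, k] * T[k, start] for k in range(4))
assert np.isclose(total, (T @ T)[end, start])  # matches the two-step matrix
```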
  • 23. Quantum Events • Suppose: observe the start at t=0 in state I1A1; do not observe during t=1; observe the end at t=2 in state I2A2 • We cannot say there are only 4 possible ways to get there; at t=1, the state is a superposition of all four; there is deeper uncertainty
  • 24. Compare 4-D Markov and Quantum Models for PD game
  • 25. Markov Model Assumption 1 • Four basis states: {|DD⟩, |DC⟩, |CD⟩, |CC⟩} • e.g. |DC⟩ ➝ you infer that the opponent will defect but you decide to cooperate • e.g. ΨDC = initial probability that the Markov system starts in state |DC⟩ • Σi Ψi = 1
  • 26. Initial inferences affected by prior information (Markov) • Condition 1: Known Defect • Condition 2: Known Coop • Condition 3: Unknown, ΨU = 0.5·ΨD + 0.5·ΨC
  • 27. Quantum Model Assumption 1 • Four basis states: {|DD⟩, |DC⟩, |CD⟩, |CC⟩} • e.g. |DC⟩ ➝ you infer that the opponent will defect but you decide to cooperate • e.g. ΨDC = initial probability amplitude that the quantum system starts in state |DC⟩ • Probability = |ΨDC|², with ||Ψ||² = 1
  • 28. Initial inferences affected by prior information (Quantum) • Condition 1: Known Defect • Condition 2: Known Coop • Condition 3: Unknown, ΨU = √0.5·ΨD + √0.5·ΨC
  • 29. Markov Model Assumption 2 • Strategy Selection
  • 30. Strategies affected by game payoffs and processing time • dΨ(t)/dt = K·Ψ(t) (Kolmogorov forward equation)
  • 31. Intensity Matrix • K = KA + KB • KA = [[KAd, 0], [0, KAc]] with KAi = [[−1, μi], [1, −μi]], where μi depends on the pay-offs associated with different actions; it transforms the state probabilities to favor either defection or cooperation depending on pay-offs • KB couples inference and action states so that probability flows from belief–action mismatched states toward matched states: cognitive dissonance, beliefs change to be consistent with actions
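The forward equation dΨ(t)/dt = K·Ψ(t) has the solution Ψ(t) = exp(t·K)·Ψ(0). A sketch of the KA part only (μ = 1.5 and t = 2 are arbitrary values; for simplicity the same μ is used for both inference blocks and KB is omitted):

```python
import numpy as np
from scipy.linalg import expm

mu = 1.5                                  # hypothetical payoff parameter
K_Ai = np.array([[-1.0,  mu],
                 [ 1.0, -mu]])            # intensity matrix: columns sum to 0
K_A = np.kron(np.eye(2), K_Ai)            # same block for both inference states

psi0 = np.full(4, 0.25)                   # uniform initial probabilities
psi_t = expm(2.0 * K_A) @ psi0            # solves dPsi/dt = K_A . Psi at t = 2
assert np.isclose(psi_t.sum(), 1.0)       # total probability is preserved
assert np.all(psi_t >= 0)                 # entries remain valid probabilities
```

Because the columns of an intensity matrix sum to zero, exp(t·K) is a proper transition matrix, so the evolved vector stays a probability distribution.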
  • 32. Quantum Model Assumption 2
  • 33. Strategies affected by Game Payoffs and Processing Time
  • 34. The Hamiltonian • H = HA + HB • HA = [[HAd, 0], [0, HAc]] with HAi = (1/√(1+μi²))·[[μi, 1], [1, −μi]], where μi depends on the pay-offs associated with different actions; it transforms the state amplitudes to favor either defection or cooperation depending on pay-offs • HB = (γ/√2)·([[1,0,1,0],[0,0,0,0],[1,0,−1,0],[0,0,0,0]] + [[0,0,0,0],[0,−1,0,1],[0,0,0,0],[0,1,0,1]]): cognitive dissonance, beliefs change to be consistent with actions
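The quantum counterpart evolves amplitudes with U = exp(−i·t·H); because H is Hermitian, U is unitary and vector lengths are preserved. A sketch with hypothetical parameter values (μ = 1.5, γ = 1.0, the same μ for both inference blocks, and t = π/2 as on the later slides):

```python
import numpy as np
from scipy.linalg import expm

mu, gamma = 1.5, 1.0   # hypothetical payoff and dissonance parameters
H_Ai = np.array([[mu, 1.0], [1.0, -mu]]) / np.sqrt(1 + mu**2)
H_A = np.kron(np.eye(2), H_Ai)          # block-diagonal inference blocks
M1 = np.array([[1, 0, 1, 0], [0, 0, 0, 0],
               [1, 0, -1, 0], [0, 0, 0, 0]], dtype=float)
M2 = np.array([[0, 0, 0, 0], [0, -1, 0, 1],
               [0, 0, 0, 0], [0, 1, 0, 1]], dtype=float)
H_B = (gamma / np.sqrt(2)) * (M1 + M2)  # aligns beliefs with actions
H = H_A + H_B                           # Hermitian (real symmetric)

U = expm(-1j * (np.pi / 2) * H)         # unitary evolution operator
assert np.allclose(U.conj().T @ U, np.eye(4))     # U is unitary

psi0 = np.full(4, 0.5, dtype=complex)   # equal superposition, unit length
assert np.isclose(np.linalg.norm(U @ psi0), 1.0)  # lengths preserved
```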
  • 35. Markov Model Assumption 3 • Output vector ϕ = T·Ψ = [ϕDD, ϕDC, ϕCD, ϕCC]ᵀ • e.g. ϕDC = final probability that the Markov system ends in state |DC⟩ • L = measurement operator for the decision to defect; it sums the probabilities of the states in which the action is defect • Probability defect = L·ϕ
  • 36. Markov Prediction • If the opponent is known to defect: L·ϕD = L·T·ΨD • If the opponent is known to cooperate: L·ϕC = L·T·ΨC • Under the unknown condition: L·ϕU = L·T·ΨU = L·T·(p·ΨD + q·ΨC) = p·L·T·ΨD + q·L·T·ΨC = p·L·ϕD + q·L·ϕC
  Study | Known to defect | Known to cooperate | Unknown
  Busemeyer et al. (2006) | 91 | 84 | 66
  Markov Model | 91 | 84 | between 84 and 91
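The interpolation argument above is simple arithmetic: whatever mixture weights p and q = 1 − p are used, the Markov prediction for the unknown condition is trapped between the two known-condition predictions, so it can never reach the observed 66%. A sketch:

```python
p_defect_known_D = 0.91   # model prediction, opponent known to defect
p_defect_known_C = 0.84   # model prediction, opponent known to cooperate

for p in [0.0, 0.25, 0.5, 0.75, 1.0]:  # any mixture weight
    q = 1.0 - p
    p_unknown = p * p_defect_known_D + q * p_defect_known_C
    assert 0.84 <= p_unknown <= 0.91    # always between the known values
    assert p_unknown > 0.66             # cannot reach the observed 66%
```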
  • 37. Quantum Model Assumption 3 • Output vector ϕ = U·Ψ = [ϕDD, ϕDC, ϕCD, ϕCC]ᵀ • e.g. ϕDC = final probability amplitude that the quantum system ends in state |DC⟩, with Probability = |ϕDC|² • M = measurement operator for the decision to defect • Probability defect = ||M·ϕ||²
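Assumption 3 can be sketched directly: with basis order {DD, DC, CD, CC}, the measurement operator M projects onto the two states in which the action is defect, and the defect probability is the squared length of the projected state. The amplitudes below are arbitrary illustrative values:

```python
import numpy as np

phi = np.array([0.6 + 0.2j, 0.3 - 0.1j, 0.5 + 0.0j, 0.4 + 0.3j])
phi = phi / np.linalg.norm(phi)        # final state, unit length
M = np.diag([1.0, 0.0, 1.0, 0.0])      # projector onto defect states DD, CD
p_defect = np.linalg.norm(M @ phi) ** 2
assert np.isclose(p_defect, abs(phi[0])**2 + abs(phi[2])**2)
assert 0.0 <= p_defect <= 1.0
```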
  • 38. Quantum Prediction • If the opponent is known to defect: ||M·U·ΨD||² • If the opponent is known to cooperate: ||M·U·ΨC||² • Under the unknown condition: ||M·U·(√0.5·ΨD + √0.5·ΨC)||², which contains an interference term
  Study | Known to defect | Known to cooperate | Unknown
  Busemeyer et al. (2006) | 91 | 84 | 66
  Quantum Model | 91 | 84 | 69
  • 39. Quantum Prediction • The probability of defection under the unknown condition minus the average for the two known conditions. (Negative values indicate an interference effect.)
  • 40. Thank You • Want to learn more...
  • 41. Bayesian Analysis of Individual Data
  • 42. Model Complexity Issue • Perhaps quantum probability succeeds where traditional models fail because it is more complex • Bayesian model comparison provides a coherent method for comparing models with respect to both accuracy and parsimony
  • 43. Dynamic Consistency • Dynamic consistency: final decisions agree with planned decisions (Barkan & Busemeyer, 2003) • Two-stage gamble: 1. Forced to play stage one, but the outcome remained unknown 2. Made a plan and a final choice about stage two • Plan: If you win, do you plan to gamble on stage two? If you lose, do you plan to gamble on stage two? • Final decision: After an actual win, do you gamble on stage two? After an actual loss, do you now choose to gamble on stage two?
  • 44. Two Stage Decision Task
  • 45. Barkan and Busemeyer (2003) Results • Risk averse after a win • Risk seeking after a loss
  • 46. Two Competing Models • 1. Quantum model • 2. Markov model: reduction of the quantum model when one key parameter is set to zero
  • 47. Quantum Model • Four outcomes: W = win first gamble, L = lose first gamble, T = take second gamble, R = reject second gamble • 4-D vector space corresponding to the four possible events: W ∧ T, W ∧ R, L ∧ T, L ∧ R • State of the decision maker: 1. I before the first gamble 2. F before the second gamble • From first gamble to second gamble: F = U·I
  • 48. Unitary Transformation • From first gamble to second gamble: F = U·I with U = exp(−i·(π/2)·(HA + HB)) • HA calculates the utilities for taking the gamble using two free parameters (risk aversion, a, and loss aversion, b) • HB allows for changes in beliefs using one free parameter, γ • The Markov model is a special case of the quantum model when γ = 0
  • 49. Comparing Fits • Fit both models to the dynamic consistency data: • 1. Quantum: three parameters, a and b to determine the utilities and γ for changing beliefs to align with actions; R² = .82 • 2. Markov (γ = 0): R² = .78
  • 50. Hierarchical Bayesian Parameter Estimation • Used hierarchical Bayesian estimation to evaluate whether or not H0: γ = 0 for the quantum model • L(Di|θi): likelihood of the data given the model parameters for person i • q(θi|π): prior probability of the parameters for person i, dependent on the hierarchical parameters (binomial distribution) • r(π): prior probability over the hierarchical parameters (uniform distribution on [0, 1])
  • 52. Estimates of Group Level Parameters • The risk aversion hierarchical parameter is located below 0.5, indicating somewhat strong risk aversion • The loss aversion hierarchical parameter is located above 0.5, indicating higher sensitivity to losses • Busemeyer, J. R., Wang, Z., & Trueblood, J. S. (2012). Hierarchical Bayesian estimation of quantum decision model parameters. In J. R. Busemeyer et al. (Eds.), QI 2012, LNCS 7620. Berlin, Germany: Springer-Verlag.
  • 53. Estimate of the Quantum Parameter • The hierarchical distribution of the quantum parameter lies below 0.5, implying that the mean value of γ is below zero