SUBJECTIVE PROBABILITIES NEED NOT BE SHARP

                                    Jake Chandler∗


   Championed by the likes of Isaac Levi (1980), Peter Walley (1991) and
many others, so-called ‘imprecise’ probabilistic models of rational decision-
making have become increasingly influential in the past few decades, per-
meating disciplines ranging from philosophy to statistics, through engineer-
ing, artificial intelligence and economics.1 These models relax the rational-
ity requirements of standard Bayesian decision theory by representing the
credal states of rational agents, not by a unique probabilistically coherent
credence function, but by a (typically convex) set of such functions, a so-
called ‘credal set’, and opting for some generalisation or other of the stan-
dard Bayesian decision rule. It is argued by proponents of such models that
these notably better capture the rationality constraints that operate on choice
behaviour.
   In a recent paper, however, Adam Elga (2010) argues that this liberalisa-
tion of Bayesian orthodoxy has gone a step too far. More specifically, he
claims that the imprecise view falters in its handling of sequential choice
problems, licensing behaviours that are intuitively rationally reprehensible.
The purpose of this note is to point out a number of inadequacies in his ar-
gument, whilst noting that the sequential choice problems that he considers
still pose problems for certain imprecise models that lie at the very liberal
end of the decision-theoretic spectrum.
   Elga’s objection, in a nutshell, proceeds as follows. He first asks us to
consider the following pair of decision problems:
       The agent faces a choice between A, accepting a bet that pays out
−10$ in case H is true and 15$ in case H is not, and ¬A, the status
       quo, that pays out 0$ irrespective of whether H is the case.
The agent faces a choice between B, accepting a bet that pays
   ∗
     Address for correspondence: Center for Logic and Analytic Philosophy, HIW, KU Leuven,
Kardinaal Mercierplein 2, 3000 Leuven, Belgium. Email: jacob.chandler[at]hiw.kuleuven.be
   1
     For a survey of the imprecise approach, see for instance Miranda (2008) or Troffaes (2007).

15$ in case H is true and −10$ in case H is not, and ¬B, the status
        quo, that pays out 0$ irrespective of whether H is the case.
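The interlocking payoffs of the two bets can be set out in a short sketch (an illustration of my own; the function names are not from the paper):

```python
# Payoffs of the two bets, in dollars, as a function of whether H is true.
# (Hypothetical helper names, used only for illustration.)
def payoff_A(h: bool) -> int:
    """Bet A pays -10 if H is true, 15 otherwise."""
    return -10 if h else 15

def payoff_B(h: bool) -> int:
    """Bet B pays 15 if H is true, -10 otherwise."""
    return 15 if h else -10

# Accepting both bets yields a sure gain of 5, whichever way H turns out:
for h in (True, False):
    assert payoff_A(h) + payoff_B(h) == 5
```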
As Elga notes, choosing both A and B secures a sure payoff of 5$, a prospect
that is clearly more desirable than the one resulting from choosing both ¬A
and ¬B instead, i.e. making the decision to decline both gambles and run
with the status quo. He then suggests the following principle which, at least
at first glance, could look plausible:
        (P) ‘Any perfectly rational agent who is sequentially offered bets A
        and B in the above circumstances (full disclosure in advance about
        the whole setup, no change of belief in H during the whole process,
utilities linear in dollars) will accept at least one of the bets.’ (ibid.,
        p. 4)
The problem however, according to Elga, is that whilst (P) is satisfied by
the standard Bayesian model, it is not borne out by any of the most popular
imprecise models of rational choice. What is more, he argues, a number of
alternatives that do bear it out are implausible on independent grounds. If
all this is correct, needless to say, the upshot would seem to be that the
prospects for the imprecise model are rather poor.
   Fortunately for the friend of imprecise credences, however, Elga’s argu-
ment is unpersuasive. For one thing, whilst it is indeed true, as Elga claims,
that a number of rules, including E-Admissibility and Γ-Maximin,2 do vio-
late (P), at least one well-known rule, which he omits to discuss, turns out
to satisfy it. This is the somewhat ‘optimistic’:
        Γ-Maximax: an act A is rationally permissible if and only if there is no
        alternative act B such that the highest expected utility of B according
        to any credence function in the credal set is strictly greater than the
        highest expected utility of A according to any credence function in
        the credal set.
It is easy to see that, according to this principle, if the agent declines the
first bet, then she will be compelled to accept the second. Indeed, after
having declined the first bet and having updated her initial credal set ac-
cordingly, whereas (a) according to all credence functions in the agent’s
resulting credal set, declining the bet will have an expected utility of 0, (b)
for some credence function in that set, accepting the second bet will have a
strictly positive expected utility.
   To see why (b) holds, assume for reductio that, for all credence functions
in the credal set immediately prior to reaching the second decision problem,
  2
See below for an informal presentation of these rules.

the second bet has an expected utility that is not strictly positive. Given the
payoffs, this amounts to the claim that, for any such function Cr, Cr(H) ≤
2/5. Then, given the condition that stipulates that there is ‘no change of
belief in H during the whole process’, it immediately follows that for any
credence function Cr in the credal set immediately prior to reaching the first
decision problem, we also have Cr(H) ≤ 2/5. From this we can then infer
that, according to any such credence function, the first bet had an expected
utility that was strictly positive, and hence, by Γ-Maximax, was mandatory
to accept. But, by hypothesis, the first bet was not accepted and the agent is
assumed to be rational. Contradiction.
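On a finite-grid approximation of the credal set, Γ-Maximax can be sketched as follows (the implementation and names are mine, not the paper's); with maximal agnosticism about H, it mandates accepting the second bet:

```python
# Γ-Maximax on a finite-grid approximation of a credal set whose values
# for Cr(H) span [a, b] (implementation and names are mine).
def gamma_maximax(acts, a, b, grid=1000):
    """Permissible acts: no alternative has a strictly higher best-case EU."""
    ps = [a + (b - a) * i / grid for i in range(grid + 1)]
    best = {name: max(eu(p) for p in ps) for name, eu in acts.items()}
    top = max(best.values())
    return [name for name, v in best.items() if v == top]

# Second decision point: B pays 15 if H, -10 if not; declining pays 0.
acts = {"B": lambda p: 25 * p - 10, "not-B": lambda p: 0}

# Under maximal agnosticism [0, 1], only accepting B is permissible:
print(gamma_maximax(acts, 0.0, 1.0))  # ['B']
# Were every value of Cr(H) at most 2/5, only declining would be:
print(gamma_maximax(acts, 0.0, 0.3))  # ['not-B']
```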
   But even if we set this issue to one side, problems remain. Indeed, there
seem to be strong grounds to hold that (P) is simply too strong a constraint
in the first place. More specifically, since the bets display a certain inter-
dependence in the payoffs, the rationality of the choice made at the first
decision point may well hinge in part on what the agent thinks that she will
do at the second. In particular, consider the issue from the perspective of
the popular ‘pessimistic’ counterpart of Γ-Maximax, endorsed by Gilboa &
Schmeidler (1989):
       Γ-Maximin: an act A is rationally permissible if and only if there is no
       alternative act B such that the lowest expected utility of B according
       to any credence function in the credal set is strictly greater than the
       lowest expected utility of A according to any credence function in the
       credal set.3
It is clear that beliefs about future actions make a crucial difference here. To
illustrate the point, let us assume that our agent finds herself in a maximal
state of agnosticism with respect to H, so that the credences in H in the
initial credal set span the interval [0, 1].
   Assume for example that, for whatever reason, at the first decision point,
(a) the agent believes that if she declines the first bet, she will accept the
second, and that, conversely, if she accepts the first bet, she will decline
the second. Since, with respect to the initial credal set, the lowest expected
utility associated with declining the first bet and accepting the second is the
same as the one associated with accepting the first bet and declining the
second, namely −10$, Γ-Maximin leaves it open to her as to whether or not
to accept the first bet.
   But now assume that, at the first decision point, (b) the agent believes that
she will accept the first bet if and only if she will accept the second. Clearly,
   3
     Note that Γ-Maximin, whilst popular, is not uncontentious. Seidenfeld (2004), in particular,
criticises it on the basis that it can sometimes license a negative valuation of cost-free information.


Γ-Maximin recommends accepting the first bet: with respect to the initial
credal set, the lowest expected utility associated with accepting both bets
is 5$, whereas the lowest expected utility associated with declining both
bets is 0$. In this sequential choice problem, according to Γ-Maximin, the
recommendations of rationality are crucially sensitive to the agent’s beliefs
about her future choices.
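The sensitivity to beliefs about future choices can be checked numerically (a sketch of my own, representing the credal set by the values of Cr(H) on a finite grid over [0, 1]; the scenario labels are mine):

```python
# Γ-Maximin over the credal set Cr(H) in [0, 1], evaluated on a finite grid.
# First-stage options are appraised via the agent's beliefs about her
# second-stage choice.
def worst_eu(eu, a=0.0, b=1.0, grid=1000):
    """Lowest expected utility of an act across the credal set."""
    return min(eu(a + (b - a) * i / grid) for i in range(grid + 1))

def gamma_maximin(acts):
    """Permissible acts: no alternative has a strictly higher worst-case EU."""
    worst = {name: worst_eu(eu) for name, eu in acts.items()}
    top = max(worst.values())
    return [name for name, w in worst.items() if w == top]

# Belief state (a): declining A leads to accepting B, and vice versa.
scenario_a = {
    "accept A, then decline B": lambda p: -10 * p + 15 * (1 - p),
    "decline A, then accept B": lambda p: 15 * p - 10 * (1 - p),
}
# Both options bottom out at -10, so Γ-Maximin permits either:
assert len(gamma_maximin(scenario_a)) == 2

# Belief state (b): A is accepted if and only if B will be.
scenario_b = {
    "accept both": lambda p: 5,   # sure payoff
    "decline both": lambda p: 0,  # status quo
}
print(gamma_maximin(scenario_b))  # ['accept both']
```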
We have seen that if the agent is in state of mind (b), reasoning by backward
induction, she will follow Elga’s recommendation to accept at least one
of the bets. But under which conditions is there any rational compulsion
whatsoever for her to hold these views about her future self? Well, one
could easily imagine her reasoning as follows:
     Suppose that I decline the first bet. What will I do when I get to the
     second decision point? Well, given that I will have declined the first
     bet, and will be none the wiser about whether or not H, I will think to
     myself that if I decline the second bet, the expected utility will be 0$
     come what may, whereas if I accept the second bet, for all I know, the
     expected utility could be −10$. Being a sensible person, I will err on
the side of caution and decline the second bet. So if I decline the first
     bet, I will decline the second one as well.
     But now suppose that I accept the first bet. What then? Well, again,
     given that I will have accepted the first bet, and will be none the wiser
     about whether or not H, I will think to myself that if I decline the
     second bet, the expected utility could well be −10$, for all I know,
     whereas if I accept the second bet, I am guaranteed an expected payoff
     of 5$. Being a sensible person, I will err on the side of caution and
     accept the second bet: If I accept the first bet, I will accept the second
     one as well.
Underlying the agent’s reasoning are a number of assumptions about her
future state of mind, many of which were left implicit. These include: (i)
the belief that her opinions with respect to H would remain unchanged be-
tween the first and the second decision point, (ii) the belief that she would
remember her first decision at the time of the second and (iii) the belief that
her future self would follow the recommendations of Γ-Maximin.
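The backward induction just rehearsed can be sketched in code (an illustration under assumptions (i)-(iii); the helper names are mine):

```python
# The quoted backward induction under Γ-Maximin, with Cr(H) spanning
# [0, 1] at both stages (condition NC).
def worst(eu, grid=1000):
    return min(eu(i / grid) for i in range(grid + 1))

def second_stage(first_accepted: bool) -> str:
    """Γ-Maximin choice at the second decision point, given the first."""
    # Payoff carried over from the first decision point:
    carried = (lambda p: -10 * p + 15 * (1 - p)) if first_accepted else (lambda p: 0)
    options = {
        "accept B": lambda p: carried(p) + 15 * p - 10 * (1 - p),
        "decline B": carried,
    }
    worsts = {name: worst(eu) for name, eu in options.items()}
    return max(worsts, key=worsts.get)

print(second_stage(False))  # declining B (a sure 0) beats a worst case of -10
print(second_stage(True))   # accepting B (a sure 5) beats a worst case of -10
```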
  It is of course far from clear to what extent the agent’s holding these
views is rationally mandatory. But the above considerations suggest a more
convincing, weaker, alternative to (P):
     (P− ) Any perfectly rational agent who is sequentially offered bets A
     and B in the above circumstances (full disclosure in advance about the

whole setup (FD), no change of belief in H during the whole process
       (NC), utilities linear in dollars, certainty of future rationality (CFR),
       introspection (I), certainty of perfect recall (CPR) and certainty of
       lack of change of belief in H (CNC)) will accept at least one of the
       bets.4
I take this principle to be intuitively compelling. Indeed, more generally,
the following would seem to hold: if you are rational and can count on your
future self making a rational choice that is informed by your decisions, you
will avoid making a sequence of choices that is strictly dispreferred to
some available alternative. What is more, clearly, to the extent that Elga deems
even (P) to be plausible, he will grant that (P− ) is so too.
   But now, once the additional conditions are suitably cashed out in formal
terms, it can be easily verified that Γ-Maximin satisfies (P− ). The supple-
mentary conditions are naturally interpreted as follows, where Γ denotes
the credal set of the agent at the time immediately prior to making the first
decision, and Γ∗ denotes the credal set of the agent at the time immediately
prior to making the second:
       (CFR) implies that, for any function in Γ, the credence accorded to
       the agent’s making, at the time of the second decision problem, a deci-
       sion in line with the relevant principle of rational choice (for instance
Γ-Maximax or Γ-Maximin, depending on one’s preferred recommen-
       dation) is 1.
       (I) implies that, for any function Cr in Γ, the credence accorded to the
       agent’s credal set being Γ is 1.
(CNC) entails, in conjunction with the previous condition, that, for
any function Cr in Γ and any real number r, if there exists a function
Cr′ in Γ such that Cr′(H) = r, then the credence assigned by Cr to
there being a function Cr′′ in Γ∗ such that Cr′′(H) = r is 1.
(CPR) requires it to be the case that, for any function Cr in Γ and
any of the options available in the first decision problem, Cr assigns
a credence of 1 to (a) its being the case that, for any function Cr′
in Γ∗, Cr′ assigns a credence of 1 to the option having been chosen,
conditional on (b) its having been chosen.
   4
Strictly speaking, these assumptions can be somewhat weakened, but the strong versions suf-
fice to make the present point.




(FD), which Elga left somewhat unspecified, simply amounts to its
        being the case that, for any function in Γ, the credence accorded to the
second decision problem being encountered is 1.
With this in hand, we can now provide an informal sketch of the proof:
        Let Γ be such that, for every real number r in an arbitrary interval [a,
        b], there is a function Cr in Γ such that Cr(H) = r. It will be shown
that, depending on the value of a, one or the other of the following two
disjuncts holds: either choosing ¬A will mandate choosing B, or choosing
A will be mandatory in the first place.
        Either (1) a > 2/5 or (2) a ≤ 2/5. Assume (1). Now, by (NC), if
        ¬A were to be chosen at the first point, according to Γ∗ , a choice of
        B at the second point would have, at worst, an expected utility of
        15a − 10(1 − a) = 25a − 10$, whereas a choice of ¬B would have, at
        worst, an expected utility of 0$. Since (1) is equivalent to 25a−10 > 0,
it follows from Γ-Maximin that if ¬A were to be chosen at the first
        point, then B would be chosen at the second. This is the first of our
        disjuncts.
        So now assume (2), i.e. that a ≤ 2/5. This is equivalent to 0 ≥ 25a−10.
With respect to Γ, the left-hand term of this last expression is the
lowest expected payoff for choosing ¬A and then ¬B and the right-hand
term is the lowest expected payoff for choosing ¬A and then B. It
        follows that, whatever credences are assigned by the various functions
        in Γ to B’s being chosen, conditional on ¬A’s being chosen, according
to each of these functions, the expected utility of choosing ¬A will be
        no higher than 0.
        Now by (FD), (CPR), (I) and (CNC), it follows that (3): Every cre-
        dence function in Γ accords a credence of 1, conditional on A’s being
        chosen at the first decision point, to the proposition that, according to
        Γ∗ , a choice of ¬B at the second point would have, at worst, an ex-
        pected utility of −10b + 15(1 − b) = 15 − 25b$, whereas a choice of B
        would have, at worst, an expected utility of 5$.
By (CFR) and Γ-Maximin, it then follows from (3) that, whatever cre-
dences are assigned by the various functions in Γ to B’s being chosen,
conditional on A’s being chosen, according to each of these func-
tions, the expected utility of choosing A will be no lower than 5. It
then follows by Γ-Maximin that ¬A will not be chosen. We now have
        the second disjunct. QED5
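The two disjuncts of the sketch can be verified numerically for particular choices of [a, b] (an illustrative finite-grid check; the helper names are mine):

```python
# Numerical check of the two disjuncts of the proof sketch.
def worst(eu, a, b, grid=1000):
    return min(eu(a + (b - a) * i / grid) for i in range(grid + 1))

def second_choice_after_decline(a, b):
    # After declining A, Γ-Maximin compares B (EU 25p - 10) with ¬B (EU 0).
    return "B" if worst(lambda p: 25 * p - 10, a, b) > 0 else "not-B"

def first_choice(a, b):
    # By (CFR) etc., accepting A is followed by B, for a sure 5; declining A
    # is followed by the second-stage choice just computed.
    decline_eu = (lambda p: 25 * p - 10) \
        if second_choice_after_decline(a, b) == "B" else (lambda p: 0)
    return "A" if 5 > worst(decline_eu, a, b) else "either"

# Case (1), a > 2/5: declining A would mandate accepting B.
assert second_choice_after_decline(0.5, 0.9) == "B"
# Case (2), a <= 2/5: accepting A is itself mandated at the first stage.
assert first_choice(0.0, 1.0) == "A"
```

Either way, at least one bet ends up being accepted, as (P−) requires.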
   5
     Note that it is easy to see from this that, as suggested above, if we set a = 0 and b = 1, we
recover the result that choosing A, followed by choosing B, is rationally mandated by Γ-Maximin,
given the assumptions in place.

It is worth noting, however, that (P− ) is violated by at least three further
rules. These include:
E-Admissibility: an act A is permissible if and only if there is a cre-
dence function Cr in the agent’s credal set such that there is no al-
ternative act B such that the expected utility of B according to Cr is
strictly greater than the expected utility of A according to Cr.
       Maximality: an act A is permissible if and only if there is no alternative
       act B such that for any credence function Cr in the agent’s credal set,
       the expected utility of B according to Cr is higher than the expected
       utility of A according to Cr.
Interval Dominance: an act A is permissible if and only if there is no
alternative act B such that the lowest expected utility of B
       according to any credence function in the credal set is strictly greater
       than the highest expected utility of A according to any credence func-
       tion in the credal set.
Regarding the first rule, consider for instance the case of extreme agnosti-
cism in which a = 0 and b = 1. As we have seen above, since a ≤ 2/5,
choosing ¬A leaves it open whether or not B will subsequently be chosen.
What is more, for all credence functions in Γ, there is no constraint on the cre-
dence that B will be chosen at the second point conditional either on A or
on ¬A having been chosen at the first. Consider for instance the scenario
in which ¬A was chosen at the first decision point. There exists a credence
function in Γ∗ according to which choosing B has a higher expected utility
than choosing ¬B (let the credence accorded to H be 1). But there also
exists a credence function in which this relation is inverted (let the cre-
dence accorded to H be 0). Consider now the scenario in which A was
chosen. Again, there exists a credence function in Γ∗ such that choosing B
has a higher expected utility than choosing ¬B (let the credence accorded
to H be 1). But there also exists a credence function in which this relation
is inverted (let the credence accorded to H be 0). From this observation,
it quickly follows that E-Admissibility violates (P− ). But then the principle
is also violated by both Maximality and Interval Dominance, since
it is well known that if an act is permissible according to the first rule, it is
thereby permissible according to the subsequent two.6
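The three rules, and the latitude they leave at the second decision point under extreme agnosticism, can be sketched as follows (finite-grid implementations of my own):

```python
# Finite-grid implementations of the three rules, applied at the second
# decision point with Cr(H) spanning [0, 1] after ¬A was chosen.
def eus(eu, grid=1000):
    """Expected utilities of an act across a grid over the credal set."""
    return [eu(i / grid) for i in range(grid + 1)]

def e_admissible(acts):
    # Permissible iff some credence function makes the act undominated.
    table = {n: eus(f) for n, f in acts.items()}
    points = range(len(next(iter(table.values()))))
    return [n for n in table
            if any(all(table[n][i] >= table[m][i] for m in table) for i in points)]

def maximal(acts):
    # Permissible iff no alternative beats the act under every credence function.
    table = {n: eus(f) for n, f in acts.items()}
    points = range(len(next(iter(table.values()))))
    return [n for n in table
            if not any(all(table[m][i] > table[n][i] for i in points)
                       for m in table if m != n)]

def interval_dominant(acts):
    # Permissible iff no alternative's lowest EU exceeds the act's highest EU.
    table = {n: eus(f) for n, f in acts.items()}
    return [n for n in table
            if not any(min(table[m]) > max(table[n]) for m in table if m != n)]

acts = {"B": lambda p: 25 * p - 10, "not-B": lambda p: 0}
for rule in (e_admissible, maximal, interval_dominant):
    assert set(rule(acts)) == {"B", "not-B"}  # each rule permits both options
```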
   6
     See Troffaes (2007). In addition, note that E-Admissibility is strictly more permissive than
Γ-Maximax and that Maximality is strictly more permissive than Γ-Maximin. There is no such
straightforward relationship, however, between E-Admissibility and Γ-Maximin: some E-admissible
acts are not Γ-Maximin-permissible and vice versa.


To conclude, Elga’s declaration of the death of the imprecise view is somewhat
premature: the case for sharpness has simply not been made. First of all, (P)
happens to be satisfied by Γ-Maximax, a fact that Elga omitted to mention.
Secondly, it seems plausible to argue that (P) was too strong in the first
place and that its more sensible replacement, (P− ), is notably satisfied by
the fairly popular Γ-Maximin. Interestingly, however, it is worth noting that
a number of well-known decision rules do not satisfy even the weakened
principle. This arguably provides prima facie grounds to reject them in
favour of something more stringent.

Acknowledgments

I am grateful to Richard Bradley, Igor Douven and Wlodek Rabinowicz for
helpful feedback on an earlier version of this paper.

References

Elga, A. (2010). Subjective Probabilities Should Be Sharp. Philosophers’
Imprint 10(5), pp. 1–11.
Gilboa, I. and D. Schmeidler (1989). Maxmin Expected Utility with Non-
Unique Prior. Journal of Mathematical Economics 18, pp. 141–53.
Levi, I. (1980). The Enterprise of Knowledge: An Essay on Knowledge,
Credal Probability and Chance. Cambridge, Mass.: MIT Press.
Miranda, E. (2008). A Survey of the Theory of Coherent Lower Previsions.
International Journal of Approximate Reasoning 48(2), pp. 628–58.
Seidenfeld, T. (2004). A Contrast between Two Decision Rules for Use with
(Convex) Sets of Probabilities: Γ-Maximin versus E-Admissibility. Syn-
these 140(1/2), pp. 69–88.
Troffaes, M. C. M. (2007). Decision Making under Uncertainty Using Im-
precise Probabilities. International Journal of Approximate Reasoning 45,
pp. 17–29.


Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 

J CHANDLER - Subjective Probabilities Need Not Be Sharp

SUBJECTIVE PROBABILITIES NEED NOT BE SHARP

Jake Chandler∗

Championed by the likes of Isaac Levi (1980), Peter Walley (1991) and many others, so-called ‘imprecise’ probabilistic models of rational decision-making have become increasingly influential in the past few decades, permeating disciplines ranging from philosophy to statistics, through engineering, artificial intelligence and economics.1 These models relax the rationality requirements of standard Bayesian decision theory by representing the credal states of rational agents, not by a unique probabilistically coherent credence function, but by a (typically convex) set of such functions, a so-called ‘credal set’, and opting for some generalisation or other of the standard Bayesian decision rule. Proponents of such models argue that these notably better capture the rationality constraints that operate on choice behaviour.

In a recent paper, however, Adam Elga (2010) argues that this liberalisation of Bayesian orthodoxy has gone a step too far. More specifically, he claims that the imprecise view falters in its handling of sequential choice problems, licensing behaviours that are intuitively rationally reprehensible. The purpose of this note is to point out a number of inadequacies in his argument, whilst noting that the sequential choice problems that he considers still pose problems for certain imprecise models that lie at the very liberal end of the decision-theoretic spectrum.

Elga’s objection, in a nutshell, proceeds as follows. He first asks us to consider the following pair of decision problems:

The agent faces a choice between A, accepting a bet that pays out −10$ in case H is true and 15$ in case H is not, and ¬A, the status quo, that pays out 0$ irrespective of whether H is the case.
The agent faces a choice between B, accepting a bet that pays out 15$ in case H is true and −10$ in case H is not, and ¬B, the status quo, that pays out 0$ irrespective of whether H is the case.

∗ Address for correspondence: Center for Logic and Analytic Philosophy, HIW, KU Leuven, Kardinaal Mercierplein 2, 3000 Leuven, Belgium. Email: jacob.chandler[at]hiw.kuleuven.be
1 For a survey of the imprecise approach, see for instance Miranda (2008) or Troffaes (2007).

As Elga notes, choosing both A and B secures a sure payoff of 5$, a prospect that is clearly more desirable than the one resulting from choosing both ¬A and ¬B instead, i.e. making the decision to decline both gambles and run with the status quo. He then suggests the following principle which, at least at first glance, could look plausible:

(P) ‘Any perfectly rational agent who is sequentially offered bets A and B in the above circumstances (full disclosure in advance about the whole setup, no change of belief in H during the whole process, utilities linear in dollars) will accept at least one of the bets.’ (ibid, p. 4)

The problem, however, according to Elga, is that whilst (P) is satisfied by the standard Bayesian model, it is not borne out by any of the most popular imprecise models of rational choice. What is more, he argues, a number of alternatives that do bear it out are implausible on independent grounds. If all this is correct, needless to say, the upshot would seem to be that the prospects for the imprecise model are rather poor.

Fortunately for the friend of imprecise credences, however, Elga’s argument is unpersuasive. For one thing, whilst it is indeed true, as Elga claims, that a number of rules, including E-admissibility and Γ-maximin,2 do violate (P), at least one well-known rule, which he omits to discuss, turns out to satisfy it. This is the somewhat ‘optimistic’:

Γ-Maximax: an act A is rationally permissible if and only if there is no alternative act B such that the highest expected utility of B according to any credence function in the credal set is strictly greater than the highest expected utility of A according to any credence function in the credal set.

It is easy to see that, according to this principle, if the agent declines the first bet, then she will be compelled to accept the second.
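This can be illustrated with a small numerical sketch. The payoff figures are from the text; the finite discretisation of the credal set and the function names are illustrative assumptions, not part of the paper’s formalism:

```python
# Sketch of Gamma-Maximax on the second bet (payoffs from the text).
# The credal set is modelled as a finite grid of credences in H; which
# interval it spans is an assumption made for illustration.

def eu(cr_h, payoff_h, payoff_not_h):
    """Expected utility of an act, given a credence cr_h in H."""
    return cr_h * payoff_h + (1 - cr_h) * payoff_not_h

def gamma_maximax_permissible(acts, credal_set):
    """An act is permissible iff no alternative has a strictly greater
    best-case expected utility over the credal set."""
    best = {name: max(eu(c, *p) for c in credal_set) for name, p in acts.items()}
    top = max(best.values())
    return {name for name, v in best.items() if v >= top}

credal_set = [i / 10 for i in range(11)]  # credences in H spanning [0, 1]

# Second decision problem: B pays 15$ if H, -10$ otherwise; not-B pays 0$.
second = {"B": (15, -10), "not-B": (0, 0)}
print(gamma_maximax_permissible(second, credal_set))
# B's best-case expected utility (15) exceeds not-B's (0), so only B is permissible.
```

On this rule only the best-case expected utility of each act matters, which is why accepting the second bet is forced here.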
Indeed, after having declined the first bet and having updated her initial credal set accordingly, whereas (a) according to all credence functions in the agent’s resulting credal set, declining the bet will have an expected utility of 0, (b) for some credence function in that set, accepting the second bet will have a strictly positive expected utility.

To see why (b) holds, assume for reductio that, for all credence functions in the credal set immediately prior to reaching the second decision problem, the second bet has an expected utility that is not strictly positive. Given the payoffs, this amounts to the claim that, for any such function Cr, Cr(H) ≤ 2/5. Then, given the condition that stipulates that there is ‘no change of belief in H during the whole process’, it immediately follows that for any credence function Cr in the credal set immediately prior to reaching the first decision problem, we also have Cr(H) ≤ 2/5. From this we can then infer that, according to any such credence function, the first bet had an expected utility that was strictly positive, and hence, by Γ-Maximax, was mandatory to accept. But, by hypothesis, the first bet was not accepted and the agent is assumed to be rational. Contradiction.

2 See below for an informal presentation of these rules.

But even if we set this issue to one side, problems remain. Indeed, there seem to be strong grounds to hold that (P) is simply too strong a constraint in the first place. More specifically, since the bets display a certain interdependence in the payoffs, the rationality of the choice made at the first decision point may well hinge in part on what the agent thinks that she will do at the second. In particular, consider the issue from the perspective of the popular ‘pessimistic’ counterpart of Γ-Maximax, endorsed by Gilboa & Schmeidler (1989):

Γ-Maximin: an act A is rationally permissible if and only if there is no alternative act B such that the lowest expected utility of B according to any credence function in the credal set is strictly greater than the lowest expected utility of A according to any credence function in the credal set.3

It is clear that beliefs about future actions make a crucial difference here. To illustrate the point, let us assume that our agent finds herself in a maximal state of agnosticism with respect to H, so that the credences in H in the initial credal set span the interval [0, 1].
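The figures that drive the illustration can be computed directly: the sketch below tabulates the lowest expected utility, over the initial credal set, of each of the four available sequences of choices. The payoff pairs and the interval [0, 1] are from the text; the discretisation and names are illustrative assumptions:

```python
# Worst-case expected utility of each sequence of choices over the
# initial credal set, with credences in H spanning [0, 1].

def eu(cr_h, payoff_h, payoff_not_h):
    """Expected utility given a credence cr_h in H."""
    return cr_h * payoff_h + (1 - cr_h) * payoff_not_h

# Total payoffs of the four strategies: A pays -10/+15, B pays +15/-10.
strategies = {
    ("A", "B"): (5, 5),          # -10 + 15 if H; 15 - 10 if not-H: 5$ for sure
    ("A", "not-B"): (-10, 15),
    ("not-A", "B"): (15, -10),
    ("not-A", "not-B"): (0, 0),
}

credal_set = [i / 10 for i in range(11)]
worst = {s: min(eu(c, *p) for c in credal_set) for s, p in strategies.items()}
for s, w in sorted(worst.items()):
    print(s, w)
# Accepting both bets guarantees 5$; mixing acceptance and refusal bottoms
# out at -10$; declining both guarantees 0$.
```

These are exactly the worst-case values, −10$, 5$ and 0$, that the discussion of scenarios (a) and (b) below turns on.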
Assume for example that, for whatever reason, at the first decision point, (a) the agent believes that if she declines the first bet, she will accept the second, and that, conversely, if she accepts the first bet, she will decline the second. Since, with respect to the initial credal set, the lowest expected utility associated with declining the first bet and accepting the second is the same as the one associated with accepting the first bet and declining the second, namely −10$, Γ-Maximin leaves it open to her whether or not to accept the first bet.

But now assume that, at the first decision point, (b) the agent believes that she will accept the first bet if and only if she will accept the second. Clearly, Γ-Maximin recommends accepting the first bet: with respect to the initial credal set, the lowest expected utility associated with accepting both bets is 5$, whereas the lowest expected utility associated with declining both bets is 0$.

3 Note that Γ-Maximin, whilst popular, is not uncontentious. Seidenfeld (2004), in particular, criticises it on the basis that it can sometimes license a negative valuation of cost-free information.

In this sequential choice problem, according to Γ-Maximin, the recommendations of rationality are crucially sensitive to the agent’s beliefs about her future choices. We have seen that if the agent is in state of mind (b), reasoning by backward induction, she will follow Elga’s recommendation to accept at least one of the bets. But under which conditions is there any rational compulsion whatsoever for her to hold these views about her future self? Well, one could easily imagine her reasoning as follows:

Suppose that I decline the first bet. What will I do when I get to the second decision point? Well, given that I will have declined the first bet, and will be none the wiser about whether or not H, I will think to myself that if I decline the second bet, the expected utility will be 0$ come what may, whereas if I accept the second bet, for all I know, the expected utility could be −10$. Being a sensible person, I will err on the side of caution and decline the second bet. So if I decline the first bet, I will decline the second one as well.

But now suppose that I accept the first bet. What then? Well, again, given that I will have accepted the first bet, and will be none the wiser about whether or not H, I will think to myself that if I decline the second bet, the expected utility could well be −10$, for all I know, whereas if I accept the second bet, I am guaranteed an expected payoff of 5$. Being a sensible person, I will err on the side of caution and accept the second bet: if I accept the first bet, I will accept the second one as well.

Underlying the agent’s reasoning are a number of assumptions about her future state of mind, many of which were left implicit.
These include: (i) the belief that her opinions with respect to H would remain unchanged between the first and the second decision point, (ii) the belief that she would remember her first decision at the time of the second, and (iii) the belief that her future self would follow the recommendations of Γ-Maximin. It is of course far from clear to what extent the agent’s holding these views is rationally mandatory. But the above considerations suggest a more convincing, weaker, alternative to (P):

(P−) Any perfectly rational agent who is sequentially offered bets A and B in the above circumstances (full disclosure in advance about the whole setup (FD), no change of belief in H during the whole process (NC), utilities linear in dollars, certainty of future rationality (CFR), introspection (I), certainty of perfect recall (CPR) and certainty of lack of change of belief in H (CNC)) will accept at least one of the bets.4

I take this principle to be intuitively compelling. Indeed, more generally, the following would seem to hold: if you are rational and can count on your future self making a rational choice that is informed by your decisions, you will avoid making a sequence of choices that is strictly dispreferable to some available alternative. What is more, clearly, to the extent that Elga deems even (P) to be plausible, he will grant that (P−) is so too.

But now, once the additional conditions are suitably cashed out in formal terms, it can easily be verified that Γ-Maximin satisfies (P−). The supplementary conditions are naturally interpreted as follows, where Γ denotes the credal set of the agent at the time immediately prior to making the first decision, and Γ∗ denotes the credal set of the agent at the time immediately prior to making the second:

(CFR) implies that, for any function in Γ, the credence accorded to the agent’s making, at the time of the second decision problem, a decision in line with the relevant principle of rational choice (for instance Γ-Maximax or Γ-Maximin, depending on one’s preferred recommendation) is 1.

(I) implies that, for any function Cr in Γ, the credence accorded to the agent’s credal set being Γ is 1.

(CNC) entails, in conjunction with the previous condition, that, for any function Cr in Γ, if there exists a function Cr′ in Γ such that Cr′(H) = r, then the credence assigned by Cr to there being a function Cr′′ in Γ∗ such that Cr′′(H) = r is 1.
(CPR) requires it to be the case that, for any function Cr in Γ and any of the options available in the first decision problem, Cr assigns a credence of 1 to (a) its being the case that, for any function Cr′ in Γ∗, Cr′ assigns a credence of 1 to the option having been chosen, conditional on (b) its having been chosen.

4 Strictly speaking, these assumptions can be somewhat weakened, but the strong versions suffice to make the present point.
(FD), which Elga left somewhat unspecified, simply amounts to its being the case that, for any function in Γ, the credence accorded to the second decision problem being encountered is 1.

With this in hand, we can now provide an informal sketch of the proof:

Let Γ be such that, for every real number r in an arbitrary interval [a, b], there is a function Cr in Γ such that Cr(H) = r. It will be shown that, depending on the value of a, one or the other of the two following disjuncts holds: either choosing ¬A will mandate choosing B, or choosing A will be mandatory in the first place. Either (1) a > 2/5 or (2) a ≤ 2/5.

Assume (1). Now, by (NC), if ¬A were to be chosen at the first point, then, according to Γ∗, a choice of B at the second point would have, at worst, an expected utility of 15a − 10(1 − a) = 25a − 10$, whereas a choice of ¬B would have, at worst, an expected utility of 0$. Since (1) is equivalent to 25a − 10 > 0, it follows from Γ-Maximin that if ¬A were to be chosen at the first point, then B would be chosen at the second. This is the first of our disjuncts.

So now assume (2), i.e. that a ≤ 2/5. This is equivalent to 0 ≥ 25a − 10. With respect to Γ, the left-hand term of this last expression is the lowest expected payoff for choosing ¬A and then ¬B, and the right-hand term is the lowest expected payoff for choosing ¬A and then B. It follows that, whatever credences are assigned by the various functions in Γ to B’s being chosen, conditional on ¬A’s being chosen, according to each of these functions, the expected utility of choosing ¬A will be no higher than 0.

Now by (FD), (CPR), (I) and (CNC), it follows that (3): every credence function in Γ accords a credence of 1, conditional on A’s being chosen at the first decision point, to the proposition that, according to Γ∗, a choice of ¬B at the second point would have, at worst, an expected utility of −10b + 15(1 − b) = 15 − 25b$, whereas a choice of B would have, at worst, an expected utility of 5$.
By (CFR) and Γ-Maximin, it then follows from (3) that, whatever credences are assigned by the various functions in Γ to B’s being chosen, conditional on A’s being chosen, according to each of these functions, the expected utility of choosing A will be no lower than 5. It then follows by Γ-Maximin that ¬A will not be chosen. We now have the second disjunct. QED5

5 Note that it is easy to see from this that, as suggested above, if we set a = 0 and b = 1, we recover the result that choosing A, followed by choosing B, is rationally mandated by Γ-Maximin, given the assumptions in place.
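The case analysis can also be checked mechanically. The sketch below is an illustrative reconstruction rather than the paper’s own formalism: it discretises an interval [a, b] of credences in H and applies Γ-Maximin by backward induction, taking (NC), perfect recall and certainty of future rationality for granted, as in (P−). All function names are mine:

```python
# Backward-induction check of the disjunction in the proof sketch:
# for a credal set spanning [a, b], either (1) a > 2/5 and declining A
# forces accepting B, or (2) a <= 2/5 and A is mandatory at the outset.

def grid(a, b, n=100):
    """Finite discretisation of the interval [a, b] of credences in H."""
    return [a + (b - a) * i / n for i in range(n + 1)]

def worst_eu(payoffs, creds):
    """Lowest expected utility over the credal set; payoffs = (if H, if not-H)."""
    return min(c * payoffs[0] + (1 - c) * payoffs[1] for c in creds)

def second_choice(first, creds):
    """Gamma-Maximin choice at the second node, given the first choice.
    Payoffs are totals over both decisions (by (NC), the credal set is unchanged)."""
    base = (-10, 15) if first == "A" else (0, 0)
    with_b = (base[0] + 15, base[1] - 10)
    return "B" if worst_eu(with_b, creds) > worst_eu(base, creds) else "not-B"

def first_choice(creds):
    """Gamma-Maximin choice at the first node, anticipating the second."""
    totals = {}
    for first, base in (("A", (-10, 15)), ("not-A", (0, 0))):
        follow_up = second_choice(first, creds)
        add = (15, -10) if follow_up == "B" else (0, 0)
        totals[first] = worst_eu((base[0] + add[0], base[1] + add[1]), creds)
    if totals["A"] > totals["not-A"]:
        return "A"
    return "not-A" if totals["not-A"] > totals["A"] else "either"

for a, b in [(0.5, 0.9), (0.0, 1.0), (0.1, 0.3)]:
    creds = grid(a, b)
    print((a, b), "first:", first_choice(creds), "| after not-A:", second_choice("not-A", creds))
```

For [0.5, 0.9] the lower bound exceeds 2/5, so declining A would force accepting B; for [0, 1] and [0.1, 0.3] accepting A is mandated at the first node, in line with the second disjunct.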
It is worth noting, however, that (P−) is violated by at least three further rules. These include:

E-Admissibility: an act A is permissible if and only if there is a credence function Cr in the agent’s credal set such that there is no alternative act B such that the expected utility of B according to Cr is strictly greater than the expected utility of A according to Cr.

Maximality: an act A is permissible if and only if there is no alternative act B such that, for any credence function Cr in the agent’s credal set, the expected utility of B according to Cr is higher than the expected utility of A according to Cr.

Interval Dominance: an act A is permissible if and only if there is no alternative act B such that the lowest expected utility of B according to any credence function in the credal set is strictly greater than the highest expected utility of A according to any credence function in the credal set.

Regarding the first rule, consider for instance the case of extreme agnosticism in which a = 0 and b = 1. As we have seen above, since a ≤ 2/5, choosing ¬A leaves it open whether or not B will subsequently be chosen. What is more, for all credence functions in Γ, there is no constraint on the credence that B will be chosen at the second point conditional either on A or on ¬A having been chosen at the first. Consider for instance the scenario in which ¬A was chosen at the first decision point. There exists a credence function in Γ∗ according to which choosing B has a higher expected utility than choosing ¬B (let the credence accorded to H be 1). But there also exists a credence function in which this relation is inverted (let the credence accorded to H be 0). Consider now the scenario in which A was chosen. Again, there exists a credence function in Γ∗ such that choosing B has a higher expected utility than choosing ¬B (let the credence accorded to H be 1). But there also exists a credence function in which this relation is inverted (let the credence accorded to H be 0). From this observation, it quickly follows that E-Admissibility violates (P−). But then the rule is therefore also violated by both Maximality and Interval Dominance, since it is well known that if an act is permissible according to the first rule, it is thereby permissible according to the subsequent two.6

6 See Troffaes (2007). In addition, note that E-admissibility is strictly more permissive than Γ-Maximax and that Maximality is strictly more permissive than Γ-Maximin. There is no such straightforward relationship, however, between E-Admissibility and Γ-Maximin: some E-admissible acts are not Γ-Maximinimal and vice versa.
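How permissive these three rules are can be made concrete at a single decision node: the sketch below checks which of B and ¬B each rule permits after ¬A has been chosen, under a maximally agnostic credal set. The discretisation and helper names are illustrative assumptions:

```python
# Which acts do E-Admissibility, Maximality and Interval Dominance permit
# at the second node, after not-A, with credences in H spanning [0, 1]?

credal_set = [i / 10 for i in range(11)]
acts = {"B": (15, -10), "not-B": (0, 0)}  # (payoff if H, payoff if not-H)

def eu(cr_h, p):
    return cr_h * p[0] + (1 - cr_h) * p[1]

def e_admissible(acts, creds):
    # A is permissible iff it maximises expected utility under SOME credence.
    return {a for a, p in acts.items()
            if any(all(eu(c, p) >= eu(c, q) for q in acts.values()) for c in creds)}

def maximal(acts, creds):
    # A is permissible iff no alternative beats it under EVERY credence.
    return {a for a, p in acts.items()
            if not any(all(eu(c, q) > eu(c, p) for c in creds) for q in acts.values())}

def interval_dominant(acts, creds):
    # A is permissible iff no alternative's lowest EU exceeds A's highest EU.
    lo = {a: min(eu(c, p) for c in creds) for a, p in acts.items()}
    hi = {a: max(eu(c, p) for c in creds) for a, p in acts.items()}
    return {a for a in acts if not any(lo[b] > hi[a] for b in acts)}

for rule in (e_admissible, maximal, interval_dominant):
    print(rule.__name__, sorted(rule(acts, credal_set)))
# Each rule permits both B and not-B, so declining both bets is licensed.
```

Since the same computation yields the same verdict conditional on A having been chosen, nothing in these rules prevents the agent from declining both bets, which is the source of the (P−) violation.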
To conclude, Elga’s declaration of death for the imprecise view is somewhat premature: the case for sharpness has simply not been made. First of all, (P) happens to be satisfied by Γ-Maximax, a fact that Elga omitted to mention. Secondly, it seems plausible to argue that (P) was too strong in the first place, and that its more sensible replacement, (P−), is notably satisfied by the fairly popular Γ-Maximin. Interestingly, however, it is worth noting that a number of well-known decision rules do not satisfy even the weakened principle. This arguably provides prima facie grounds to reject them in favour of something more stringent.

Acknowledgments

I am grateful to Richard Bradley, Igor Douven and Wlodek Rabinowicz for helpful feedback on an earlier version of this paper.

References

Elga, A. (2010). Subjective Probabilities Should be Sharp. Philosophers’ Imprint 10(5), pp. 1–11.

Gilboa, I. and D. Schmeidler (1989). Maxmin Expected Utility with Non-Unique Prior. Journal of Mathematical Economics 18, pp. 141–53.

Levi, I. (1980). The Enterprise of Knowledge: An Essay on Knowledge, Credal Probability and Chance. Cambridge, Mass.: MIT Press.

Miranda, E. (2008). A Survey of the Theory of Coherent Lower Previsions. International Journal of Approximate Reasoning 48(2), pp. 628–58.

Seidenfeld, T. (2004). A Contrast between Two Decision Rules for Use with (Convex) Sets of Probabilities: Γ-Maximin versus E-Admissibility. Synthese 140(1/2), pp. 69–88.

Troffaes, M. C. M. (2007). Decision Making under Uncertainty Using Imprecise Probabilities. International Journal of Approximate Reasoning 45, pp. 17–29.

Walley, P. (1991). Statistical Reasoning with Imprecise Probabilities. London: Chapman and Hall.