Duncan Pritchard’s modal luck/risk model in Epistemic Luck (2005) addresses a real problem, which can be illustrated as
follows:
Say that Lottie has a lottery ticket that has a 1/50,000,000
chance of winning in an upcoming draw. And say that while walking home in
relatively safe conditions, I have a 1/10,000,000 chance of being struck by
lightning. I’m convinced that it would be irrational for Lottie to decide to tear
up her ticket before the draw, but it would be rational for me to think I won’t
be struck by lightning and thus decide to walk home.
However, note that these are cases of prediction,
such that what we are concerned with justifying here is not knowledge at all
but confidence, i.e. justified likely prediction. Even if Lottie’s
prediction that she won’t win turns out to be correct, the sense in which she
“knew” the outcome is different from the sense of concurrent knowledge
of an already true belief*. So, Pritchard’s model addresses the problem of
relying solely on probability to justify confidence, not knowledge, as safe.
I’m not sure whether we can always rely solely on probability with the safety
principle in justifying concurrent true belief; I haven’t seen any instances
where this would be a problem, but there may be some.
Note that there could be a problem
for Pritchard’s Modal Distance model, as a measure of physical difference
between possible worlds, if there were exceptions to apparent closeness in
particular instances: for example, a case where the energy required to make a
given lottery draw when the needed ball is at the bottom of the pile is greater
than the energy required for a loose piece of paper to jam the basement door of
a well-maintained garbage chute. I thought that this problem
could be solved by putting the Modal Distance together with probability, but
the way I initially tried to do this was wrong**. The correct solution is to instead use
the Average Modal Distance, as the sum of the Modal Distances of
all the outcomes in close possible worlds over the number of outcomes: (MD1 +
MD2 + … + MDn)/n.
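To make the averaging step concrete, here is a minimal Python sketch; the
function name and the sample values are my own illustrative assumptions, and
the individual Modal Distances are simply taken as given (they are defined in
the next passage):

def average_modal_distance(modal_distances):
    # Average Modal Distance: (MD1 + MD2 + ... + MDn) / n.
    return sum(modal_distances) / len(modal_distances)

# Hypothetical example: three outcomes in close possible worlds with
# Modal Distances of 0.05, 0.10, and 0.15 average out to roughly 0.10.
print(average_modal_distance([0.05, 0.10, 0.15]))  # ≈ 0.1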
In any case, we can calculate the Modal Distance between two
possible outcomes in an event as follows:
Where:
P := conditions needing to obtain for any given outcome A to occur in an event E
Q := conditions needing to obtain for a target outcome B to occur in E
O := the total set of outcomes of E in close possible worlds
p := {x | (x ∈ P) & (P ⊆ A)}
q := {y | (y ∈ Q) & (Q ⊆ B)}

Modal Distance of A from B: MD[A, B] := |p - (q ∩ p)|, i.e. the quantity of p
minus the quantity of the intersection of q with p.
In the lottery case, if the quantity of p, i.e.
the relevant conditions needing to obtain for a non-winning draw A to
occur, is 1.0, and the quantity of the intersection of q, i.e. the
relevant conditions needing to obtain for the winning draw B to
occur, with p is .95, then the Modal Distance of a non-winning
draw from a winning draw is .05. In the lightning case, if the quantity of p,
i.e. the relevant conditions needing to obtain for a safe walk home A to
occur, is 1.0, and the quantity of the intersection of q, i.e. the
relevant conditions needing to obtain for a deadly lightning strike B to
occur, with p is .05, then the Modal Distance of a safe walk
home from a deadly lightning strike is .95.
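For concreteness, here is a minimal Python sketch of the two calculations just
described; the quantities 1.0, .95, and .05 are the stipulated figures from
above, not derived values:

def modal_distance(quantity_p, quantity_q_intersect_p):
    # MD[A, B]: the quantity of p minus the quantity of the intersection of q with p.
    return quantity_p - quantity_q_intersect_p

# Lottery: non-winning draw A versus winning draw B.
md_lottery = modal_distance(1.0, 0.95)    # ≈ 0.05, modally very close

# Lightning: safe walk home A versus deadly lightning strike B.
md_lightning = modal_distance(1.0, 0.05)  # 0.95, modally distant

print(md_lottery, md_lightning)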
Thus, while the probability of being struck by lightning is
actually greater than the probability of winning the lottery, the Modal
Distance between winning and not winning the lottery is much closer than the
Modal Distance between walking home safely and being struck by lightning. This
latter disparity seems to explain why it would be irrational to tear up the
ticket but rational to walk home.
But how does Modal Distance explain this?
When should Modal Distance be appealed to? After all, we would still want to
say that buying a lottery ticket with 1/50,000,000 odds is not a safe bet and
thus is not a rational purchase, no matter how modally close any ticket is to
winning.
I have a
couple thoughts on how to answer these questions:
1. One solution that I don't particularly like is to just
say that while modal distance can be measured quantitatively and does have an
impact on our intuitions about rational decision making, e.g. in the lottery
versus the lightning cases, these intuitions are actually emotion-based rather
than strictly rational. If that were so, a purely rational subject (e.g. an
AI), once fully apprised of all the external circumstances, would not
have these intuitions and would base its decisions solely on simple probability
over possible worlds.
Take Pritchard's bullet example: say the bullet that
misses a soldier named Duncan by several meters is fired by a very accurate
sniper who is sitting on a boat that just happens to be rocked by a wave at the
moment he pulls the trigger; this boat had been steady all day and the wave at
that moment is a fluke with a very low probability. Then, say the bullet that
misses Duncan by only a few centimeters is fired by a sniper with a bent scope
such that he routinely misses by a few centimeters, and thus his missing by a
few centimeters has a very high probability. In this case, even if Duncan were
to become aware of these facts, he might still want to say that the bullet that
missed him by a few centimeters put him at greater risk than the one that
missed him by several meters.
To explain this, Duncan might appeal to a concept of
physical danger that we could only translate through something like the modal
distance model, not through strict probability. But this physical danger
concept might only be employed to capture a subjective emotional response to a
situation as one experiences it in the moment, and not a strictly rational
assessment. Perhaps we could avoid this conclusion by appealing to the idea
of access to possible worlds within a given knowledge frame.
Within the knowledge frame of Duncan's experience, he has access to all possible
worlds resulting in the two bullets missing him, and it is rational for him to
compare the Average Modal Distances of the shots over all those worlds,
unrestricted by the facts of the sniper being rocked by a chance wave and the
other sniper having a bent scope. This knowledge frame consideration might land
us back in internalist territory, though, which I was trying to avoid
because I prefer the strictly externalist Robust Anti-luck Epistemology
account.
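As a purely hypothetical illustration of this knowledge-frame idea (every
number below is invented; only the comparison structure follows the argument),
one might average Modal Distances over the worlds Duncan has access to,
unrestricted by the wave and bent-scope facts:

def average_modal_distance(modal_distances):
    # (MD1 + ... + MDn) / n over the worlds accessible within the knowledge frame.
    return sum(modal_distances) / len(modal_distances)

# Invented per-world Modal Distances for each shot, as Duncan experiences them,
# ignoring the facts about the chance wave and the bent scope.
centimeters_miss_worlds = [0.02, 0.05, 0.08, 0.10]  # bullet that missed by centimeters
meters_miss_worlds      = [0.60, 0.75, 0.80, 0.90]  # bullet that missed by meters

# The lower average marks the modally closer, and so riskier, miss.
print(average_modal_distance(centimeters_miss_worlds) <
      average_modal_distance(meters_miss_worlds))  # True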
2. Another solution is to take a closer look at the lottery
case in particular and point out some considerations that might make this case
exceptional. With a
lottery draw where Lottie has already purchased a ticket, one can do a cost-benefit
analysis: while there is a very small chance that she will have a huge
payoff, now that she already has the ticket, it costs her nothing to keep it
and check the result-- or at any rate, it actually costs her less energy to
keep the ticket and check the result (in whatever fashion is most convenient to
her) than it does to tear the ticket up and throw it away. So tearing up the
ticket is irrational because it is a loss to her. The value of the ticket
before the draw may be much less than the $2 or whatever she paid for it, but
because the payoff is so large, it's probably worth more than a penny-- it
might be worth something between a nickel and a dime (a rough expected-value
sketch follows below). Most people don't throw away nickels
and dimes. On the other hand, my being able to walk home might be of
considerable benefit to me, much more, quantitatively, than the cost of risking
the extremely low probability of being struck by lightning.
I like this solution better than the previous one because it remains an
externalist account, but I don't like that it deflates the interesting
distinction between modal probability and modal (physical) potential through
what is basically a game-theoretic analysis***.
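Here is the rough expected-value sketch of the lottery half of this solution,
in Python; the jackpot size and the tiny costs of checking versus tearing up
are made-up placeholder numbers, with only the 1/50,000,000 odds taken from the
example above:

# Expected value of keeping an already-purchased ticket versus tearing it up.
p_win = 1 / 50_000_000       # odds from the example above
jackpot = 4_000_000          # hypothetical payoff in dollars
cost_to_check = 0.0          # checking the result is treated as essentially free
cost_to_tear_up = 0.01       # small effort/disposal cost, also hypothetical

ev_keep = p_win * jackpot - cost_to_check  # $0.08, "between a nickel and a dime"
ev_tear = -cost_to_tear_up                 # a guaranteed small loss

print(ev_keep > ev_tear)  # True: keeping the ticket dominates tearing it up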
_____
*How to interpret “knowledge” of future events is of course an ancient problem going back to Aristotle’s “sea battle” example, but more recent developments in modal logic look to have solved this problem.
**Initially I had thought that modal distance could just be
put together with probability over possible worlds in a simple way, as a ratio,
but I found that this yielded bad results, e.g. it would have said
that a given outcome in a lottery ball draw with fifty possible outcomes was
modally closer than a given outcome in a coin flip with two
possible outcomes, even though the inertia displacement from outcome to outcome
is roughly the same for both scenarios.
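For illustration, one version of such a ratio (dividing the Modal Distance by
the number of possible outcomes; this is just one candidate reading of the
"simple way," and the numbers are made up) produces exactly this kind of bad
result:

def ratio_measure(modal_distance, n_outcomes):
    # Candidate "simple" combination: scale the Modal Distance by 1/n.
    return modal_distance / n_outcomes

# Suppose the raw Modal Distance from one outcome to another is roughly the
# same for a 50-ball lottery draw and a coin flip (similar inertia displacement).
md_raw = 0.05  # made-up value, identical for both scenarios

lottery_draw = ratio_measure(md_raw, 50)  # 0.001
coin_flip    = ratio_measure(md_raw, 2)   # 0.025

# The ratio wrongly ranks the lottery outcome as far "closer" than the coin flip.
print(lottery_draw < coin_flip)  # True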
***Actually, I'm surprised Pritchard doesn't bring up game theory in his account of luck and risk. I wonder how he sees game theory fitting in with his account. I'm not sure that he really wants to rule out probability in favor of modal distance altogether so much as say that the safety principle can't be reduced to a simple probability, i.e. 1/(# of possible outcomes). A game-theoretic account could of course fully accommodate Bayesian induction.