Monday, September 30, 2019

A Less Mysterious Solution to the Paradox of Analysis

Moore's paradox of analysis strikes me as having a simpler resolution than what recent commentators, such as Richard Fumerton in "The Paradox of Analysis," have proposed. The major premise of the paradox, as Fumerton states it,
  
"If 'X is good' just means 'X is F' then the question 'Is what is F good?' has the same meaning as the question 'Is what is F, F?' " (478) 

is false because it presupposes that "X is good" expresses an identity (or at least, more charitably, that what an analysis seeks is an identity statement). One arrives at the paradox by assuming that an analysis just involves a chain of identities. In fact, what one wants here is an equivalence: conditions such that something is good if and only if they obtain-- not an identity with the same intensional value under a different name. This latter notion just perpetuates the basic error of the Meno.

For example, an informative analysis of "good" could ask how its extension is formed: treat "good" as a binary relation and ask what sub-properties it restricts its members to, i.e. what restrictions are placed on the equivalence class of x := the set of things that are good. Taking good as a relation G, we might propose, for instance, that the statement "x is good" is true just when x is mapped by G to the set u of things with a positive utility value.
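As a rough first-order rendering of this proposal (the Utility predicate here is my illustrative assumption, not a commitment to a utilitarian semantics):

\[
u := \{\, y \mid \mathrm{Utility}(y) > 0 \,\}, \qquad \forall x \,\bigl(\mathrm{Good}(x) \leftrightarrow G(x, u)\bigr)
\]

The biconditional gives conditions under which "good" obtains without claiming that "good" and "is mapped to u by G" share the same intension.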

A logical analysis along the lines of the above, without a necessary commitment to any one semantic interpretation, could avoid being either merely lexicographical or mysteriously phenomenological.

I'm not sure whether it would be satisfying for all conceptual analyses to be only the product of equivalences, but I would claim that there is no satisfying conceptual analysis that is only the product of an identity. Indeed, since this is already an intuition most people have, Moore's reduction of conceptual analysis to an identity statement is what makes the paradox of analysis work. The only problem is that this reduction is wrong. In most cases, we want a conceptual analysis to give us an equivalence statement that tells us how and when a concept obtains. (Maybe one could invoke Aristotle's four causes here and say that an informative analysis should not just give us a noun phrase that the concept noun stands in for but should give us one or more of these four constitutive explanations.)

For example, if the concept is "triangle," there might be an identity relation with "three-sided polygon," but this would not be informative-- it would just be a definition, a substitution of a noun phrase for the concept noun, not an analysis-- while adding that "a triangle is any three-sided polygon such that it has three internal angles that sum to 180 degrees" would be informative (e.g., by this analytic definition we could claim that a three-sided polygon in non-Euclidean space might not be a triangle). So, one could say that a conceptual analysis can include identity definitions but cannot be constituted only by identity definitions.
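Schematically (the predicate names here are just placeholders):

\[
\textit{Identity definition:} \quad \mathrm{Triangle}(x) := \mathrm{ThreeSidedPolygon}(x)
\]
\[
\textit{Analytic equivalence:} \quad \forall x \,\bigl(\mathrm{Triangle}(x) \leftrightarrow \mathrm{ThreeSidedPolygon}(x) \wedge \mathrm{AngleSum}(x) = 180^{\circ}\bigr)
\]

Only the second form supplies a condition whose failure (as in the non-Euclidean case) can be informative.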

Also, because a satisfying conceptual analysis, such as the analysis of "triangle," can be constituted entirely by a combination of equivalence statements and identity statements, we can at least say that there are some paradigm cases of analysis that do not require some additional phenomenal state in order to succeed.

Thoughts on Modal Distance

Duncan Pritchard’s modal luck/risk model in Epistemic Luck (2005) addresses a real problem, which can be illustrated as follows:

Say that Lottie has a lottery ticket that has a 1/50,000,000 chance of winning in an upcoming draw. And say that while walking home in relatively safe conditions, I have a 1/10,000,000 chance of being struck by lightning. I'm convinced that it would be irrational for Lottie to decide to tear up her ticket before the draw, but it would be rational for me to think I won't be struck by lightning and thus decide to walk home.

However, note that these are cases of prediction, so what we are concerned with justifying here is not knowledge at all but confidence, i.e. justified likely prediction. Even if Lottie's prediction that she won't win turns out to be correct, the sense in which she "knew" the outcome is different from the sense of concurrent knowledge of an already true belief*. So Pritchard's model addresses the problem of relying solely on probability to justify confidence, not knowledge, as safe. I'm not sure whether we can always rely solely on probability with the safety principle in justifying concurrent true belief-- I haven't seen any instances where this would be a problem, but there may be some.

Note that there could be a problem for Pritchard's Modal Distance model, as a measure of physical difference between possible worlds, if there were exceptions to apparent closeness in particular instances. For example, the energy required to make a given lottery draw when the needed ball is at the bottom of the pile might be greater than the energy required for a loose piece of paper to jam the basement door of a well-maintained garbage chute. I thought this problem could be solved by putting the Modal Distance together with probability, but the way I initially tried to do this was wrong**. The correct solution is instead to use the Average Modal Distance: the sum of the Modal Distances of all the outcomes in close possible worlds over the number of outcomes, (MD_1 + MD_2 + … + MD_n)/n.
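In display form:

\[
\mathrm{AMD} = \frac{\mathrm{MD}_1 + \mathrm{MD}_2 + \cdots + \mathrm{MD}_n}{n} = \frac{1}{n} \sum_{i=1}^{n} \mathrm{MD}_i
\]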

In any case, we can calculate the Modal Distance between two possible outcomes in an event as follows:


Where:

P := conditions needing to obtain for any given outcome to occur in an event E

Q := conditions needing to obtain for a target outcome B to occur in E

W := the total set of outcomes of E in close possible worlds

p := {x | (x ∈ P) & (x → A)}, i.e. the conditions in P that yield a given outcome A

q := {y | (y ∈ Q) & (y → B)}

Modal Distance of A from B: MD[A, B] := |p − (q ∩ p)|, i.e. the quantity of p minus the quantity of the intersection of q with p


In the lottery case, if the quantity of p, i.e. the relevant conditions needing to obtain for a non-winning draw A to occur, is 1.0, and the quantity of the intersection of q, i.e. the relevant conditions needing to obtain for the winning draw B to occur, with p is .95, then the Modal Distance of a non-winning draw from a winning draw is .05. In the lightning case, if the quantity of p, i.e. the relevant conditions needing to obtain for a safe walk home to occur, is 1.0, and the quantity of the intersection of q, i.e. the relevant conditions needing to obtain for a deadly lightning strike to occur, with p is .05, then the Modal Distance of a safe walk home from a deadly lightning strike is .95.
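To put the arithmetic in one place, here is a minimal computational sketch of these two examples; the function names are mine, and the condition quantities are the assumed figures from above rather than anything Pritchard supplies:

```python
# Modal Distance sketch: MD[A, B] = |p| - |q ∩ p|, where the quantity of p
# (the conditions for the actual outcome A) is normalized to 1.0.

def modal_distance(p_quantity, overlap_quantity):
    """MD of outcome A from target B: quantity of p minus quantity of q ∩ p."""
    return p_quantity - overlap_quantity

def average_modal_distance(distances):
    """Average MD, (MD_1 + ... + MD_n)/n, over the outcomes in close
    possible worlds (the correction described above and in note **)."""
    return sum(distances) / len(distances)

# Lottery: the conditions for a non-winning draw (quantity 1.0) overlap
# almost entirely (.95) with those for the winning draw.
lottery_md = modal_distance(1.0, 0.95)    # ~0.05: modally very close

# Lightning: the conditions for a safe walk home (quantity 1.0) barely
# overlap (.05) with those for a deadly strike.
lightning_md = modal_distance(1.0, 0.05)  # 0.95: modally distant

print(round(lottery_md, 2), round(lightning_md, 2))  # 0.05 0.95
```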

Thus, while the probability of being struck by lightning is actually greater than the probability of winning the lottery, the Modal Distance between winning and not winning the lottery is much smaller than the Modal Distance between walking home safely and being struck by lightning. This disparity seems to explain why it would be irrational to tear up the ticket but rational to walk home.

But how does Modal Distance explain this? When should Modal Distance be appealed to? After all, we would still want to say that buying a lottery ticket with 1/50,000,000 odds is not a safe bet and thus is not a rational purchase, no matter how modally close any ticket is to winning.

I have a couple of thoughts on how to answer these questions:

1. One solution that I don't particularly like is to just say that while modal distance can be measured quantitatively and does have an impact on our intuitions about rational decision making, e.g. in the lottery versus the lightning cases, these intuitions are actually emotion-based rather than strictly rational. If that were so, a purely rational subject (e.g. an AI), once fully apprised of all the external circumstances, would not have these intuitions and would base its decisions solely on simple probability over possible worlds.

Take Pritchard's bullet example: say the bullet that misses a soldier named Duncan by several meters is fired by a very accurate sniper who is sitting on a boat that just happens to be rocked by a wave at the moment he pulls the trigger; this boat had been steady all day and the wave at that moment is a fluke with a very low probability. Then, say the bullet that misses Duncan by only a few centimeters is fired by a sniper with a bent scope such that he routinely misses by a few centimeters, and thus his missing by a few centimeters has a very high probability. In this case, even if Duncan were to become aware of these facts, he might still want to say that the bullet that missed him by a few centimeters put him at greater risk than the one that missed him by several meters.

To explain this, Duncan might appeal to a concept of physical danger that we could only translate through something like the modal distance model, not through strict probability. But this physical danger concept might only be employed to capture a subjective emotional response to a situation as one experiences it in the moment, and not a strictly rational assessment. Perhaps we could avoid this conclusion by appealing to the idea of access to possible worlds within a given knowledge frame. Within the knowledge frame of Duncan's experience, he has access to all possible worlds resulting in the two bullets missing him, and it is rational for him to compare the Average Modal Distances of the shots over all those worlds, unrestricted by the facts of the sniper being rocked by a chance wave and the other sniper having a bent scope. This knowledge frame consideration might land us back in internalist territory, though, which I was trying to avoid because I prefer the strictly externalist Robust Anti-luck Epistemology account.  

2. Another solution is to take a closer look at the lottery case in particular and point out some considerations that might make this case exceptional. With a lottery draw where Lottie has already purchased a ticket, one can do a cost-benefit analysis: while there is a very small chance that she will have a huge payoff, now that she already has the ticket, it costs her nothing to keep it and check the result-- or at any rate, it actually costs her less energy to keep the ticket and check the result (in whatever fashion is most convenient to her) than it does to tear the ticket up and throw it away. So tearing up the ticket is irrational because it is a loss to her. The value of the ticket before the draw may be much less than the $2 or whatever she paid for it, but because the payoff is so large, it's probably worth more than a penny-- it might be worth something between a nickel and a dime. Most people don't throw away nickels and dimes. On the other hand, my being able to walk home might be of considerable benefit to me, much more, quantitatively, than the cost of risking the extremely low probability of being struck by lightning.

I like this solution better than the previous one because it remains an externalist account, but I don't like that it deflates the interesting distinction between modal probability and modal (physical) potential through what is basically a game-theoretic analysis***.
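To make the cost-benefit reading of the lottery case concrete, here is a rough expected-value sketch; the jackpot size and the effort costs are hypothetical placeholders chosen only to match the "nickel and dime" estimate above:

```python
# Expected-value comparison: keep the ticket vs. tear it up.
# All figures besides the 1/50,000,000 odds are assumed for illustration.

p_win = 1 / 50_000_000       # odds from the example
jackpot = 3_500_000          # hypothetical jackpot size
cost_to_check = 0.00         # checking the result is effectively free
cost_to_tear = 0.01          # tearing up and discarding still costs a little effort

ev_keep = p_win * jackpot - cost_to_check  # $0.07: between a nickel and a dime
ev_tear = -cost_to_tear                    # a small but guaranteed loss

print(ev_keep > ev_tear)  # True: tearing up the ticket is strictly worse
```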

_____  

*How to interpret “knowledge” of future events is of course an ancient problem going back to Aristotle’s “sea battle” example, but more recent developments in modal logic look to have solved this problem.

**Initially I had thought that modal distance could just be put together with probability over possible worlds in a simple way, as a ratio, but I found that this yielded bad results, e.g. it would have said that a given outcome in a lottery ball draw with fifty possible outcomes was modally closer than a given outcome in a coin flip with two possible outcomes, even though the inertia displacement from outcome to outcome is roughly the same for both scenarios.

***Actually, I'm surprised Pritchard doesn't bring up game theory in his account of luck and risk. I wonder how he sees game theory fitting in with his account? I'm not sure that he really wants to rule out probability for modal distance altogether so much as say that the safety principle can't be reduced to a simple probability, i.e. 1/(# of possible outcomes). A game theoretical account could of course fully accommodate Bayesian induction. 

The Gettier Getaway: A Gettier Example

Here is a Gettier-type case (per Gettier's "Is Justified True Belief Knowledge?") that I came up with:

A car speeds past you. Directly behind this car follows a speeding police cruiser with its siren on. The speeding cars, one after the other, round a corner and disappear from sight.

You believe on the basis of this incident that the driver of the speeding car is being pursued by the police, perhaps for a criminal act. You are justified in doing so. And in fact, the driver is being pursued by the police for a criminal act.

However, when you saw the driver, he was not trying to escape the police cruiser behind him but was in fact speeding back to his apartment because, after he had robbed a bank earlier that day, he remembered he had left his oven on. The police cruiser behind him was not chasing him but had actually been dispatched to go to the bank the driver had just robbed, in order to pursue the robber (the officer had no idea that that robber was right in front of her). This officer noticed that the car in front of her was speeding, but since she had more important crimes to worry about than traffic violations, she was not pursuing it at that moment.

(i) You believe that the driver is being pursued by the police.

(ii) It is true that the driver is being pursued by the police.

(iii) You are justified in believing the driver is being pursued by the police.

But because what you saw does not actually constitute evidence of his being pursued, you do not know that the driver is being pursued by the police.

This case differs in an important respect from both the first Gettier case and cases like the sheep case (cf. Chisholm) and the clock case (cf. Russell). In those cases, the major premise of the valid inference is a true implication, but the minor premise is false.


First Gettier Case

(i) If Jones (who has ten coins in his pocket) will get the job (p), then the person who will get the job has ten coins in his pocket (q). [p → q]

(ii) Jones will get the job. [p]

∴ The person who will get the job has ten coins in his pocket. [q]

In fact, Jones will not get the job [~p], though the person who will get the job has ten coins in his pocket. [q]

Thus, Smith has made a valid but unsound inference due to the minor premise being false.


Sheep Case

(i) If Roddy sees a sheep in the field (p), then there is a sheep in the field (q). [p → q]

(ii) Roddy sees a sheep in the field. [p]

∴ There is a sheep in the field. [q]

In fact, Roddy does not see a sheep in the field (he sees a sheep-like object) [~p], though there is a sheep in the field (that he does not see). [q]

Thus, Roddy has made a valid but unsound inference due to the minor premise being false.


Clock Case

(i) If the clock is working (p), then the time it reads is the actual time (q). [p → q]

(ii) The clock is working. [p]

∴ The time it reads is the actual time. [q]

In fact, the clock is not working [~p], though the time it reads is the actual time (it just so happens). [q]

Thus, a valid but unsound inference has been made due to the minor premise being false.


Gettier Getaway Case

(i) If a speeding car is closely followed by a police car with its siren on (p), then the driver of the speeding car is being pursued by the police (q). [p → q]

(ii) A speeding car is closely followed by a police car with its siren on. [p]

∴ The driver of the speeding car is being pursued by the police. [q]

In fact, the speeding car is closely followed by a police car with its siren on [p], and the driver of the speeding car is being pursued by the police. [q]


Since, unlike the other cases, the minor premise in the Gettier getaway case is true, and you have made a valid inference, something else must have gone wrong. It is the major premise that is at fault: we find that [p → q] in this case is not necessarily true. It is false in some cases*.

Major premise cases, I think, take us closer to the problem with the "no false lemmas" objection than minor premise cases do, because with them we see that, to be Gettier-proof, the major premise must be necessarily true in all possible worlds, not just in almost all instances. This is a problem because it seems to set too high a threshold for justification. How could one ever obtain certainty that one's major premise is necessarily true in all possible worlds? And how many natural inferential bases would be ruled out for not being necessarily true?
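One way to put the worry schematically: repairing major premise cases seems to demand upgrading the major premise from a reliable generalization to a necessity:

\[
\text{ordinary justification:}\ \Pr(q \mid p) \approx 1
\qquad \text{vs.} \qquad
\text{Gettier-proof justification:}\ \Box\,(p \rightarrow q)
\]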

_____

*The garbage chute case (Sosa) is the most similar in this respect to the Gettier getaway, except that (i) the garbage chute case is inference-based instead of directly perceptual, as the would-be knowledge holder does not see the basement but only infers its state from partial evidence, and (ii) its consequent has to be made false [~q] in order to show that it can go wrong.