Monday, September 30, 2019

A Less Mysterious Solution to the Paradox of Analysis

Moore's paradox of analysis strikes me as having a simpler resolution than what recent commentators, such as Richard Fumerton in "The Paradox of Analysis," have proposed. The major premise of the paradox, as Fumerton states it,
  
"If 'X is good' just means 'X is F' then the question 'Is what is F good?' has the same meaning as the question 'Is what is F, F?' " (478) 

is unsound because it assumes that "X is good" expresses an identity statement (or at least, more charitably, that what is sought is an identity statement). One arrives at the paradox by assuming that an analysis just involves a chain of identities. In fact, what one wants here is an equivalence: conditions such that something is good if and only if they obtain-- not an identity with the same intensional value under a different name. This latter notion just perpetuates the basic error of the Meno.

For example, an informative analysis of "good" could ask how its extension is formed by treating "good" as a binary relation and asking what sub-properties it restricts its members to, as restrictions on the equivalence class x := the set of things that are good. Taking good as a relation G, we might propose, for instance, that "x is good" is true just when the relation G maps x to the set u of things with a positive utility value.
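To make the contrast concrete, here is a minimal sketch in Python; the utility function, its threshold, and the synonym table are placeholders of mine, not commitments of the analysis:

# Identity view: "good" is just another name for some noun phrase.
# This is mere substitution and tells us nothing new (the Meno problem).
SYNONYMS = {"good": "positively valuable"}

def analyze_by_identity(term: str) -> str:
    return SYNONYMS[term]

# Equivalence view: "x is good" is true if and only if certain
# conditions obtain. A positive utility value is offered here
# purely for illustration.
def utility(x: object) -> float:
    return getattr(x, "utility", 0.0)  # hypothetical valuation

def is_good(x: object) -> bool:
    # Biconditional: good(x) <-> utility(x) > 0
    return utility(x) > 0

The equivalence version is informative because it states when the predicate applies, not merely what other name it travels under.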

A logical analysis along the lines of the above without a necessary commitment to any one semantic interpretation could avoid being either merely lexicographical or mysteriously phenomenological. 

I'm not sure if it would be satisfying for all conceptual analyses to be only the product of equivalences, but I would claim that there is no satisfying conceptual analysis that is only the product of an identity. Indeed, since this is already an intuition most people have, Moore's reduction of conceptual analysis to an identity statement is what makes the paradox of analysis work. The only problem is that this reduction is wrong. In most cases, we want a conceptual analysis to give us an equivalence statement that tells us how and when a concept obtains. (Maybe one could invoke Aristotle's four causes here and say that an informative analysis should not just give us a noun phrase that the concept noun stands in for but should give us one or more of these four constitutive explanations.)

For example, if the concept is "triangle," there might be an identity relation with "three-sided polygon," but this would not be informative-- it would just be a definition, a substitution of a noun phrase for the concept noun, not an analysis-- while adding that "a triangle is any three-sided polygon such that it has three internal angles that sum to 180 degrees" would be informative (e.g., by this analytic definition we could claim that a three-sided polygon in non-Euclidean space might not be a triangle). So, one could say that a conceptual analysis can include identity definitions but cannot be constituted only by identity definitions.
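Schematically, with the clauses only as illustrative placeholders of mine, the contrast is between:

Identity (mere definition):  triangle := three-sided polygon

Analysis (equivalence):  ∀x [x is a triangle ↔ (x is a polygon & x has three sides & x's internal angles sum to 180°)]

It is the added angle-sum condition that lets the analysis rule on the non-Euclidean case.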

Also, because a satisfying conceptual analysis, such as the analysis of "triangle," can be constituted from nothing more than a combination of equivalence statements and identity statements, we can at least say that there are some paradigm cases of analysis that do not require some additional phenomenal state in order to succeed.

Thoughts on Modal Distance

Duncan Pritchard’s modal luck/risk model in Epistemic Luck (2005) addresses a real problem, which can be illustrated as follows:

Say that Lottie has a lottery ticket that has a 1/50,000,000 chance of winning in an upcoming draw. And say that while walking home in relatively safe conditions, I have a 1/10,000,000 chance of being struck by lightning. I’m convinced that it would be irrational for Lottie to decide to tear up her ticket before the draw, but it would be rational for me to think I won’t be struck by lightning and thus decide to walk home.

However, note that these are cases of prediction, such that what we are concerned with justifying here is not knowledge at all but confidence, i.e. justified likely prediction. Even if Lottie’s prediction that she won’t win turns out to be correct, the sense in which she “knew” the outcome is different from the sense of concurrent knowledge of an already true belief*. So, Pritchard’s model addresses the problem with relying solely on probability to justify confidence, not knowledge, as safe. I’m not sure if we can always rely solely on probability with the safety principle in justifying concurrent true belief— I haven’t seen any instances where this would be a problem, but there may be some.

Note that there could be a problem for Pritchard’s Modal Distance model, as a measure of physical difference between possible worlds, if there were exceptions to apparent closeness in particular instances: for example, a particular instance where the energy required to make a given lottery draw when the needed ball is at the bottom of the pile is greater than the energy required in a particular instance for a loose piece of paper to jam the basement door in a well-maintained garbage chute. I thought that this problem could be solved by putting the Modal Distance together with probability, but the way I initially tried to do this was wrong**. The correct solution is instead to use the Average Modal Distance: the sum of the Modal Distances of all the outcomes in close possible worlds, divided by the number of outcomes: (MD1 + MD2 + … + MDn)/n.

In any case, we can calculate the Modal Distance between two possible outcomes in an event as follows:


Where:

P := the conditions needing to obtain for any given outcome A to occur in an event E

Q := the conditions needing to obtain for a target outcome B to occur in E

O := the total set of outcomes of E in close possible worlds

p := {x | (x ∈ P) & (x → A)}

q := {y | (y ∈ Q) & (y → B)}

Modal Distance of A from B: MD[A, B] := |p| - |q ∩ p|, i.e. the quantity of p minus the quantity of the intersection of q with p


In the lottery case, if the quantity of p, i.e. the relevant conditions needing to obtain for a non-winning draw A to occur, is 1.0, and the quantity of the intersection of q, i.e. the relevant conditions needing to obtain for the winning draw B to occur, with p is .95, then the Modal Distance of a non-winning draw from a winning draw is .05. In the lightning case, if the quantity of p, i.e. the relevant conditions needing to obtain for a safe walk home to occur, is 1.0, and the quantity of the intersection of q, i.e. the relevant conditions needing to obtain for a deadly lightning strike to occur, with p is .05, then the Modal Distance of a safe walk home from a deadly lightning strike is .95.

Thus, while the probability of being struck by lightning is actually greater than the probability of winning the lottery, the Modal Distance between winning and not winning the lottery is much closer than the Modal Distance between walking home safely and being struck by lightning. This latter disparity seems to explain why it would be irrational to tear up the ticket but rational to walk home.
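A minimal sketch of the arithmetic, following the MD[A, B] := |p| - |q ∩ p| definition above, with the Average Modal Distance included; the quantities (1.0, .95, .05) are the stipulated ones from the two cases, not derived from anything:

def modal_distance(p_qty: float, q_intersect_p_qty: float) -> float:
    """MD[A, B] := |p| - |q ∩ p|, per the definition above."""
    return p_qty - q_intersect_p_qty

def average_modal_distance(mds: list[float]) -> float:
    """(MD1 + MD2 + ... + MDn) / n over the outcomes in close possible worlds."""
    return sum(mds) / len(mds)

# Lottery: non-winning draw A vs. winning draw B.
print(modal_distance(1.0, 0.95))  # ≈ 0.05: winning is modally very close

# Lightning: safe walk home A vs. deadly strike B.
print(modal_distance(1.0, 0.05))  # ≈ 0.95: the strike is modally distant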

But how does Modal Distance explain this? When should Modal Distance be appealed to? After all, we would still want to say that buying a lottery ticket with 1/50,000,000 odds is not a safe bet and thus is not a rational purchase, no matter how modally close any ticket is to winning.

I have a couple thoughts on how to answer these questions:

1. One solution that I don't particularly like is to just say that while modal distance can be measured quantitatively and does have an impact on our intuitions about rational decision making, e.g. in the lottery versus the lightning cases, these intuitions are actually emotion-based rather than strictly rational. If that were so, a purely rational subject (e.g. an AI), once fully apprised of all the external circumstances, would not have these intuitions and would base its decisions solely on simple probability over possible worlds.

Take Pritchard's bullet example: say the bullet that misses a soldier named Duncan by several meters is fired by a very accurate sniper who is sitting on a boat that just happens to be rocked by a wave at the moment he pulls the trigger; this boat had been steady all day and the wave at that moment is a fluke with a very low probability. Then, say the bullet that misses Duncan by only a few centimeters is fired by a sniper with a bent scope such that he routinely misses by a few centimeters, and thus his missing by a few centimeters has a very high probability. In this case, even if Duncan were to become aware of these facts, he might still want to say that the bullet that missed him by a few centimeters put him at greater risk than the one that missed him by several meters.

To explain this, Duncan might appeal to a concept of physical danger that we could only translate through something like the modal distance model, not through strict probability. But this physical danger concept might only be employed to capture a subjective emotional response to a situation as one experiences it in the moment, and not a strictly rational assessment. Perhaps we could avoid this conclusion by appealing to the idea of access to possible worlds within a given knowledge frame. Within the knowledge frame of Duncan's experience, he has access to all possible worlds resulting in the two bullets missing him, and it is rational for him to compare the Average Modal Distances of the shots over all those worlds, unrestricted by the facts of the sniper being rocked by a chance wave and the other sniper having a bent scope. This knowledge frame consideration might land us back in internalist territory, though, which I was trying to avoid because I prefer the strictly externalist Robust Anti-luck Epistemology account.  

2. Another solution is to take a closer look at the lottery case in particular and point out some considerations that might make this case exceptional. With a lottery draw where Lottie has already purchased a ticket, one can do a cost-benefit analysis: while there is a very small chance that she will have a huge payoff, now that she already has the ticket, it costs her nothing to keep it and check the result-- or at any rate, it actually costs her less energy to keep the ticket and check the result (in whatever fashion is most convenient to her) than it does to tear the ticket up and throw it away. So tearing up the ticket is irrational because it is a loss to her. The value of the ticket before the draw may be much less than the $2 or whatever she paid for it, but because the payoff is so large, it's probably worth more than a penny-- it might be worth something between a nickel and a dime. Most people don't throw away nickels and dimes. On the other hand, my being able to walk home might be of considerable benefit to me, much more, quantitatively, than the cost of risking the extremely low probability of being struck by lightning.
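A rough sketch of that cost-benefit point; every dollar figure here is invented for illustration:

P_WIN = 1 / 50_000_000     # Lottie's odds, from the case above
JACKPOT = 4_000_000        # hypothetical payoff
COST_TO_CHECK = 0.0        # she already owns the ticket
EFFORT_TO_TEAR = 0.01      # small but nonzero cost of destroying the ticket

ev_keep = P_WIN * JACKPOT - COST_TO_CHECK  # $0.08: the nickel-to-dime range
ev_tear = -EFFORT_TO_TEAR                  # strictly negative

print(ev_keep > ev_tear)  # True: tearing up the ticket is a sure loss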

I like this solution better than the previous one because it remains an externalist account, but I don't like that it deflates the interesting distinction between modal probability and modal (physical) potential through what is basically a game-theory analysis***. 

_____  

*How to interpret “knowledge” of future events is of course an ancient problem going back to Aristotle’s “sea battle” example, but more recent developments in modal logic look to have solved this problem.

**Initially I had thought that modal distance could just be put together with probability over possible worlds in a simple way, as a ratio, but I found that this yielded bad results, e.g. it would have said that a given outcome in a lottery ball draw with fifty possible outcomes was modally closer than a given outcome in a coin flip with two possible outcomes, even though the inertia displacement from outcome to outcome is roughly the same for both scenarios.

***Actually, I'm surprised Pritchard doesn't bring up game theory in his account of luck and risk. I wonder how he sees game theory fitting in with his account? I'm not sure that he really wants to rule out probability for modal distance altogether so much as say that the safety principle can't be reduced to a simple probability, i.e. 1/(# of possible outcomes). A game theoretical account could of course fully accommodate Bayesian induction. 

The Gettier Getaway: a Gettier Example

Here is a Gettier-type (per Gettier's "Is Justified True Belief Knowledge?") case I came up with:

A car speeds past you. Directly behind this car follows a speeding police cruiser with its siren on. The speeding cars, one after the other, round a corner and disappear from sight.

You believe on the basis of this incident that the driver of the speeding car is being pursued by the police, perhaps for a criminal act. You are justified in doing so. And in fact, the driver is being pursued by the police for a criminal act.

However, when you saw the driver, he was not trying to escape the police cruiser behind him but was in fact speeding back to his apartment because, after he had robbed a bank earlier that day, he remembered he had left his oven on. The police cruiser behind him was not chasing him but had actually been dispatched to go to the bank the driver had just robbed, in order to pursue the robber (the officer had no idea that that robber was right in front of her). This officer noticed that the car in front of her was speeding, but since she had more important crimes to worry about than traffic violations, she was not pursuing it at that moment.

(i) You believe that the driver is being pursued by the police.

(ii) It is true that the driver is being pursued by the police.

(iii) You are justified in believing the driver is being pursued by the police.

But because what you saw does not actually constitute evidence of his being pursued, you do not know that the driver is being pursued by the police.

This case is different in an important respect from both the first Gettier case and cases like the sheep case (qv. Chisholm) and the clock case (qv. Russell). In those cases, the major premise of the valid inference is a sound implication, but the minor premise is false.


First Gettier Case

(i) If Jones (who has ten coins in his pocket) will get the job (p), the person who will get the job has ten coins in his pocket (q). [p → q]

(ii) Jones will get the job. [p]

∴ The person who will get the job has ten coins in his pocket. [q]

In fact, Jones will not get the job [~p], though the person who will get the job has ten coins in his pocket. [q]

Thus, Smith has made a valid but unsound inference due to the minor premise being false.


Sheep Case

(i) If Roddy sees a sheep in the field (p), there is a sheep in the field (q). [p → q]

(ii) Roddy sees a sheep in the field. [p]

∴ There is a sheep in the field. [q]

In fact, Roddy does not see a sheep in the field (he sees a sheep-like object) [~p], though there is a sheep in the field (that he does not see). [q]

Thus, Roddy has made a valid but unsound inference due to the minor premise being false.


Clock Case

(i) If the clock is working (p), the time it reads is the actual time (q). [p → q]

(ii) The clock is working. [p]

∴ The time it reads is the actual time. [q]

In fact, the clock is not working [~p], though the time it reads is the actual time (it just so happens). [q]

Thus, a valid but unsound inference has been made due to the minor premise being false.


Gettier Getaway Case

(i) If a speeding car is closely followed by a police car with its siren on (p), the driver of the speeding car is being pursued by the police (q). [p → q]

(ii) A speeding car is closely followed by a police car with its siren on. [p]

∴ The driver of the speeding car is being pursued by the police. [q]

In fact the speeding car is closely followed by a police car with its siren on [p], and the driver of the speeding car is being pursued by the police. [q]


Since, unlike the other cases, the minor premise in the Gettier getaway case is true, and you have made a valid inference, something else must have gone wrong. It is the major premise that is unsound. We find that [p → q] in this case is not necessarily true. It is false in some cases*.

Major premise cases, I think, take us closer to the problem with the “no false lemmas” objection than minor premise cases because with them we see that to be Gettier-proof, the major premise must be necessarily true in all possible worlds, not just in almost all instances. This is a problem because this would seem to set too high a threshold for justification. How could one ever obtain certainty that one’s major premise is necessarily true in all possible worlds? And how many natural inferential bases would be ruled out due to not being necessarily true?

_____

*The garbage chute case (Sosa) is the most similar in this respect to the Gettier getaway, except that (i) the garbage chute case is inference-based instead of directly perceptual, as the would-be knowledge holder does not see the basement but only infers its state from partial evidence, and (ii) its consequent has to be made false [~q] in order to show that it can go wrong.

Wednesday, July 31, 2019

POLYSEMOUS POLYGRAPHY

Consider the sentence:

As Violet sat by the bank, after her trip over the spring, she saw the jam she had made for herself from the dates with her contemplative pupil.

This sentence contains at least 64 different narratives, depending on which of two meanings is taken for each of six words: bank, trip, spring, jam, dates, and pupil (2^6 = 64). Every reading is coherent. 

As Violet sat by the bank (building or slope), after her trip (voyage or tumble) over the spring (season or aquifer), she saw the jam (problem or condiment) she had made for herself from the dates (trysts or fruit) with her contemplative pupil (student or inner-eye).

Call this Oulipian method of composition Polysemous Polygraphy.
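As a check on the count, the 64 readings can be generated mechanically; a minimal Python sketch, with the sense glosses as above:

from itertools import product

senses = {
    "bank":   ("building", "slope"),
    "trip":   ("voyage", "tumble"),
    "spring": ("season", "aquifer"),
    "jam":    ("problem", "condiment"),
    "dates":  ("trysts", "fruit"),
    "pupil":  ("student", "inner-eye"),
}

template = ("As Violet sat by the bank ({bank}), after her trip ({trip}) over "
            "the spring ({spring}), she saw the jam ({jam}) she had made for "
            "herself from the dates ({dates}) with her contemplative pupil "
            "({pupil}).")

# One reading per choice of sense for each of the six words.
readings = [template.format(**dict(zip(senses, combo)))
            for combo in product(*senses.values())]
assert len(readings) == 64  # 2**6 narratives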




Thursday, February 28, 2019

Reality Is Impossible

Things that are true are not possibly true. They are just true!
Likewise, things that are false are not possibly false. They are just false!
If something is necessarily true, then it is true, which means that it is not possibly true. And if something is necessarily false, then it is false, which means that it is not possibly false.
Yet Alethic Modal Logic tells us that something being necessarily true just means that it is not possibly not true.
Does this definition, or pair of definitions for Necessity N and Possibility P, [(Na = -P-a) & (Pa = -N-a)], make sense, given our two Reality Principles, (1) [(a > -Pa) & (-a > -P-a)] and (2) [(Na > a) & (N-a > -a)]?
Let's see:  
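Here is one way the check might go (my sketch, in the notation above): take any true statement a and chase the definitions.

1. a               Assume a is true
2. -Pa             1, Reality Principle (1)
3. Pa = -N-a       Definition
4. N-a             2, 3
5. -a              4, Reality Principle (2)
6. a & -a          1, 5

So, given the Reality Principles, the standard interdefinition of N and P turns any truth whatsoever into a contradiction-- which is one way of cashing out the title.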

Wednesday, February 27, 2019

False by Definition


With the Liar Statement—i.e. “This statement is false,” which we can translate as “p = this statement (p) is false = (p = ~p)”— the definition of p is false, not p itself. What does it mean for a definition to be false? It means that it is formally invalid. This in turn means that it is not permitted within our logic to use a definition of this form. Note that the statement of a definition implies the statement that it defines, but it is not identical to the statement it defines. I.e. Def. p > (p = ~p).

Conjecture: ∀p [(Def. p > (p = ~p)) > ~(Def. p)]          (1)

1.      Def. q > (q = ~q)                                 S
2.         Def. q                                         S
3.         q = ~q                                         1, 2 MP
4.            q                                           S
5.            ~q                                          3, 4 ES
6.            q & ~q                                      4, 5 CONJ
7.         ~q                                             4-6 IP
8.         q                                              3, 7 ES
9.         q & ~q                                         7, 8 CONJ
10.     ~(Def. q)                                         2-9 IP
11.   (Def. q > (q = ~q)) > ~(Def. q)                     1-10 CP
12.   ∀p [(Def. p > (p = ~p)) > ~(Def. p)]                11 UG

(1), QED                                                     
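For what it's worth, the conjecture can also be checked mechanically. A minimal Lean 4 sketch, where my rendering models "Def. p" as a bare hypothesis D that yields p ↔ ~p:

theorem invalid_liar_def (P D : Prop) (h : D → (P ↔ ¬P)) : ¬D := by
  intro hD                                     -- suppose Def. q        (S)
  have hiff : P ↔ ¬P := h hD                   -- q = ~q                (MP)
  have hnp : ¬P := fun hp => (hiff.mp hp) hp   -- ~q                    (IP)
  exact hnp (hiff.mpr hnp)                     -- q; contradiction      (ES, IP)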

We might say that all statements of self-reference involve such a definition statement implicitly because every self-reference statement necessarily requires a self-stipulation, which is of course a type of stipulation, i.e. a definition. However, some self-referential statements, e.g. “This statement is a written statement,” i.e. “p = this statement (p) is a written statement (w) = (p = w),” involve a valid form of definition, Def. p > (p = w), such that the definition is not false. 

In any case, the above proves that the Liar Statement is falsidical and not a real antinomic paradox.

Thursday, January 31, 2019

Legislation: a Card Game


Legislation

E.B. Nelson, 1.31.19 (c)

A Card Game for 3 to 8 Players


Components:

16 Representative cards; 81 Bill cards; Pro and Con Pledge Token sets numbered for players 1-8; Agenda Point Tokens


Rules:

You play as an elected Representative of Congress. Your goal is to get as many Bills that promote your Agenda passed as you can over the course of the legislative Session, while blocking all Bills that are detrimental to your Agenda.

You first draw a Representative to play as from the Representative Deck. The Representative Deck must be constructed according to the number of players so that there is an even distribution of Representatives per Agenda (see the Deck list below).

Next, shuffle the deck of 81 Bills and deal 9 face down to each player. You will look at these cards but conceal them from the other players.

On the first turn, players decide which of their cards they wish to keep, if any, and which they wish to discard. Discard into your own Discard Pile in front of you.

On the second turn, players draw the number of cards needed to make a full hand of 9 from any of the other players’ Discard Piles and/or the remaining Deck. On each subsequent turn, at the beginning of your turn, you can discard into your Discard Pile one card and pick up one card from any of the other piles, or you can choose not to discard.

On the third turn, place 3 Bills in front of you face up that you wish to place On Deck for future votes. Once you place a card On Deck, you cannot discard and replace it. It must eventually be voted on.

On the fourth turn, you seek Pledges to vote on your Bills from the other players. You do this by bargaining with each of them, offering to trade cards from your hand, to make Pledges on their Bills, or to make anti-Pledges on their opponents’ Bills. Pledges made by you and the other players are final only when your turn ends.

On the fifth turn, choose one of your three Bills On Deck and put it to a vote. Everyone votes, and the vote is “Yay,” “Nay,” or “Abstain.” Representatives can choose to betray their Pledges by voting against their Pledged or anti-Pledged votes or by abstaining, but if they do so, whether once or multiple times on a Vote round, they must skip the next Pledge turn entirely, neither seeking Pledges nor making any (although players can still Pledge on their On Deck Bills per bargains with other players). A Bill passes if it has a positive vote total; otherwise it fails. A “Yay” counts as 1, a “Nay” counts as -1, and an “Abstain” counts as 0. If your Bill passes, you gain the Agenda Points listed for your Agenda on the Bill, as do the other players with that Agenda, and others lose points as listed. Place passed Bills in the Passed Bills Pile. If the Bill does not pass, place it in the Failed Bills Pile. Finally, replace the empty On Deck spot with a new face up Bill.
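A sketch of the vote arithmetic and scoring in Python; the assumption that a dual-Agenda Representative simply sums the points for both Agendas is mine:

VOTE_VALUES = {"Yay": 1, "Nay": -1, "Abstain": 0}

def bill_passes(votes: list[str]) -> bool:
    """A Bill passes iff its vote total is positive."""
    return sum(VOTE_VALUES[v] for v in votes) > 0

def score_for(player_agendas: list[str], bill_points: dict[str, int]) -> int:
    """Agenda Points a player gains or loses when a Bill passes."""
    return sum(bill_points[a] for a in player_agendas)

# Five players vote on Bill 26 (+1 Social Progressive, -1 Fiscal Conservative):
votes = ["Yay", "Yay", "Nay", "Abstain", "Yay"]
bill_26 = {"Social Progressive": 1, "Social Conservative": 0,
           "Fiscal Progressive": 0, "Fiscal Conservative": -1}
print(bill_passes(votes))                                   # True (total = 2)
print(score_for(["Social Progressive"], bill_26))           # +1
print(score_for(["Social Conservative",
                 "Fiscal Conservative"], bill_26))          # -1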

Next is a Pledge round again, followed by another Vote round, and so on. The game ends when all players run out of Bills in their hands and On Deck, i.e. after 9 votes for each player (optional: players can have hands of 6 instead for a shorter game).

Some cards also have Special Effects if they pass, and Representatives each have a Special Effect they can enact only once per game. Follow the rules for these Effects as described. The winner is the player with the highest Agenda Point total at the end of the Session.   


Representatives (two of each type):

Single Agenda:
  1. Social Progressive
  2. Social Conservative
  3. Fiscal Progressive
  4. Fiscal Conservative
Dual Agenda:
  1. Social Progressive, Fiscal Progressive
  2. Social Progressive, Fiscal Conservative
  3. Social Conservative, Fiscal Conservative
  4. Social Conservative, Fiscal Progressive


Representative Deck Builds per Number of Players:

Where X and Y are opposing Social agendas (Progressive and Conservative, in either order), and A and B are opposing Fiscal agendas (in either order):
3 Players: X, A, YB
4 Players: All single or all dual
5 Players: XA, YB, XB, Y, A
6 Players: X, Y, A, B, XA, YB
7 Players: XA, XB, YA, YB, X, Y, A
8 Players: All


Bill Deck Chart:

Agenda Points scored per Representative Agenda if the Bill passes:

Bill # | Social Progressive | Social Conservative | Fiscal Progressive | Fiscal Conservative
1 | +1 | +1 | +1 | +1
2 | +1 | +1 | +1 | -1
3 | +1 | +1 | +1 | 0
4 | +1 | +1 | -1 | +1
5 | +1 | +1 | -1 | -1
6 | +1 | +1 | -1 | 0
7 | +1 | +1 | 0 | +1
8 | +1 | +1 | 0 | -1
9 | +1 | +1 | 0 | 0
10 | +1 | -1 | +1 | +1
11 | +1 | -1 | +1 | -1
12 | +1 | -1 | +1 | 0
13 | +1 | -1 | -1 | +1
14 | +1 | -1 | -1 | -1
15 | +1 | -1 | -1 | 0
16 | +1 | -1 | 0 | +1
17 | +1 | -1 | 0 | -1
18 | +1 | -1 | 0 | 0
19 | +1 | 0 | +1 | +1
20 | +1 | 0 | +1 | -1
21 | +1 | 0 | +1 | 0
22 | +1 | 0 | -1 | +1
23 | +1 | 0 | -1 | -1
24 | +1 | 0 | -1 | 0
25 | +1 | 0 | 0 | +1
26 | +1 | 0 | 0 | -1
27 | +1 | 0 | 0 | 0
28 | -1 | +1 | +1 | +1
29 | -1 | +1 | +1 | -1
30 | -1 | +1 | +1 | 0
31 | -1 | +1 | -1 | +1
32 | -1 | +1 | -1 | -1
33 | -1 | +1 | -1 | 0
34 | -1 | +1 | 0 | +1
35 | -1 | +1 | 0 | -1
36 | -1 | +1 | 0 | 0
37 | -1 | -1 | +1 | +1
38 | -1 | -1 | +1 | -1
39 | -1 | -1 | +1 | 0
40 | -1 | -1 | -1 | +1
41 | -1 | -1 | -1 | -1
42 | -1 | -1 | -1 | 0
43 | -1 | -1 | 0 | +1
44 | -1 | -1 | 0 | -1
45 | -1 | -1 | 0 | 0
46 | -1 | 0 | +1 | +1
47 | -1 | 0 | +1 | -1
48 | -1 | 0 | +1 | 0
49 | -1 | 0 | -1 | +1
50 | -1 | 0 | -1 | -1
51 | -1 | 0 | -1 | 0
52 | -1 | 0 | 0 | +1
53 | -1 | 0 | 0 | -1
54 | -1 | 0 | 0 | 0
55 | 0 | +1 | +1 | +1
56 | 0 | +1 | +1 | -1
57 | 0 | +1 | +1 | 0
58 | 0 | +1 | -1 | +1
59 | 0 | +1 | -1 | -1
60 | 0 | +1 | -1 | 0
61 | 0 | +1 | 0 | +1
62 | 0 | +1 | 0 | -1
63 | 0 | +1 | 0 | 0
64 | 0 | -1 | +1 | +1
65 | 0 | -1 | +1 | -1
66 | 0 | -1 | +1 | 0
67 | 0 | -1 | -1 | +1
68 | 0 | -1 | -1 | -1
69 | 0 | -1 | -1 | 0
70 | 0 | -1 | 0 | +1
71 | 0 | -1 | 0 | -1
72 | 0 | -1 | 0 | 0
73 | 0 | 0 | +1 | +1
74 | 0 | 0 | +1 | -1
75 | 0 | 0 | +1 | 0
76 | 0 | 0 | -1 | +1
77 | 0 | 0 | -1 | -1
78 | 0 | 0 | -1 | 0
79 | 0 | 0 | 0 | +1
80 | 0 | 0 | 0 | -1
81 | 0 | 0 | 0 | 0
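
Since the chart is exactly the full enumeration of the point values {+1, -1, 0} across the four Agendas (3^4 = 81), the Bill Deck can be generated rather than transcribed; a Python sketch:

from itertools import product

AGENDAS = ("Social Progressive", "Social Conservative",
           "Fiscal Progressive", "Fiscal Conservative")

# Point values vary fastest in the rightmost (Fiscal Conservative) column,
# cycling through +1, -1, 0, matching the chart's ordering.
bills = [dict(zip(AGENDAS, combo), number=i + 1)
         for i, combo in enumerate(product((1, -1, 0), repeat=4))]

assert len(bills) == 81
assert bills[25] == {"Social Progressive": 1, "Social Conservative": 0,
                     "Fiscal Progressive": 0, "Fiscal Conservative": -1,
                     "number": 26}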