Friday, July 31, 2020

The Empty Room

From the massive woman's mouth came a small girl’s voice,

asking us, “Did you see my baby brother?”

She’d surprised us in the hotel lobby, by the stairwell,

saying she’d been a good big sister, till she lost him.

We escaped her wheedling and got to our room at last,

but in the night we were awoken by a child’s crying.

The crying came from our bathroom pipes,

which the proprietor at length told the history of.

“‘Devil’s veins,’ locals called ’em a century past,”

he said, tapping, “when they saw ’em installed here.”

Half-awake, I heard the child crying again hours later.

I left my wife sleeping to follow it along the pipes.

I’d left the door ajar, not wanting to wake her or leave her without the key.

The pipes ran along the hall ceiling and bent left and down another stair,

splitting like roots toward the basement.

Needing to find those cries in the pipes made me abandon sense—

my wife was missing when I returned.

I close my eyes now and see

the marks her nails made in the sheets as she was dragged out.

The proprietor was disassembling electronics,

looking for bugs and finding only insects, when I ran in.

He told me how we couldn’t leave,

an exchange had been made, none of it was his choice.

To find her, I would have to become the kidnapper of the child I’d searched for,

the horror I’d feared in the walls.

Where other guests on other floors were kept,

I heard one say, hoarsely chuckling, “Never stop believing.”


Occult Streets

We catch a glimpse of our death in the face of a stranger,

down an occult street.

Along an alien alley once,

I heard a child at a window whisper my name without opening his mouth.

A gathering under the window seemed to recognize me

and came at me with open arms.

One of them, a woman in a guard uniform,

clasped my shoulders hard, as if to detain me, then laughed.

She leaned toward a gaunt, faintly familiar man and stroked him,

grinning, watching for my reaction.

Following her inside, I found the child standing beside an urn. 

All the others had withered into husks.

“Tell me what it says,” whispered the child, pointing to the urn’s inscription,

but I refused to look.


Tuesday, June 30, 2020

A Visitor’s Souvenirs

After the mine’s veins were exhausted,

the town shriveled on the mountainside, skeletal, lonely.

Even the last holdouts had lost hope.

But then a surveyor broke through a passage to a long, thin chamber.

The sheriff grabbed the wild-eyed surveyor’s map,

blocking him from contacting his home office.

Alluring memories bloomed in those townsfolk’s minds

as they followed the sheriff into the mine’s mouth.

Only the two church women refused to follow.

They stood muttering warnings about an ancient visitation.

The townsfolk each saw the visitor in the chamber as an old friend,

rather than as a collector of minds.

Today you’ll find the few elderly inhabitants left from that time

to be free from much thought or speech.


Meta-Philosophical Logic


Consider philosophical tropes as approaches to taking a position on a topic. If T is a topic, such as identity, knowledge, or value, then one can answer a What (object) question, seeking to define T, a How (manner) question, seeking to explain T's workings, a When-Where-Who (time, place, subject) question, seeking to provide T's origin, or a Why (purpose) question, seeking to establish T's value or purpose. Call the What route D for definition, the How route P for process, the When-Where-Who route O for origin, and the Why route V for value.

Then for D, there might be several main lines of answers. (1) one could answer in the negative (n), that there is no such thing as T. (2) one could answer reductively (r) and say that T is only some assemblage or name for other objects. (3) one could assent to the full reality of T and seek to define what differentiates (d) it from other like things. Or (4) one could assent to T's reality but instead seek to define it intrinsically (i) according to necessary and sufficient properties. So, Dn(T), Dr(T), Dd(T), or Di(T) are possible definition-approach functions on T.

Likewise, it seems one could perform these moves with P, O, and V—with the added change to i for P where functions and mechanisms, with inputs, outputs, and conditions, would also have to be given along with necessary and sufficient properties; with the change to i for O where a particular event description, with a reason for its historical novelty, would have to be given along with necessary and sufficient properties; and with the change to i for V where an account of T's valued effect would have to be given as a product of its necessary and sufficient properties, or those properties themselves would have to be shown to be valued.

So, we have:

1. Conceptual Definition of T: Dn(T), Dr(T), Dd(T), or Di(T)    
2. Procedural Explanation of T: Pn(T), Pr(T), Pd(T), or Pi*(T)
3. Historical Origin of T: On(T), Or(T), Od(T), or Oi**(T)
4. Practical Value of T: Vn(T), Vr(T), Vd(T), or Vi***(T)

On a given philosophical topic, these seem to be the available philosophical tropes. Then, within each of the four moves for the four approaches, there are a variety of more nuanced sub-choices that may be available. Broadly, one might begin to map out the entire philosophical discourse generated on any given topic by outputting the available primary theories, objections and other responses to those theories, and defenses against the objections, along with various combinations of compatible theories to form new primary theory-objection-defense sets. One could then look for applications for the resulting maps as they intersect with other fields, if there are any.
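The sixteen primary trope functions generated this way can be enumerated mechanically. Here is a minimal sketch; the labels follow the text's notation, but the function itself is my own illustration:

```python
from itertools import product

# The four approach routes and four answer moves from the text.
approaches = {"D": "definition", "P": "process", "O": "origin", "V": "value"}
moves = {"n": "negative", "r": "reductive", "d": "differential", "i": "intrinsic"}

def trope_functions(topic):
    """Enumerate the sixteen primary trope functions available on a topic T."""
    return [f"{a}{m}({topic})" for a, m in product(approaches, moves)]

tropes = trope_functions("identity")
print(len(tropes))  # 16
print(tropes[:4])   # ['Dn(identity)', 'Dr(identity)', 'Dd(identity)', 'Di(identity)']
```

Mapping a discourse would then mean attaching primary theories, objections, and defenses to each of these sixteen nodes.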

However, one might begin to worry about the partiality of the set of T’s. Perhaps the implementation of discourses on T’s provides us true knowledge and benefit with respect to real facets of the world—but is there not something myopic here? What is the set of all T’s? What are T’s on such that they are T’s? What do all philosophical topics share and how are they generated? How do they function as topics such that they are philosophically relevant? How do they attach to the world and become applicable? Could our trope functions range over the whole set T rather than an individual T?

It seems that one could (1) negate the set of all philosophical topics as such by denying that there is ever any such thing as a philosophical topic (or philosophical activity onto such); (2) reduce all philosophical topics as such to some other object or field (or some other type of activity onto such); (3) differentiate all philosophical topics as such from other types of topic; or (4) identify the necessary and sufficient properties of a philosophical topic as such. From this, a total meta-philosophical mapping could be produced. This would provide a formal domain description for philosophy.

Possible schemas for functions D, P, O, and V:

n = ∀x(Tx → ~x)
r = (T = φ(x, y))
d = ((~(u ∈ T) → (T = S)) & (u ∈ T))
i = (T ≡ φ(x, y))

Tuesday, May 26, 2020

A Modal Solution to the Skeptical Paradox


       Here is the infamous Skeptical Paradox: (1) One cannot know that one is not being deceived in a skeptical scenario, such as a dream or a simulation (i.e. one cannot know that the skeptical hypothesis is false); (2) if one knows some everyday things about reality are true (call this E), then one knows that one is not in a skeptical scenario; (3) one knows some everyday things about reality are true. Clearly, if all three premises are true, then one must conclude that one knows the skeptical hypothesis is false and one does not know the skeptical hypothesis is false, a contradiction.

       First, one would be right to suspect that the "cannot" in the first premise is rather strong. Some (e.g. Pritchard, 2013) have argued that the weaker formulation “one does not know one is not in a skeptical scenario” (call this SH) works well enough to get the paradox. However, it is not clear that one could get the paradox from the weaker version. If the skeptic is not claiming that one cannot know, then her interlocutor, for all she knows, does know, since it is possible to know. In order for the skeptic to say that any given person does not know, she must, ipso facto, also claim that one cannot know. Also, as long as a potential resolution is available, we do not have a strict paradox, but only a temporary obstruction. Thus, it seems that to get the paradox per se, we do need the strong version of the first premise. And it is in the strength of this premise, which actually imposes a necessity condition, that we may look for a solution to the paradox.

    Here is my modal solution to the skeptical paradox: The statement, "One cannot know one is not in some skeptical scenario (q)" is actually very strong. In epistemic modal logic, it would be: 

            ~◊KS(~q) 

By De Morgan's law for modal operators, this gives us: 

            □~KS(~q) 

I.e. "It is necessarily the case that one does not know one is not in some skeptical scenario (q)." 

     Why is it necessarily the case? It could be that there would always be a way I could know, in all possible worlds. It could even be that I would always know, were I in one. It hasn't been proven to me that these counterclaims are false, so I can't assent to the first premise. I have no basis for believing it is necessarily the case. That is, to dispense with the entire problem, one doesn't even have to deny that it is necessarily the case that one doesn’t know. One just has to say that one has no reason to assent to this claim. There is no compelling reason to believe it. This makes the entire problem only a hypothetical problem, a problem only when one assents to something one has no grounds for believing, and not a real epistemic problem.

      Perhaps one might then claim that if I do assent to the third premise and the whole closure principle, then I can deduce that I am not in a skeptical scenario and thereby know it, given that I have no reason to dispute this. So, I am only defaulting back to the neo-Moorean position by a different route. However, in saying that I am neither denying nor affirming that it is necessarily the case that I don’t know I’m not in a skeptical scenario, I am also affirming that it is possible for me to not have everyday knowledge, though at present I do, contingently. What this contingency consists in is not asserted or given, as this is a separate question, independent of my assertion of present contingent everyday knowledge (of some everyday fact p).

    That is, we avoid the simple Moorean rejection of skeptical scenarios here by asserting ◊~KS(p), together with the assertion of E, which is the same as asserting that it is not necessary that one possesses everyday knowledge. Call this the contingent everyday knowledge concession to radical skepticism. What is one’s knowledge contingent on? That at some point one could have a reason to assert that ~KS(~q). If some evidence were presented that caused one to assert this, then one could no longer uphold the everyday knowledge claim. This keeps the first conjunct in ◊~KS(~q) & ◊KS(~q) active.

     This fragility of our everyday knowledge, contingent on the possibility of some justifying evidence for not knowing if we are being deceived, seems right. This is the truth that skeptical scenarios reveal. Through our modal solution, we can retain this truth without getting ourselves stuck in a paradox.


     We can state this solution more concisely, as a reformulation of all three premises. First, from two premises, the Epistemic Necessitation Rule and the Contingent Everyday Knowledge assumption, we can prove that the first premise of the Skeptical Paradox is false:

            P1. □(KS(p) → KS(~q))                                          Epistemic Necessitation Rule
            P2. (KS(p) & ◊~KS(p)) v (~KS(p) & ◊KS(p))        Contingent Everyday Knowledge
                        1. ~◊KS(~q)                                                     Supposition, Strong SH
                        2. □~KS(~q)                                                     1, Modal De Morgan’s Law
                        3. ~KS(~q)                                                        2, Axiom T
                        4. ~KS(p)                                                          3, P1, Modus Tollens (with Axiom T)
                                    5. (KS(p) & ◊~KS(p))                         Supposition, first disjunct P2
                                    6. KS(p)                                                5, Simplification
                                    7. KS(p) & ~KS(p)                               4, 6, Conjunction
                        8. ~(KS(p) & ◊~KS(p))                                   5-7, Indirect Proof
                        9. (~KS(p) & ◊KS(p))                                     8, P2, Disjunctive Syllogism
                        10. □(~KS(~q) → ~KS(p))                            P1, Transposition
                        11. □~KS(~q) → □~KS(p)                            10, Axiom K
                        12. □~KS(p)                                                    2, 11, Modus Ponens
                        13. ~◊KS(p)                                                    12, Modal De Morgan’s Law
                        14. ◊KS(p)                                                       9, Simplification
                        15. ◊KS(p) & ~◊KS(p)                                   13, 14, Conjunction
            16. ~~◊KS(~q)                                                             1-15, Indirect Proof
            17. ◊KS(~q)                                                                 16, Double Negation
                QED

     The Contingent Everyday Knowledge (CE) premise (KS(p) & ◊~KS(p)) v (~KS(p) & ◊KS(p)) by Axiom D gives us (KS(p) & ◊~KS(p) & ◊KS(p)) v (~KS(p) & ◊~KS(p) & ◊KS(p)) and by Distribution gives us (◊~KS(p) & ◊KS(p)) & (KS(p) v ~KS(p)), so that by Tautology Elimination we get: ◊KS(p) & ◊~KS(p), as another form of CE.

     Meanwhile, instead of the usual use of the closure principle on knowledge of the implication p → ~q, we can say that if p necessarily (by definition) excludes q, then knowledge of p necessarily implies knowledge of the negation of q, since the de re knowledge referred to here is the same, or includes the same. We state this as the Epistemic Necessitation Rule (EN), given p → ~q: □(KS(p) → KS(~q)). We need EN above (in P1-17, QED) to show that CE & EN ⊢ ◊KS(~q) (i.e. to disprove the first premise of the Skeptical Paradox). If we also grant that ◊~KS(~q), as the sine qua non of the entire paradox, i.e. as the Skeptical Background Assumption (SB), then we can conjoin this to get ◊~KS(~q) & ◊KS(~q), which we can call the Weak Skeptical Hypothesis (WSH). Then we can conjoin CE and WSH to get (◊KS(p) & ◊~KS(p)) & (◊KS(~q) & ◊~KS(~q)), which we will call general Epistemic Contingency (EC). Likewise, given the Epistemic Background Assumption (EB) of the possibility of knowledge ◊KS(p), together with EN & WSH, we can derive ◊~KS(p) & ◊KS(p), that is, CE.

P1. ◊KS(~q) & ◊~KS(~q)             Weak Skeptical Hypothesis (WSH)
P2. □(KS(p) → KS(~q))            Epistemic Necessitation (EN) 
P3. ◊KS(p)                                   Epistemic Background (EB) 
1. □KS(p) → □KS(~q)               P2, Axiom K  
        2. □KS(p)                                Supposition  
        3. □KS(~q)                            1, 2, Modus Ponens  
        4. ◊~KS(~q)                            P1, Simplification  
        5. ~□KS(~q)                           4, Modal De Morgan’s Rule  
        6. □KS(~q) & ~□KS(~q)          3, 5, Conjunction 
7. ~□KS(p)                                    2-6, Indirect Proof 
8. ◊~KS(p)                                      7, Modal De Morgan’s Rule 
9. ◊KS(p) & ◊~KS(p)                  P3, 8, Conjunction 
QED

If we then combine SB & EB as the Background Assumption (BA), then given EN & BA, we can show that CE and WSH are materially equivalent.

The Modal Solution:

◊KS(p) & ◊~KS(p)             Contingent Everyday Knowledge (CE)
◊~KS(~q) & ◊KS(~q)         Weak Skeptical Hypothesis (WSH)
◊KS(p) & ◊~KS(~q)           Background Assumption (BA)
□(KS(p) → KS(~q))           Epistemic Necessitation (given p → ~q) (EN)
 _______________

(EN & BA) → (CE ≡ WSH)
 _______________
 _______________

Where E := KS(p) and SH := ~KS(~q),

(◊E & ◊~E) & (◊SH & ◊~SH)                                      Epistemic Contingency (EC)
(◊E & ◊~E) ≡ (◊SH & ◊~SH)     Epistemic Equivalence (given EN & BA) (EE)

    That is, we have derived from our solution an important result. Grant the necessary background assumptions of the problem, together with the concession that, if it is derived a priori that everyday facts exclude a skeptical scenario, then it is necessary that if one knows everyday facts, then one knows one is not in a skeptical scenario. From this, we find that the contingency of everyday knowledge is materially equivalent to the contingency of the skeptical hypothesis. This in turn shows a strong correlation and mutual dependence between everyday knowledge and the skeptical hypothesis in their epistemic contingency.
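As a sanity check, the premises (EN, EB, SB) and the derived claims (CE, WSH, and the falsity of the strong skeptical hypothesis) can all be verified together in a toy Kripke model. The three worlds, the valuation, and the epistemic accessibility relation below are my own illustrative assumptions, not part of the argument; the metaphysical □/◊ are read over all worlds:

```python
# Toy Kripke model: w1 (ordinary, informed), w2 (ordinary, uncertain),
# w3 (skeptical scenario: q true, p false). Illustrative assumptions only.
V = {"w1": {"p": True,  "q": False},
     "w2": {"p": True,  "q": False},
     "w3": {"p": False, "q": True}}
E = {"w1": {"w1"}, "w2": {"w2", "w3"}, "w3": {"w2", "w3"}}  # epistemic accessibility
W = set(V)

def K(phi):
    # KS: phi holds in every epistemically accessible world
    return lambda w: all(phi(u) for u in E[w])

def Not(phi):
    return lambda w: not phi(w)

def Implies(a, b):
    return lambda w: (not a(w)) or b(w)

def box(phi):   # metaphysical necessity: true in all worlds
    return all(phi(w) for w in W)

def dia(phi):   # metaphysical possibility: true in some world
    return any(phi(w) for w in W)

p = lambda w: V[w]["p"]
q = lambda w: V[w]["q"]

EN = box(Implies(K(p), K(Not(q))))            # Epistemic Necessitation
EB = dia(K(p))                                # Epistemic Background
SB = dia(Not(K(Not(q))))                      # Skeptical Background
CE = dia(K(p)) and dia(Not(K(p)))             # Contingent Everyday Knowledge
WSH = dia(Not(K(Not(q)))) and dia(K(Not(q)))  # Weak Skeptical Hypothesis
strong_SH = not dia(K(Not(q)))                # ~◊KS(~q), the strong first premise

print(EN, EB, SB, CE, WSH, strong_SH)  # True True True True True False
```

In this model every background assumption and derived contingency holds while the strong first premise fails, which is what the derivations above predict.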

The Rationality of Knowledge: On the Logical Necessity of the Epistemic Closure Principle


Saturday, February 29, 2020

Notes on Modal Contextualism


Just as the scope of possible worlds under consideration can be varied according to the type of possibility, whether logical, physical, or social, so it can be limited by the application of possibility as evidence for or against a proposition, whether both for and against are allowed, or only for, or only against. So, the skeptical hypothesis has a different modal scope from ordinary hypotheses in the same way that physical possibility has a different modal scope from logical possibility.

Label all those possible worlds that disprove the skeptical hypothesis SH as DP. The set of possible worlds DP is excluded under an SH context, just as the set of possible worlds in which the laws of physics do not obtain is excluded from a physical-possibility context. In effect, for the SH, the type of possibility is limited to that which proves the SH. All possible evidence that disproves it is categorically excluded. So, this is a new proof-of-SH, PSH, possible-worlds set type that we are limited to considering. When we are permitted the broader scope of possible worlds that includes DP possible worlds, we exit the SH context.

“Suppose you are in a skeptical scenario” is the same as saying, “For the sake of argument, exclude possible worlds that would disprove you are in a skeptical scenario.” Then, to exit the SH context, one only has to say, “I am now including possible worlds that would disprove that I am in a skeptical scenario.”
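This context mechanism can be sketched with plain sets; the world labels below are hypothetical placeholders of my own:

```python
# Hypothetical world labels, for illustration only.
all_worlds = {"w_ordinary", "w_dream", "w_simulation", "w_disproof"}
DP = {"w_disproof"}        # worlds whose evidence would disprove SH
PSH = all_worlds - DP      # worlds admissible within an SH context

def in_sh_context(worlds_under_consideration):
    # A context is an SH context iff every DP world is excluded from it
    return worlds_under_consideration.isdisjoint(DP)

print(in_sh_context(PSH))         # True: DP worlds are excluded
print(in_sh_context(all_worlds))  # False: re-admitting DP worlds exits the context
```

Entering and exiting the SH context is then just narrowing to, or widening from, the PSH set.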

What Prior Probability Do You Assign to the Hypothesis that Ghosts Exist?


     In the "Of Miracles" section of the Enquiry, Hume makes a proto-Bayesian argument to the effect that we should assign a very low probability to the hypothesis that a miracle has occurred, owing to his account of the laws of nature, which such a miracle would, by definition, contradict. He takes a law of nature to be a regularity observed to be universally consistent across all available relevant evidence, such that if evidence in favor of a miracle that contradicts that law is presented, we must weigh all the prior evidence for the law against the new evidence for the miracle. 

     The existence of a ghost would contradict basic principles of our modern scientific understanding of life, the mind, and the physical fabric of reality. According to Hume's rough model, all of the vast quantity of the mundane observations we have made that have confirmed these principles would weigh against whatever evidence that could be found in favor of the existence of a ghost. 

     Now, it seems that Karl Popper's account, taken simpliciter, would be at a loss in this situation. This is because the corollary to the principle that a universal statement, i.e. a law, can be deductively falsified by a single contrary piece of evidence is that an existential statement, i.e. the statement that there exists at least one instance of a given phenomenon, can be deductively verified by a single affirming piece of evidence. However, this means that the claim that a ghost exists would be deductively verified by a single piece of evidence that, on parity, would be sufficient in form and content to verify the existence of more mundane phenomena. For example, one good-quality photograph in the right context is sufficient to prove that a bird species thought to be extinct still lives; this occurred in 2015 with the blue-eyed ground dove. So, in the right context, one good-quality photograph of a ghost should be enough to prove the existence of a ghost. In practice, however, we find that this is not the case. Despite the long history of doctored photos and other demonstrated fakes purporting to show evidence of ghosts, a number of fairly good photographs of supposed spectral events exist for which no definite worldly explanation has yet been found, and scientists and the public are very far from concluding that ghosts exist as a result. So, Popper's theory apparently needs something more here.

     However, I'm not sure that Bayesian Confirmation Theory can fully account for what is happening here either. The problem is that the prior probabilities we assign to P(e) and P(h) will depend not only on our prior beliefs about ghosts-- and this is a serious problem due to the fact that the scarcity of evidence doesn't allow for a ready "washing out" of priors-- but also on what we take each piece of evidence to be. The likelihood of a photograph containing an optical artifact due to lens flare or digital processing might be many times greater than the likelihood of a photograph containing the manifestation of a paranormal phenomenon, but the same image might stand as being either depending on whom we ask. This problem appears to go beyond the scope of prior probabilities, such that even if the inherent prior probability problem of BCT could be resolved, there would still be a larger question as to weighing evidence and determining what it actually represents in the grand scheme of our model of reality. 
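A minimal Bayesian sketch makes the asymmetry concrete. The numbers below are illustrative assumptions of my own, not measured values: a tiny prior for the ghost hypothesis, a modest prior for a surviving bird species, and a photograph far more likely under a mundane explanation (lens flare, fakery) in the ghost case:

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    # Bayes' theorem: P(h|e) = P(e|h)P(h) / P(e)
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Illustrative priors and likelihoods only.
ghost = posterior(prior_h=1e-6, p_e_given_h=0.9, p_e_given_not_h=0.01)
dove = posterior(prior_h=0.05, p_e_given_h=0.9, p_e_given_not_h=0.001)

print(ghost)  # ~9e-5: the photograph barely moves the ghost hypothesis
print(dove)   # ~0.98: the same kind of evidence nearly settles the dove case
```

The sketch shows the prior-probability side of the problem; it does not resolve the further question raised above of what the evidence itself is taken to be, since that choice is what fixes the likelihoods.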

Friday, January 31, 2020

"Crazy" Notes as MH Props





Does Inductive Reasoning Originate in a Deductive Fallacy?

Hume in the Enquiry shows that the idea of causal relation as a hidden force behind events is vacuous, since this is an idea with no actual content beyond the temporal-spatial conjunction of the events in question. But if this idea is really empty, how did we end up with it as the central mechanism of our reasoning about the world?

In section VII, Hume suggests some possible answers to this question-- including the conjecture that the idea of causal power being behind things derives from our own power over our bodies, or from some notion of divine power-- but he dismisses each of these answers as circular or inadequate. Instead, Hume proposes that “it is not reasoning” that produces the idea of causal relation, but some more primitive tendency, one that is present even in children. On this view, the problem of induction would be solved through a kind of utility theory that admits inductive inference has no rational basis but points out that the benefit we have gotten from it already, throughout our formation as a species, has shaped us so as to rely on it as a guide to future benefit.

However, another thought is that inductive reasoning in itself may be a product of a common mistake involving deductive implication. The advantage of this view is that it posits induction as prior to and independent of notions of causation and the uniformity of nature-- instead, these notions would follow from it, after it is adopted. More is explained by having these complex ideas be the product of a simple reasoning mechanism than vice versa. The disadvantage is that it leaves induction in an even worse state than Hume's theory, since it claims that induction as such is not only unsupported but fallacious.

Suppose some people start with the experiential data of always encountering two terms together, e.g. red apples and sweetness. They have collected a data set of all the times the terms have occurred together and found that there was no case when the first occurred but not the second. So this makes them wonder: "is there an implication here, such that we can know for sure that 'if red apples, then sweetness' is always true?" Suppose also that the deductive inferential principle of modus ponens is already known, whether instinctively or explicitly, i.e. (1) if p, then q; (2) p; (C) therefore, q. Now, again, they want to know if there is in general such an implication relation (1) between the two phenomena. 

Some pairs of phenomena necessarily have this relation by definition or by a part-whole relation. For instance, because we know that bachelors are by definition unwed males, it is necessarily the case that if someone is an unwed male, then he is a bachelor. Likewise, because a red apple must be a thing that is red, it is necessarily the case that if something is a red apple, then that thing is red. Notice in these cases that we can also always infer the conjunction of the two terms from the antecedent, given the implication (if and only if the implication), i.e. (p → q) ≡ (p → (p & q)). Then, again, suppose people want to know if they can infer that if a thing (t) is a red apple (A), then that thing is sweet (S). Naively, they might note that for all things that are red apples that they have ever encountered, those things have been sweet, i.e. (A(t) & S(t)). This might lead them to mistakenly affirm the consequent, such that A(t) → (A(t) & S(t)). Because this has the same form as (p → (p & q)), they can then conclude A(t) → S(t), that is, if something is a red apple, it is sweet. The problem of course is the deductively fallacious step of affirming the consequent.

But isn't this exactly what goes on with inductive reasoning? We note that two empirical things have always appeared together, in conjunction, and that if the first of the two occurs, it has never been the case that the second did not occur, and from this we infer that the first is the cause of the second, that is, the first implies the second. This is all that is needed to get the basic form of inductive reasoning. But if this is so, then inductive reasoning is just a product of a mistake in deductive reasoning.
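The contrast between the valid equivalence and the invalid step can be checked mechanically with a small truth-table validity tester (a sketch of my own, not from the text):

```python
from itertools import product

def valid(premises, conclusion):
    # An argument form is valid iff no valuation makes all premises true
    # while the conclusion is false.
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

implies = lambda a, b: (not a) or b

# Modus ponens: p -> q, p  |-  q   (valid)
mp = valid([lambda p, q: implies(p, q), lambda p, q: p], lambda p, q: q)

# Affirming the consequent: p -> q, q  |-  p   (invalid)
ac = valid([lambda p, q: implies(p, q), lambda p, q: q], lambda p, q: p)

# The equivalence used above: (p -> q) iff (p -> (p and q))   (valid)
eq = valid([], lambda p, q: implies(p, q) == implies(p, p and q))

print(mp, ac, eq)  # True False True
```

The counterexample valuation for affirming the consequent is p false, q true, which is exactly the possibility the red-apple reasoner ignores: sweetness without a red apple.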

This really is not in disagreement with Hume at all, though. It just fleshes out an intermediary step between an even more primitive origin in instinct, which I think is right, and the conception of ideas of cause and uniformity. I think it could be helpful because it shows that there really is no difference between an analytic implication and a law of nature other than the fact that the law of nature is not actually necessary. What it shows is that we don't need the cause and uniformity ideas at all.

Another reason why I see this intermediary step as likely is that affirming the consequent is such a widespread, often automatic mistake elsewhere among humans. The way I state it here, making each step explicit, might suggest that I mean it is actually an explicit, deliberate process, but I am really referring to a more unconscious, instinctual process. We see this same process in all kinds of prejudices and magical thinking. An all too common, but usually implicit, line of thought goes: (1) If all people in a group have a trait, then some people in that group have that trait. (2) Some people in a group have a trait. (3) Therefore all people in that group have that trait. Maybe we can even say animals make this mistake. Take Pavlov's dogs: (1) If there is a bell sound and food, then there is a bell sound. (2) There is a bell sound. (3) Therefore there is a bell sound and food (so I should excitedly bark for the latter).

If this affirming-the-consequent-based thinking is both common and prior to the conscious development of valid deductive reasoning, meaning that formal, explicit deductive reasoning is actually the newer innovation, then the advent of deductive reason calls into question the validity of the very type of thinking one has come to rely on in order to function. This is where the introduction of causal powers and uniformity comes into play, as a justification for upholding our everyday inductive practices despite the challenge presented to them by deduction.


Monday, September 30, 2019

A Less Mysterious Solution to the Paradox of Analysis

Moore's paradox of analysis strikes me as having a simpler resolution than what recent commentators, such as Richard Fumerton in "The Paradox of Analysis," have proposed. The major premise of the paradox, as Fumerton states it,
  
"If 'X is good' just means 'X is F' then the question 'Is what is F good?' has the same meaning as the question 'Is what is F, F?' " (478) 

is unsound because it assumes that the analysis of an object's being "good" is an identity statement (or at least, more charitably, that what is sought is an identity statement). One arrives at the paradox by assuming that an analysis just involves a chain of identities. In fact, what one wants here is an equivalence, conditions such that if and only if they obtain, something is good-- not an identity with the same intensional value under a different name. This latter notion just perpetuates the basic error of the Meno.

For example, an informative analysis of "good" could ask how its extension is formed by treating it as a binary relation and asking what sub-properties it restricts its members to, as restrictions on the equivalence class of x := the set of things that are good. Taking good as a relation G, we might propose, for instance, that when x is mapped to the set u of things with a positive utility value by the relation G, the statement "x is good" is true.
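A toy sketch of this proposal, in which the utility assignment is entirely hypothetical, treats G as a condition on membership in the positively valued set u rather than as an identity:

```python
# Hypothetical utility assignment, for illustration only.
utility = {"charity": 3, "theft": -5, "walking": 1}

def G(x):
    # "x is good" is true iff G maps x into the set u of positively valued things
    return utility[x] > 0

u = {x for x in utility if G(x)}  # the extension of "good" under this analysis
print(sorted(u))  # ['charity', 'walking']
```

The analysis is informative because it states the condition under which "x is good" obtains, not because it substitutes another name for "good."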

A logical analysis along the lines of the above without a necessary commitment to any one semantic interpretation could avoid being either merely lexicographical or mysteriously phenomenological. 

I'm not sure if it would be satisfying for all conceptual analyses to be only the product of equivalences, but I would claim that there is no satisfying conceptual analysis that is only the product of an identity. Indeed, since this is already an intuition most people have, Moore's reduction of conceptual analysis to an identity statement is what makes the paradox of analysis work. The only problem is that this reduction is wrong. In most cases, we do want an equivalence statement with a conceptual analysis to tell us how and when a concept obtains. (Maybe one could evoke Aristotle's four causes here and say that an informative analysis should not just give us a noun phrase that the concept noun stands in for but should give us one or more of these four constitutive explanations.)

For example, if the concept is "triangle," there might be an identity relation with "three-sided polygon," but this would not be informative-- it would just be a definition, a substitution of a noun phrase for the concept noun, not an analysis-- while adding that "a triangle is any three-sided polygon such that it has three internal angles that sum to 180 degrees" would be informative (e.g., by this analytic definition we could claim that a three-sided polygon in non-Euclidean space might not be a triangle). So, one could say that a conceptual analysis can include identity definitions but cannot be constituted only by identity definitions.

Also, because a satisfying conceptual analysis, such as the analysis of "triangle," can be constituted from a combination of equivalence statements and identity statements alone, we can at least say that there are some paradigm cases of analysis that do not require some additional phenomenal state in order to succeed.

Thoughts on Modal Distance

Duncan Pritchard’s modal luck/risk model in Epistemic Luck (2005) addresses a real problem, which can be illustrated as follows:

Say that Lottie has a lottery ticket that has a 1/50,000,000 chance of winning in an upcoming draw. And say that while walking home in relatively safe conditions, I have a 1/10,000,000 chance of being struck by lightning. I’m convinced that it would be irrational for Lottie to decide to tear up her ticket before the draw, but it would be rational for me to think I won’t be struck by lightning and thus decide to walk home.

However, note that these are cases of prediction, such that what we are concerned with justifying here is not knowledge at all but confidence, i.e. justified likely prediction. Even if Lottie’s prediction that she won’t win turns out to be correct, the sense in which she “knew” the outcome is different from the sense of concurrent knowledge of an already true belief*. So, Pritchard’s model addresses the problem with relying solely on probability to justify confidence, not knowledge, as safe. I’m not sure if we can always rely solely on probability with the safety principle in justifying concurrent true belief— I haven’t seen any instances where this would be a problem, but there may be some.

Note that there could be a problem for Pritchard’s Modal Distance model, as a measure of physical difference between possible worlds, if there were exceptions to apparent closeness in particular instances. For example, in a particular instance the energy required to make a given lottery draw, when the needed ball is at the bottom of the pile, might be greater than the energy required for a loose piece of paper to jam the basement door of a well-maintained garbage chute. I thought that this problem could be solved by putting the Modal Distance together with probability, but the way I initially tried to do this was wrong**. The correct solution is to instead use the Average Modal Distance: the sum of the Modal Distances of all the outcomes in close possible worlds over the number of outcomes, (MD1 + MD2 + … + MDn)/n.
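The Average Modal Distance can be computed directly from the per-outcome distances; a minimal sketch, with made-up illustrative values for a three-outcome event:

```python
def average_modal_distance(distances):
    """(MD1 + MD2 + ... + MDn) / n, taken over all the outcomes
    of the event in close possible worlds."""
    return sum(distances) / len(distances)

# Hypothetical Modal Distances for three outcomes of one event:
avg_md = average_modal_distance([0.05, 0.40, 0.95])
```

Averaging over all close-world outcomes is what smooths away the exceptional single-instance cases described above.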

In any case, we can calculate the Modal Distance between two possible outcomes in an event as follows:


Where: P := conditions needing to obtain for any given outcome A to occur in an event E

Q := conditions needing to obtain for a target outcome B to occur in E

O := the total set of outcomes of E in close possible worlds

p := {x | (x ∈ P) & (x → A)}

q := {y | (y ∈ Q) & (y → B)}

Modal Distance of A from B: MD[A, B] := |p| − |q ∩ p|, i.e. the quantity of p minus the quantity of the intersection of q with p


In the lottery case, if the quantity of p, i.e. the relevant conditions needing to obtain for a non-winning draw A to occur, is 1.0, and the quantity of the intersection of q, i.e. the relevant conditions needing to obtain for the winning draw B to occur, with p is .95, then the Modal Distance of a non-winning draw from a winning draw is .05. In the lightning case, if the quantity of p, i.e. the relevant conditions needing to obtain for a safe walk home to occur, is 1.0, and the quantity of the intersection of q, i.e. the relevant conditions needing to obtain for a deadly lightning strike to occur, with p is .05, then the Modal Distance of a safe walk home from a deadly lightning strike is .95.
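A minimal sketch of the calculation just described, using the illustrative quantities from the two cases (the function name is mine):

```python
def modal_distance(p_quantity, q_intersect_p_quantity):
    """MD[A, B]: the quantity of p minus the quantity of the
    intersection of q with p."""
    return p_quantity - q_intersect_p_quantity

# Lottery: non-winning draw A vs. winning draw B.
md_lottery = modal_distance(1.0, 0.95)    # about .05: modally very close

# Lightning: safe walk home A vs. deadly strike B.
md_lightning = modal_distance(1.0, 0.05)  # about .95: modally distant
```

The two results reproduce the disparity the next paragraph turns on: the lottery outcomes are modally far closer to each other than the walking-home outcomes are.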

Thus, while the probability of being struck by lightning is actually greater than the probability of winning the lottery, the Modal Distance between winning and not winning the lottery is much closer than the Modal Distance between walking home safely and being struck by lightning. This latter disparity seems to explain why it would be irrational to tear up the ticket but rational to walk home.

But how does Modal Distance explain this? When should Modal Distance be appealed to? After all, we would still want to say that buying a lottery ticket with 1/50,000,000 odds is not a safe bet and thus is not a rational purchase, no matter how modally close any ticket is to winning.

I have a couple thoughts on how to answer these questions:

1. One solution that I don't particularly like is to just say that while modal distance can be measured quantitatively and does have an impact on our intuitions about rational decision making, e.g. in the lottery versus the lightning cases, these intuitions are actually emotion-based rather than strictly rational. If that were so, a purely rational subject (e.g. an AI), once fully apprised of all the external circumstances, would not have these intuitions and would base its decisions solely on simple probability over possible worlds.

Take Pritchard's bullet example: say the bullet that misses a soldier named Duncan by several meters is fired by a very accurate sniper who is sitting on a boat that just happens to be rocked by a wave at the moment he pulls the trigger; this boat had been steady all day and the wave at that moment is a fluke with a very low probability. Then, say the bullet that misses Duncan by only a few centimeters is fired by a sniper with a bent scope such that he routinely misses by a few centimeters, and thus his missing by a few centimeters has a very high probability. In this case, even if Duncan were to become aware of these facts, he might still want to say that the bullet that missed him by a few centimeters put him at greater risk than the one that missed him by several meters.

To explain this, Duncan might appeal to a concept of physical danger that we could only translate through something like the modal distance model, not through strict probability. But this physical danger concept might only be employed to capture a subjective emotional response to a situation as one experiences it in the moment, and not a strictly rational assessment. Perhaps we could avoid this conclusion by appealing to the idea of access to possible worlds within a given knowledge frame. Within the knowledge frame of Duncan's experience, he has access to all possible worlds resulting in the two bullets missing him, and it is rational for him to compare the Average Modal Distances of the shots over all those worlds, unrestricted by the facts of the sniper being rocked by a chance wave and the other sniper having a bent scope. This knowledge frame consideration might land us back in internalist territory, though, which I was trying to avoid because I prefer the strictly externalist Robust Anti-luck Epistemology account.  

2. Another solution is to take a closer look at the lottery case in particular and point out some considerations that might make this case exceptional. With a lottery draw where Lottie has already purchased a ticket, one can do a cost-benefit analysis: while there is a very small chance that she will have a huge payoff, now that she already has the ticket, it costs her nothing to keep it and check the result-- or at any rate, it actually costs her less energy to keep the ticket and check the result (in whatever fashion is most convenient to her) than it does to tear the ticket up and throw it away. So tearing up the ticket is irrational because it is a loss to her. The value of the ticket before the draw may be much less than the $2 or whatever she paid for it, but because the payoff is so large, it's probably worth more than a penny-- it might be worth something between a nickel and a dime. Most people don't throw away nickels and dimes. On the other hand, my being able to walk home might be of considerable benefit to me, much more, quantitatively, than the cost of risking the extremely low probability of being struck by lightning.
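The cost-benefit point can be made concrete with a rough expected-value calculation; the jackpot figure below is an assumption of mine, chosen only so that the ticket's value lands between a nickel and a dime:

```python
p_win = 1 / 50_000_000      # Lottie's odds of winning the draw
jackpot = 4_000_000         # assumed payoff in dollars (illustrative)

# Expected value of the already-purchased ticket before the draw.
# The purchase price is a sunk cost, so it doesn't enter the decision.
ticket_value = p_win * jackpot

# Tearing up the ticket forfeits this value (plus the small effort of
# tearing it), so keeping it and checking the result dominates.
```

On these assumed numbers the ticket is worth eight cents, which is exactly the "more than a penny, less than a dime" range most people would not knowingly throw away.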

I like this solution better than the previous one because it remains an externalist account, but I don't like that it deflates the interesting distinction between modal probability and modal (physical) potential through what is basically a game-theoretic analysis***.

_____  

*How to interpret “knowledge” of future events is of course an ancient problem going back to Aristotle’s “sea battle” example, but more recent developments in modal logic appear to have solved this problem.

**Initially I had thought that modal distance could just be put together with probability over possible worlds in a simple way, as a ratio, but I found that this yielded bad results, e.g. it would have said that a given outcome in a lottery ball draw with fifty possible outcomes was modally closer than a given outcome in a coin flip with two possible outcomes, even though the inertia displacement from outcome to outcome is roughly the same for both scenarios.

***Actually, I'm surprised Pritchard doesn't bring up game theory in his account of luck and risk. I wonder how he sees game theory fitting in with his account? I'm not sure that he really wants to rule out probability for modal distance altogether so much as say that the safety principle can't be reduced to a simple probability, i.e. 1/(# of possible outcomes). A game theoretical account could of course fully accommodate Bayesian induction.