increase of depth or involves sentences whose quantificational depth is greater

than one (or, at most, two). This means that this entire discussion has been

conducted by reference only to the very simplest and least informative types of

explanation. (For this discussion, see Salmon 1984 and 1989, as well as Schurz
1995.) No wonder that this discussion has often had the aura of triviality

about it. This is part of what was meant earlier by saying that previous analysts

have used only a rather shallow logic.

However, this does not mean that even simple explanatory interpolation

formulas cannot be suggestive. Once I was looking for an example for my stu-

dents and happened to think of that old chestnut, the “curious incident of the

dog in the night-time” in Conan Doyle. There, the ad explanandum premises
say (i) that there was a trained watchdog in the stable, in brief (∃x)D(x), and
(ii) that no dog barked at the thief, in short (∀x)(D(x) ⊃ ∼B(x,t)). The general

truths about the situation are (i) that the master of all the dogs in the stable was

the stable master, in short:

(∀x)(∀y)((D(x) & M(y,x)) ⊃ s = y) (2.6)

and (ii) that the only person a watchdog does not bark at is its master, in short:

(∀x)(∀y)((D(x) & ∼ B(x,y)) ⊃ M(y,x)). (2.7)

The conclusion is that the stable master was the thief, in other words (symbols)

s = t. The tableau for A ⊢ (T ⊃ E) that I constructed looks somewhat like this

Tableau 5:

Logical Explanations 173

tableau 5

Left column:
(1) (∃x)D(x)
(2) (∀x)(D(x) ⊃ ∼B(x,t))
(4) D(α) from (1)
(5) D(α) ⊃ ∼B(α,t) from (2)
(6) ∼D(α) from (5): Closure
(7) ∼B(α,t) from (5): Bridge to (17) and (19)

Right column:
(3) ((∀x)(∀y)((D(x) & M(y,x)) ⊃ s = y) & (∀x)(∀y)((D(x) & ∼B(x,y)) ⊃ M(y,x))) ⊃ s = t
(8) s = t from (3)
(9) (∃x)(∃y)(D(x) & M(y,x) & s = y) ∨ (∃x)(∃y)(D(x) & ∼B(x,y) & ∼M(y,x)) from (3)
(10) (∃x)(∃y)(D(x) & M(y,x) & s = y) from (9)
(11) (∃x)(∃y)(D(x) & ∼B(x,y) & ∼M(y,x)) from (9)
(12) D(α) & M(t,α) & s = t from (10)
(13) D(α) & ∼B(α,t) & ∼M(t,α) from (11)
(14) D(α) from (12): Bridge
(15) M(t,α) from (12)
(16) s = t from (12): Closure
(17) D(α) from (13): Bridge
(18) ∼B(α,t) from (13): Bridge
(19) ∼M(t,α) from (13): Closure

Here I managed to surprise myself pleasantly. The left-hand interpolation

formula IL (which in this case equals IR) is

(∃x)(D(x) & ∼B(x,t)) (2.8)

In other words, the solution of the problem, the explanation of what happened, lies in the fact that there was a watchdog in the stables that did not bark, which is the very key to the solution Sherlock Holmes offers to the police inspector.
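As a quick illustration (my own addition, not part of the text): the entailment and the interpolation formula (2.8) can be checked mechanically by brute force over all small finite models. The sketch below uses the predicate and constant names D, B, M, s, t from the text; the enumeration strategy itself is only one obvious way to do the check.

```python
from itertools import product

def models(n):
    """Enumerate every interpretation of the unary predicate D, the binary
    predicates B and M, and the constants s, t over the domain {0,...,n-1}."""
    dom = range(n)
    for D in product([False, True], repeat=n):
        for B in product([False, True], repeat=n * n):
            for M in product([False, True], repeat=n * n):
                for s in dom:
                    for t in dom:
                        yield (dom,
                               (lambda x, D=D: D[x]),
                               (lambda x, y, B=B, n=n: B[x * n + y]),
                               (lambda x, y, M=M, n=n: M[x * n + y]),
                               s, t)

def premises_hold(dom, D, B, M, s, t):
    """A & T: the two ad explanandum premises and the two general truths."""
    a1 = any(D(x) for x in dom)                                       # (Ex)D(x)
    a2 = all(not B(x, t) for x in dom if D(x))                        # no dog barked at t
    t1 = all(s == y for x in dom for y in dom if D(x) and M(y, x))    # (2.6)
    t2 = all(M(y, x) for x in dom for y in dom if D(x) and not B(x, y))  # (2.7)
    return a1 and a2 and t1 and t2

# In every model of the premises, the conclusion s = t holds, and so does
# the interpolation formula (2.8): (Ex)(D(x) & ~B(x,t)).
for n in (1, 2):
    for dom, D, B, M, s, t in models(n):
        if premises_hold(dom, D, B, M, s, t):
            assert s == t
            assert any(D(x) and not B(x, t) for x in dom)
print("conclusion and interpolant hold in every small model of the premises")
```

Of course, a finite check over domains of size one and two is no substitute for the tableau proof; it merely confirms that the formalization of the premises, the conclusion, and the interpolant hang together as claimed.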

Another comment is that there are different, conceptually different,

elements in an explanation, even if we restrict ourselves to why-explanations.

Even in the deductive case, we have two interpolation formulas with different

characteristic properties. The left-hand formula IL tells what individuals there

are in models of F and which laws characterizing those models apply to which

unknowns that together make those models also models of G. The right-hand

interpolation formula IR tells what unknowns there are in the models of G and which

individuals in the models of F serve to provide the existence of the individu-

als that together make models of G include the models of F. The explanation

results from putting these two together, in other words, from the validity of the


consequence relation IL ⊢ IR, or perhaps from the obviousness of this relation. This gives us a further insight into the structure of deductive explanation.

When we move to empirical explanation, the explanations so far consid-

ered are given to us through the interpolation formulas for the consequence

relation:

A ⊢ (T ⊃ E) (2.9)

Such explanations tell us what it is about A that necessitates (jointly with the

background theory T) that E. But we can apply our interpolation theorem

instead to

T ⊢ (A ⊃ E) (2.10)

Then we obtain an account of what it is about T that necessitates a configuration

of individuals instantiating A to instantiate also E. This “what it is about T” is

typically what might be called a local law L, which shows what the more general

law T amounts to in the explanatory situation. Producing the interpolation

formula for (2.10) will qualify as an explanation in a perfectly good sense,

although in a sense different from the one in which an interpolation formula

for (2.9) produces an explanation.
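To make the contrast concrete, one can return to the dog example (this illustration, including the candidate local law, is my own sketch, not the text's). Applying interpolation to (2.10) rather than (2.9) there yields as a local law L the formula (∀x)((D(x) & ∼B(x,t)) ⊃ s = t): it is in the shared vocabulary of T and A ⊃ E, it follows from the general theory T alone, and it by itself necessitates A ⊃ E. A brute-force check over small finite models:

```python
from itertools import product

def interpretations(n):
    """All interpretations of D (unary), B and M (binary), and the
    constants s, t over the domain {0, ..., n-1}."""
    dom = range(n)
    for D in product([False, True], repeat=n):
        for B in product([False, True], repeat=n * n):
            for M in product([False, True], repeat=n * n):
                for s in dom:
                    for t in dom:
                        yield (dom,
                               (lambda x, D=D: D[x]),
                               (lambda x, y, B=B, n=n: B[x * n + y]),
                               (lambda x, y, M=M, n=n: M[x * n + y]),
                               s, t)

def theory(dom, D, B, M, s, t):
    """T = (2.6) & (2.7)."""
    return (all(s == y for x in dom for y in dom if D(x) and M(y, x)) and
            all(M(y, x) for x in dom for y in dom if D(x) and not B(x, y)))

def local_law(dom, D, B, M, s, t):
    """Candidate L: (Ax)((D(x) & ~B(x,t)) -> s = t)."""
    return all(s == t for x in dom if D(x) and not B(x, t))

def a_implies_e(dom, D, B, M, s, t):
    """A -> E, with A the two ad explanandum premises and E: s = t."""
    a = any(D(x) for x in dom) and all(not B(x, t) for x in dom if D(x))
    return (not a) or s == t

# T entails L, and L alone entails A -> E: the local law does the
# explanatory work of T in this situation.
for n in (1, 2):
    for m in interpretations(n):
        if theory(*m):
            assert local_law(*m)
        if local_law(*m):
            assert a_implies_e(*m)
print("local law confirmed in every small model")
```

The check only covers domains of size one and two, but it displays the division of labor: L records exactly what the general theory T amounts to in this explanatory situation, without mentioning the auxiliary predicate M at all.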

It could even be thought that the task of explanation involves finding, not
only A, but L. For in some scientific explanations, and in even more everyday

explanations, the ultimate general theory T is not known, or for some other

reason cannot operate as a starting point of an explanation.

This suggests a way of looking at covering-law theories of explanation. They

can be viewed as results of failing to distinguish the two senses of explanation.

In the latter sense, related to (2.10), explaining can indeed amount to the search
for a law that accounts for the connection between A and E. But this law need

not be of the form of general implication, and most importantly it does not

spell out what it is about the initial conditions or boundary conditions that

necessitates the explanandum.

In the former sense, related to (2.9), explaining does not mean looking for a

covering law or any other kind of general law or theory. Rather, an explanation

amounts to seeing what it is about a configuration of individuals that is a model

of A that makes it also a model of E. Such an explanation is, as can be seen

from the analyses presented earlier, in effect a dependence analysis. Causal

explanations can be considered a special case of such dependence explanations.

The fact that successful explanations in this sense produce as a byproduct

covering laws in the form of general implications does not mean that they are

the tools of explanation.

These results help to answer a question that has been discussed by some

philosophers: Are there explanations in mathematics? The first part of the

answer is: Of course, in the same sense as any non-trivial (“theorematic”)

deduction involves an explanation. However, explanations by means of additional information are possible in mathematics only in the second sense, starting


from (2.10) and yielding a “local law.” Explanations of the first sense, starting

from (2.9) and yielding a covering law, are not natural in mathematics. Math-

ematical theorems are not “explained” by reference to initial conditions, or

even boundary conditions.

3. “How Possible” Explanations

What has been found so far shows the importance of the role of logic in explain-

ing why something is or was the case or why something happens or happened.

My purpose in the rest of this chapter is to extend this insight into the role of

logic to a still further kind of explanation. The most prominent type of expla-

nation is explaining why. Indeed, explanations of this type are often identified

with answers to why-questions. Sometimes we also speak of explaining how

something happened. This is the type of explanation so far examined in this

chapter. However, there is yet another kind of task that can be identified

by speaking of explaining. That is the task not of explaining why something

happened, but how it was possible for it to have happened.

Such explanations have been discussed in the philosophical literature of

the last several decades. An interesting example is found, for instance, in von

Wright (1971). But these discussions have not led to any clear-cut analysis of

the notion of “how possible” explanations or to any other simple conclusion.

The first philosopher to discuss “how possible” explanations at some length

seems to have been William Dray (1957). His overall thesis was that historical

explanations do not conform to the covering law model. I will not discuss his

arguments here. As a separate topic, Dray argued in his Chapter VI that there

is a separate class of explanations that do not conform to the covering law

model, either, but that differ from why-explanations. These he identified as