One of the major current developments in cognitive psychology is what is

usually referred to as the “theory of cognitive fallacies,” originated by Amos

Tversky and Daniel Kahneman. The purported repercussions of their theory

extend beyond psychology, however. A flavor of how seriously the fad of

cognitive fallacies has been taken is perhaps conveyed by a quote from Piatelli-

Palmerini (1994, xiii), who predicted “that sooner or later, Amos Tversky and

Daniel Kahneman will win the Nobel Prize for economics.” His prediction was

fulfilled in 2002.

The theory of cognitive fallacies is not merely a matter of bare facts of psy-

chology. The phenomena (certain kinds of spontaneous cognitive judgments)

that are the evidential basis of the theory derive their theoretical interest
mainly from the fact that they are interpreted as representing fallacious (that
is, irrational) judgments on the part of the subject in question. Such an
interpretation presupposes that we can independently establish what it means for

a probability judgment to be rational. In the case of typical cognitive falla-

cies studied in the recent literature, this rationality is supposed to have been

established by our usual probability calculus in its Bayesian use.

The fame of the cognitive fallacies notwithstanding, I will show in this chap-

ter that at least one of them has been misdiagnosed by the theorists of cog-

nitive fallacies. In reality, there need not be anything fallacious or otherwise

irrational about the judgments that are supposed to exhibit this “fallacy.” I will

also comment on how this alleged fallacy throws light on the way probabilis-

tic concepts should and should not be applied to human knowledge-seeking

(truth-seeking) activities.

The so-called fallacy that I will discuss is known as the “conjunctive

fallacy” or the “conjunction effect.” It is best introduced by means of an

example. Here, I follow the formulation of Piatelli-Palmerini (1994, 65–67,
abbreviated).

Socratic Epistemology

Consider the following information that one is supposed to have received:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy.

As a student she was deeply concerned with issues of discrimination and social justice,

and also participated in antinuclear demonstrations.

In the experiments of Tversky and Kahneman, the subjects are asked to rank
a number of statements about Linda's profession according to their probability.
Among them are the following:

(T) Linda is a bank teller

(T & F) Linda is a bank teller and is active in the feminist movement

If the reader finds on the basis of his or her intuitive judgment that (T & F)

is more probable (credible) than (T), that reader is in a large company. In a

typical experiment, between 83 percent and 92 percent of subjects agree with

this ranking. Yet there is something strange going on here. Speaking of such a

probability ranking, Piatelli-Palmerini (1994, 65–66) writes:

That is what almost all of us do, though again this is a pure cognitive illusion. In

fact, the likelihood that two of these characteristics should be simultaneously true

(that there is what scientists call a “conjunction”) is always and necessarily inferior

to the probability of any one of these two characteristics taken alone. If you think

for a moment, you will be obliged to admit that it must . . . be more likely that Linda

is a bank teller and takes part in some movement or other (case T, in fact, specifies

nothing more than that) than it is that Linda is both a bank teller and is an active

feminist.
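The rule Piatelli-Palmerini invokes here is the conjunction law of the probability calculus: every case satisfying (T & F) also satisfies T, so under any single probability measure P(T & F) cannot exceed P(T). A minimal sketch, with all numbers invented purely for illustration:

```python
# Hypothetical toy distribution over the four combinations of
# "Linda is a bank teller" (T) and "Linda is an active feminist" (F).
# The probabilities are invented purely for illustration.
space = {
    (True, True): 0.05,    # teller and feminist
    (True, False): 0.02,   # teller, not feminist
    (False, True): 0.60,   # feminist, not teller
    (False, False): 0.33,  # neither
}

p_T = sum(p for (t, f), p in space.items() if t)
p_T_and_F = sum(p for (t, f), p in space.items() if t and f)

# Every outcome satisfying (T & F) also satisfies T, so the sum defining
# P(T & F) is a sub-sum of the one defining P(T).
assert p_T_and_F <= p_T
print(round(p_T, 2), round(p_T_and_F, 2))  # prints: 0.07 0.05
```

Whatever numbers are plugged in, the inequality holds, as long as a single measure is used throughout.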

Moreover, this alleged fallacy has little to do with the subjects' defective
knowledge of the laws of probability.

What is really surprising is that there is no great difference in the average responses

from the “uninformed” subject (that is, one who has no real notion of the laws of

probability) and those of statistical experts. (Piatelli-Palmerini, 1994, 66.)

Such evidence has persuaded many psychologists that there is an inevitable

“built-in” tendency in the human mind to commit the conjunction fallacy.

I believe that this way of looking at the experimental results of Tversky and

Kahneman (among others) is misguided. There need not be anything fallacious

about the conjunction effect. At the same time, I do not see any reason here

to doubt the usual laws of probability calculus, rightly understood.

The first main idea of my argument to this effect is exceedingly simple. It

is admittedly true that the probability simpliciter of a conjunction cannot be

higher than that of one of its conjuncts. But in the conjunction phenomenon,

we are not dealing with prior probabilities; we are dealing with probabilities

on evidence (conditional probabilities). Then the probability P1((T & F)/E1)
of (T & F) on certain evidence E1 is unproblematically smaller than (or at
most equal to) the probability P2(T/E2) of T on certain evidence E2 only if


the prior probability distributions P1 and P2 are the same and if the relevant

evidence E1 or E2 is the same in both cases. Both of these assumptions are

obviously made by the likes of Piatelli-Palmerini, who believe that subjects'
probability estimates are irrational.
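The two conditions can be made concrete. Under one and the same measure, with the same evidence, the conjunction rule is inviolable; but nothing in the calculus prevents P1((T & F)/E1) from exceeding P2(T/E2) when the measures differ. A small sketch, with invented numbers:

```python
# Two different prior measures over the four combinations of
# (teller?, feminist?); all numbers are invented purely for illustration.
P1 = {(True, True): 0.30, (True, False): 0.10,
      (False, True): 0.40, (False, False): 0.20}
P2 = {(True, True): 0.05, (True, False): 0.05,
      (False, True): 0.45, (False, False): 0.45}

def prob(P, event):
    """Probability of an event (a predicate on outcomes) under measure P."""
    return sum(p for w, p in P.items() if event(w))

def cond(P, event, evidence):
    """Conditional probability P(event | evidence) under measure P."""
    return prob(P, lambda w: event(w) and evidence(w)) / prob(P, evidence)

everything = lambda w: True        # the shared evidence, trivial here
T = lambda w: w[0]                 # "Linda is a bank teller"
T_and_F = lambda w: w[0] and w[1]  # "... and an active feminist"

# Under a single measure, the conjunction rule is inviolable:
assert cond(P1, T_and_F, everything) <= cond(P1, T, everything)
# But across different measures, the comparison is unconstrained:
assert cond(P1, T_and_F, everything) > cond(P2, T, everything)  # 0.30 > 0.10
```

The conjunction law constrains comparisons only within one probability measure relative to one body of evidence; it says nothing about comparisons across two of them.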

But neither of the two assumptions can always be made in the critical situ-

ations. First, the total available evidence is not completely the same in the two

cases. Everything else is, of course, identical evidence-wise in the two cases,

except for one thing. In the situations in which the fallacy arises, the two items

of information T and (T & F) are thought of as being conveyed to the subject

by two different informants. Moreover, the two differ in that the information

they convey is different.

But this seems to be a distinction without a difference. For the main
distinguishing factor is merely that a different person is relaying the information to

the probability appraiser. This should not make any difference, it seems. Nei-

ther of the two messages T and (T & F) brings in new facts into the background

evidence with respect to which probabilities are being estimated. For instance,

no causal connection is known or presumed to obtain between, on the one

hand, the fact that T or that (T & F) and, on the other hand, the fact that

these propositions are conveyed to the appraiser by this or that person.
Moreover, the appraiser has by definition no independent knowledge about the two

sources of information. The two informants might as well be two different

databases. Hence their being different does not seem to make any difference

to the evidence with respect to which the Bayesian conditionalization is being

carried out. And in a very real sense, I can grant that E1 = E2 .

There seems to be no doubt that the irrelevance of the source for the objec-

tive evidence has greatly encouraged the idea that the conjunction effect is

somehow fallacious.

However, this is not the end of the story. What are often forgotten are the

presuppositions of strict Bayesian inference, in particular the role of “prior”

probabilities. Here we are led to examine some of the most fundamental ques-

tions of probability evaluation.

How are the degrees of credibility supposed to be changed by new

information? The simplest approach in fact forms the historical background

of the theory of cognitive fallacies. Even though the term is not completely

accurate historically, I will call it the “Bayesian approach.” According to it, an

inquiry starts from an assignment of prior probabilities to the propositions of

the language used in the inquiry. The influence of new evidence is taken care

of by conditionalization. The probability of an event H on the evidence E is

simply the conditional probability P(H/E). When new evidence E* is added,

the resulting probability (degree of credibility) will be P(H/E & E*).
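The update rule itself can be illustrated on a finite model. The following sketch (an invented four-outcome model with a uniform prior, purely for illustration) computes P(H/E) by conditionalization and checks that updating on E and then on E* agrees with conditioning on E & E* in one step:

```python
from fractions import Fraction as F

# A hypothetical four-outcome model with a uniform prior (illustrative only).
prior = {'w1': F(1, 4), 'w2': F(1, 4), 'w3': F(1, 4), 'w4': F(1, 4)}
H = {'w1', 'w2'}         # hypothesis
E = {'w1', 'w2', 'w3'}   # first evidence
E2 = {'w2', 'w3', 'w4'}  # further evidence E*

def P(event, given=None):
    """P(event | given), computed from the fixed prior by conditionalization."""
    given = set(prior) if given is None else given
    num = sum(p for w, p in prior.items() if w in event and w in given)
    den = sum(p for w, p in prior.items() if w in given)
    return num / den

def conditioned(measure, given):
    """The new measure obtained by conditionalizing `measure` on `given`."""
    total = sum(p for w, p in measure.items() if w in given)
    return {w: (p / total if w in given else F(0)) for w, p in measure.items()}

# Sequential updating on E and then E* equals one-step conditioning on E & E*.
posterior = conditioned(conditioned(prior, E), E2)
assert sum(p for w, p in posterior.items() if w in H) == P(H, given=E & E2) == F(1, 2)
```

Note that everything here is driven by the fixed prior: once the prior measure is chosen, all later degrees of credibility are determined by it, which is precisely the presupposition examined below.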

This is a simple and attractive model. It has serious problems, however. For

philosophical purposes, it cannot be considered a presuppositionless general

model. For a closer analysis quickly shows that the choice of the priors amounts

to non-trivial assumptions concerning the world. These assumptions cannot be


known to be correct a priori, and therefore we may in principle be forced to

change them in the teeth of the evidence.

This point is not new, and it should be obvious. Indeed, L.J. Savage (1962)

defended Bayesian methods precisely because the prior probability distribu-

tion is (according to him) a handy way of codifying background information

relevant to some given decision problem. Along different lines, the same point

is testified to by Carnap's attempt to build a purely logically based prior
probability distribution. This probability distribution is restrained by strong
symmetry assumptions. However, it turns out that even Carnapian assumptions

leave open a parameter λ, which can be thought of as an index of the order

in the Carnapian universe in question. This is a substantial assumption that

can be verified only by one's knowledge of the whole universe. Accordingly,

any estimate of λ may always in principle be subject to change in the light of

future experience. And the same result will in principle hold in other cases, too.
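In the standard textbook formulation of Carnap's λ-continuum, the probability that the next individual falls into a cell already exemplified s times among n observed individuals, with k cells in the classification, is (s + λ/k)/(n + λ); λ thus weighs the a priori value 1/k against the observed relative frequency. A brief sketch:

```python
from fractions import Fraction as F

def carnap(s, n, k, lam):
    """Carnap's lambda-continuum: probability that the next individual falls
    into a cell exemplified s times among n observations, with k cells."""
    return (F(s) + F(lam, k)) / (n + lam)

# With k = 2 cells and 8 of 10 observations in the cell:
assert carnap(8, 10, 2, 0) == F(4, 5)  # lam = 0: the straight rule s/n
assert carnap(8, 10, 2, 2) == F(3, 4)  # lam = k: Laplace's rule (s+1)/(n+2)
# Large lam keeps the estimate near the a priori value 1/k, whatever the data:
assert abs(carnap(8, 10, 2, 10**6) - F(1, 2)) < F(1, 1000)
```

Setting λ = 0 trusts experience completely, while a very large λ barely learns from it at all; this is why fixing λ amounts to a substantive assumption about the orderliness of the universe rather than a purely logical matter.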

We cannot exclude the possibility that experience may force us to change our

“prior” probability distribution. Of course, being “prior” then does not any

longer mean antecedent to the entire cognitive enterprise, but only antecedent