rather not snack.

I think many people will react to the example by admitting that the sort of thing Slote describes is quite familiar but doubting that it sheds much light on the theory of rationality. Or if it does point to some fact about rationality, it points to the inadequacy of the notion of utility that is being put to use. Bentham might think it obvious that rational agents are driven by the prospect of pleasurable experience, but not many philosophers today find such a crass hedonism attractive. Rationality doesn't require us to maximize enjoyment, not because rationality imposes no maximization constraints but because rationality doesn't tell us to pursue enjoyment at all. Nor, for that matter, does it require us to pursue our own good. Most of us accept that much of Hume's remarks about what 'tis not contrary to reason.

This difficulty infects many of the initially plausible examples of satisficing, I believe. Some independently conceived quantity is suggested or stipulated, and then we are invited to react to the example by agreeing that the agent is not rationally required to maximize that quantity but could rationally satisfice it instead. But the examples are spoiled by the fact that the quantity in question is just not a plausible candidate for the decision theorist's utility. Certainly, a person could rationally fail to maximize it; for that matter, one could quite rationally ignore it altogether, though it might be unusual or even bizarre not to care about it at all. Shortly I will present what I'll call True Decision Theory. According to True Decision Theory, there is no independent conception of utility. A person's utility function is a construct, developed by the decision theorist as a convenience. If that is correct, then Crude Decision Theory is hopeless. The quantity (the expectation of which) it tells us to maximize has no independent existence; it is meaningless in advance of the theoretic construction of decision theory's central theorem. And, as I will explain, if utility is an artifact of that theorem, then the advice of Crude Decision Theory is empty.

True Decision Theory

Standard decision theory starts by assuming that a person's preferences are characterized by an ordering that ranks things from more to less preferred. The "things" in question are variously events, propositions, or states of affairs, depending on the formalization; I will take them to be propositions.7 There need be no most preferred proposition, no summum bonum, nor any least preferred, though there may be. Suppose

Why Ethical Satisficing Makes Sense 137

your preferences do work this way. For any pair of propositions, there is a definite matter of fact about which you prefer (to be the case); one way it could be definite is that you are indifferent between a certain pair of propositions. Then decision theory says that if your preference relation satisfies a certain set of axioms, there will be an expectational utility function that represents your preferences.

There are three technicalities to fill in. First, what axioms? Second, what is meant by "expectational"? Third, what is it for a utility function to "represent" someone's preferences? I will explain these in reverse order. A utility function, which is just a function from propositions to real numbers, represents a preference ordering just in case it assigns larger numbers to the more preferred propositions. Formally, a function u represents a preference relation R iff:

(∀p)(∀q)[pRq ↔ (u(p) > u(q))]

When you are indifferent between a pair of alternatives, your preference is represented by a utility function that assigns the same number to those alternatives.
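
For a finite stock of propositions, the representation condition can be checked mechanically. Here is a minimal sketch in Python; the propositions, the toy preference relation, and all names are invented for illustration, not drawn from the text:

```python
def represents(u, strictly_prefers, propositions):
    """u represents the strict preference relation iff it assigns a larger
    number to p than to q exactly when p is strictly preferred to q."""
    return all(
        strictly_prefers(p, q) == (u[p] > u[q])
        for p in propositions
        for q in propositions
    )

# Toy preferences: a is strictly preferred to b and to c;
# the agent is indifferent between b and c.
strict = {("a", "b"), ("a", "c")}
prefers = lambda p, q: (p, q) in strict

u = {"a": 2.0, "b": 1.0, "c": 1.0}  # equal numbers encode indifference
print(represents(u, prefers, ["a", "b", "c"]))  # True
```

Any order-preserving relabeling of the numbers (3, 0, 0, say) would represent the same preferences equally well, which already hints at why utility, so construed, has no independent existence.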

A utility function is expectational just in case it assigns to each prospect a utility equal to that prospect's expected utility. Previously I gave the formula for the expected utility of a strategy. Here is that formula:

eu(S) = u(A1 & S)pr(A1 | S) + u(A2 & S)pr(A2 | S) + . . . + u(An & S)pr(An | S)

So long as the Ai partition logical space, this formula gives the expected utility of S no matter what sort of proposition S is. It need not be a strategy. For instance, S could be the prospect that the Red Sox win the American League East division race, A1 could be the proposition that if the Sox win the AL East they go on to lose in the first round of the playoffs, A2 the proposition that if the Sox win the AL East they lose in the ALCS, A3 the proposition that if the Sox win the AL East they lose in the World Series, and A4 the proposition that if the Sox win the AL East they win the World Series.8 Suppose that a certain Red Sox fan, Cole, takes the probability of A1 to be .5, the probability of A2 to be .3, and the probabilities of A3 and A4 to be .1.9 So the expected utility of S will be

.5u(A1 & S) + .3u(A2 & S) + .1u(A3 & S) + .1u(A4 & S),

that is, the expected utility of the proposition that the Red Sox win the division will be .5u(Red Sox lose in the first round) + .3u(Red Sox lose in the ALCS) + .1u(Red Sox lose in the World Series) + .1u(Red Sox win the World Series). Then for an expectational utility function u to represent

James Dreier 138

Cole's preferences, the number that u assigns to the proposition that the Red Sox win the division must be the same as that expected utility, the weighted sum of the numbers u assigns to the Ai propositions.
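
Cole's arithmetic can be reproduced directly. In the sketch below the probabilities are the ones stipulated for Cole; the utility numbers attached to the four outcomes are invented placeholders, there only to make the weighted sum concrete:

```python
def expected_utility(cells):
    """Expected utility of a prospect S from its cells: pairs of
    u(Ai & S) and pr(Ai | S), where the Ai partition logical space."""
    total_pr = sum(pr for _, pr in cells)
    assert abs(total_pr - 1.0) < 1e-9, "the Ai must partition logical space"
    return sum(u * pr for u, pr in cells)

# (utility, probability) for each way the season could go, given a division win.
# Utilities are made up; probabilities are Cole's: .5, .3, .1, .1.
sox_win_division = [
    (10.0, 0.5),   # lose in the first round
    (20.0, 0.3),   # lose in the ALCS
    (40.0, 0.1),   # lose in the World Series
    (100.0, 0.1),  # win the World Series
]
print(expected_utility(sox_win_division))  # .5*10 + .3*20 + .1*40 + .1*100 = 25.0
```

For u to be expectational, 25.0 is then the number u must assign to the proposition that the Red Sox win the division.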

I will not explain the axioms in any detail.10 The interesting ones involve probabilities and preference, and they are generally thought of as constraints of coherence. For example, a constraint that Leonard Savage called the "sure-thing principle" says (roughly speaking) that if one strategy yields outcomes preferred to the outcomes yielded by a second strategy no matter how the world turns out, then one must prefer the first strategy to the second. The crucial fact is that if a person's preferences do conform to the axioms, then there is an expectational utility function that represents those preferences. The Representation Theorem shows how to construct such a function, given a collection of preferences that satisfy the axioms, and it also shows that all expectational functions that represent the given collection will be simple transformations of one another.11 What if a person's preferences do not satisfy the axioms? Then no expectational utility function represents that person's preferences.
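
The negative half of this fact can be seen in miniature. When preferences violate the coherence constraints, say by running in a cycle, no assignment of numbers whatever represents them. A small brute-force check (toy propositions and names of my own devising; indifference is ignored for simplicity):

```python
from itertools import permutations

def representable(strict, propositions):
    """Is there ANY utility assignment that represents this strict
    preference relation? For finitely many propositions with no ties,
    trying every ranking of the propositions suffices."""
    for ranking in permutations(propositions):
        u = {p: i for i, p in enumerate(ranking)}  # later in ranking = higher utility
        if all(((p, q) in strict) == (u[p] > u[q])
               for p in propositions for q in propositions):
            return True
    return False

props = ["a", "b", "c"]
ordered = {("a", "b"), ("b", "c"), ("a", "c")}  # a over b over c: coherent
cyclic = {("a", "b"), ("b", "c"), ("c", "a")}   # a cycle: violates the axioms

print(representable(ordered, props))  # True
print(representable(cyclic, props))   # False
```

With the cycle, whichever proposition received the largest number would have to be preferred to both of the others, and no member of a cycle is; so there is simply no function of the required kind.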

Now I can explain True Decision Theory. Take our Red Sox fan, Cole. Cole can tell us which outcomes of the baseball season he prefers to which others. If we had the time, patience, and inclination, and if Cole would stand for it, we could find out his complete preference ordering for outcomes and also get him to give us probability estimates for each.12 If these preferences conform to the axioms, then we can make up an expectational utility function for Cole. In principle, we could then go on to extend the utility function so that it covers not just baseball but all propositions, again assuming that his preferences in general conform to the decision theoretic axioms. Pretend that we have done this. Now it will turn out that in each and every decision Cole faces, he always prefers the option with the higher expected utility. True Decision Theory does not advise Cole to choose the option with the higher expected utility. Rather, it constructs a utility function for him that will always assign a higher utility to the option he prefers. And because the function is expectational, the utility it assigns to each option will also be the expected utility of that option.

According to True Decision Theory, there is simply no question of a person's making some kind of mistake in calculation and choosing the option with the lower expected utility, much less any possibility of a person's deliberately adopting some other strategy than expected utility maximization. True Decision Theory allows for no such possibility. Either your preferences satisfy the decision theoretic axioms, or they do not. If they do, then expected utility maximization takes care of itself. Whenever it might look to some Crude theorist as if a person chose some option with a lower expected utility than an alternative, that would just show that the Crude theorist had not properly identified the utility function that emerges from the construction given in the Representation Theorem. If, on the other hand, someone's preferences do not conform to the axioms, then the construction doesn't work and we have no utility function for that person to maximize.

David Schmidtz's friends were not True Decision Theorists. They were Crude Decision Theorists. As an alternative to Crude Decision Theory, satisficing looks considerably more plausible, at least in many circumstances. But if that's all that satisficing has going for it, then it doesn't have much, because Crude Decision Theory is organized around a mistake. Satisficing, I suspect, inherits that mistake. The mistake, as I have said, is to think that there is some independently identifiable quantity called "utility" that a person could identify and seek to maximize. I will now argue that this is indeed a mistake.

True Decision Theory Corrects a Mistake

Suppose we ask Cole which he likes more: the prospect of the Red Sox going into game seven of the World Series with Pedro Martinez on the mound, or the prospect of the Red Sox being ahead three games to two but with Martinez unavailable for the remainder of the Series. With some reflection, Cole could presumably say which he likes more. We are just