by waiting a little longer must not be terribly significant to her. We could put it this way: Hannah's strategy is reasonable for her because there is a kind of tradeoff between money and time. Getting the deal done soon is better, and getting more money is better, and the longer she waits the more money she is likely to get for the house. The threshold she sets and the strategy of setting a threshold are reasonable if she picks an amount that trades time against money in the most reasonable way, given what she wants and what she knows.

Decision theory, in its classic form, is often understood to give decision makers formal and explicit advice about how to make tradeoffs like Hannah's. It is often understood to combine three factors in providing Hannah with a strategy for selling: First, it tells her to assign utility numbers to waiting times, presumably lower numbers to longer periods. Second, she is to assign numbers from the same scale to dollar values for the house, with higher numbers to greater dollar amounts in proportion

Why Ethical Satisficing Makes Sense 133

to how much those dollar amounts matter to her. And third, she is to assign probabilities to prospects of receiving offers of various amounts given that she wait various periods of time. These three collections of numbers she can then use to calculate the expected utility of various strategies, and she will be rational if she picks the strategy with the highest expected utility. If this version of decision theory is correct, then Hannah's satisficing strategy of setting a threshold of $200,000 and accepting the first offer over the threshold has a higher expected utility than any available alternative strategy.

By definition, the expected utility of a strategy, S, is given by the formula

eu(S) = u(A1 & S) pr(A1 | S) + u(A2 & S) pr(A2 | S) + . . . + u(An & S) pr(An | S)

where the Ai form a partition (so that the conjunctions of the Ai with S partition the space of outcomes of the strategy), and the function pr(X | Y) assigns conditional probabilities: the probability of X given Y. Intuitively, the idea is that we can figure out how good a strategy is by taking its various possible outcomes, weighting their goodness by their probabilities, and summing the products. If some possible outcome is extremely good, it will contribute a lot to the goodness of the strategy if it is likely, but it might contribute little to the goodness of the strategy if it is unlikely.
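The formula is just a probability-weighted sum, and it can be sketched in a few lines of code. The numbers below are invented for illustration and are not drawn from the text:

```python
def expected_utility(outcomes):
    """Expected utility of a strategy, given (utility, probability)
    pairs for a partition of its possible outcomes."""
    total_prob = sum(p for _, p in outcomes)
    assert abs(total_prob - 1.0) < 1e-9, "partition probabilities must sum to 1"
    return sum(u * p for u, p in outcomes)

# Hypothetical numbers for Hannah's threshold strategy: an offer just
# over $200,000 is likely to come soon; much higher offers are rare.
threshold_strategy = [
    (100, 0.7),  # quick sale near $200,000
    (110, 0.2),  # somewhat higher offer, a bit later
    (130, 0.1),  # much higher offer, after a long wait
]
print(expected_utility(threshold_strategy))  # approximately 105
```

Here the rare high-utility outcome contributes only 13 of the roughly 105 units, which is the point of the weighting: a very good outcome counts for little when it is unlikely.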

Decision theory thus explains in a precise way the rough conditions we sketched. By setting her threshold at $200,000, Hannah reduces her chance of getting a much higher offer. If she would be very likely to receive a much higher offer were she to wait a little longer, and if waiting longer does not reduce her utility, and if getting a lot more money would increase her utility by a great deal, then the expected utility of setting a higher threshold (or some other type of strategy) would be greater than that of her actual strategy. But if she cares little for the extra money she might get over $200,000 (so that the utility of these forgone prospects is not great), and if it matters a lot to her to get the house sold quickly (so that the utility of prospects of larger dollar amounts received after a longer wait would be relatively small), and if the chance of getting much larger offers would not be large (so that the multipliers of the better prospects would be small multipliers), then her chosen strategy is reasonable.
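To make the comparison concrete, here is a small sketch in which the three conditions just listed hold, with utilities and probabilities invented for the purpose rather than taken from the text. The low threshold comes out ahead:

```python
def expected_utility(outcomes):
    # outcomes: (utility, probability) pairs over a partition of results
    return sum(u * p for u, p in outcomes)

# Invented numbers: waiting is costly to Hannah, and money above
# $200,000 adds little utility, so every high-threshold outcome
# carries a heavy delay discount.
low_threshold = [   # accept the first offer over $200,000
    (100, 0.9),     # quick sale near $200,000
    (104, 0.1),     # slightly higher offer, slightly later
]
high_threshold = [  # hold out for an offer over $230,000
    (70, 0.8),      # long wait, then a sale near $230,000
    (74, 0.2),      # long wait, somewhat higher offer
]

print(expected_utility(low_threshold) > expected_utility(high_threshold))  # True
```

Flip the assumptions (make waiting nearly costless and extra dollars very valuable) and the ranking reverses, which is exactly the dependence on her cares and circumstances that the paragraph above describes.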

Now, the advice that decision theory gives to Hannah may seem onerous to follow. What if Hannah doesn't have any clear idea of the probabilities associated with the various outcomes given the various strategies she might adopt? Even coming up with plausible estimates might be a dreary prospect. And how exactly is she supposed to go about assigning all those utility numbers? By definition, your utility for X is greater than

James Dreier

134

your utility for Y just in case you prefer X to Y. But having assigned, say, an arbitrary utility of 100 to the prospect of selling her house for $200,000 in one week, how does Hannah know whether to assign 110, 195, or 1,000 to the prospect of selling for $205,000 in five days? David Schmidtz writes:

Suppose I need to decide whether to go off to fight for a cause in which I deeply believe or to stay home with a family that needs me and that I deeply love. What should I do? My friends say I should determine the possible outcomes of the two proposed courses of action, assign probabilities and numerical utilities to each possibility, multiply through, and then choose whichever alternative has the highest number.

My friends are wrong. Their proposal would be plausible in games of chance where information on probabilities and monetarily denominated utilities is readily available. In the present case, however, I can only guess at the possible outcomes of either course of action. Nor do I know their probabilities. Nor do I know how to gauge their utilities. The strategy of maximizing expected utility is out of the question, for employing it requires information that I do not have.3

Schmidtz's friends go on to suggest instead that he make the best estimates he can of the relevant parameters and use those, but Schmidtz notes that he has no reason to trust a formula full of estimates.

Schmidtz's friends are devotees of what I will call Crude Decision Theory. Crude Decision Theory can indeed be useful in special circumstances, like deciding whether to accept the doubling cube in backgammon or how much to direct your multinational corporation to spend on research. Schmidtz is right, though, to say that as a guide to ordinary decisions it is useless (or worse). The most compelling objection is that we do not, ordinarily, know how to assign utility numbers to outcomes.4 In the special cases of games or boards of directors, there are (or anyway may be) good reasons to let numbers of points or dollar profits stand in for utilities, but in most of our lives we have no such proxies. The problem is not merely that we find it hard to be precise about our own utility functions, as though there were some quality we know well but find hard to quantify. The problem is worse than that: 'utility' is here a bit of technical jargon, so that without some theoretic explanation (beyond the usual paraphrase, "strength of preference") we don't even have any clear idea of what quality it is that we are supposed to quantify.

What Is Utility?

Or do we? There are some candidates, at one time or another and by some philosophers or other thought to be plausible candidates, for qualities whose quantities should stand in for utilities. Even the tradition of decision theory has seen some (confused, I think) candidates for qualities that "measure" utility. Michael Slote writes, "Even those opposed to consequentialism and utilitarianism as moral theories have tended to think that (extra-moral individualistic) rationality requires an individual to maximize his satisfactions or do what is best for himself. . . ."5

If satisfaction is an experience, if it is, say, enjoyment of something you like or want, or if we have or are given some independent conception of what is best for a person, then at least the more severe problem would be solved. We would still need some explanation of how to quantify enjoyment or the good for a person, but at least we would have some definite account of what quantities we are supposed to be plugging into the expected utility formula. Crude Decision Theory might be filled in by a story about this quantity, and eventually by a further story about how to determine the utility for any given prospect (and person). In that case, Crude Decision Theory would be a competitor to a serious satisficing theory of rational choice: The one would tell us to maximize the expectation of the quantity; the other would say that we should (at least sometimes) be choosing so as to get enough of that quantity, and that we have satisfied the strictures of rationality when we have reached that threshold, even if we could have gotten more.

Slote gives an example in which it is fairly plausible that satisficing (understood as a competitor to maximizing expectation in the Crude version) is rational: the example of the Snacker.

Imagine that it is mid-afternoon; you had a good lunch, and you are not now hungry; neither, on the other hand, are you sated. You would enjoy a candy bar or Coca Cola, if you had one, and there is in fact, right next to your desk, a refrigerator packed with such snacks and provided gratis by the company for which you work. Realizing all this, do you, then, necessarily take and consume a snack? If you do not, is that necessarily because you are afraid to spoil your dinner, because you are on a diet or because you are too busy? I think not. You may simply not feel the need for any such snack. You turn down a good thing, a sure satisfaction, because you are perfectly satisfied as you are. Most of us are often in situations of this sort, and many of us would often do the same thing. We are not boundless optimizers or maximizers, but are sometimes (more) modest in our desires and needs.6

If you are perfectly satisfied as you are, then clearly you could not increase your satisfaction by slurping a soda. But it is clear enough what Slote means us to imagine. We are supposed to imagine that we are content, even though we know that we would enjoy a snack. The felt experience of enjoyment would straightforwardly add to whatever satisfaction we feel
