you extract all the covariation from the other disturbances before you arrive at policy-invariant disturbances. There is no theory in the Cowles Commission approach for how you do this extraction. You have to take a stand on these issues if you are going to really use the model. This is the reason for the added structure in the VAR literature. In most applications, I think that it is the right way to go.

The second approach is to make restrictions on the long-run response matrices, but again to assume that the shocks are orthogonal. Restrictions on long-run response matrices are probably not as widespread because when they lead to overidentification, they can result in unwieldy computational problems. In contrast, you can handle overidentification in restrictions on the contemporaneous covariance matrices with much less computational difficulty.
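As a minimal, hypothetical sketch of the first approach (not a procedure from the interview): under a recursive ordering of the variables, restrictions on the contemporaneous impact matrix reduce identification to a Cholesky factorization of the reduced-form residual covariance, assuming orthogonal, unit-variance structural shocks. The covariance numbers below are purely illustrative.

```python
import numpy as np

def impact_matrix(sigma_u):
    """Recursive (Cholesky) identification: given the reduced-form
    residual covariance matrix sigma_u, return the lower-triangular
    impact matrix A0 with A0 @ A0.T == sigma_u, so that the implied
    structural shocks e_t = inv(A0) @ u_t are orthogonal with unit
    variance."""
    return np.linalg.cholesky(sigma_u)

# Illustrative 2-variable reduced-form residual covariance matrix.
sigma_u = np.array([[1.0, 0.5],
                    [0.5, 2.0]])
A0 = impact_matrix(sigma_u)

# A0 is lower triangular (the recursive restriction) and reproduces
# the reduced-form covariance exactly.
assert np.allclose(A0 @ A0.T, sigma_u)
assert np.allclose(A0, np.tril(A0))
```

The exact identification here is what makes the computation trivial; the overidentified case the interview mentions requires constrained estimation instead of a single factorization.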

There is another informal aspect to identification. Researchers will make some explicit restrictions and then look at the plausibility of the results. For instance, specifications in which the responses to what are purported to be monetary policy shocks are clearly ridiculous tend not to be reported. This informal aspect has bothered some people, including

An Interview with Christopher A. Sims 221

Uhlig (2001), Faust (1998), and others. They have explored what happens if you make these prior plausibility restrictions formal. With modern computational methods, this approach can be feasible. The result of these exercises is that the empirical findings are very robust. Faust doesn't explain his results that way, but my reading of his paper is essentially a finding of robustness.

In this VAR literature, you see a phenomenon that is not treated in econometrics texts. We almost always really have fewer reliable identifying restrictions than we need to identify the full set of parameters. We are always experimenting with a variety of identification schemes, all of which are hard to reject. We evaluate this identification partly on the basis of how well the resulting econometric model fits the data and partly on the basis of how much sense the identification makes.

Hansen: What do you see as being the important empirical insights that emerged from the VAR literature?

Sims: I think the most important ones have been the ones about sorting out the endogeneity of monetary policy that I've already talked about a little bit. I think that literature has had a really major impact on the way people think about monetary policies. The basic dynamics of the estimates from the VARs, showing that the effects of monetary policy shocks on output and prices are quite smooth and slow, are widely accepted now, even among policymakers. This pattern holds up under many different variations of a VAR specification.

Hansen: You have had a longstanding interest in Bayesian statistics and econometrics. Your research in Bayesian econometrics has targeted situations in which Bayesian and classical perspectives can lead to important differences in practice, as in Sims and Uhlig (1991). A leading example of this is research on unit roots. Is this a fair assessment, and are there other important examples?

Sims: Early on in my career, I didn't see that the difference between Bayesian and classical thinking was very important. So I didn't get involved in defending Bayesian viewpoints or get into arguments, because I thought that was irrelevant. Then, I noticed that it really made a difference in the unit-root literature. The construction of the likelihood function for an autoregression conditioned on the initial values of the time series proceeds in the same way whether or not nonstationarity is present. So, the form of inference implied by the likelihood principle should be the same for stationary and nonstationary cases. Classical distribution theory seems to imply that we must use very different procedures when we have an autoregression that may include a unit root.

Figure 10.5 Chris and Cathie Sims, August 2000.
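The point about the likelihood can be made concrete with a small illustrative sketch (my own, not from the interview): the Gaussian log-likelihood of an AR(1), conditional on the first observation, is the same formula whether the coefficient implies stationarity or a unit root.

```python
import numpy as np

def ar1_conditional_loglik(y, rho, sigma):
    """Gaussian log-likelihood of an AR(1), conditional on y[0].
    The identical formula applies whether rho implies stationarity
    (|rho| < 1) or a unit root (rho == 1): conditional on y[t-1],
    y[t] is Normal(rho * y[t-1], sigma**2) either way."""
    resid = y[1:] - rho * y[:-1]
    n = len(resid)
    return (-0.5 * n * np.log(2 * np.pi * sigma**2)
            - 0.5 * np.sum(resid**2) / sigma**2)

rng = np.random.default_rng(0)
e = rng.standard_normal(200)
y_unit = np.cumsum(e)  # a random walk: the true rho is 1

# The likelihood is a smooth function of rho around 1; nothing
# special happens at the unit root when evaluating it.
ll_at_1 = ar1_conditional_loglik(y_unit, 1.0, 1.0)
ll_at_0 = ar1_conditional_loglik(y_unit, 0.0, 1.0)
```

The classical asymmetry Sims describes enters only through the sampling distribution of estimators across repeated samples, not through the likelihood function itself.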

The Bayesian perspective implies that any special character of inference in the presence of possible nonstationarity should arise from differing implications (in stationary and nonstationary cases) of conditioning on initial conditions and from the related fact that "flat" priors can imply bizarre beliefs about the behavior of observables. So when such differences arise in the way you handle models that are dynamic and might have a unit root, they should come from the imposition of a reasonable prior for use in scientific reporting, and that's a very different problem formally and intuitively from the unit root classical distribution theory.

Another example of when it makes a lot of difference whether you take a Bayesian or classical perspective is in testing for break points. When you are testing for one break point, both Bayesian and some non-Bayesian approaches will trace out the likelihood as a function of the break point (though non-Bayesians are more likely to trace out the maximized, and Bayesians the integrated, likelihood). The Bayesian, or likelihood-principle, approach would tell you that in a change-point problem, the precision of your knowledge about the change point, given the sample, is determined by the shape of the likelihood you confront in the sample. Classical approaches can lose track of this point, by thinking about the distribution of the likelihood function over all possible samples, rather than focusing on the likelihood function that's in front of you.
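Tracing out the likelihood over candidate break points can be sketched as follows. This is an illustrative toy example of my own (a single mean shift in Gaussian data, with the segment means profiled out); the data and the mean-shift model are assumptions, not anything from the interview.

```python
import numpy as np

def breakpoint_loglik(y, k, sigma=1.0):
    """Gaussian log-likelihood of a one-break mean-shift model:
    mean mu1 for y[:k] and mu2 for y[k:], with mu1 and mu2
    profiled out at the segment sample means."""
    seg1, seg2 = y[:k], y[k:]
    rss = np.sum((seg1 - seg1.mean()) ** 2) + np.sum((seg2 - seg2.mean()) ** 2)
    n = len(y)
    return -0.5 * n * np.log(2 * np.pi * sigma**2) - 0.5 * rss / sigma**2

# Simulated data with a true break after observation 60.
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(2.0, 1.0, 60)])

ks = np.arange(5, len(y) - 5)
ll = np.array([breakpoint_loglik(y, k) for k in ks])

# Non-Bayesian: the maximizer of the profile likelihood.
k_hat = ks[np.argmax(ll)]

# Bayesian (flat prior over candidate breaks): normalize exp(ll).
# The spread of this posterior in THIS sample is what measures the
# precision of our knowledge about the break point.
post = np.exp(ll - ll.max())
post /= post.sum()
```

The shape of `ll` (or of `post`) in the one sample at hand is the object the likelihood-principle view says to report, rather than the repeated-sampling distribution of `k_hat`.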

Though there is relatively little Bayesian work on instrumental variables, I think there could be more, and it might make a distinct contribution. Instrumental variable estimation is not likelihood-principle based, but it applies to models for which there may be a likelihood. Also, one can ask the question of what is good inference conditional on the moments that go into the instrumental variable estimate instead of conditional on the whole data set. I think one may be able to get conclusions there that provide a more solid foundation for the discussion of weak instruments, which is an important applied topic.

Hansen: As a researcher, you have been a great example of someone for whom methodological and empirical interests are intertwined. As economics and econometrics become more developed, there is an inevitable pull toward specialization. Econometric theory is becoming a separate field in many places. Is there a good reason to be concerned about econometrics becoming too specialized too quickly?


Sims: In all kinds of fields, including economics, there's a split between more abstract and more applied theorists, and between theory and empirical work in general. Within econometrics, there's a division between econometric theory and applied econometric work. It is important that people work on connecting these areas. There's an internal social dynamic that makes people respond more to work within their own specialty, and that can leave people who actually bridge specialties without firm constituencies in the profession. Moreover, there is value to having economists involved in policy issues, because that creates a pressure to connect theory and practice and to contribute to economic research explicitly connected to real-world problems.

So I agree that excessive separation of econometrics from the rest of economics is not a good thing, and that there is, at least in some places, momentum in that direction. There is an opposite danger, though: by insisting that only people who have strong credentials in a substantive area of research are real, or useful, econometricians, some departments have, in my view, created environments hostile to theoretical econometrics, and thereby also to rigorous thinking about empirical methodology. Communication between econometricians and noneconometrician economists is important, but this happens best when there are econometricians who are truly dedicated to their subject rubbing shoulders with substantively oriented economists. When the strong abstract econometrician and the substantive researcher happen to be the same person, that's great, but it's rare.

Hansen: I know that you have continual contact with research in Federal Reserve banks. What role do you see time-series econometrics playing in research that supports the formulation and implementation of monetary policy?

Sims: I wrote a paper [Sims (2002)] recently that is concerned in part with this issue. I argue there that econometricians have failed to confront the problems of inference that are central to macroeconomic policy modeling. The first serious policy models inspired, and then used, the Cowles methodology, but, as the models expanded to try to incorporate all the important sources of information about the economy, they reached a point where non-Bayesian approaches to inference ceased providing answers. The models had many equations, many predetermined