For instance, suppose we have reached the following three potential

premises:

(∀x)(∀y)(∀z)((Rxy & Ryz) ⊃ Rxz) (4)

Rab (5)

(∀x)(∀y)(Rxy ⊃ Ryx) (6)

and the potential conclusion

Raa (7)

Who Has Kidnapped Information? 205

Here, there are no ways of applying (i). In contrast, a = 5 and n = 2, and hence

there are 2⁵ = 32 ways of introducing new formulas by universal instantiation.

As this simple example shows, in an attempted proof it is the universal

instantiations that on the face of things complicate the process so much that

it soon becomes impossible for any computer to carry out in real time. (If a

concrete example is needed, the party problems in Ramsey theory can serve

the purpose.) Yet, often, only a fraction of the universal instantiations is needed

for the actual proof. For instance, in our mini-example it suffices to introduce

only the following two instantiations (instead of all 32):

(Rab & Rba) ⊃ Raa (from (4)) (8)

Rab ⊃ Rba (from (6)) (9)

Here, (5) and (8)–(9) imply (7) in virtue of propositional logic.
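Both the combinatorics and the final propositional step can be checked mechanically. The following sketch (purely illustrative; the predicate R and the constants a, b are just those of the example, and the encoding is my own) counts the 2⁵ combined instantiations and verifies by brute force over truth assignments that (5), (8), and (9) propositionally entail (7):

```python
from itertools import product

# Domain of individuals known so far. Premise (4) binds three variables
# (x, y, z); premise (6) binds two (x, y).
individuals = ["a", "b"]

# Each of the 3 + 2 = 5 universally bound variables can be instantiated
# with either individual, giving 2**5 = 32 combined instantiations.
inst4 = list(product(individuals, repeat=3))   # 8 instances of (4)
inst6 = list(product(individuals, repeat=2))   # 4 instances of (6)
print(len(inst4) * len(inst6))                 # 32

# Only two instances matter: (8) (Rab & Rba) -> Raa and (9) Rab -> Rba.
# Check propositionally that Rab, (8), (9) entail Raa: no truth assignment
# to the three atoms makes the premises true and the conclusion false.
def entails():
    for Rab, Rba, Raa in product([False, True], repeat=3):
        p8 = (not (Rab and Rba)) or Raa        # instance (8)
        p9 = (not Rab) or Rba                  # instance (9)
        if Rab and p8 and p9 and not Raa:
            return False                       # countermodel found
    return True                                # entailment holds

print(entails())                               # True
```

The brute-force check succeeds because (9) forces Rba from Rab, whereupon (8) forces Raa.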

Students of mechanical theorem-proving have accordingly sought means

of keeping the instantiations (ii) minimal, for instance by postponing them

(through the use of dummy symbols) until it is easier to see which

ones are really needed. This is in fact what much of the theory of mechanical

theorem-proving amounts to.
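In modern terms, the dummy-symbol idea is the device of free variables resolved by unification: instead of guessing terms up front, one introduces placeholders and binds them only when a match forces a value. A minimal sketch (the term representation and the "?x" convention for dummies are illustrative choices of mine, not anything in the text; the occurs-check is omitted for brevity):

```python
def apply(t, subst):
    """Apply a substitution to a term (a string or a tuple of terms)."""
    if isinstance(t, tuple):
        return tuple(apply(s, subst) for s in t)
    return subst.get(t, t)

def unify(t1, t2, subst=None):
    """Unify two terms; dummy symbols are strings starting with '?'."""
    if subst is None:
        subst = {}
    if t1 == t2:
        return subst
    if isinstance(t1, str) and t1.startswith("?"):
        return {**subst, t1: t2}
    if isinstance(t2, str) and t2.startswith("?"):
        return {**subst, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for s1, s2 in zip(t1, t2):
            subst = unify(apply(s1, subst), apply(s2, subst), subst)
            if subst is None:
                return None
        return subst
    return None        # clash: no unifier exists

# Matching the dummy-variable instance R?x?y against the known fact Rab
# fixes the instantiation x := a, y := b, exactly the one needed for (9).
print(unify(("R", "?x", "?y"), ("R", "a", "b")))
```

The point is that only the instantiations actually forced by such matches ever get performed, rather than all 32 combinations.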

However, in a general theoretical perspective, the crucial steps are (i) and

not (ii), that is, introduction of new individuals by existential instantiation

rather than application of already established or assumed general laws to

known individuals. This is because the impossibility of a countermodel can

be seen only by considering approximations of a certain minimal size, measured

by the number of individuals in them. In an attempted proof, this number

is increased only by existential instantiations (i).

Of course, the choice between different possible existential instantiations

makes a difference. Hence, what is really needed in the theory of mechanical

theorem-proving is a set of guidelines for choosing the optimal existential

instantiations. This problem is not new. In the traditional axiomatic geometry, it

was known as the problem of carrying out the optimal auxiliary constructions.

Leibniz complained that “we still do not have a method of always determining

the best constructions.” (Nouveaux Essais, Book IV, ch. 3, sec. 6.) My main

point here is a generalization of Leibniz's. We still do not have a good theoretical

overview in the theory of theorem-proving on how the new individuals

that are needed in our logical thinking are best introduced.

In any case, the minimal number of such auxiliary individuals that are

needed to bring out the hidden inconsistency is an interesting parameter. It

indicates the minimal complexity of the configurations of individuals that have

to be considered in order to bring hidden inconsistencies to light. It seems

to yield much better measures of surface information than the mere length of

the deductive (computational) argument needed to prove the inconsistency.

Perhaps computer scientists could use such a measure of information instead

of, or in addition to, the simple-minded notion of the length of a deductive


argument. For one thing, suitable non-elementary deductive methods can, in

effect, allow the virtual introduction of several auxiliary individuals in a single

inferential step, which confuses the conceptual situation.

But to measure surface information by means of the complexity of the

configurations needed for the proof amounts essentially to the same thing as

the use of the depth at which the inconsistency of a constituent becomes apparent

in the form of trivial inconsistency. And if this were the recommended way

of measuring surface information, then we would be returning to the idea of

measuring surface information in the same way as depth information, with the

help of a system of weighting inconsistent constituents in a suitable manner.

Perhaps the critics are right and the idea of the “amount of work” needed

to rule out merely apparent possibilities is too crude to allow any interesting

philosophical or other general theoretical conclusions. The close connection

between surface information and depth information argues for developing

their respective theories in the same way.

The same conclusion is suggested by other considerations. One of them is

the fact just pointed out that in a purely deductive (computational) sense, the

introduction of several new individuals can in a certain sense be compressed

into one deductive step. This is what happens with rules that do not satisfy

the sub-formula principle (in the framework of countermodel construction).

Such rules are known among proof theorists as rules that are not cut-free. It is

highly suggestive that the interpolation formulas that serve explanatory and

other theoretically interesting purposes can be obtained only from cut-free

proofs, such as tree-method proofs or tableau proofs, not from proofs using

the cut rule or unlimited modus ponens.

We have thus seen how Quine has tried to destroy the notion of information

and how complexity theorists have twisted it out of recognition for their

special purposes. But we have also reached several constructive suggestions

concerning this notion. In particular, studying the varieties of the concept of

information throws an interesting light on the role of logic in the total structure

of our knowledge, both on the way logical consequence relations can be

established and on how they serve our wider theoretical purposes.

References

Attneave, Fred, 1959, Applications of Information Theory to Psychology, Holt,

Rinehart and Winston, New York.

Bar-Hillel, Yehoshua, 1964, Language and Information: Selected Essays on Their

Theory and Application, Addison-Wesley, Reading, MA, and Jerusalem Academic

Press, Jerusalem.

Bar-Hillel, Yehoshua, 1955, “An Examination of Information Theory,” Philosophy

of Science, vol. 22, pp. 86–105. Reprinted in Bar-Hillel (1964), ch. 16.

Bar-Hillel, Yehoshua, 1952, “Semantic Information and Its Measures,” Transactions

of the Tenth Conference on Cybernetics, Josiah Macy Jr. Foundation, New York,

pp. 33–48. Reprinted in Bar-Hillel (1964), ch. 17.


Bennett, Charles H., 1982, “The Thermodynamics of Computation – A Review,”

International Journal of Theoretical Physics, vol. 21, pp. 905–940.

Boolos, George S., and Richard C. Jeffrey, 1989, Computability and Logic, third

edition, Cambridge University Press, Cambridge.

Brillouin, Leon, 1956, Science and Information Theory, Academic Press, New York.

Calude, Cristian, 1994, Information and Randomness, Springer-Verlag, Berlin.

Carnap, Rudolf, 1952, The Continuum of Inductive Methods, University of Chicago

Press, Chicago.

Carnap, Rudolf and Yehoshua Bar-Hillel, 1952, “An Outline of a Theory of Semantic

Information.” Technical Report No. 247, M.I.T. Research Laboratory in Electron-

ics. Reprinted in Bar-Hillel (1964), ch. 15.

Chaitin, Gregory J., 1987, Algorithmic Information Theory, Cambridge University

Press, Cambridge.

Chaitin, Gregory J., 1982(a), “Algorithmic Information Theory,” Encyclopedia of

Statistical Sciences, vol. 1, John Wiley, New York, pp. 38–41.

Chaitin, Gregory J., 1982(b), “Gödel's Theorem and Information,” International

Journal of Theoretical Physics, vol. 22, pp. 941–954.

Chaitin, Gregory J., 1974, “Information-Theoretic Limitations of Formal Systems,”

Journal of the ACM, vol. 21, pp. 403–424.

Cherry, Colin, 1957, On Human Communication, M.I.T. Press, Cambridge, MA.

Cherry, Colin, 1952, “The Communication of Information,” American Scientist,

vol. 40, pp. 640–664.

Cover, Thomas M., Peter Gacs and Robert Gray, 1989, “Kolmogorov's Contributions

to Information Theory and Algorithmic Complexity,” The Annals of Probability,

vol. 17, no. 3, pp. 840–865.

Davis, Martin, editor, 1965, The Undecidable, Raven Press, New York.

Davis, Martin, 1973, “Hilbert's Tenth Problem Is Unsolvable,” The American Math-

ematical Monthly, vol. 80, no. 3, pp. 233–269.

Davis, Martin, 1978, “What Is a Computation?” in L. A. Steen, editor, Mathematics

Today, Springer-Verlag, Berlin, pp. 241–267.

Dreben, Burton, and Juliet Floyd, 1991, “Tautology: How Not to Use the Word,”

Synthese, vol. 87, pp. 23–49.

Dretske, Fred, 1981, Knowledge and the Flow of Information, Blackwell, Oxford.

Feinstein, A., 1958, Foundations of Information Theory, McGraw-Hill, New York.

Ford, Joseph, 1989, “What Is Chaos and Should We Be Mindful of It?” in The

New Physics, P. Davies, editor, Cambridge University Press, Cambridge, pp. 348–371.

Gödel, Kurt, 1931, “Über formal unentscheidbare Sätze der Principia Mathematica

und verwandter Systeme I,” Monatshefte für Mathematik und Physik, vol. 38, pp. 173–

198. Reprinted in English translation in K. Gödel: Collected Works, vol. I: Publica-