
© Copyright 1985 by Ralph E. Kenyon, Jr.

Logicism had its beginnings in a desire to construct a foundation for mathematics that was both certain and free from intuition. The method of logicism sought to define every mathematical idea in logical terms. It is a curious paradox that the very model of rigor which motivated the effort, geometry, was deemed exempt from this reduction. With the development of Gödel's theorems, at least one formulation of logicism turns out to be undecidable.

Before the term 'logicism' was applied to any domain, certain beliefs and projects emerged that were later labeled logicism. An example is Frege's research toward providing a foundation for arithmetic.

Frege noted that there was a resurgence of respect for the
Euclidean standards of rigor. The model of a proof in geometry, in which each
step is explicitly justified as taken from a construction, a definition, or a
previous theorem, provides the standard of rigor to aspire to in the rest of
mathematics. In his __Grundlagen___{1} Frege
notes that proofs are being demanded for things which had been previously
accepted without question. He characterizes proofs as aiming to provide
certainty regarding the truth of propositions, as well as to provide knowledge
of how the truth of a proposition depends upon the truth of other
propositions. In particular, our intuition that a given proposition is true
__must__ be kept distinct from a proof of that proposition. Whether such a
proposition is "true" or "false" is a metaphysical question.

Metaphysical realism and Platonism presume that there is "a fact of the
matter" which __can be known__. How the truth of a proposition can be known,
on the other hand, is an epistemological question. It would seem that the newly
evolved need to __justify__ propositions which had previously been accepted
on faith (without question) indicates a movement from metaphysics to
epistemology. For this reason, logicism is sometimes referred to as
epistemological logicism. A complication of this is that the proposition that
logicism is true (where just what logicism is is as yet not explicitly stated)
must itself be subject to this movement entailed by logicism; that is, logicism,
as a movement of ideas from falling within the purview of metaphysics to falling
within the purview of epistemology, poses an epistemological question, and must
therefore be proven or "justified" on epistemological grounds. So, at an
abstract level, logicism is a change in the conventional assumptions about the
metaphysical status of certain propositions; namely that these propositions,
which had been previously deemed as not posing any significant epistemological
question, do, in fact, involve epistemological problems.

I see this trend as an important indicator that a fundamental paradigm shift is
occurring. The old paradigm entails the notion that some things are knowable
(and hence true) without regard for how they are known. The new paradigm would
deny this; nothing is knowable if we cannot "know" how it is known. The new
paradigm statement is problematic in virtue of an apparent equivocation in
'know'. The second level 'know' must be different from the base level 'know' if
circularity is to be avoided; otherwise 'know' must be specified in such a way
that recursion is possible. It would appear that what it is __to know__ is no
longer a clear, certain and self-evident notion. For the purposes of
mathematics, I would say that something is known if (a) it is an axiom, or (b)
[and this is the recursive part] there is a proof of it (in the sense that a
proof has at one time or another actually been constructed). This makes 'know'
depend upon our characterization of proof. In any event, there must be some
primitive things that are known, but how they are known is not questioned.
Anything of significance cannot be allowed as such a primitive thing. These
primitive things should therefore pose no epistemological problems; they should
only be a matter of metaphysical question, and ones with nearly universal
agreement, that is, conventional truths such as some of those in logic.
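The recursive characterization of "known" just given (something is known if it is an axiom, or if a proof of it has actually been constructed) can be sketched in code. The encoding below, with propositions as strings and a table of constructed proofs, is entirely hypothetical; it serves only to illustrate the recursion and its base case of unquestioned primitives.

```python
# A sketch of the recursive notion of "known" described above: a
# proposition is known iff it is an axiom, or a proof of it has been
# constructed whose premises are themselves known. The toy encoding
# (propositions as strings, proofs as premise lists) is invented here
# purely for illustration.

axioms = {"A", "B"}
# proofs maps a proposition to the premises of a constructed proof of it.
proofs = {"C": ["A", "B"], "D": ["C"]}

def known(p, seen=()):
    if p in axioms:                 # base case (a): axioms are known without question
        return True
    if p in seen:                   # guard against circular "proofs"
        return False
    if p in proofs:                 # recursive case (b): a proof from known premises
        return all(known(q, seen + (p,)) for q in proofs[p])
    return False

print(known("D"))  # True: proved from C, which is proved from the axioms
print(known("E"))  # False: neither an axiom nor proved
```

Note that the base case does exactly what the text describes: the axioms pose no epistemological question inside the system; they are simply accepted.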

Frege specified that the proofs used in arithmetic should meet
certain criteria: they should be clear, certain, and self-evident._{2}
If these criteria are applied to those primitive things above, then the criterion
of self-evidence rules out epistemological questions for these primitive things.
And, the certainty criterion guarantees their truth. I attribute a psychological
character to the word 'certain'; when something is "certain" it has the status
of not being doubted (by people); all agree to its
__alleged__ truth. For this reason I would deny that from certainty, truth
follows. The element of conventionality in this is unmistakable. Frege's
logicist direction seems to be simply to minimize the scope of this
conventionality. I do not believe that the conventional can be eliminated; I do
believe that it can be pushed back far enough that substantial agreement is
possible. Whether that is back far enough that the questions are relegated to
logic, or even to language, seems to be one way of stating logicism. Are numbers
such clear, certain, and self-evident things?

You may be asking why I am applying the criteria for an acceptable proof to "things". Such a generalization is not unfounded. It is just such criteria which go into our acceptance of things, and rigorous proofs will require that primitive things, as well as simple steps, be justified (although the "justification" for primitive things may simply be to assert that they are indeed primitive). Frege is examining the concept of number; his aim is to show that it is not what it had until then been taken to be: one of those primitively knowable things. If number is to be founded on something else (for instance, logical concepts), the previous conventional belief that it was such a clear, certain, and self-evident thing requires questioning. The epistemological dimension is added. We must move the concept of number from "pure" metaphysics into epistemology.

Ok, what are the epistemological structures with which the question is to be analyzed? Well, rather than re-invent the wheel, Frege adopted Kant's answer to the question of the structure of epistemology.

Kant makes a distinction between general logic on the one hand and transcendental logic and synthesis on the other. General logic is founded in transcendental logic, wherein are found the first origins of knowledge. The two differ in that general logic brings different representations under a concept (analysis), while transcendental logic brings the pure synthesis of representations to concepts (synthesis). Analytic and synthetic procedures thus distinguish the two. Analytic procedures do not presume to add to named and described items, but merely to arbitrate among them. It is the function of synthetic procedures to name and describe items. It is propositions which are classified according to this distinction._{3}

The a priori/a posteriori distinction originally marked order of priority (between things): a priori refers to what comes before; a posteriori refers to what comes after. Kant applied this distinction to knowledge in classifying its sources. A posteriori knowledge is gained empirically, that is, from our senses. A priori knowledge is gained from reason._{4}

If we look at these two distinctions in regard to the question of certainty, it
becomes clear that analytic procedures can provide certain answers. The
process of making synthetic judgments, or judgments of transcendental logic,
does not give certainty, because the factor of __adding to__
inherent in the synthetic process can introduce uncertainty.

A posteriori knowledge, on the other hand, bespeaks a fundamental uncertainty. The very nature of empirical knowledge is permeated throughout with uncertainty. While it sometimes seems that we may be "certain" of a specific observation, it is clear that any principle or model designed to account for the observation remains subject to disconfirmation. Specific observations, which depend upon the senses, are themselves subject to question, in that parallax, illusion, and other processes render them doubtful. Any knowledge of this kind cannot be certain.

In his effort to provide a "certain" foundation for mathematics (arithmetic), Frege naturally must choose, from among these distinctions, that mathematics falls within the realm of analytic and a priori. Kant had already classified geometry and arithmetic as falling within the realm of synthetic and a priori. Kant's motivation was that geometry "synthesized" intuitions about space and arithmetic "synthesized" intuitions about time. (The invention/discovery of non-Euclidean geometries casts doubt on Kant's decision about geometry). If Frege accepts Kant's classification, then he must, a fortiori, accept that arithmetic is based upon fundamentally uncertain precepts. Accordingly, Frege rejected Kant's classification. It is much easier to dispute Kant's motivation for arithmetic as being based on temporal intuitions (counting is a time sequential process) than to discredit the motivation for geometry, whose demonstrations have often been intimately involved in the objects embodying spatial intuition. That we can perceive groups of 2, 3, 4, and even 5 things as gestalts provides a counter-example to temporal intuition as a motivation for arithmetic being synthetic a priori. Therefore, providing a certain foundation for mathematics merely becomes the task of showing how it can fit into the analytic a priori classification.

Frege took up the challenge. The a priori criterion tells us that we must look
for explanation and understanding in terms of reason rather than in terms of the
empirical. The analytic criterion requires that what is known about mathematics
must be shown in ways which do not add anything to previously given names and
definitions. Does this prevent making new definitions? Not at all; it merely
restricts the kinds of definitions. A definition which "does not add anything"
may specify a set of conditions which are describable in known terms; it only
creates a __name__ which is thenceforth an abbreviation of the full
description. This kind of definition is one created by __analytic__
procedures, and therefore, falls within the purview of general logic.

Arithmetic rests upon a few fundamental mathematical ideas. If definitions can be devised/discovered which account for these ideas by using only terms of logic and analytic procedures, then arithmetic will be shown to rest solidly on general logic, that is, it will have been reduced to logic. Frege used, as his basic motivation, the notions of something falling under a concept, and the extension of a concept. He characterized number as the extension of a certain concept.

the number which belongs to the concept F is the extension of the concept "equal to the concept F"_{5}

Frege's approach, in defining numbers in terms of concepts, explicates a more general view, of which these numbers are specific examples. He said that the extension of two concepts is the same if and only if the same things fall under both concepts. His basic law (V) states:

the extension of the concept "equal to the concept F" is identical with the extension of the concept "equal to the concept G" if and only if the same number belongs to the concept F as to the concept G_{6}

(F)(G)[εF(x) = εG(x) ≡ (x)(F(x) ≡ G(x))]

Andrew Ward points out that this characterization of identity depends upon a formulation of the abstraction principle:

(F)(∃S)(x)[x ∈ S iff F(x)]

It was Bertrand Russell who pointed out that this leads to a contradiction known as Russell's paradox. By letting the concept F be (x ∉ S), a self-contradictory theorem ensues._{7}

If F(x) is (x ∉ S), then this particularizes to:

(∃S)(x)[x ∈ S iff (x ∉ S)]
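The self-contradictory particularization has a familiar computational analogue. As an illustrative sketch of my own (modeling a "set" by its characteristic predicate is not anything in Frege or Russell), one can watch the vicious circle manifest as unbounded recursion:

```python
# Russell's paradox, sketched by modeling a "set" as its characteristic
# predicate (a function from objects to bool). This analogy is my own
# illustration; it does not faithfully embody naive set theory.

# The Russell "set" R contains exactly those sets s that do not contain
# themselves: R(s) iff not s(s).
russell = lambda s: not s(s)

# Asking whether R contains itself requires evaluating not russell(russell),
# which requires evaluating russell(russell), and so on without end.
# Python reports the vicious circle as a RecursionError.
try:
    russell(russell)
    outcome = "no contradiction"
except RecursionError:
    outcome = "vicious circle detected"

print(outcome)  # prints "vicious circle detected"
```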

Russell's solution to prevent this contradiction was to invent the theory of
types. Every object belongs to a particular type level. The statement '(A ∈ B)' is a
wff iff the type of A is one level less than the type of B. (A ∈ A) is
disallowed because this criterion is not met -- the formula is not well formed.
Russell's solution is very unsatisfactory because numbers must be re-duplicated
at every level.
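The well-formedness criterion of the theory of types reduces to a one-line check. This is a minimal sketch; the function name and the representation of types as integers are assumptions of mine, not Russell's notation.

```python
# A sketch of the simple theory of types' well-formedness test for
# membership statements. Terms carry an integer type level; this
# representation is invented here for illustration.

def is_wff_membership(type_of_a: int, type_of_b: int) -> bool:
    """'(A ∈ B)' is well formed iff A's type is exactly one below B's."""
    return type_of_b == type_of_a + 1

# An individual (level 0) may belong to a set of individuals (level 1):
print(is_wff_membership(0, 1))   # True: well formed
# But (A ∈ A) pairs equal levels, so the formula is not well formed:
print(is_wff_membership(2, 2))   # False
```

The re-duplication complaint is visible in this picture too: "two-ness" of individuals lives at level 1, "two-ness" of sets of individuals at level 2, and so on, so each number must be defined anew at every level.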

Even if the consistency problem is solved, there is still the matter of a couple of axioms which seem to involve something other than just logical notions. The null set axiom and the axiom of infinity are examples. The null set axiom can be stated:

(∃S)(x)(x ∉ S)

Russell believed that this "existence" axiom postulated the existence of something, and was not a purely analytic statement. Accordingly, he advocated that this and other "existence" axioms must be part of the hypothesis of all mathematical theorems which depend upon them. This "if-then-ism" was also a very unsatisfactory solution.

Frege strengthened his basic law (V) to

(F)(G)[(εF(x) = εG(x)) ≡ (y)(((y ≠ εF(x)) ∧ (y ≠ εG(x))) ≡ (F(y) ≡ G(y)))]

This necessitates changing the abstraction principle to

(F)(∃y)(x)[x ∈ y ≡ (¬(x = y) ∧ F(x))]

Unfortunately, Geach and Quine both derived contradictions from this new repair. Frege himself eventually gave up trying to show a foundation for mathematics in the analytic a priori arena, and began to examine the synthetic a priori, which is to say that he gave up logicism.

Quine proposed a compromise between Russell's solution and
Frege's attempts. Quine proposed to allow unstratified formulas, but not within
the context of the abstraction principle. (A formula is stratified whenever
every occurrence of '∈' is preceded and followed by objects whose relative types
differ by one level, with the preceding element at the lower level.) Once
again, the patch was not adequate. Rosser showed in 1942 that this led to the
Burali-Forti Paradox._{8}
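The parenthetical definition of stratification lends itself to a small mechanical test: try to assign integer type indices to the variables so that every membership atom respects the one-level difference. The encoding of a formula as a list of membership pairs is a simplifying assumption of mine; it ignores all non-membership structure.

```python
# A sketch of a stratification test in the spirit of Quine's system:
# a formula is stratified if integer type indices can be assigned to its
# variables so that every membership atom 'x ∈ y' has
# type(y) == type(x) + 1. A formula is encoded (hypothetically) as a
# list of ('x', 'y') membership atoms.

def is_stratified(atoms):
    types = {}      # tentative type index per variable
    changed = True
    while changed:  # propagate level constraints until stable
        changed = False
        for x, y in atoms:
            if x in types and types.get(y) != types[x] + 1:
                if y in types:
                    return False        # conflicting constraint
                types[y] = types[x] + 1
                changed = True
            elif y in types and x not in types:
                types[x] = types[y] - 1
                changed = True
            elif x not in types and y not in types:
                types[x], types[y] = 0, 1
                changed = True
    return True

# 'x ∈ y' alone is stratified: assign x:0, y:1.
print(is_stratified([("x", "y")]))   # True
# 'x ∈ x' is not: it would need type(x) == type(x) + 1.
print(is_stratified([("x", "x")]))   # False
```

A circular chain such as x ∈ y together with y ∈ x likewise fails, since no consistent assignment of levels exists.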

Other efforts in the name of logicism deviated from Frege's goal to provide a certain foundation for mathematics. These other views stem from a direct statement of logicism as an attempt to reduce mathematics to a single formal system which is based entirely on logic. Rudolf Carnap stated that the statement of logicism had two parts.

1. The __concepts__ of mathematics can be derived from logical concepts through explicit definitions.
2. The __theorems__ of mathematics can be derived from logical axioms through purely logical deduction._{9}

Carnap argues that the contradiction for which Russell devised the theory of types is not a problem in attempting to show that mathematics rests on logic, but that the contradiction is implicit in logic itself._{10}

Geoffrey Hellman goes one step further. His formulation of logicism is:

- EL: There is a formal system K such that
  - for any knowable mathematical claim, P, there is a sentence S in the language of K such that S represents P and S is a theorem of K; and
  - any theorem S of K represents some knowable mathematical claim._{11}

Hellman proves a theorem regarding finitely axiomatizable systems which denies his statement of logicism:

Metametatheorem I. There is no formal system that is finitely axiomatizable and that meets the conditions of the epistemological logicist thesis._{12}

He goes on to note that the key to proving the infinitely axiomatizable case rests on an inference from (i) to (ii):

- (i) ∀i ◊Kn ⌈B_{i} is true⌉
- (ii) ◊Kn ⌈∀i B_{i} is true⌉

. . . passing from (i) to (ii) depends in general on the conditions under which it is possible to know quantified statements ranging over infinite sets, a central question of mathematical epistemology. . . . we do not have a categorical extension of metametatheorem 1 to the infinite case, but rather a conditional one:

Metametatheorem II. No recursively axiomatizable formal system for which the (material) conditional "if (i) then (ii)" holds can meet the conditions of the epistemological logicist thesis (EL).

. . . Although there is a window left open through which the EL thesis can pass, it can never be known of any recursively axiomatizable system (to which the Gödel theorems apply) that it passes through!_{13}

This seems to me to be a claim that EL is undecidable.

Logicism, which started with a search for certainty as a foundation for mathematics, led to the certainty of Gödel's theorems, which show the uncertainty of any foundations. If a system is powerful enough to include logic, then it cannot be proven consistent, except by recourse to a more powerful system (which, incidentally, includes a submodel of the very system whose consistency is to be shown). In mathematics (and logic) one can be certain of uncertainty or uncertain of certainty, but not certain of certainty.

- Gottlob Frege,
__The Foundations of Arithmetic__, Northwestern University Press, (1950)

- Philip Kitcher, "Frege's Epistemology",
__The Philosophical Review__, LXXXVII, No. 2 (April 1979), 235-262

- Graham Bird,
__Kant's Theory of Knowledge__, Humanities Press, New York, 1973

- D. W. Hamlyn, "A Priori and A Posteriori",
__Encyclopedia of Philosophy__, Vol. 1, pp. 140-144, MacMillan, New York, (1967)

- Frege (1950), pp 79-80

- ibid. p. 80

- Andrew Ward, "The Development of Quine's Logicism",
__Dialogue__, 25 (April 1983), 48-55

- ibid. pp. 51-52

- Rudolf Carnap, "The Logicist Foundations of Mathematics",
__Philosophy of Mathematics__, 2nd ed., Benacerraf & Putnam, 41-52

- ibid., p. 46

- Geoffrey Hellman, "How to Gödel a Frege-Russell: Gödel's Incompleteness
Theorems and Logicism",
__Noûs__, 15 (November 1981), 451-468

- ibid.

- ibid.

This paper was originally submitted on May 7, 1985 in partial fulfillment of the requirements for the course PHILOSOPHY 791R to Professor Linda Wetzel, Department of Philosophy, University of Massachusetts, Amherst Massachusetts by Ralph E. Kenyon, Jr., 191 White Oaks Road, Williamstown, MA 01267