April 18, 1989
This paper concerns itself with certain historical and foundational issues in epistemology. Its main focus is to take another look at Locke's theory of knowledge with the hope that a fresh look back can supply some insight into some current concerns in computer science and artificial intelligence.
There is currently a great controversy concerning whether machines can, or, more to the point, will ever be able to, think. I doubt whether the many varieties of realism currently in vogue are up to the task of providing a satisfactory answer. I am not sure anti-realist views will fare any better.
Since science has a strong emphasis on empiricism, a logical approach is to start with the early empiricists.
II. Origins of Empiricism.
Any serious consideration of epistemology requires examining empiricism and its alleged weaknesses. Empiricism has been identified as an English school of philosophy which had its beginnings in the thirteenth century.
The Sufis claim an early emphasis on knowledge from experience. According to Idries Shah, certain Sufi teachings of the illuminist school, in particular the distinction between a collection of information and the knowing of things through actual experience, were interpreted and recorded by the adept Roger Bacon, a Franciscan monk. In 1268, in the Opus Maius, Bacon wrote:
There are two modes of knowledge, through argument and experience. 'Argument' brings conclusions and compels us to concede them, but it does not cause certainty nor remove doubt in order that the mind may remain at rest in truth, unless this is provided by experience.(1)
According to Sufi lore, Roger Bacon was an initiate. At the time, centuries of human thought had been dominated by rational deduction as the only "true" source of knowledge. Roger Bacon planted a seed which flowered in present day empiricism and scientific knowledge. Whether we can justifiably claim that Roger Bacon was the father of empiricism is another matter, but it is clear that Bacon's emphasis on experiment and "verifying" argument by experience form the essential feature of empiricism. Others contributed to the development of empiricism.
In the middle ages William of Ockham [1300-1349] had already put forward empiricist doctrines(2)
The Empiricist theory of language finds expression in the works of Thomas Hobbes (1588-1679)(3)
[Francis Bacon's (1561-1626)] fame has been greater than his true merit .... It has been necessary to limit Bacon's achievement to the introduction of empiricism and the inductive method. But even here it is not possible to forget the role played by his namesake of three centuries earlier, Roger Bacon, who was more original.(4)
However, in order to understand the philosophical significance of empiricism, ... we must consider its mature expression in [An] Essay Concerning Human Understanding of John Locke, published in 1690.(5)
Locke attempted to integrate the popular Cartesian rational view, which had had a long and distinguished history going back to Aristotle and Plato, with the notion that experience provides knowledge. Locke wanted to elevate knowledge gained through experience to the status of intuitive or introspective knowledge and deductive knowledge.
I am attracted to both the rational approach which includes intuitive and demonstrative knowledge and the empirical approach which Locke calls sensitive knowledge. On the other hand, I am not particularly enamored of the destination to which some modern philosophers claim Locke leads--solipsism, by way of Berkeley, Hume, & Kant. Locke's theory of knowledge could be more favorably interpreted so as to set us on the road to a different destination, one more palatable than the belief that, as one American Indian religion expresses it, "When I die, the world ends."
Bruce Aune has suggested such a "revisionist" approach to empiricism in his newest book Knowledge of the External World.(6) Aune has rightly keyed on the traditional interpretation of "knowledge" and "truth" as bearing reexamination, and possibly alteration. The kind of knowledge that entails a truth that is infallible is the price of admission to traditional metaphysics; it is a price that cannot be paid by epistemology and science. The price reeks of the taint of realism, where "certainty" is the kind of infallible truth unaffected by the attitudes of persons who might entertain such alleged "truths". The "attitude" of certainty seems hardly so omniscient. Descartes weeded out nearly everything from this garden of metaphysical certainty. An empiricism which is asked to provide this kind of certainty, knowledge, and truth simply cannot be coherently devised. The search for this kind of certainty, knowledge, and truth does lead from empiricism to solipsism. However, this path is not the path suggested by empiricism. Certainty is not an ontological commitment of empiricism; consequently, the question of certain knowledge is a non sequitur for empiricism. Locke's empiricism recognizes the fallibility of our senses and the need to "verify" argument with "experience". It is only the old rationalist need for "certainty" that asks empiricism for results beyond its charter. Locke hinted at this when he said that sensitive knowledge is not as certain as that obtained by the other means (intuitive and deductive).
III. Locke's Theory of Knowledge.
Locke follows Descartes' lead in searching for some minimal starting point on which to build a sound theory of knowledge. He eventually gives assent to three kinds of knowledge: intuitive, demonstrative, and sensitive. All are based upon "ideas". All three have analogues in computer information processing.
A. Ontological considerations.
Locke defines knowledge as the perception of the agreement or disagreement between "ideas". He presents four ways of apprehending this agreement or disagreement. The four ways are identity or diversity, relation, coexistence or necessary connexion, and real existence. They aren't ways of apprehending agreement or disagreement per se, but are ways of "knowing" regarding ideas. Locke is explicitly committed to the existence of ideas as objects of the mind and the means to knowledge. By implication however, he is also "ontologically committed" to external objects which "cause" certain "ideas"--those which give rise to "sensible knowledge".
1. Ideas.
The fundamental building block of Locke's theory of knowledge is the "idea". Ideas are the objects of the mind with which we think and by which we know. Some ideas are expressible by words. Other ideas seem like images, not adequately expressible with a thousand words. Since ideas are of the mind, words cannot communicate what an idea is simpliciter. The best we can do is to hope that our listener infers what we intend, or alternately, based upon enough interchanges, we could infer that our listener inferred what we intended to communicate.
It is the difficulty of getting past ideas that later philosophers believe leads to solipsism. The only direct experience we have is of our ideas themselves. This includes our ideas of experiencing other persons. There is just no alternative but our "idea" of another person. I don't think Locke would have carried this quite so far. He gave great credence to "sensible knowledge", which will be discussed further below.
2. Identity or diversity.
By identity or diversity, Locke means that we are able to perceive ideas in two ways; we identify or recognize them, and we are able to distinguish them from one another. Identity is perceiving that an idea "agrees with itself", while diversity is perceiving that distinct ideas "disagree with each other".
3. Relation.
The mind may compare distinct ideas in several ways and find agreement or disagreement (knowledge) in such comparisons. The agreement or disagreement of relations is limited by our ability to discover intermediate ideas with which to demonstrate the agreement. It is this way of apprehending agreement or disagreement that provides the basis for deduction. A deductive reasoning process proceeds by finding pairwise agreement between a sequence of "steps" that eventually allows finding agreement between the starting and ending points. "Relation", as Locke defines it, gives us a place in our scheme of knowing to justify our basic rules of inference.
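The pairwise structure of a deduction can be sketched in a few lines (a modern illustration of mine, not anything in Locke; the agreement test here is a toy that treats two expressions as "agreeing" when they have the same value):

```python
# Deduction as a chain of pairwise agreements: each adjacent pair of steps
# must agree, and the chain then links the starting idea to the ending one.

def chain_agrees(steps, agree):
    """True when every adjacent pair of steps stands in the agree relation."""
    return all(agree(a, b) for a, b in zip(steps, steps[1:]))

# Toy agreement relation: two arithmetic expressions agree when their
# values are equal (eval is safe here only because the strings are ours).
same_value = lambda a, b: eval(a) == eval(b)

steps = ["2+2", "4", "8/2"]
print(chain_agrees(steps, same_value))   # True: "2+2" links to "8/2"
```

The intermediate step "4" plays the role of Locke's intermediate idea: agreement between the endpoints is found only through it.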
4. Coexistence or necessary connexion.
Though limited in scope, this way of knowing is very important in our knowledge of substance. Simple ideas "coexist" in a subject. Our ideas of species of substance consist of collections of such coexisting complexes.(7)
I shall use a set metaphor. Simple ideas are elements. Subjects are sets of such simple ideas. When two such simple ideas are elements of the same set, they "coexist" in the same subject, which may be the idea of a substance.
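The set metaphor can be made concrete in a short sketch (the substance examples below are my own illustrations, not Locke's):

```python
# Set metaphor for coexistence: simple ideas are elements; a subject is
# the set of simple ideas that coexist in it.

gold = {"yellow", "heavy", "malleable", "fusible"}
lead = {"gray", "heavy", "malleable", "fusible"}

def coexist(idea_a, idea_b, subject):
    """Two simple ideas 'coexist' when both are elements of the subject."""
    return idea_a in subject and idea_b in subject

print(coexist("yellow", "heavy", gold))   # True: both coexist in gold
print(coexist("yellow", "heavy", lead))   # False: "yellow" is absent
```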
5. Real existence.
By "real existence", Locke means our ideas which are "caused" by external objects. This concession is what brings Locke into the empiricist camp. He is giving the status of knowledge to some things we perceive. As Locke puts it, "Some 'real existence' agrees to an idea." Locke appears to reserve this to ideas "corresponding to" something outside the mind. This kind of agreement or disagreement is not strictly between ideas; We would say that the "agreement" is between the idea and its physical referent; a correspondence theory of reference is suggested here.
Locke states "all ideas come from sensation or reflection".(8) He denies that we have any innate ideas; he claims that the mind begins like "white paper, void of all characters, without any ideas". In answer to where we get all our materials of reason and knowledge, he asserts "from EXPERIENCE".(9) That we still do not know, even today, how the beginnings of such a process may develop does not, by itself, make Locke's belief false. However, there are some discoveries in brain research which call into question the opinion that the mind is a "tabula rasa".
1. Brain research suggests some innate knowledge.
Studies in brain responses to visual stimuli suggest the existence of "hard-wired" responses to certain visual stimulation patterns. Low level optical neurons respond to the presence of "edges"; others respond to lines or edges at a particular orientation. Other studies involving higher optical processing in the cortex show cellular groups which respond to the "essential visual features" of a face (generating the so-called "grandmother cell" controversy).(10) A reasonable conclusion from these studies is that the ability to recognize certain basic patterns of visual stimulation is "built in" or innate. A built-in ability to "recognize" certain basic patterns is just the ability to "identify" a simple idea. This would entail that there is some knowledge which is innate, which is contrary to Locke's claim.
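The "hard-wired" edge response can be sketched with a fixed difference kernel (a deliberately crude model of mine, not a claim about actual neural circuitry):

```python
# A fixed (hard-wired) edge detector: a difference kernel that responds
# only where adjacent intensities change. The kernel is not learned; by
# analogy, it plays the role of an innate, built-in recognizer.

def edge_response(signal):
    """Apply the fixed kernel [-1, +1] across a 1-D intensity signal."""
    return [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]

flat = [5, 5, 5, 5]          # uniform field: no edges
step = [0, 0, 9, 9]          # a single edge between positions 1 and 2

print(edge_response(flat))   # [0, 0, 0]: no response anywhere
print(edge_response(step))   # [0, 9, 0]: a strong response at the edge
```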
2. Innate ideas solve the problem of first knowledge.
This evidence provides an answer to what would otherwise have been a significant difficulty with Locke's system. Locke is at a loss to explain just how we come to have our first ideas, that is, to "identify" them (other than by "experience" writ large). Empirically derived evidence which shows that we have, in fact, a "hard wired" or innate ability to recognize a few simple ideas, overcomes the "first cause" problem with ideas.
3. Locke would accept innate knowledge.
Locke believed that our first ideas are acquired from experience. But, as an empiricist, he would have been the first to drop his belief, that there is no innate knowledge, in favor of this new "evidence of the senses" obtained through "scientific" research. Within Locke's system, the ability to identify an idea generates primitive knowledge, that of the "identity of an idea". If identifying or recognizing certain primitive ideas is "built-in", then the knowledge of the identity of these ideas is also "built-in" or innate. These modern scientific research results do not pose a major threat to Locke's system. To date, only the most simple of ideas seem likely to have innate origins, and that provides just enough structure to solve the first cause problem without damaging the remainder of the system.
All relations seem to be based upon a hierarchy of structure which has at its base a few simple "primary" relations. While I find no specific indications, it would seem almost as if Locke were using a knowledge of "recursive" definitional structure. Some ideas "coexist" with each other in other ideas and some are primary, that is, not composed themselves of other ideas. Such primary ideas provide the "base case" for a recursive structure, while the other ideas form the recursive cases. For ideas, the structure is not overly complex. An arbitrary idea is either complex or simple. If it is complex, then it is composed of ideas which coexist in the subject. Otherwise, it is a simple or primary idea.
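This recursive structure of ideas can be sketched directly (the class names and the sample "snowball" idea are my illustrations, not Locke's terminology):

```python
# A recursive sketch of the structure of ideas: a simple idea is the base
# case; a complex idea is composed of coexisting sub-ideas.

class SimpleIdea:
    def __init__(self, name):
        self.name = name

class ComplexIdea:
    def __init__(self, name, parts):
        self.name = name
        self.parts = parts          # ideas that coexist in this subject

def simple_ideas(idea):
    """Recursively collect the primary ideas at the base of the hierarchy."""
    if isinstance(idea, SimpleIdea):
        return {idea.name}          # base case: a primary idea
    found = set()
    for part in idea.parts:         # recursive case: descend into parts
        found |= simple_ideas(part)
    return found

snowball = ComplexIdea("snowball",
                       [SimpleIdea("white"),
                        SimpleIdea("cold"),
                        ComplexIdea("shape", [SimpleIdea("round")])])
print(sorted(simple_ideas(snowball)))   # ['cold', 'round', 'white']
```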
It could also be argued that contrasting "relation" and "sensitive knowledge" anticipates Kant's analytic/synthetic distinction; a simple idea in a subject "coexists" in the predicate applied to it. An agreement is found between the subject and the predicate consisting of some simple idea which coexists with some ideas in the subject and coexists with other ideas in the predicate. It is the "necessary connection" (perceiving the same idea in both predicate and subject) that leads us to Kant's analytic a priori.
D. Differences between Kinds of Knowledge.
Locke asserted an essential difference between intuitive and demonstrative knowledge on the one hand, and sensitive knowledge on the other hand. This difference extends to the degree of certainty and evidence for the distinct kinds of knowledge.
14. Sensitive knowledge of particular existence.--These two, viz., intuition and demonstration, are the degrees of our knowledge; whatever comes short of one of these, with what assurance soever embraced, is but faith or opinion, but not knowledge, at least in all general truths. There is, indeed, another perception of the mind employed about the particular existence of finite beings without us; which, going beyond bare probability, and yet not reaching perfectly to either of the foregoing degrees of certainty, passes under the name of knowledge. There can be nothing more certain, than that the idea we receive from an external object is in our minds; this is intuitive knowledge. But whether there be anything more than barely that idea in our minds, whether we can thence certainly infer the existence of anything without us which corresponds to that idea, is that whereof some men think there may be a question made; because men may have such ideas in their minds when no such thing exists, no such object affects their senses. But yet here, I think, we are provided with an evidence that puts us past doubting; .... So that, I think, we may add to the two former sorts of knowledge this also, of the existence of particular external objects by that perception and consciousness we have of the actual entrance of ideas from them, and allow these three degrees of knowledge, viz., intuitive, demonstrative, and sensitive: in each of which there are different degrees and ways of evidence and certainty.(11)
There is a lot packed into this one passage. There are three kinds of knowledge. Each has its own way of certainty and evidence. Sensitive knowledge does not extend to general truths. Intuitive and demonstrative knowledge provide the standard of certainty to which claims to knowledge must be compared. The evidence for the existence of particular finite beings without us is beyond doubt. The kind of certainty appropriate to intuitive and demonstrative knowledge is not appropriate to sensitive knowledge. That a particular idea corresponds to a particular thing is not certain, but merely probable.
Expecting sensitive knowledge to actually attain the degree of certainty one demands of intuitive and demonstrative knowledge seems to be the turn that leads to solipsism. According to my reading of Locke, that degree of certainty and evidence is not the appropriate standard. On the other hand, it also seems that he excludes general truths from the province of sensitive knowledge. We are left with a gulf between intuitive and demonstrative knowledge with their criteria of certainty and evidence on the one hand and sensitive knowledge with its criteria of certainty and evidence on the other hand. How are we to reconcile them? Locke has an answer.
Hypotheses, if they are well made, are at least great helps to the memory, and often direct us to new discoveries. But my meaning is, that we should not take up any one too hastily ... till we have very well examined particulars, and made several experiments in that thing which we would explain by our hypothesis, and see whether it will agree to them all; whether our principles will carry us quite through and not be as inconsistent with one phenomenon of nature as they seem to accommodate and explain another.(12)
E. Analogy and Probability.
Since the way of certainty and evidence for sensitive knowledge differs from that of intuitive and demonstrative knowledge, "whether [an hypothesis] will agree to them all" requires another measure of agreement than simple identity or diversity. Locke's proposal is probability, which he describes as "the appearance of agreement upon fallible proofs", but defines as "likeliness to be true".(13) He goes on to describe an entire qualitative hierarchy of subjective probabilities arranged from the more certain to the less certain. He also cites "analogy" as "the great rule of probability". Clearly, analogy is a method of inference for reasoning with probability.
I love to argue by analogy. Analogy, however, like induction, is one of those arguments which must be corroborated by experience. Analogy is a favorite of mine because I see its origins in both the rational and empiricist disciplines. The rational source of analogy is the precise mathematical model of ratio and proportion. The ratio of two numbers is analogous to the ratio of the other two numbers; the two pairs are in the same proportion to each other. This illustrates the "pure form" of analogy. The empirical source of analogy is the physical phenomenon of resonance. A motion in one structure stimulates a similar motion in a similar structure. For me, analogy is a perfectly good inference to the best explanation, but with the empiricist caveat: argument must be corroborated by experience.
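The "pure form" of analogy as ratio and proportion can be put in a few lines (a sketch of analogical projection only, not a general inference engine):

```python
# Analogy as proportion: given a : b :: c : d with d unknown, project the
# d that puts the two pairs in the same proportion.

def project_by_analogy(a, b, c):
    """Solve a/b = c/d for d, i.e. d = b * c / a."""
    return b * c / a

# 2 is to 4 as 3 is to what? Since 2/4 = 3/6, the analogy projects 6.
print(project_by_analogy(2, 4, 3))   # 6.0
```

The projection is only as good as the presumed similarity of the two ratios; on the empiricist caveat above, the result still awaits corroboration by experience.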
From the rationalist perspective however, an independent axiom added to a system may be presumed to be in either the form of an affirmation or a denial. Consequently, any conclusion arrived at by analogical reasoning cannot be disposed of without the test of experience; neither, however, can it be given carte blanche acceptance until corroborated by experience. It is inappropriate to claim that both the affirmation and denial may be added to the system and so to yield a contradiction. One doesn't assume the fifth postulate to be simultaneously Euclidean and non-Euclidean.
IV. Computer Science Methods.
As hinted at earlier, my interest is in examining how Locke's theory of knowledge can shed light on current controversies in computer science in general and artificial intelligence in particular. My experience suggests that Locke's theory can be applied with little or no modification as a description of how knowledge is possible for computers.
A. Ontological considerations.
Any consideration of computers as possibly capable of "knowledge" requires suspending our species chauvinistic bias which tends automatically to exclude the possibility for any being not human. I freely admit to reasoning by analogy, and only time and experience will provide the corroboration necessary to vindicate my "belief". Moreover, I expect that there will always be those who will reject my conclusions, even with corroborating "evidence". Reasoning by analogy requires finding "similar" structures to stand as the base for projecting "similar" conclusions. It is plain to me that there is a great deal of similarity in the structure of knowledge based upon ideas as presented by Locke, and "knowledge" based upon representations as implemented in computers.
Computer processing takes for granted that information is represented within the computer. Any such representation is simply a delimited range of characters located in a particular place in memory, whether in active random access memory (RAM) or in "virtual memory", or on some storage device such as a disk or tape. Many "levels" for describing a representation are possible. But, to keep this exposition as simple as possible, and to capture the notion of a "bottom level", or "essential" representation, I will limit my detailed description to the lowest levels of organizing the representation. At the lowest level, a hardware device known as a memory cell stores one "bit" of information. One of these "cells" can have one of two possible states and is like a switch which can be in one of two possible positions, one called "on" and the other called "off". These possibilities are usually written as the binary digits "0" and "1". Higher levels of representation are just composed of sequences or strings of these bits. What these sequences "mean" or represent can usually be taken in two ways--as information or as instructions.
1. Information or data.
Since one bit of information is only capable of making a single distinction, we need to use more than one such bit in combinations in order to distinguish among more possibilities.
We make up a table of various combinations of such binary digits and call them "characters". For example, according to the American Standard Code for Information Interchange (ASCII), the binary digit pattern 1000001 represents the upper case letter "A". The ASCII table defines 128 standard bit patterns to represent 128 specific characters. More complex "representations" are just combinations of these (or other) characters stored in particular locations.
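The ASCII correspondence just described can be verified in a few lines (using a modern language purely for illustration):

```python
# The 7-bit ASCII pattern 1000001 stands for the upper-case letter "A".

code = ord("A")                  # the integer the ASCII table assigns to "A"
bits = format(code, "07b")       # its 7-bit binary pattern

print(code)   # 65
print(bits)   # 1000001

# More complex representations are just strings of such patterns:
word = "AB"
print([format(ord(ch), "07b") for ch in word])   # ['1000001', '1000010']
```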
2. Instructions or code.
Computers do not live by data alone. They also require code. The same pattern of bits that represents an ASCII character can also be an instruction for the computer to perform some operation. A particular pattern of bits is placed in a specific location in the computer. In this case, however, the bits actually turn switches on or off. These switches cause the computer to perform operations. One switch might turn on a disk-drive. A group of 3 switches might select from among 8 storage locations.
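The three-switch selection can be sketched as address decoding (the storage layout is invented for illustration; no particular machine's circuitry is claimed):

```python
# Three switches selecting among eight storage locations: each distinct
# setting of the 3 bits addresses exactly one of 2**3 = 8 cells.

storage = [f"cell-{i}" for i in range(8)]     # eight storage locations

def select(sw2, sw1, sw0):
    """Decode three switch settings (each 0 or 1) into one of 8 locations."""
    address = (sw2 << 2) | (sw1 << 1) | sw0
    return storage[address]

print(select(0, 0, 0))   # cell-0
print(select(1, 0, 1))   # cell-5
print(select(1, 1, 1))   # cell-7
```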
These lowest level "switches" are active in two ways. The switches can be set (to 0 or 1), or the current setting of the switch can be "read". This is a bit of an anthropomorphic doggerel. What actually happens in both cases is the settings of some switches are "copied" into the settings of other switches. Most of these low level devices are actually what the engineers call "tri-state" devices. The third way of functioning for these switches is to be inactive. The inactive or "high impedance" state is a condition whereby the device does not respond to a read request or a write instruction. By setting all but one such device into the high impedance state, the remaining one is "selected". The selected device can be effectively connected to the system while the non-selected devices remain quiescent. The action of some of these switches may be to change which switches are copied from or to in a future step. The organization of particular groups of these switches becomes an important part in the functioning of the computer. Certain groups of switches are identified (by us) as comprising the computer "memory"; other groups are identified by us as comprising the CPU registers, etc.
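Tri-state selection can be sketched in software (a loose model of mine; real tri-state devices are electrical components, not objects):

```python
# A sketch of tri-state selection: every device but one is put into the
# "high impedance" state, so only the selected device answers a read.

class TriStateCell:
    def __init__(self, value):
        self.value = value
        self.enabled = False          # high impedance until selected

    def read(self):
        # A disabled device does not respond; it leaves the bus alone.
        return self.value if self.enabled else None

cells = [TriStateCell(v) for v in (0, 1, 1, 0)]

def bus_read(select_index):
    for i, cell in enumerate(cells):
        cell.enabled = (i == select_index)   # select one, quiesce the rest
    responses = [c.read() for c in cells if c.read() is not None]
    return responses[0]                      # exactly one device answered

print(bus_read(1))   # 1
print(bus_read(3))   # 0
```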
When a pattern of bits is copied from memory locations into one particular computer register, the instruction register, that pattern of bits is interpreted as a machine instruction; it "instructs" the computer what to do next. However, the same pattern of bits may be copied from memory into another register; it could be data in such a case, and exactly what data it was would depend upon a higher level of organization, the program running. Ultimately, it is the program which determines when, in a given context, the pattern of bits being processed "represents" something else directly, indirectly, or not at all. How it decides to make one pattern of bits represent something else is discussed later. What is important to note here is that it must be consistent so that the computer may "recognize" when it has a particular pattern of bits repeated or not. It must be able to "identify" a representation or "distinguish" between two.
3. Identity and diversity.
The problem of identity is handled in the computer in two ways; both are engineering considerations. In order to guarantee that a representation "remains the same", great engineering pains are taken to ensure that the physical property states that the devices register, the binary digits, remain distinguishable by the hardware and do not degrade to the point of preventing the hardware from distinguishing between the two possibilities (0 & 1).
The second consideration, not independent of the first, is that various devices are built to distinguish between the two possible conditions; some devices have each possibility "built in", which we could call "0" identifiers (in the sense that the device "identifies" when a "0" is present) and "1" identifiers, which "identify" when a "1" is present. A third device compares two such representations and "reports" whether the compared representations are the same or are different. Of course, the only representations possible in the system are "0" and "1", so one of these is chosen (by us, the designer) to implement a match and the other to implement a mismatch.
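Such a comparator, with the designer's choice of "1" for a match, can be sketched as (my illustration of the convention, not any specific device):

```python
# A one-bit comparator: it "reports" a match or a mismatch, with "1"
# chosen (by us, the designer) to represent a match.

def identifies_zero(bit):
    """A '0' identifier: responds when a '0' is present."""
    return bit == 0

def identifies_one(bit):
    """A '1' identifier: responds when a '1' is present."""
    return bit == 1

def compare_bit(a, b):
    """Report 1 for a match, 0 for a mismatch (a designer's convention)."""
    return 1 if a == b else 0

print(compare_bit(0, 0))   # 1: the bits agree
print(compare_bit(0, 1))   # 0: the bits differ
```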
When it comes to "identifying" the letter "A", matching seven bits is required, so there are also devices which implement logical relations. The hardware devices implement a set of logical relations sufficient to give us the usual logic. Some built in capabilities include the ability to compare two strings of bits. The results of any such comparisons must be "stored" somewhere so the device can "recall" its particular comparisons.
Since there is no difference between one "0" and another, there must be some way of differentiating among the various instances. This differentiation is achieved by organizing a portion of the storage. One part would be set aside to be the bit patterns of the table of ASCII characters. Keeping track of where such things can be found is instantiated as the responsibility of particular hardware devices. (The so-called "grandmother cell" idea becomes much more tenable in the light of how computers are constructed and work.)
Suffice it to say, that larger and larger "organizations" of basic bit patterns can be assigned to "represent" more complex "ideas". The direct analogy is between Locke's ideas and a computer's internal representations.
The comparison relation I spoke of earlier is the only possible relation implemented in computer hardware. Since any comparisons yield essentially a success or failure, this implements just Locke's "identity" or "diversity". More complex "relations" may be built up of logical combinations of comparisons. Software can organize these comparisons in many ways. At the bottom level of structure, there are devices to perform each of the logically possible comparisons. One can determine whether two bits match, do not match, or whether one bit is "on" or "off". Devices which "report" as output the status of two (or more) input bits are hard-wired to direct their output to specific locations. Some of these outputs control where the input or output of others is directed, and whether an input or an output is to be effected.
Without going into great detail, it is possible to construct these devices so that a general purpose "programmable" device operates on inputs and produces certain outputs. (Runs a program and produces results.) The entire process is accomplished through a small sequence of either/or choices. The choices are to read or write, go forward or backward, and whether to compare or not. The level of organization of such activities that becomes interesting is when groups of representations can be associated with things of interest to us. We "see" words and numbers in the representations.
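A minimal "programmable" device built from just these either/or choices might be sketched as follows (the instruction format is invented for illustration):

```python
# A tiny machine whose only operations are the either/or choices named
# above: write or not, move forward or backward, compare or not.

def run(program, tape, pos=0):
    """Each step is ('write', bit), ('move', +1 or -1), or ('compare', bit)."""
    results = []
    for op, arg in program:
        if op == "write":
            tape[pos] = arg
        elif op == "move":
            pos += arg
        elif op == "compare":
            results.append(tape[pos] == arg)   # report match or mismatch
    return tape, results

tape, results = run(
    [("write", 1), ("move", 1), ("write", 0), ("move", -1), ("compare", 1)],
    [0, 0, 0])
print(tape)      # [1, 0, 0]
print(results)   # [True]
```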
"Coexistence" becomes a very important feature for the proper operation of a computer. The "coexistence" of different bit patterns in a complex representation allows "identifying" that that pattern "coexists" in another representation with other "ideas". The entire structure of ideas within ideas is easily assimilated to computers; blocks of memory containing various representations would be classified as a complex idea. An "organization" of such a block of memory into smaller units would allow "identifying" the simple ideas of which the more complex idea is composed. Key terms from computer science include the notion of a record and a key-field. A record is composed of several fields, and a key-field is used to search for and identify which record is to be worked with. For example, our social security number is often the primary key-field used for distinguishing between persons in data records. The use of the name field for searching is common also, but when the same name appears in two different records, another field, such as address, is required to distinguish between the records. The "coexistence" of different fields in a record directly implements Locke's notion of the coexistence of some ideas in others.
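The record and key-field mechanism can be sketched directly (the sample records are invented for illustration):

```python
# Records as sets of coexisting fields; a key-field singles out the
# record to work with.

records = [
    {"ssn": "111-22-3333", "name": "Smith", "address": "12 Elm St"},
    {"ssn": "444-55-6666", "name": "Smith", "address": "9 Oak Ave"},
]

def find(records, field, value):
    """Return every record in which the given field-value pair coexists."""
    return [r for r in records if r[field] == value]

print(len(find(records, "name", "Smith")))                 # 2: ambiguous
print(find(records, "ssn", "444-55-6666")[0]["address"])   # 9 Oak Ave
```

Searching by "name" returns both records, so another coexisting field, such as the social security number, is needed to distinguish them.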
It is well known now that computers examine relations among data exceedingly well. It has often been said that "These 'mindless calculators' savagely tromp through data at millions of instructions per second." The problem of other minds suggests that we are as unable to dispose of other minds as we are able to confirm them; to arbitrarily assert that humans have minds and computers cannot ever have minds simply ignores the subtleties of the problem by invoking what some have termed "species chauvinism". When we built computers, we built them to be reliable at performing certain operations and to do it FAST. But, we have also discovered that they are capable of much more than anticipated by the structure of their simple parts. (Of course, examining the structure and function of neurons does not predict or anticipate that something made up of these things is capable of An Essay concerning Human Understanding.) We are learning more and more how to devise methods which preserve not only truth-conditional "relations", but probabilistic relations. Modern expert systems do as well as qualified physicians at diagnosing infectious blood diseases (MYCIN), or as well as geologists at predicting underground oil locations (PROSPECTOR). These systems use probabilistic reasoning and what are called non-monotonic logics to apply rules to a data-base and perform "inferences" which are subject to our later corroboration. In most cases, the subsequent success or failure of the prediction is fed back to the system to add "internal" corroboration or disconfirmation of the prediction. My point is that all these high level effects are based upon the simplest of relations -- comparing two representations and the coexistence of simple relations in more complex ones.
The force of an argument by analogy depends directly upon the degree of similarities between the known relations. The greater the "agreement" found between the known comparands, the more likely is the unknown result to agree with the known result. Metaphor is pervasive in our language itself as well as in our communication habits, yet metaphor is just implicit analogy.
A. Intuitive knowledge.
Locke defines intuitive knowledge as the perception of the agreement or disagreement between ideas. A computer discerns the agreement between its internal representations by comparing them bit-by-bit and using the logical relation of "AND" to combine the bitwise matches and "decide" when two representations "agree". "Agreement", in this context, becomes the low level bit-by-bit matching of two representations (whether it be the bit-by-bit comparison of a representation with itself, or the bit-by-bit comparison of one representation with another from some other location in storage).
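One hedge on the hardware detail: per-bit equality is the XNOR relation (AND alone reports agreement only where both bits are 1), so the sketch below, my illustration rather than any particular machine's logic, uses XNOR bit-by-bit and then "AND"s the per-bit verdicts into an overall agreement:

```python
# Bit-by-bit agreement between two representations: XNOR for each bit
# position, AND to combine the per-bit verdicts.

def agrees(rep_a, rep_b):
    """Two equal-length bit strings 'agree' when every bit position matches."""
    per_bit = [1 ^ (a ^ b)                    # XNOR: 1 where the bits match
               for a, b in zip(map(int, rep_a), map(int, rep_b))]
    verdict = 1
    for bit in per_bit:                       # AND the matches together
        verdict &= bit
    return bool(verdict)

print(agrees("1000001", "1000001"))   # True: identity of a representation
print(agrees("1000001", "1000010"))   # False: diversity
```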
B. Demonstrative knowledge.
According to Locke, demonstrative knowledge is agreement found between ideas by means of a sequence of necessary connections between intermediate ideas. Frege made this explicit, and modern logic gives us a very precise definition of a proof which exquisitely captures the sense of Locke's characterization. Little more needs to be said about this similarity because computers are explicitly designed to implement it.
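The chain of "necessary connections" through intermediate ideas can be sketched as repeated modus ponens over conditionals, the pattern computers implement directly. The propositions here are hypothetical placeholders:

```python
# Demonstrative knowledge as a chain of intermediate, necessary steps:
# close a set of known propositions under modus ponens, recording each
# intermediate connection. The propositions are hypothetical.

RULES = {"P": "Q", "Q": "R", "R": "S"}  # P->Q, Q->R, R->S

def derive(known):
    """Repeatedly apply modus ponens until nothing new follows."""
    known = set(known)
    steps = []
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in RULES.items():
            if antecedent in known and consequent not in known:
                known.add(consequent)
                steps.append((antecedent, consequent))
                changed = True
    return known, steps

facts, proof = derive({"P"})
print(sorted(facts))  # the chain runs from P through Q and R to S
```

Each recorded step is one of Locke's intermediate agreements; the whole list is the modern proof.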
C. Sensitive knowledge.
Locke defines sensitive knowledge as the perception of the agreement between ideas and "real existence". Recent research in brain science suggests that some very primitive ideas are "hard-wired" to specific neurons or groups of neurons. While the innate knowledge which depends upon "identifying" these primitive stimuli is technically in conflict with Locke's assertion that there is no innate knowledge, it is of a character that Locke would readily accept. In the case of computers, the "hard-wired" responses are designed in the form of peripheral devices such as keyboards, card readers, and other input devices. Pressing a certain key causes the creation of a certain representation in a particular location within the computer.
The force of an argument by analogy depends inversely upon the degree of difference between the known relations. The greater the "diversity" between the known comparands, the less likely the unknown result is to agree with the known result. It remains for experience to corroborate or disconfirm any projections.
A. Intuitive knowledge.
In humans, intuitive knowledge is limited to the ability to recognize the identity or diversity of ideas. We recognize that an idea is "the same" and identify it; we recognize that two ideas are "diverse" and distinguish them. Locke does not specify how an idea in which other ideas coexist is to be recognized in terms of the ideas which coexist in it. Representations in computer science, however, are precisely defined in this regard. One might naturally expect logical conjunction to be the method, but Locke does not explicitly say so, and neurological studies suggest otherwise. Connectionist sources have suggested that conventional logic can be implemented with underlying network architectures. Exactly how higher level ideas are "composed" of lower level or primary ideas is not known regarding humans or Locke's theory. This portion of the analogy is not filled in.
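The claim that conventional logic can ride on network architectures has a classic minimal illustration: a single threshold unit computing logical conjunction. The weights and threshold below are one standard choice among many:

```python
# A single threshold unit computing logical AND -- the textbook
# illustration that conventional logic can be implemented on
# network-style hardware. Weights and threshold are one standard choice.

def and_unit(x1, x2, w1=1.0, w2=1.0, threshold=1.5):
    """Fire (output 1) only when the weighted input sum reaches threshold."""
    return 1 if w1 * x1 + w2 * x2 >= threshold else 0

# Truth table over all four input pairs.
print([and_unit(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
```

OR and NOT units follow by adjusting the weights and threshold, so in principle any conjunction of coexisting "ideas" can be detected by such units; whether brains actually compose ideas this way is, as noted, an open question.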
B. Demonstrative knowledge.
Since computer programs are explicitly designed to implement our understanding of the process of demonstrative knowledge, there are essentially no differences here.
C. Sensitive knowledge.
When a computer sensor is activated, a series of steps is initiated which results in making available to the central processor a precise representation in a specific location, even though the central processor may not always process the available input. In the human case, much probable knowledge exists, but far from enough to determine whether precise locations play a role in human processing of ideas. Certain brain research results suggest a preferred "front center" orientation for processing low level sensory input. So-called "orienting responses" have been identified with specific brain structures called "ramp architectures", in which disproportionate representation from contralateral and ipsilateral processes creates bilaterally asymmetrical motor responses, resulting in the organism physically turning until the stimulus is in "front center". The process is not altogether unlike phased array radars, whose direction of selectivity is determined by programmable time differences in signal processing. While I have not seen explicit research linking ramp architectures with recall, I conjecture that neuron complexes with such "steerable" selectivity would be eminently suitable as a center for processing ideas.
A significant difference between computers and neurological processing involves a fundamental difference between so-called serial devices and connectionist devices. Information in serial devices is stored in integral units, and these internal representations are physically stored in discrete locations. Connectionist devices, on the other hand, store information in a "distributed" manner. It is not possible to find the "location" of a bit of information; "parts" of the bit are "distributed" throughout the storage area. We are familiar with holograms which do this. Holographic memory theories accord with this kind of structure.
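The contrast can be made concrete with a toy connectionist store. In the Hopfield-style sketch below (my illustration, not drawn from Locke or from any particular neural device), a pattern is held in a matrix of weights built from its outer product: no single weight stores any single bit, yet the whole pattern re-emerges even from a corrupted cue:

```python
# Sketch of "distributed" storage: a pattern is held in a Hopfield-like
# weight matrix built from its outer product. No single weight stores
# any single bit; every bit is spread across many weights.

def sign(x):
    return 1 if x >= 0 else -1

pattern = [1, -1, 1, 1, -1, -1]
n = len(pattern)

# Outer-product ("Hebbian") weights, with the diagonal zeroed.
W = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
     for i in range(n)]

def recall(state):
    """One synchronous update: each unit takes the sign of its weighted inputs."""
    return [sign(sum(W[i][j] * state[j] for j in range(n))) for i in range(n)]

noisy = list(pattern)
noisy[2] = -noisy[2]             # corrupt one "bit" of the cue
print(recall(noisy) == pattern)  # the whole pattern re-emerges
```

Contrast this with a serial store, where flipping the bit at a known address simply destroys that bit; here the information survives local damage because it is spread throughout the weights, much as in a hologram.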
This difference is not fatal to Locke's theory. With very few exceptions, all the neural devices studied have been simulated on underlying serial hardware! Moreover, serial logic has been implemented on underlying neural net hardware as well. This difference would only raise questions about the level of organization at which the appropriate similarity is to be found.
VII. Discussion and conclusion.
It would appear that Locke has not been done justice over the years. Upon a closer look, his system of knowledge appears truly robust even today. The charge that his system leads to solipsism appears to be founded on a failure of his detractors to recognize that Locke himself advocated a different criterion for judging the certainty of knowledge derived from sensory experience. While Locke did grant the highest degree of certainty and evidence to intuitive and demonstrative knowledge, he certainly did not exclude knowledge derived from sensory experience, as those who charge him with solipsism allege. Locke did suggest that knowledge derived from such experience could not extend to general truths; this suggests that any such general "beliefs or faith" which were derived from experience could not attain the same degree of certainty as that obtainable with intuitive and demonstrative knowledge. Locke implied, in his discussion of analogy and probability and his discussions of the origin of error, that we should continually seek to improve the degree of certainty of any such general "truths" derived from sensitive knowledge. He clearly placed much "beyond the reach" of certain knowledge, but he does not, as the metaphysicians would, dismiss "knowledge" having less than perfect certainty.
A careful reading of Locke suggests that he would not have a strictly univocal use for the terms 'truth', 'knowledge' and 'certainty'. He explicitly advocates different "ways and degrees of evidence and certainty" for the different kinds of knowledge. I agree. Rigorously limiting the use of the word 'knowledge' to that which entails the kind of "truth" which is "infallible" seems to me to deny careful epistemologists the use of perfectly good words. We can relax our degree of certainty from perfect when it comes to our knowledge of the external world. What we hold as "truths" about the world, though still "knowledge", lacks the perfect certainty of intuitive or deductive knowledge. As Bruce Aune puts it:
"Ideally, our well-confirmed theories should contribute to our understanding (to the fallible picture we possess) of what our world is like and how it is put together. But the ontological import of a given theory should be ascertained or estimated by an analytical process in which entities are not multiplied beyond necessity and in which that theory is related to other theories--the ones we have, and the ones we are beginning to have."(14)
We could advocate the opposite extreme. Eliminate all "certain" knowledge and truth as ideas whose time has long gone. What would remain would be knowledge with less than perfect certainty, and, consequently, less than perfect uncertainty as well. But that would abuse Locke in the opposite direction. Let's keep perfect knowledge in matters of demonstrative logic and counting, and probable knowledge in matters of experience and measuring. In the pursuit of truth, let us all hope for the serenity to accept perfect knowledge, the courage to update uncertain knowledge, and the wisdom to discern the difference.
This page is maintained by Ralph Kenyon.