Perception, Representation, and Reference
Some Thoughts on an Essential Structure

A Thesis Presented

by

Ralph E. Kenyon, Jr.

Submitted to the Graduate School of the
University of Massachusetts in partial fulfillment
of the requirements for the degree of

MASTER OF ARTS

September 1987

Philosophy Department

© Copyright by Ralph E. Kenyon, Jr. 1987

All Rights Reserved

Order No. 1332173, UMI, 300 N. Zeeb Rd., Ann Arbor, MI 48106


ABSTRACT

Perception, Representation, and Reference:
Some Thoughts on an Essential Structure

SEPTEMBER 1987

Ralph E. Kenyon, Jr., B.A., B.S. Miami University,
M.A. Pepperdine University, M.S. Old Dominion University

M.A. University of Massachusetts

Directed by: Bruce Aune

The terms 'reference', 'representation', and 'perception' have not been used univocally. This thesis provides a new theory which explains reference, representation, and perception by showing that each has primary and derived forms, related more-or-less recursively.

The primary forms are located in a simple model which is based on the structure of a computer. The structure presented is a 'minimal' model, that is, the smallest structure in which the simplest kinds of reference, representation, and perception occur.

An analogical relation constructed between the simple model and a person in the world provides an account of the simplest forms of each and an explanation of the derived forms. The relation presented sheds light on our corresponding ordinary notions and shows that they can all be explained or accounted for entirely within the cause-effect paradigm.

In effect, this thesis presents the 'essence' of each of reference, representation, and perception.

Approved as to style and content by: Bruce Aune, Director; Gary Hardegree, Member; and Michael Jubien, Philosophy Department Head


Preface

When I began to study philosophy, I found many terms very perplexing. The way philosophers were using terms just didn't fit with my experience. As a newcomer to philosophy, I could not be expected to have a clear sense of the terms in question. On the other hand, my background was not like that of the typical philosophy grad student, fresh from the formative experience of undergraduate philosophy courses and without significant exposure to other disciplines. I came to philosophy after a twenty-year career in the management of technical scientific and engineering information and operations. I had already completed master's degrees in both management and computer science and had become a 'de facto' interdisciplinarian. It was natural that I turned to the study of philosophy, the forerunner of interdisciplinary studies.

My main purpose has been and will probably always be to force a high degree of coherency and consistency into my world-model. Although I started with the so-called 'hard' sciences of mathematics and physics, my main interest has been in the development of a model of thought and understanding, and the present work provides foundations for that effort.


Acknowledgements

I wish to acknowledge the mentorship of Bruce Aune, without which this project could never have been completed. While we have differences, I have felt a kinship to his brand of philosophy. Bruce's many hours of patient discussion have guided me to where I am now. I wish also to thank Gary Hardegree for his assistance and patience in the development of this work.

I am also indebted to A. E. van Vogt, whose novel The World of Null-A first introduced me to general semantics and Alfred Korzybski. It was through the medium of Korzybskian general semantics and its emphasis on Karl Popper that I became intensely interested in the philosophy of science and philosophy in general.

I wish also to acknowledge the influence of the writings of Russell Ackoff, Nina Bull, Patricia Churchland, Fred Dretske, Fred Emery, Douglas Hofstadter, Thomas Kuhn, and Karl Pribram, all of whom have contributed significantly to my present view.

Finally, I wish to thank Virginia Sturtevant for her effort in editing this document and her patience with me through the development of this thesis.


TABLE OF CONTENTS

Preface
Acknowledgements
LIST OF ILLUSTRATIONS
CHAPTER I - INTRODUCTION
CHAPTER II - A NEW LOOK AT REFERENCE IN THE LIGHT OF MODERN COMPUTER SCIENCE KNOWLEDGE
A. The Computer-World Analogy
1. A Simple Being in a Simple World
2. Corresponding Things with Memory
3. Corresponding People with CPU's
B. The Computer Model in Detail
1. Hardware Components in Detail
2. The CPU Interface to its Outside
3. A Language to Describe the Happenings in the World
4. Additional Partial Structures
5. Some Additional Considerations
C. Where Reference Fits In
1. Distinguishing Between Derived and Primary Forms of Reference
2. What Happens During Memory Reference
3. The Reduction of Reference to Cause-Effect
D. Where Representation Fits In
1. Distinguishing Between Primary and Derived Forms of Representation
2. What Happens During Representation
3. The Reduction of Representation to Cause-Effect
E. Where Perception Fits In
1. What Happens During Perception
2. Locating the Phenomenological Perspective
3. Where 'Qualia' Fit In
F. A Review of the Essential Structures Involved
1. Selection by Cause and Effect
2. Reference as Selection
3. Copying by Cause and Effect
4. Representation as Copying
5. Comparing by Cause and Effect
6. Recognition as Comparing
7. Perception as Cause and Effect
G. Generalizing the Essential Structure
1. Reference
2. Representation
3. Perception
CHAPTER III - CONCLUSIONS
A. Reference is Reducible to Cause and Effect
B. Representation is Reducible to Cause and Effect
C. Perception is Reducible to Cause and Effect
REFERENCES
BIBLIOGRAPHY

LIST OF ILLUSTRATIONS

Figure 1 - A One Bit memory cell constructed with a simple latch.
Figure 2 - A One Bit memory cell constructed with a latch and a driver.
Figure 3 - A One Bit memory cell constructed with a latch and a driver using a single control line.
Figure 4 - A One Bit memory cell with an enable line to control access.
Figure 5 - Two One Bit memory cells with different addresses.
Figure 6 - The One Bit CPU interface to its outside world.
Figure 7 - The One Bit CPU interface and connected Memory Cells.
Figure 8 - The One Bit CPU with its memory cells and an enable line.

CHAPTER I

INTRODUCTION

Perception, Representation, and Reference:
Some Thoughts on an Essential Structure

or

Can philosophers afford to let computer
scientists solve a 2500-year-old problem?

The growth of scientific knowledge has occurred at an erratic pace.  Periods of relative stability, in which it was believed that the main structures of knowledge had been found -- that it remained only to 'fill in the details' -- were interspersed with periods of transition and change in which the models which guided the search for new knowledge were themselves ripped asunder.  Thomas Kuhn has pointed out that these 'paradigm shifts' occurred as the burden of anomalous bits of information overwhelmed a standing paradigm and required a significant reorganization of the theory, sometimes even to the point of changing the meaning or sense of major terms.[1]  It is just such an accumulation of information that is 'at odds' with traditional philosophical views and that requires a shift in our use of the terms 'refer', 'represent' and 'perceive'.

A certain insulation exists between diverse fields of study by virtue of the compartmentalization which has occurred. Progress in one field often goes unnoticed by another field for an extended period of time.

I have been informed that there are many philosophers who take pride in not knowing what modern science has to offer.  I find such an assertion amusingly depressing in view of the fact that philosophy means love of wisdom, or love of knowledge, and that the philosophical tradition has been the search for knowledge.

It requires an interdisciplinary focus to correlate the knowledge in different fields.  Upon us today are the interdisciplinary fields of cognitive science and artificial intelligence.  Philosophy needs to take notice of the growth of knowledge in those fields, and not from just a 'territorial' perspective.

Many terms are common to these areas and philosophy.  However, the use to which philosophers put many of these terms is often at odds with their use within the scientific and engineering universes of discourse.  While reading philosophical materials and engaging in discussions with philosophers, I continually encountered anomalies in the usage of significant terms.  What the philosophers wrote and said simply did not make sense in the context of my experience.

I was experiencing the same kind of thing that Thomas Kuhn wrote about in the case of philosophers reading ancient texts.[2]  My paradigm was different and I had to learn the paradigm of the philosopher in order to understand what was written and said.  I finally realized that philosophy is not a single consistent whole. Philosophy is more like management than science; it is an eclectic collection of inconsistent points of view, whereas science strives to be a single consistent whole.

Don Kerr said that, in using symbols, we evoke the experiential elements of the listener.[3]  The listener brings his experiential elements to the symbol heard.  These experiential elements are not the same as those of the speaker. If the listener has not experienced the activity the speaker seeks to elicit, then the experiences must be provided before understanding is possible.

The Sufis say that someone cannot understand something on the first exposure.[4]  Nevertheless, one needs to be exposed to it so that one will be capable of understanding it at a later time when one encounters it again.  So, too, is it with philosophical terms.

Often a philosophical term is used in many ontologies with slightly different senses in each.  There is often enough overlap so that a statement made from the perspective of one speaker having one ontological commitment seems acceptable from the perspective of another speaker having another ontological commitment.  The speaker and the listener both agree with the statement, but each has a (slightly) different meaning for it and neither considers that the other might not mean what (s)he thinks.  Many statements are required to specify an ontological commitment (often volumes are not enough).

I think that the related terms "perception", "representation", and "reference" all share the notion of "aboutness".  The basic ways these terms are used can be accounted for without any special relation; aboutness can be reduced to cause and effect.  Showing the reduction requires elaborating its structure.  To illustrate that structure I will provide the experiential elements necessary and the initial exposure needed for later understanding.  Fitting the structure together requires a shift in the meanings of the terms 'perception', 'reference' and 'representation'.

Examining the necessary shift in meaning requires a more than superficial examination of the usages in the computer science domain.  In this thesis I shall examine a structure which seems essential to reference, representation, and perception.

The structure is motivated by the paradigm of 'hard' science, in which the only connections that can occur between things are via an exchange of physical energy.  It is necessary to frame our inquiry in terms of this paradigm.

My intent here is to present a 'minimal' model.  That is, I will describe the smallest structure in which the simplest kinds of reference and representation occur, and provide an account for the simplest kind of perception.  The model presented will shed light on our corresponding ordinary notions and show that they can all be explained or accounted for entirely within the cause-effect paradigm.  In effect, I will present the 'essence' of each of reference, representation, and perception.

Of course, selecting a thing presumes that it is distinguished from other things.  I do not intend to solve the problem of identity here; likewise I do not intend to resolve the difficulties which are associated with the distinction often drawn by philosophers between form and substance.  Let us not be sidetracked in our discussion of aboutness with issues central to other problems.

By assuming the form/substance distinction and identity, we can distinguish between energy and information.  Information can be reproduced; energy cannot.  Energy may be required to 'make a copy', but the energy in the copy is different from the energy in the original, whereas the information in the copy is the same as the information in the original.  These notions I take as given.

The structure in which I illustrate the reduction of aboutness to cause and effect is motivated by the progress of modern computer science.

Let the world be divided into a central processing unit (CPU) and one or more peripheral processing units (PPU).  Let these units be connected by signal lines.  Let the signal lines be divided up into three types: address lines, data lines, and control lines.  Let the signals be of the simplest kind, a simple selection between two kinds, high and low.  Call the high "1" and the low "0".  This is the form of the device.

In the manufacture of such a device, the high and the low apply to voltages.  Voltage differences cause currents to flow. Changing currents cause magnetic fields to change.  Changing magnetic fields induce voltages in conductors.  By using the appropriate construction techniques, devices can be manufactured in which a change in one parameter at one point 'causes' a change in another parameter at another point by a long series of small cause-effect changes using just the mechanisms cited.  It is the architecture of the system, the laying down of conductive paths, which determines, by design, what parameters respond to which other ones.  For our purposes we only need to worry about a high or a low voltage at one point being ultimately caused by a high or low voltage at another point.  This is the substance of the device.

The form is controlled by the particular configuration of structures in the substance.  The propagation of changes throughout the substance determines the form (and hence the abstract device) of which the substance is a particular instantiation.

It is my intent to present a structure in which reference is reduced to cause and effect.  To do this I must presume the ancient distinction between form and substance.  Its present instantiation is that of the distinction between energy and information.  Information can be copied, while energy cannot. Information requires energy, but is not energy.  Whatever in the copy is also in the original is its form.  Whatever in the copy is not in the original is its substance.  Accounting for this distinction is beyond the scope of the present work.


CHAPTER II

A NEW LOOK AT REFERENCE IN THE LIGHT OF
MODERN COMPUTER SCIENCE KNOWLEDGE

The growth of knowledge follows an exponential curve.[5]  Each new bit of information must be correlated with all the previous bits. However, these correlations take time and are not all done immediately upon the creation of a new bit of information.

We have seen a compartmentalization of knowledge into fields.  The correlation of new bits of knowledge across the boundaries separating fields has been sporadic at best.  The phenomenal growth of knowledge in the young field of computer science has resulted in much new information which has not been thoroughly correlated with the older and more traditional fields of thought.  My aim here is to make one such correlation.

Philosophers have had some interest in the correlation of knowledge in one field with that of another.  A particular form of correlation is the reduction of the knowledge in one field to that of another.  Such a reduction proceeds by mapping items (terms and statements) in one field to items in the other, and showing that the same relations hold among both sets of items. The more specific knowledge is said to be reducible to the more general case.  As an example, the laws of optics are reducible to the laws of electromagnetism.[6]

In some cases the knowledge to be reduced may not have been explicated to the level of detail of the reducing knowledge.  The more detailed structure provided in the reduction can be said to 'fill out' weak or unknown areas in the reduced knowledge structure, or even to provide missing 'explanations'.  In any case, the reduction proceeds by an appropriate mapping from selected items in one area to selected items in the other area. A weaker form of such a mapping is analogy.  In analogy the mapping from the terms of one area to the terms of the other area may be less well defined or less compelling.  My thoughts presented here flow from the mapping of two structures.  I shall make an analogy, which I hope is compelling enough to serve as a reduction.

A. The Computer-World Analogy

I make an analogy between a computer and the world.  The parts of the computer correspond to the parts of the world and the connections between the parts of the computer correspond to the relations between the parts of the world.  A certain amount of care must be exercised to prevent the technical meaning of certain terms from confounding the analogy.  "Memory", for example, is used to describe event-structures in people; it is also used to describe parts of a computer.  In my analogy, the central processing unit (CPU) corresponds to a person and computer memory locations correspond to places in the world (at which a person could look).  We must guard against thinking of these memory locations as being within a person.  To motivate my analogy I shall describe a simple being in an equally simple world.

1. A Simple Being in a Simple World

Carl Philip Ursa is a strange creature who inhabits an equally strange world.  Carl is a creature who has one hand and one eye.  Unfortunately, his eye is on the back of his hand, so he cannot see what he is doing.  He can either do something with his hand, or turn it around and look at his handiwork.

He lives in a simple world.  The world has two places; they are here and there.  The world has two colors; there is black and there is white.  The only things Carl can do are to paint here or there white or black, or to look and see what color is here or there.

As the greatest mathematician in his world, Carl has figured out that he can represent the places in his world by the digits "0" and "1".  He has also adapted this scheme to represent the colors with the same digits.  Finally, he has generalized this scheme to describe whether he is painting or looking.

It didn't take him long to figure out that his many actions could be expressed as a 3-digit binary number.  He used the first digit for the location, the second digit for his action, and the third digit for the color.  "111" represented seeing black here, "101" represented painting black here, "110" represented seeing white here, "011" represented seeing black there, and so forth.
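
The scheme is easy to mechanize.  The following sketch (mine, not part of the thesis) decodes Carl's 3-digit numbers in Python, assuming the location-action-color digit order just described:

    # Decode Carl's 3-digit binary numbers (digit order: LAC).
    LOCATION = {"1": "here", "0": "there"}
    ACTION = {"1": "seeing", "0": "painting"}
    COLOR = {"1": "black", "0": "white"}

    def describe(code):
        loc, act, col = code                    # unpack the three digits
        return f"{ACTION[act]} {COLOR[col]} {LOCATION[loc]}"

    print(describe("111"))   # seeing black here
    print(describe("011"))   # seeing black there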

2. Corresponding Things with Memory

The 'things' in Carl's world consist only of the places here and there.  In the analogy between Carl's world and a computer, these two locations correspond to computer memory locations.  The colors white and black correspond to the contents which could be stored in a computer memory location, namely 0's and 1's.

3. Corresponding People with CPU's

Carl himself corresponds to the CPU in the computer. Painting corresponds to writing and looking corresponds to reading.  When Carl looks at a location, his act corresponds to the CPU reading the corresponding location.

The level at which the analogy becomes useful involves a finer sub-division than heretofore described.  However, detailed knowledge of our human processing of information is less exact, and the subject is greatly complicated by its sheer complexity.  There is, however, ample knowledge available in the computer and engineering fields to provide a more fine-grained structure on the computer side of the analogy.  It is my hope that this more finely structured knowledge will actually shed light on our own processes, and ultimately, give us a reductionistic account of reference.  Let us proceed by examining that structure in more detail.

B. The Computer Model in Detail

In the world of the CPU, its memory devices, and interconnections, cause-effect connections are clear and easily described.  The structure, the terms which describe it, and how it functions may be unfamiliar to most philosophers.  Also, computer scientists have not taken the time to describe the essential features of the structure which bear on philosophical considerations; they are too busy with the engineering aspects of the design.  Computer scientists and engineers are more interested in making the structure do things than in examining how its abstract structure can serve as a model.

I have the experience with computers and engineering and I have the philosophical interest to examine the ramifications. Loosely, I shall show how the CPU 'refers' to memory devices and how that reference culminates in the CPU representing the data of that memory location.

Examining the computer model in more detail requires examining each of the components and the interconnections in more detail.  So far we have a CPU and two memory locations, each of which can be written to and each of which can be read from.  Each memory location can store only one bit of information (corresponding to the colors white and black).  Distinguishing which of the two memory locations to select requires one bit of information.  Distinguishing which action the CPU takes, read or write, requires one bit of information.  Following Carl's lead, if we use 'A' for the action choice, 'C' for the color selection, and 'L' for the location choice, the three bit binary number 'LAC' describes the possible choices.

(L)ocation        (A)ction        (C)olor
1 = here          1 = read        1 = black
0 = there         0 = write       0 = white

So far, I have been describing the computer in terms of its "form" or logical structure.  0's and 1's are symbols which don't exist in the physical structure of the computer.  0's and 1's here just mean low and high voltages in an electrical circuit. From a Texas Instruments component specification (data sheet), the low voltage is from 0 to .4 volts and the high voltage is from 2.4 to 5.5 volts.  When we speak of a CPU writing to or reading from a memory location, we are in effect speaking about the changing of voltages in various parts of the system.

1. Hardware Components in Detail

The physics of electromagnetism is assumed here; only a few qualitative relations are required.  Voltage differences cause currents to flow.  As currents start and stop, magnetic fields arise and collapse.  Changing magnetic fields induce voltages. By the appropriate construction techniques, devices can be manufactured in which a change in one parameter at one point causes a change in another parameter at another point by a long series of small cause-effect changes using only the mechanisms cited.  It is the architecture of the system, the laying down of conductive paths, which determines, by design, what parameters respond to which other ones.  The design of these systems can be fully realized with appropriately timed, electrically operated switches.

The "zeroth" generation computers were solenoid control circuits; the first generation computers were made with vacuum tubes; the second generation computers were made with transistors; the third generation computers were made with integrated circuits; and forth generation computers are made with large scale integrated (LSI) circuits.  At each stage "switching" devices were smaller, using less power, and becoming more densely packaged.  They are also better understood.  For my purposes the only point to keep in mind is that of a high or a low voltage causing another high or low voltage.  This is the substance of the device.

The form is controlled by the particular configuration of structures in the substance.  The propagation of changes throughout the substance determines the form (and hence the abstract device) of which the substance is a particular instantiation.  (I accept and use the ancient form/substance distinction without trying to resolve its difficulties.)

Thus far I have said that the CPU can read or write.  Our CPU can select one of two memory devices; each memory device needs to detect whether it is the selected device.  A memory device also needs to detect which action is specified, read or write.  The CPU controls the action, providing the value to be written and receiving the value read.  When the CPU writes, the memory device must receive the value provided by the CPU and must store it.  When the CPU reads, the memory device must provide the value stored back to the CPU.  These requirements determine the form the device must take.  Also, the nature of the interconnection of the memory device with the CPU needs explication.

Thus far I have spoken of the device level of structure and cited a need for a more detailed level of description.  To avoid confusion I shall call sub-structures "components".

To construct a memory device certain components are required.  Components have inputs and outputs.  The inputs and outputs can be high or low voltages ("logic 1" or "logic 0" state).  Most components also have a third state, called the 'high impedance' state, which, as they say in the trade, can be connected to its input or output (or both).  One can think of such a device as having three internal connections for each output line; one connection is to a high voltage source, the second connection is to a low voltage source or 'drain' (ground), and the third connection is to a "Y" resistor network which has large values of resistance in legs connecting to both the high and low voltage sources.  It is electrically equivalent to disconnecting the output from both the high and low voltage sources.

About the Inputs and Outputs

Components' outputs are capable of "driving" a line up to a high voltage, or down to a low voltage.  Connecting the output to a power supply drives it up; connecting the output to a ground drives it down.  Obviously, conflict can occur if two devices' outputs try to drive a line in opposite directions (something design engineers are paid to prevent).  Inputs are usually designed to require very little actual current in operation (industry competition leads to a supply of components with lower and lower power requirements).  When the high impedance state is connected to the input of a component, the component does not respond to voltages on the input line; the component's input is, in effect, disconnected.  Similarly, when the high impedance state is connected to the output of a component, the component does not drive the output line -- in effect, disconnecting the output.  Connecting the high impedance state is like opening a switch connecting the device, thereby "disconnecting" it.  Most components have a special (designated) input line which determines when the high impedance state is selected; this line is called the 'chip enable', or just plain 'enable' line.  When the enable input is inactive the component will be connected to the high impedance state.

To keep things simple, I shall use 'positive logic' components only.  In such components a high voltage instantiates a '1' or a logical 'true'.  Accordingly, a component is placed in the high impedance state (disconnected) when the enable line is low, or logic '0'.  A high or '1' on the enable line, in effect, connects the component.

Now the core of a memory device is the component which 'remembers' the value written into the memory device.  That component must hold its value (until a change is enabled).  Such a component is a simple latch.

A Latch

A latch is a component which has an input line, an output line, and an enable line.  The high impedance state is connected to the input of the component when the enable line is not high. The latch drives its output all the time.  When the enable input is raised, the value of the output is set to the value of the input.  When the enable line is lowered again the value of the output line remains what had been the value of the last input.

Writing into a memory device ultimately enables a storage latch.  The value of the line connected to the input of the latch must be set high or low before the component is enabled.  Then, when the enable line is activated, the output value of the latch is set to that input value.  See figure 1.
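
To fix ideas, here is a minimal sketch (my own, in Python, not a circuit from the thesis) of the latch just described: the output follows the input while the enable line is high, and holds its last value when the enable line drops.

    class Latch:
        def __init__(self):
            self.output = 0                  # driven all the time

        def tick(self, data_in, enable):
            if enable == 1:                  # enabled: follow the input
                self.output = data_in
            return self.output               # disabled: hold the last value

    latch = Latch()
    latch.tick(1, enable=1)                  # write a 1 into the latch
    print(latch.tick(0, enable=0))           # prints 1: the value is held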

Of course, a memory device which is always providing its value is not very practical.  We need a component to disconnect the value of the storage cell.  Such a component is a driver.

A Driver

A driver is a component which has an input line, an output line, and an enable line.  The high impedance state is connected to the output of the component when the enable line is not high. The driver responds to its input all the time.  When the enable line is raised, the value of the output is set to the value of the input.  When the enable line is lowered again the output is connected to the high impedance state.

In constructing a memory device, the output of the storage latch is connected to the input of the driver.  When the memory device is read, the driver is enabled.  See figure 2.
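
A corresponding sketch of the driver, modeling the high impedance state as None (a modeling convenience of mine, not a third voltage):

    class Driver:
        def tick(self, data_in, enable):
            # Enabled: drive the line with the input value.
            # Disabled: high impedance -- the line is not driven.
            return data_in if enable == 1 else None

    driver = Driver()
    print(driver.tick(1, enable=1))          # 1: driving the line high
    print(driver.tick(1, enable=0))          # None: disconnected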

So far our memory device has an input line which is connected to the input of the latch, an output line which is connected to the output of the driver, and two other lines.  One is connected to the latch enable and allows writing to the memory cell; the other is connected to the driver enable and allows reading from the memory cell.

Since the CPU can either read or write but not do both at the same time, the logical relation between the value of read and write is complementary.  Recall that Carl's representation system has one bit to represent the CPU action; 1 controls read and 0 controls write.  If we connect the line which conveys this bit of information to the driver enable line, the memory device will respond with its contents whenever the CPU signals a read by setting the control line to 1.  However, we cannot connect the same line to the enable of the latch, since the latch would then respond to the 1 as well, resulting in a write into memory at the same time.  The latch needs to be disabled when the control signal is a 1.  It also needs to be enabled when the control signal is a 0.  If a 1 signals a read, then its complement, a 0, signals a write, and vice versa.  To accomplish this we need another simple device, an inverter.

An Inverter

An inverter is a component which has an input line and an output line; it has no high impedance state.  An inverter responds to its input all the time and drives its output all the time.  When the input line is raised, the value of the output is set to the low value.  When the input line is lowered the output is set to the high value.  An inverter instantiates the logical NOT relation and is sometimes called a NOT gate.

Accordingly, we can add an inverter into our simple memory device; we connect the control line to the input of the inverter and the output of the inverter to the latch enable line.  Now, a logic 1 on the control line enables the driver resulting in a read, but because the inverter inverted the 1 to a 0, the latch is not enabled.  Similarly, a logic 0 on the control line fails to enable the driver, but because the inverter inverted the 0 to a 1, the latch is enabled, resulting in a write.  See figure 3.
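
Putting the three components together gives the single-control-line cell of figure 3.  The sketch below (my reconstruction, reusing the latch and driver models above) shows that a 1 on the control line reads and a 0 writes:

    def NOT(x):
        return 1 - x

    class Latch:
        def __init__(self):
            self.output = 0
        def tick(self, data_in, enable):
            if enable == 1:
                self.output = data_in
            return self.output

    class Driver:
        def tick(self, data_in, enable):
            return data_in if enable == 1 else None   # None = high impedance

    latch, driver = Latch(), Driver()

    def cell(control, data_in):
        stored = latch.tick(data_in, NOT(control))    # a 0 on the control line enables the latch (write)
        return driver.tick(stored, control)           # a 1 on the control line enables the driver (read)

    cell(0, 1)           # write a 1; the driver stays disconnected
    print(cell(1, 0))    # read: prints 1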

This device has one drawback as it now stands.  It is always in the read or write mode.  Our system will have two such memory cells and one must be inactive when the other is active.  The present design does not have the equivalent of the high impedance state which is available at the component level.  We need to be able to fix it so that a memory cell can be quiescent.

We will need the equivalent of an enable line which will turn the device on to be read or written to, but otherwise turn the device off.  In other words, we want to enable the device whenever it is to be read or written to and when a device enable line is activated.  To do this we need another component.  Such a component is an AND gate.

An AND gate

An AND gate is a component which has two input lines and one output line.  The AND gate drives its output all the time.  When either input is raised, the value of the output is set to the value of the other input.  When either input line is lowered the value of the output line is lowered.

To improve our memory device we add a device enable line. An AND gate connecting the enable line is inserted in the control line connecting to the driver so that the output of the driver is enabled by a combination of BOTH the enable line being 1 and the control line being 1.  Similarly an AND gate connecting the enable line is inserted in the inverted control line connecting to the latch so that the input to the latch is enabled by a combination of BOTH the enable line being 1 and the control line being 0 (the output of the inverter being 1).  In this manner our memory device is activated only when the device enable line is raised.  See figure 4.

This simple memory device is not quite adequate for our purposes yet.  We need two such devices, one for each location. Moreover, the two devices must not respond at the same time. Only one or the other can respond.

In Carl's representation system, the location digit represents the location selected.  Since one memory location is to be activated by a '1', and the other memory location is to be activated by a '0', we can use another inverter in one memory cell to invert the memory device select information bit.  Our device enable line activates one memory location when it is 1 and activates the other location when it is 0.  Since this line selects between locations, it is called the address line.  In one memory cell the address line is connected directly to AND gates connected to the latch and driver.  In the other memory cell the address line is first inverted before connection to AND gates connected to the latch and driver.  See figure 5.
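
The complete two-cell arrangement can be sketched as follows (again my reconstruction, with the high impedance state modeled as None); cell 0 inverts the address line, cell 1 does not:

    def NOT(x): return 1 - x
    def AND(a, b): return a & b

    class MemoryCell:
        def __init__(self, invert_address):
            self.value = 0
            self.invert_address = invert_address     # True in cell 0 only

        def tick(self, address, control, data):
            select = NOT(address) if self.invert_address else address
            if AND(select, NOT(control)):            # selected, control 0: write
                self.value = data
            if AND(select, control):                 # selected, control 1: read
                return self.value                    # drive the sense line
            return None                              # high impedance

    cells = [MemoryCell(invert_address=True),        # cell 0
             MemoryCell(invert_address=False)]       # cell 1

    def reference(address, control, data=0):
        responses = [c.tick(address, control, data) for c in cells]
        return next((r for r in responses if r is not None), None)

    reference(address=1, control=0, data=1)          # write 1 'here'
    reference(address=0, control=0, data=0)          # write 0 'there'
    print(reference(address=1, control=1))           # read cell 1: prints 1
    print(reference(address=0, control=1))           # read cell 0: prints 0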

2. The CPU Interface to its Outside

Equally important is the interface between the CPU and the memory devices.  An explanation of the entire structure of the CPU is beyond the scope of this paper, but it is necessary to describe part of its structure.

The interface between the CPU and its memory devices is controlled by components, and the design of the CPU requires some interconnection between these components.  I won't go through the stages of building the interface in the same manner that I did with the memory cell.  The evolution of the CPU through these stages is not important enough to warrant the time and space. Suffice it to say that the important aspect of the CPU for our purposes here is the structure of its interface to the outside. While that interface is part of the structure of the CPU, its design makes communication with the memory devices well defined. It has both efferent and afferent processes in its connections to the outside and in its connections that go deeper into the structure of the CPU.

On the Memory Side of the Interface

Efferent connections include the data line by which the CPU writes into memory, the address line by which the CPU selects which memory is to respond, and the control line by which the CPU signals its intended action to read or write to memory.  The one afferent connection is the sense line by which the CPU receives the contents of memory during a read.

On the Inside of the Interface (Deeper in the CPU)

Efferent lines include an address selection line, a control selection line, and a data selection line.  Afferent lines include an address status line, a control status line, and a data status line.  The structure of the interface can be thought of as a 3 bit control register and a 3 bit status register.  To use the interface, the control register is loaded with the desired control word or the status register is sampled.  The status register contains the information necessary to determine what the last action was.  It contains a 'record' of the values of the address, control, and data (or sense) lines.  See figure 6.
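
As a rough sketch (my reading of figure 6, with hypothetical names), the interface can be modeled as two 3-bit registers:

    class CPUInterface:
        def __init__(self):
            self.control = {"A": 0, "C": 0, "D": 0}   # loaded from deeper in the CPU
            self.status = {"A": 0, "C": 0, "D": 0}    # sampled from deeper in the CPU

        def load(self, address, control, data):
            # Drive the address, control, and data lines to memory.
            self.control = {"A": address, "C": control, "D": data}

        def record(self, sense):
            # Record the last action; on a read (C = 1) the data/sense
            # bit holds the value that came back on the sense line.
            c = self.control["C"]
            self.status = {"A": self.control["A"], "C": c,
                           "D": sense if c == 1 else self.control["D"]}

    interface = CPUInterface()
    interface.load(address=1, control=1, data=0)   # set up a read of location 1
    interface.record(sense=1)                      # the sense line came back high
    print(interface.status)                        # {'A': 1, 'C': 1, 'D': 1}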

Addressing Memory Locations

In the number 1 memory device the address line is connected to the AND gates connected to the latch and driver.  Setting the address line to 1 presets both these gates.  This will result in the number 1 memory device responding.  The value of the control line determines which of these AND gates is activated.  Since the address line is inverted in memory device 0, the corresponding AND gate in that memory cell will not be preset.  Therefore, a high on the address line will be passed through only in memory device 1.  Had the address line been set to 0, the inverter in memory device 0 would have had a 1 as its output presetting the AND gates in that memory device.  Setting the address line low or high selects which memory device is to respond.  See figure 7.

The control line must be set to 0 or 1 depending upon whether the activated memory device is to accept or return data. If the control line is set to 1, the value of the data line does not matter, but if the control line is set to 0, the data line must be set to 0 or 1.

Storing Data in Memory

To store data in memory the data line must be set to the datum to be stored (0 or 1).  The memory device must be selected by setting the address line to 0 or 1.  And the control line must be set to 0 (write).  Since the control line is set to 0, the AND gate connected to the driver will not be preset and its output will remain low.  As a result, the driver will not be enabled. However, the 0 on the control line is inverted before going into the AND gate connected to the latch.  With a 0 input to the inverter, the output will be 1, presetting the AND gate connected to the latch.

The address line will be high or low.  In memory cell 1 a value of 1 will activate the preset AND gate.  In memory cell 0 a value of 0 will be inverted and its output, 1, will activate the preset AND gate.  In either case the AND gate output will be 1 and the latch will be enabled.  The enabled latch sets its output to the value of the data line.  When the address line drops, the latch continues to hold the value it just received from its input, the data line.

Reading Data from Memory

To read data from memory the memory device must be selected by setting the address line to 0 or 1.  The control line must be set to 1 (read).  Since the control line is set to 1, the AND gate connected to the driver will be preset.  In the selected memory cell the AND gate output will be 1 and the driver will be enabled.  The enabled driver sets its output to the value of the latch output.  Since its input is being held high or low by the output of the latch, this value will be placed on the sense line.  Again, the 1 on the control line is inverted before going into the AND gate connected to the latch.  With a 1 input the output of the inverter will be 0 and the AND gate connected to the latch will not be preset.  Since that AND gate was not preset, its output stays low, thus not enabling the latch, so its value remains unchanged.

Enabling Action

In actual practice, an additional line is included which synchronizes the communication between the CPU and the memory devices.  That line is normally low, but is raised, held momentarily, and then dropped (pulsed) to signal to all components in the CPU interface and in the memory devices when switching is to take place.  See figure 8.  The pulse is held high long enough for all transitions to stabilize and then dropped; the latches throughout the system then hold the values sensed at the time of the falling edge of the enable pulse.

For most of my purposes, the enable line will be an unnecessary complication and may be ignored; figure 7 will be adequate.  However, in some cases the enable line will be necessary; figure 8 will then apply.

3. A Language to Describe the Happenings in the World

Computer programming languages describe what actions a computer is to take at our direction.  There are many levels of such languages.  Most are high level languages which describe happenings of interest to the programmer.  But, all such languages are ultimately implemented in terms of the lowest level language which correlates directly with the architecture of the machine.  Assembly language, as it is called, maps its terms and syntax directly to the components of the CPU and its memory locations, and the actions the CPU can accomplish.  Such a language includes terms for the structures in the CPU and its possible actions.

The Things to be Named

In our computer, the CPU must refer to the two memory locations and to the two possible values which might be stored in memory.  It must also refer to itself.  In referring to itself, it will need to distinguish between its afferent and efferent processes, and to distinguish among the individual bits of information in those processes.

0 = a low voltage or logic '0' (FALSE)
1 = a high voltage or logic '1' (TRUE)
AB = Address bit (Values = 0 or 1)
A = Address
CB = Control bit (Values = 0 or 1)
C = Control
DB = Data/Sense bit (Values = 0 or 1)
D = Data
R = Register
CR = Control Register (= AB & CB & DB)
SR = Status Register (= AB & CB & DB)

The Actions to be Cited

Our language must also describe actions which the CPU can accomplish.  The actions the CPU can take consist of setting bits in the control register, or sampling bits in the status register.  The general form used to describe such bit setting is chosen so that a combination of such actions can be described in the same format.

MOV D,S = Move from (S)ource to (D)estination.

Examples:

MOV A,1 = Set the address bit in the control register to 1.

MOV D,1 = Set the data bit in the control register to 1.

MOV C,0 = Set the control bit in the control register to 0.

The same language syntax can be used to describe sequences of these immediate moves.  The three examples immediately above performed in the given sequence result in writing a 1 into location 1, and can be more compactly expressed as:

MOV (1),1 = Write 1 to the location 1.

This format introduces the syntax used to distinguish the contents of a source or destination from the value of its name. It is necessary when symbolic sources and destinations are used.

( ) = Contents of

More examples:

MOV A,1 = Set the address bit in the control register to 1.

MOV (A),1 = Write a 1 into the location addressed by A.

This level of flexibility is more meaningful to us in that it collects groups of 'low-level' actions under a single description.

The two actions the CPU is capable of are reading and writing to memory.  It does this by setting the values of the bits in the control register.

MOV C,1 = Read.

MOV C,0 = Write.

When these actions take place other things happen that are better described as:

MOV (A),D = Write to memory.

MOV D,(A) = Read from memory.

This one syntax captures different points of view.  The former examples describe what is happening within the CPU interface, while the latter describe what happens in the entire computer as a result of what happens within the CPU interface. The distinction is analogous to the distinction between the perspectives of phenomenalism and realism.  The latter description holds, when the former holds, only if the devices external to the CPU are connected and are operating properly.
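
A toy interpreter makes the notation concrete.  The sketch below is mine (the thesis defines only the notation, not an implementation); it treats A, C, and D as control-register bits and the two memory cells as a small dictionary:

    register = {"A": 0, "C": 0, "D": 0}       # address, control, and data bits
    memory = {0: 0, 1: 0}                     # the two one-bit locations

    def MOV(destination, source):
        if destination == "(A)":              # write to the addressed location
            register["C"] = 0
            if source != "D":
                register["D"] = source        # an immediate value sets the data bit
            memory[register["A"]] = register["D"]
        elif source == "(A)":                 # read from the addressed location
            register["C"] = 1
            register[destination] = memory[register["A"]]
        else:                                 # set a control-register bit
            register[destination] = source

    MOV("A", 1)                               # select location 1
    MOV("(A)", 1)                             # MOV (A),1: write a 1 there
    MOV("D", "(A)")                           # MOV D,(A): read it back
    print(register["D"])                      # prints 1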

4. Additional Partial Structures

Concatenating Structures

Concatenating structures have already been implicitly discussed.  Circuits or lines which convey different bits of information are laid down in logically parallel paths.  Frequently these paths are implemented with physically parallel circuits. In referring to the control and status registers as 3 bit devices, we assume that the bits are considered together. Concatenating circuits add parallel lines.

Logical Structures

I have already discussed an AND gate and an inverter or a NOT gate.  Using DeMorgan's law, an OR gate can be constructed from these devices.

A OR B = NOT (NOT A AND NOT B).
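
The construction is easy to verify exhaustively; a short check in Python (my addition):

    def NOT(x): return 1 - x
    def AND(a, b): return a & b
    def OR(a, b): return NOT(AND(NOT(a), NOT(b)))    # DeMorgan's law

    # Check all four input combinations against the expected OR column.
    assert [OR(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 1]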

Abstracting Structures

An abstracting structure is any circuit with the same or fewer outputs than inputs.  A NOT gate has the same number of outputs as inputs.  An AND gate has fewer.

Shift Register

A shift register has several parallel outputs, one input and an enable line.  When the enable line is pulsed, the values of the outputs are 'shifted' and the latest input is placed on the first output.  A shift register can 'remember' a sequence of inputs.  The parallel outputs contain the history of the input.

Suppose we connected three 64-bit shift registers to the afferent processes in the CPU interface.  By aligning the outputs we could go back as many as 64 read and write cycles and have a (moving) record of the past actions by the CPU.  These triples of lines could be connected to all manner of additional circuits for comparison, etc.
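
A sketch of such a history-keeping register (mine; the 64-cycle depth follows the example above):

    from collections import deque

    class ShiftRegister:
        def __init__(self, length=64):
            # Fixed-length record; the oldest value falls off the far end.
            self.cells = deque([0] * length, maxlen=length)

        def pulse(self, data_in):
            self.cells.appendleft(data_in)   # shift in the latest input

        def history(self):
            return list(self.cells)          # cells[0] is the most recent

    # One register per interface bit gives a record of past CPU actions.
    address_history = ShiftRegister()
    address_history.pulse(1)
    address_history.pulse(0)
    print(address_history.history()[:2])     # [0, 1]: latest first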

5. Some Additional Considerations

Hardware Circuits to Software

Computer scientists have shown that any hardware circuit that can be wired up can be simulated in software on a serial computer.  The advantage of hard-wiring a circuit is its speed. Software processing takes much more time than hardware circuits.

Parallel to Serial Circuits

Additionally, parallel circuits can be implemented serially. For example, to represent three parallel circuits in a single serial circuit, the serial circuit is divided up in time by thirds, with each third dedicated to the corresponding parallel line.  This 'time division multiplexing', as it is called, allows simulating any parallel configuration serially.
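
A sketch of the idea (mine): three logical lines are carried on one serial line by dividing time into thirds, and the parallel samples are recovered by de-multiplexing:

    samples = [(1, 0, 1), (0, 0, 1)]          # two samples of three parallel lines

    # Multiplex: each third of a time slot carries one line's bit.
    serial = [bit for sample in samples for bit in sample]

    # De-multiplex: regroup the serial stream three bits at a time.
    recovered = [tuple(serial[i:i + 3]) for i in range(0, len(serial), 3)]
    assert recovered == samples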

Intelligence

It is beyond the scope of this paper to account for intelligence.  Computer scientists have argued both for and against machine intelligence.  One problem is that intelligence is not well-defined.  If it can be specified precisely, then the question of machine intelligence will be answerable.  I shall only allude to certain features attributed to intelligence.

C. Where Reference Fits In

By 'reference' we ordinarily mean something being about something else.  A person talks about something.  A symbol stands for its referent.  A long-standing philosophical question is: How is a symbol 'connected' with its referent?

1. Distinguishing Between Derived and Primary Forms of Reference

I think that, like Bruce Aune's distinctions among types of existence,[7] there are distinctions among types of reference.  There are derived and primary forms of reference.

Derived Reference

Derived forms ultimately depend upon primary forms.  I think that symbols referring to objects is a derived form.  It is people's use of symbols to refer to things that endows the connection and creates this derived form.  I will not be directly dealing with the (derived) forms of reference philosophers talk of when they use terms like "Easter Bunny", "unicorn", etc., although the proposed mechanism will account for these forms.

Primary Reference

People referring to things is primary.  People select a thing in their environment and use something else in place of the selected thing in certain contexts (speech, art, etc.).  It is because people use things in place of others that secondary forms of reference arise.  The primary form of reference comprises selection and substitution.  The selection is made among things in the environment.  Actually, the selection is made among representations of things in the environment, but the details of this process will not become evident until later on in this work. Philosophers call it intension.

Selection as its Essence

The primary form of reference is this selection process. One refers to something when one selects that thing from among others.  The most immediate secondary form of reference, so immediate that I have chosen to include it with the discussion of primary reference, is the use of a token to indicate such a selection.  A token or symbol is substituted for the object selected (in information contexts).  There is presumed to be a connection between the symbol and the object; likewise there is presumed to be a connection between internal representation and things in the environment.

With the possible exception of the skeptics, we talk, believe, etc., as if we are selecting from among the objects in our environment, when we are actually selecting from among their internal representations.  We take our internal representations to 'refer' to external objects.  Realistically, we could characterize reference as the mechanism by which our representations interact with things.  An intension is nothing more than selecting among internal representations; follow-on behavior brings us to take some action involving the selected object itself.  Skeptically, we have no access to things except via our internal representations.  An intension is nothing more than selecting among internal representations; follow-on behavior brings us to subsequent representations which still include the selected one.  Of course, in the realistic case, our only confirmation of the action involving the selected object is in the subsequent representations.

There are two fundamental structures in the primary form of reference; they are the mechanism of selection and the mechanism of connection between representations and objects.  What is a representation?  A representation is a different instantiation of the form of something.  The representation is processed instead of there being a direct interaction with the object; we operate on information about things rather than on the things themselves. Since information about a thing is a representation of it, we have substituted a representation for a thing in these contexts.

In communication we make a simple substitution of one thing for another with the expectation that other people will know to 'reverse' the substitution.  It is the intension by a person that is the primary form of reference.

In intending something a person must pick out that thing from among the many in the environment.  To refer to something is to select that thing from among many.  I can go pick it up, point at it, or use some token to stand for it (assuming that another will not take the token for the object).

It may be argued that this form of reference depends upon the internal representation of the objects to be selected, and that representation is the other side of the coin from which reference is cast.  It is only because the internal representation 'refers', in some mysterious way, to the objects that an intension functions to select that object.  I claim that selection takes place among the representations, but that representations can be explained in a manner that sheds light on primary reference.  I will come back to this, but first I shall look at what happens during reference in our model.

2. What Happens During Memory Reference

There are two ways to refer to memory, read and write.  A memory reference is either a read operation or a write operation.

Selecting a Memory Location

The CPU sets the address bit in the control register to 0 or 1.  From the point of view of the CPU, that's all that can be done.  The rest is up to its outside world.  The world outside the CPU consists of the memory.  It is the structure of the memory that determines what happens in response to the selection made by the CPU.

Events in the CPU Interface

During a memory reference the CPU sets the three bits in the control register to the desired value.  The address select bit is set to 0 or 1 to select which memory is to respond.  The control bit is set to 0 or 1 depending upon whether the reference is a read or a write.  And, if the reference is a write, the data bit is set to 0 or 1, the value to be written into the selected memory location.

In the system design without an enable line, the CPU is always referring to one memory location or to the other.  There is no time when the reference does not occur.  But, in the system design in which the enable line has been added, memory reference occurs only when the enable line is pulsed.

Events in the Memory Interface

When a memory reference is made by the CPU, it places values on the lines connecting the memory devices with the CPU.  The individual memory cells respond to the values on these lines. Each cell responds to the value on the address line.  If it is high, memory cell 1 is enabled.  If it is low, memory cell 0 is enabled.

3. The Reduction of Reference to Cause-Effect

We can describe the events occurring during a memory reference entirely in cause-effect terms.  It is the level of abstraction of our description that makes the difference in the terms used.

Primary Reference by the CPU

During a memory reference the CPU is selecting a 0 or a 1 to place on the address line.  A certain value is placed on a certain line.  Also, a 0 or a 1 is selected to place on the control line.  Again, a certain value is placed on a certain line.  If the value selected for the control line is a 0, then a 0 or a 1 must also be placed on the data line.  So, from the point of view of the CPU, certain effector actions are taken, and nothing more.  The only thing being done by the CPU is the selection of a triple of values to be placed in the control register.  It would correspond to Carl choosing whether to look or to paint, and whether to do it here or there, and whether to use black or white.  However, the entire action by the CPU consists only of selecting certain effector values.  Since the CPU has afferent processes connected to the efferent processes, it has available a record of its own actions.

By looking at those afferent bits of information in isolation we can see some additional correlations.  The address bit has the information of which memory was selected.  The control bit has the information of whether the action was to read or write.  The data/sense bit has the information of which color was present.  'Understanding' which action was taken requires all three bits of information: what, where, and what result.  To look at only one bit in isolation does not present the whole picture.

Suppose, for example, one abstracted the sense bit and the control bit and selected cases in which the control bit was 1. This corresponds to just seeing, without regard for where.  The phenomenalists are fond of speaking of the 'raw feel' of a white expanse.  It is clear that such a perspective is nothing more than 'just seeing' as described above.  It is incomplete interface information.  For the information to be meaningful, the third bit, location, must be added.  That information is, however, a feed-back of a selected efferent process on the part of the CPU.  Still, it could be argued that phenomenalists really intend to include all three bits of information, but that would make their position inconsistent.  One cannot 'just see' when one already knows where one is, that is, where one intended to go.

Effects in Memory

It is the structure of memory that allows reference by the CPU.  A high voltage on the address line and a low voltage on the control line have the following effect.  The inverter in memory cell 1 is caused to have a high voltage output.  This output, in turn, is a contributory cause which, together with the high voltage on the address line, causes the output of the AND gate connected to the latch in memory cell 1 to go high.  This high voltage, in turn, is a contributory cause of the output of the latch.  The output of the latch is caused to go high or low, depending upon the level of its input line when the contributory cause, the enable line being high, occurs.

Similarly, a high voltage on the address line and on the control line causes the AND gate in device 1 connected to the driver to connect a high voltage to the enable line of the driver in that cell.  The high voltage on the enable line of the driver causes the value at its input to be reproduced at its output. Since this line is connected back to the CPU, the net effect is that a high voltage on both the address line and the control line ultimately causes a high or a low voltage to appear on the sense line.  Which it causes depends upon the most immediately prior occurrence of a high on the address line and a low on the control line as described above.

Differences Between Read and Write

The CPU controls whether a read or a write occurs by presenting a 0 or a 1 on its control line to its outside world. The relevant difference occurs in the CPU interface.  If the control bit is a 1, it enables a driver connected to the sense line.

If the control bit is 0, it is inverted and that inverted value enables a driver connecting the data line back to the sense line.  When a write operation occurs, the value of the data line is not a reproduction of the effect in the environment, but is only a copy of the color bit selected (intended).  This is purely a reflection of the internal selections.

In the case of a read, when the control line is 1, the driver connected to the sense line from the outside world is enabled.  In this case the efferent processes contain a mixture of reflections of the internal selections and one bit of external sense.  One might say that the single bit sensed is 'qualified' by the selected or intended action.  To know that that bit represents what was in memory, the control bit must be present and it must be 1 (read).
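
A sketch of this interface difference (the function name and argument names are mine):

    def data_sense_bit(control, selected_color, sense_line):
        # On a read (control = 1) the driver from the sense line is enabled,
        # so the efferent bit reproduces one bit of external sense.
        if control == 1:
            return sense_line
        # On a write (control = 0) the inverted control bit enables the
        # loop-back driver, so the efferent bit merely reflects the intention.
        return selected_color

    print(data_sense_bit(0, 1, 0))  # writing black: the efferent bit copies the selection (1)
    print(data_sense_bit(1, 1, 0))  # reading: the efferent bit copies what the world supplies (0)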

D. Where Representation Fits In

So far I have used the term representation loosely; however, that use has been consistent with the following.  We typically see representation as the inverse relation to reference.  If a thing is referred to by a symbol, that symbol represents the thing.

1. Distinguishing between Primary and Derived forms of Representation.

There are primary and derived forms of representation just as there are primary and derived forms of reference.  A thing represents another just in case some person uses that thing to refer to the other.

Copying as Primary Representation

A bit of information can have only one of two values, 0 or 1.  If a bit of information is to represent another bit of information, there are only two possibilities: a 1 represents a 1 and correspondingly a 0 represents a 0, or a 1 represents a 0 and correspondingly a 0 represents a 1.  While it might be possible to use the inversion scheme, the direct scheme, in which a 1 represents a 1 and a 0 represents a 0, is the natural one.

For representation to occur, the representing bit must differ from the represented bit in some characteristic.  Since a 1 is just a 1, the difference cannot be in form; the value of the bit is just copied.  The only characteristic possible is its location.  A bit in one location represents a bit in another location if it is a copy of that bit.  However, a more stringent connection is required.  The copy must be taken from the original so that the value of the bit in one location is guaranteed to be the same as the value of the bit in the other location.  This guarantee can be had only if the representing bit is caused by the bit it represents.  Since the value of the bit is produced by cause-effect connections from another, and it is the same as the value of the other, the value of the other is reproduced or copied at the second location.

Representation is not exactly the inverse relation of reference, but it functions that way at most derived levels.

Association as Secondary Representation

At the base level a 1 in one location is associated with a 1 at another location.  However, at higher levels, more than one bit of information may be present.  Any 1-1 and onto mapping between one set of bits and another (including inversion) instantiates the 'connecting' of one thing (the input bit configuration) with another (the output bit configuration).  The hardware to instantiate such a transformation requires merely permuting the lines and inserting inverters.

Abstraction as Secondary Representation

Since we often think of something as representing more than one other thing, selectively dropping bits in the output set also yields our idea of representation.  Any onto function from one set of bits to another produces representation.
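
Both secondary forms can be sketched as functions on tuples of bits (Python; the names associate and abstract are my own):

    def associate(bits, permutation, invert):
        # A 1-1, onto mapping: permute the lines and insert inverters
        # wherever 'invert' says to.
        return tuple((1 - bits[src]) if inv else bits[src]
                     for src, inv in zip(permutation, invert))

    def abstract(bits, keep):
        # An onto (many-one) mapping: selectively drop bits.
        return tuple(bits[i] for i in keep)

    print(associate((1, 0, 1), permutation=(2, 0, 1), invert=(0, 1, 0)))  # (1, 0, 0)
    print(abstract((1, 0, 1), keep=(0, 2)))                               # (1, 1)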

Still, the primary or base case of representation is the mere copying of bits of information.

2. What Happens During Representation.

In our CPU interface the content of a memory location is copied onto the data line during a read.  This data line value is, in turn, copied onto the sense/data line in the efferent process of the CPU.  In the system with an enable line designed in, that value is latched onto the efferent line by the latches internal to the CPU.  The connection with memory is severed, and a power failure in the memory location would not be reproduced in the CPU interface.

Copying and Moving Data

A datum is the value of a bit of information.  When a write occurs, a datum to be written is placed on the data line.  That value is reproduced in the selected memory cell by the end of the write cycle.  During a read operation the datum in the selected memory cell is copied onto the sense line, and is ultimately reproduced on the data/sense line in the efferent processes of the CPU interface.  We often speak of moving data.  Moving data consists only of copying data and then not caring about whether the original is overwritten by something else.

In a data move, the location characteristic is of overriding importance.  Reading from one location and subsequently writing the same datum to another location would constitute a data move external to the CPU.  I have not drawn circuitry to show how this would be achieved.  Additional latches and drivers in the CPU connecting the efferent processes to the afferent processes would be required.
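
The move itself is easy to sketch in software (here the two cells are modeled simply as a table; the names are mine):

    memory = {0: 1, 1: 0}        # the contents of the two one-bit cells

    def move(src, dst):
        datum = memory[src]      # a read cycle: copy the source bit into the CPU
        memory[dst] = datum      # a write cycle: reproduce that bit at the destination
        # Nothing 'leaves' the source; a move is a copy plus indifference
        # to the source's subsequent fate.

    move(0, 1)
    print(memory)                # {0: 1, 1: 1}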

Activating Dedicated Circuits

These additional circuits and components would be constructed so that a datum could be moved within the CPU from the efferent processes to the afferent processes.  Each kind of such action would require its own set of components, connections, and an enable line.  In short, the CPU would have internal devices which have special purpose effects.  One such device might be a simple comparator.  A comparator compares two values and outputs a true (1) if the inputs match.  This is just the logical relation of NOT EXCLUSIVE OR.  Since we have only AND and NOT gates, the comparator can be designed using the formula:

A CMP B = NOT (A AND NOT B) AND NOT (NOT A AND B)

Now that we have a comparator, we can build into the CPU a circuit to 'recognize' a datum.  One input to the comparator is connected to the value to be recognized.  To recognize a 1, the input is permanently connected to a high voltage source.  To recognize a 0, that input is permanently connected to a low voltage source.  The other input to the comparator is connected to the data line to be tested, and the output is directed to specialized circuitry which is to be activated whenever the test condition is met.
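
In software the comparator and the dedicated recognizer look like this (Python; CMP and recognizes_one are my names):

    def NOT(a): return 1 - a
    def AND(a, b): return a & b

    def CMP(a, b):
        # A CMP B = NOT (A AND NOT B) AND NOT (NOT A AND B)
        return AND(NOT(AND(a, NOT(b))), NOT(AND(NOT(a), b)))

    def recognizes_one(data_line):
        # One input tied permanently to a high voltage (logic 1).
        return CMP(1, data_line)

    print([CMP(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))])  # [1, 0, 0, 1]
    print(recognizes_one(1), recognizes_one(0))                      # 1 0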

3. The Reduction of Representation to Cause-Effect.

Representing a datum is just copying it.  The copying process proceeds by cause-effect steps.  In secondary forms of representation more than one bit of information is present, and representation is just a transformational mapping to another set of bits.  Each stage proceeds by cause-effect steps.

E. Where Perception Fits In

Perception is usually applied to objects.  One perceives objects external to oneself by recognizing certain patterns (including dynamic ones) in one's sensory inputs.  Perception involves processes which are 'downstream' from the efferent processes (higher level abstractions).

1. What happens During Perception.

During perception, an analysis of the patterns in the efferent circuits and in higher level processes results in setting bits of information which signal the presence of the conditions required for the particular perception.

Dedicated Circuits

The simplest way of instantiating perception is by the use of dedicated circuits.  A circuit which has comparators and 'watches' the efferent processes for a particular combination of signals 'recognizes' that a particular effector pattern has occurred.  Here's an example.  Connect one input of a comparator circuit to the data line and the other one to a high voltage (logic 1).  Connect a second comparator to the control line and the logic 1.  Connect the outputs of these two circuits to the inputs of an AND gate.  The output of the AND gate will be 1 whenever the control line is 1 (read) and the data line is 1 (black).  This dedicated circuit responds to signal the condition "seeing black".
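
The "seeing black" circuit, rendered as a sketch (gate definitions repeated so the fragment stands alone; seeing_black is my name):

    def NOT(a): return 1 - a
    def AND(a, b): return a & b
    def CMP(a, b): return AND(NOT(AND(a, NOT(b))), NOT(AND(NOT(a), b)))

    def seeing_black(control_line, data_line):
        # Fires only when the control line is 1 (read) and the data line is 1 (black).
        return AND(CMP(control_line, 1), CMP(data_line, 1))

    print(seeing_black(1, 1))   # 1: a read which sensed black
    print(seeing_black(0, 1))   # 0: painting black is not seeing black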

The outputs of these dedicated circuits can be combined with the efferent processes and/or with other such circuits as inputs to additional circuits to form higher level processing.  For example, feed the output of the "seeing black" circuit into a three bit shift register.  Connect the outputs of the shift register through AND and NOT circuits according to the formula (A AND (NOT B) AND C).  The output of this new circuit signals "having seen black, white, black".  Just for drill, connect the seeing black output to the input of a 6 trillion bit shift register and you get a historical record of vision for this device.  The 'memory' of any particular seeing can be 'recalled' by computing which output line to sample.
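
A sketch of the three-bit register and its detector (Python; strictly, a 0 from the 'seeing black' circuit means only 'not seeing black', which I gloss here, as the text does, as seeing white):

    from collections import deque

    history = deque([0, 0, 0], maxlen=3)       # a three-bit shift register

    def clock(seeing_black_bit):
        history.appendleft(seeing_black_bit)   # shift the newest bit in
        a, b, c = history                      # A newest ... C oldest
        return a & (1 - b) & c                 # A AND (NOT B) AND C

    for bit in (1, 0, 1):                      # black, then white, then black
        out = clock(bit)
    print(out)                                 # 1: "having seen black, white, black"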

Preliminary Exposure

The Sufis say that one must first be exposed to something so that, at the proper time, understanding will be possible.  More simply put, we understand in terms of our prior experiences; new experiences serve as the exemplars for 'understanding' later, similar, experiences.  Perception of something requires that something similar had been previously experienced.  The essence of recognition is, after all, 'cognizing' again.

Just such a notion went into the design of hardware which could be trained to recognize patterns.  Perceptrons, as these early devices were called, had two modes of operation.8  The first mode was the learning phase; patterns observed were stored and integrated into a generalized pattern recognizer.  The other mode was the operating mode in which the generalized pattern recognizer analyzed incoming patterns and rendered a judgment that the sensed pattern was or was not a case it had been trained to recognize.

While these devices had many levels of processing and are now being seen as special cases of a more generalized approach called network relaxation, the essential feature by which we understand them is a simple comparison.  The devices are designed to compare inputs to previously processed inputs and determine if the current input does or does not fit the pattern.  In these cases the patterns entail much information.  For our purposes, however, one bit of information will suffice.  We need only store a single bit of information during the learning phase and compare only one bit of information during the operation phase.  Also, in our simple device, the learning phase would consist of a single trial inputting a single bit of information.

In such a device, there needs to be a way of 'knowing' which phase the device is in.  That epistemological question can be represented by a single bit of information.  So, the one-bit perceptron has an input line to read in a bit of information during the learning phase.  It has an output line which is meaningless before learning has taken place, but which afterwards 'recognizes' the input or signals that the input is not the known input.  It may also have another output line, a status line, which signals whether it is in the learning phase or has already been exposed to what it can later recognize.  Of course, engineers also build in a 'reset' line which puts the device back in the learning phase.

A one-bit perceptron has two modes of operation.  The pristine state, unprogrammed, cannot perceive or recognize a bit of information.  The first exposure to a bit of information 'programs' the device so that it may 'recognize' the same value of that bit of information on later encounters.  In the programmed mode the device simply responds with a bit of information which signals that the input matches the stored value or that it does not, in short whether it recognizes the value of the input.

This type of circuit is a 'programmable' circuit.  The simplest example is a circuit which has an enable line, an input line, and an output line.  The circuit functions like a latch, except that it can be enabled only once.  The first time it is enabled, it sets its output to the value of its input and then connects its enable line to the high impedance state.  Thereafter its output is always the same as that first exposure.

The usefulness of a circuit such as this is that it 'remembers' its first experience.  By combining such circuits with dedicated circuits one can construct circuits capable of permanently storing and recognizing patterns that were 'learned' from experience.

Subsequently, the output of the same circuits that provided the programming can be compared to the output of the programmed circuit.  In this way, a pattern matching the one that first programmed the circuit is 'recognized' by virtue of the previous programming.  At the base level, recognition is nothing more than a true output of a simple comparator and a programmable circuit; at higher levels it is the combined, permuted, abstracted, and transformed matching of inputs to stored patterns.
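
A sketch of the one-bit perceptron, folding the programmable latch and the comparator into one class (Python; the class and line names are mine):

    class OneBitPerceptron:
        def __init__(self):
            self.programmed = False      # the status line: still in the learning phase?
            self.stored = None

        def enable(self, input_bit):
            # The program-once latch: only the first enabling takes hold.
            if not self.programmed:
                self.stored = input_bit
                self.programmed = True   # the enable line goes to high impedance

        def output(self, input_bit):
            # Meaningless before learning; afterwards, a comparison with
            # the stored first exposure.
            if not self.programmed:
                return None
            return 1 if input_bit == self.stored else 0

    p = OneBitPerceptron()
    p.enable(1)                          # the single learning trial
    print(p.output(1), p.output(0))      # 1 0: it 'recognizes' its first experience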

2. Locating the Phenomenological Perspective.

In using our analogy between the CPU and a person, we can see where the claims of phenomenalists apply.  The CPU interface has only one sense line; that line responds to 'whatever' drives it.  In the 'god's eye view' I have presented we know what drives that line.  Were we limited to the perspective of the CPU we could not know that.  All that would be available to us is whatever patterns we could abstract from the fluctuations of that sense line as correlated with the other status lines.

The First Person Viewpoint

From the first person perspective the CPU has only what information is derivable from its sense line input in conjunction with its control and address line output.  In short, Carl can learn about the world only by going, painting, and seeing.  He knows that he can turn left or right (to face here or there).  He knows that he can look or paint.  He knows that he can see black or white, and that he can paint black or white.  Nothing else can he directly know.  By recording which direction he faced, and whether he painted or saw white or black, he can discover that he has always seen black where he painted it last, and the same with white.  ('Always' has a finite but large connotation.) A circuit which could make such a detection would be too complex to describe here, but such circuits are easily constructed.
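
Though the circuitry would be complex, the detection itself is simple to sketch in software (all names here are mine; 'consistent' stands for the regularity Carl could discover):

    last_painted = {}      # location -> the color last painted there
    consistent = True      # so far, has he always seen what he last painted?

    def record_cycle(where, control, color=None, seen=None):
        global consistent
        if control == 0:                       # a paint (write) cycle
            last_painted[where] = color
        elif where in last_painted:            # a look (read) cycle at a painted spot
            consistent = consistent and (seen == last_painted[where])

    record_cycle(0, 0, color=1)   # paint black here
    record_cycle(0, 1, seen=1)    # look here: black is seen
    print(consistent)             # True: 'always', in the finite sense, so far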

What Information is Inaccessible

Carl does not share our god's eye view and cannot know of the correlation between himself and the CPU.  The CPU cannot detect that there are two memory cells connected to its lines.  The actual structure connected to the signal lines cannot be 'known', but some of its characteristics may be inferred from pattern analysis.  It is just this inaccessibility that leads to the phenomenalist perspective.  The phenomenalists claim that only the value of the sense line is available.  I think that this is not quite right.  We need more of the information available at the efferent processes.

3. Where 'Qualia' Fit In.

Phenomenalists speak of 'just seeing white'.  I have already shown that this involves abstracting from the efferent processes by discarding the location bit of information.  It is not a very useful abstraction.  However, it does show where in the computer model 'raw feels' occur.

Representational Structure

Representing the outside world consists of abstracting patterns from the efferent processes.  While I have not explicitly mentioned it, this includes patterns involving time, or sequences of inputs.  To capture the notion of time, there must be a 'clock' in the CPU.  The clock ultimately controls the enable line, so that the efferent processes are not constant, but change to different configurations of 0's and 1's with each cycle of the clock.

Sensing Circuits

A sensation of 'sensing' must ultimately be derived from the control bit being 1 (read).  The simplest level includes 'just seeing white' or 'just seeing black'.  These conditions can be detected by connecting the inputs of an AND gate to the control line and to the sense line, or by connecting the inputs of an AND gate to the control line and to a NOT gate connected to the sense line.  In either event, the location bit is simply discarded.

Patterns in time can be abstracted by shift registers and comparator circuits.  A simple example would be a circuit which shifted the sense and control bits with each clock pulse.  Suppose such a register held 40 such pairs.  Let there be a comparator device which outputs 1 (true) if all 40 control bits in the register are 1.  (A comparator device is composed of comparator components and AND gates.)  This line would signal the presence of extended 'just seeing'.  If the other 40 bits, those from the sense line, were also all 1, this would signal extended 'just seeing black'.
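
A sketch of the forty-pair register and its comparator device (Python; the sizes follow the text, the names are mine):

    from collections import deque

    pairs = deque([(0, 0)] * 40, maxlen=40)    # forty (sense, control) pairs

    def clock(sense_bit, control_bit):
        pairs.appendleft((sense_bit, control_bit))
        just_seeing = all(c == 1 for _, c in pairs)         # forty reads in a row
        just_seeing_black = just_seeing and all(s == 1 for s, _ in pairs)
        return just_seeing, just_seeing_black

    for _ in range(40):
        flags = clock(1, 1)      # forty cycles of reading black
    print(flags)                 # (True, True): extended 'just seeing black'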

F. A Review of the Essential Structures Involved

The essential structures involved are built up entirely of components having inputs and outputs capable of three states, 0, 1, and the high impedance state.  The components include logical AND and NOT gates, a latch and a driver, and a programmable latch; a comparator device is built of these components.

It is, however, the form of the devices constructed which illustrates the form of reference, representation, and, to a degree, perception.

1. Selection by Cause and Effect.

The simplest form of selection is to choose between two alternatives.  Devices can be constructed which respond to only one of two voltage levels.  Setting the voltage level causes the appropriate device to respond while the alternative device does not respond.

2. Reference as Selection.

By setting the voltage level on the address line, the CPU selects which of two memory devices is to respond.  In doing so it 'refers' to that device or location, so the primary form of reference, selection, is instantiated in the computer/CPU model.  The CPU makes a memory reference by selecting which voltage level to apply to its address line.  I have not gone into what processes, internal to the CPU, result in its setting the address line high or low, but this selection, in fact, causes one or the other memory device to respond.

3. Copying by Cause and Effect.

The simple process of enabling a latch or a driver allows the voltage value present at its input to be replicated at its output.  This 'copies' the bit value from one place to another. With appropriate circuitry, this value can be copied anywhere.  A latch makes a copy which holds after the original is destroyed.

4. Representation as Copying.

The simplest form of representation is mere copying.  The value stored in a memory device is copied into the CPU efferent processes.  As such, the value of the efferent processes 'represents' the contents of the memory device from which it came.

5. Comparing by Cause and Effect.

At its simplest level, comparing is the negation of the exclusive OR relation.  It can be constructed of NOT and AND gates as described above.  The output of such a circuit, which is high if the inputs match and low if they do not, is caused by its inputs.

6. Recognition as Comparing.

In recognition there are two stages.  The first stage is the storing of a pattern; the second stage is comparing an input with the stored pattern and signaling a match.

7. Perception as Cause and Effect.

Perception consists of recognizing patterns.  This can be accomplished by comparing input patterns to a stored pattern and signaling a match (recognizing a particular thing), or by signaling which of several stored patterns matches the input pattern (recognizing which thing).  Since all the processes going into this are realizable in a cause-effect medium, so is perception.  Perception can therefore be accounted for by cause and effect.

G. Generalizing the Essential Structure

The essential structure involves afferent and efferent processes.  Efferent processes include the status of the afferent processes.  Reference is selecting particular afferent processes.  It is the cause-effect events deriving from the selection of values for those afferent processes that determine that something responds.  It is the response, in the world, that our afferent processes cause that 'counts'.  But it counts only relative to the effects on our efferent processes.  In short, we understand what we see relative to what we do.

1. Reference.

We refer relative to the responses of our efferent processes.  'Refer' usually requires an object.  But an object must be recognized.  Recognition consists of storing and comparing abstractions from efferent processes (which necessarily include representations of afferent processes).  In the usual sense of 'refer' we intend to indicate whatever caused or gave rise to a particular configuration of efferent processes and/or abstractions from these processes (perception).  However, there is simply no direct access to the cause of a perception.  All that is internally accessible is the information within the efferent processes, and that is information which is realistically caused by external things.  Selection is made among bits of information, not among the external things.

Derived Forms

Derived forms of reference, such as when a symbol stands for something else, require more than one person, and require communication.  An explanation of this process is beyond the scope of this paper, but some comments are in order.

Each person has the other as part of his or her environment. One may apply a particular afferent pattern, as in asking the name of a particular efferent pattern.  This, of course, assumes a good deal.  It assumes that the other's interface has the same form, and that the other has the same efferent pattern present. Aside from these complications, the process is simple.  The other, presumed to have already programmed circuits to recognize the pattern in its efferent circuits and to associate a particular afferent pattern with that efferent pattern, simply selects the associated afferent pattern, which we might call "giving its name".  Now, the first person associates his corresponding efferent pattern with the earlier one and 'programs' recognition circuits to respond appropriately.
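
A sketch of the exchange (Python; it assumes, as the text does, that both interfaces have the same form, and every name in it is my own):

    class Agent:
        def __init__(self):
            self.names = {}                 # efferent pattern -> associated afferent pattern

        def give_name(self, pattern):
            return self.names[pattern]      # select the associated afferent pattern

        def learn_name(self, pattern, name):
            self.names[pattern] = name      # 'program' the recognition circuits

    teacher, learner = Agent(), Agent()
    teacher.learn_name((1, 0, 1), (0, 1))   # the teacher already has an association
    name = teacher.give_name((1, 0, 1))     # asked, the teacher 'gives its name'
    learner.learn_name((1, 0, 1), name)     # the learner now shares the association
    print(learner.give_name((1, 0, 1)))     # (0, 1)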

Extended to Semantic Environments

Of course, this process of association and learning by correlating efferent process patterns with subsequent patterns can be applied where the efferent patterns arose only from the afferent processes of others.  (We can learn names for groups of words from others.) We wouldn't know that the patterns arose only from the afferent processes of others as that information is privileged to the god's eye view.  But after eons of evolution of the hardware and the interactions, we could get a pretty good picture which had others in it.  In fact, our picture could get so good that we could mistake the picture for that which causes it.

I differentiate between physical and semantic environments.  In physical environments the change of our efferent processes in response to our afferent processes depends upon the laws of nature and the structure of the physical environment.  In semantic environments the change of our efferent processes in response to our afferent processes depends upon the afferent processes of others.  Now, in truth, a detailed knowledge of the structure (and programming) of another could allow us to cite only the laws of nature in describing the relation between our afferent and efferent processes, but it is much more interesting not to make that reduction -- especially since we do not yet have the detailed knowledge necessary.

2. Representation.

The primary form of representation is simple copying.  The information has the same form, but a different location, and is cause-effect connected to the original.

Derived Forms

Derived forms of representation involve cases with more than one bit of information.  In such cases, any transformation on the bits can be associated and be a representation.  By dropping bits in the transformation, the resulting patterns can be derived from more than one original.  By permuting the bits and by selectively inverting them, a different pattern can be caused by an original.

Extended to Semantic Environments

Since associative patterns can be built up and processed, transformed patterns can be manipulated.  The 'environment' of a CPU consists of possibilities for the efferent processes.  If there are two CPU's writing and reading to the same two memory locations, then what is found in the patterns of one CPU's efferent processes depends upon not only what that CPU wrote, but also on what the other CPU wrote.  (Carl could find black here after he painted white here.)

For comparison, suppose there were certain additional 'environment' devices connected to the signal lines (along with the memory cells).  These devices could periodically write into the memory cells.  The CPU could detect and identify patterns over time showing up in the memory cells, patterns which were dependent upon what the CPU wrote as well as on what the environment devices wrote.
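
A sketch of such an environment (Python; the names are mine):

    memory = {0: 0, 1: 0}                     # the two shared cells

    def carl_paints(where, color): memory[where] = color
    def device_writes(where, color): memory[where] = color   # the environment acts too
    def carl_looks(where): return memory[where]

    carl_paints(0, 0)         # Carl paints white here
    device_writes(0, 1)       # an environment device overwrites the cell
    print(carl_looks(0))      # 1: Carl finds black here after painting white here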

3. Perception.

Generalizing perception is straightforward.  Any abstraction on the sequence of efferent processes can be learned and subsequently recognized.  By adding OR gates on subsets of such patterns, 'fuzzy' recognition or similarity measures can be instantiated.  In fact, any logical formula can be constructed of hardware.  In the human brain, with from 10^12 to 10^14 neurons each having thousands of connections, there is room for a lot of patterns that can be innately recognized (dedicated hardware circuits), or learned and subsequently recognized (programmable circuits).

Generalized Comparison

At the base level the comparison is bitwise; however, transformations on an abstraction from a bit sequence can find 'similarities' (or differences).  Logically, it is appropriate to compare two bit sequences of the same length.  Comparing two sequences by a pattern-matching technique would allow skipping extra bits in one, or blocking out templates, or other such means.  Each is achievable through abstraction on the bit sequences.
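
One such means, blocking out a template, is easily sketched (Python; masked_match is my name):

    def masked_match(seq_a, seq_b, mask):
        # Compare only the positions the mask lets through; the rest are blocked out.
        return all(a == b for a, b, m in zip(seq_a, seq_b, mask) if m)

    print(masked_match((1, 0, 1, 1), (1, 1, 1, 1), mask=(1, 0, 1, 1)))  # True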

Extended to Semantic Environments

Any efferent process pattern, including higher level abstractions, can be compared and recognition can occur.  We humans now give most of our processing faculties over to patterns much removed from the physical environment.  In 1921 Korzybski classified mankind as time-binders.9  George Miller called us 'informavores';10 we spend much more of our efforts producing and sensing our semantic environment.  Teenagers now practically live in front of Music Television (MTV) and get more input from music videos than anything else.  (Today's high school graduates have averaged 12,000 hours in the classroom and 15,000 hours in front of the tube.)


CHAPTER III

CONCLUSIONS

Essential to reference, representation, and perception is the notion of aboutness.  Symbols are "about" their referents; representations are "about" what they stand for; and perceptions are "about" that which gives rise to them (whether in physical or semantic environments).  However, these are all distinct forms of "aboutness".  The most primitive of these forms is that found in representation.  Aboutness in this context is the aboutness that a copy has of the original when that copy is caused by the original.  Its essential structure is that of the form of something being reproduced.

Perception has its aboutness derived from representation. Patterns in the information, which are internal copies of sensor responses (and abstractions), are stored and compared.  Aboutness of these patterns is directly derived from the aboutness of copying.  When selections are made among such patterns, and associated afferent processes initiated, that is, when we choose from among things we see and act accordingly, a 'mysterious connection' is endowed between the internal representation and that which causes it.

A. Reference is Reducible to Cause and Effect.

There is no mystery to reference.  The simple process of selecting from among afferent processes enables a differential response by the environment.  Selecting a high or a low voltage for the address line causes one of the two memory devices to be selected and the other to be 'de-selected'.  Analogously, our primary form of reference, selecting from among afferent processes, results in our taking actions of the form of going to or grasping something.

B.  Representation is Reducible to Cause and Effect.

The primary form of representation, the simple process of copying the form of something, just involves the setting of a voltage level (high or low).  The propagation of voltages in an electrical system is purely by cause and effect.  In our case, our senses respond to external stimuli; it is copies of these responses that are propagated within the system.  The "Brains in a Vat" point of view is fully justified -- the skeptics cannot be answered; whatever caused the sense organ to respond is not reproduced within the system.  It's all caused.

C. Perception is Reducible to Cause and Effect.

As perception is nothing more than the recognition, by the process of comparing, of stored patterns, which are themselves mere copies of sense organ responses, it too is based solely on cause-effect connections.

A Final Note

I have suggested an alteration to our use of the terms 'reference', 'representation', and 'perception'.  I have suggested that the terms have not heretofore been univocally used, that is, that there have been levels of structure embedded within our past usages of these terms.  I view the structure as essentially recursive.  Each of the higher forms is derived from a basic form.  I have proposed selection as the basic form for reference, copying as the basic form for representation, and comparison as the basic form for perception, and I have shown in detail how these are all related and instantiable solely in cause-effect relations.


Figure 1 - A One Bit memory cell constructed with a simple latch.

Figure 2 - A One Bit memory cell constructed with a latch and a driver.

Figure 3 - A One Bit memory cell constructed with a latch and a driver using a single control line.

Figure 4 - A One Bit memory cell with an enable line to control access.

Figure 5 - Two One Bit memory cells with different addresses.

Figure 6 - The One Bit CPU interface to its outside world.

Figure 7 - The One Bit CPU interface and connected Memory Cells.

Figure 8 - The One Bit CPU with its memory cells and an enable line.


REFERENCES

1. Thomas S. Kuhn, The Structure of Scientific Revolutions, Second Edition, University of Chicago Press, 1970.

2. Thomas S. Kuhn, Possible Worlds in History of Science, [August 1986 Nobel Symposium] (revised), as presented to the Cognitive Science Society at the University of Massachusetts at Amherst, November 20, 1986.

3. Don Kerr, Institute of General Semantics, Summer Laboratory-workshop in Korzybskian General Semantics, 1977.

4. Idries Shah, The Sufis, Doubleday, 1964, p. xxiii.

5. Alfred Korzybski, TIME-BINDING: The General Theory, Institute of General Semantics, 1949, a paper originally presented before the International Mathematical Congress, August 1924, Toronto, Canada, p. 11.

6. Patricia Smith Churchland, Neurophilosophy: Toward a Unified Science of the Mind/Brain, MIT Press, Cambridge, 1986, pp. 278-83.

7. Bruce Aune, Metaphysics: The Elements, University of Minnesota Press, Minneapolis, 1985.

8. Marvin Minsky and Seymour Papert, Perceptrons, MIT Press, Cambridge, 1969 (second printing with corrections, 1972).

9. Alfred Korzybski, Manhood of Humanity, The International Non-Aristotelian Library Publishing Company, Lakeville, 1921.

10. Zenon W. Pylyshyn, Computation and Cognition, MIT Press, Cambridge, 1984.


