From: PSCF 38 (December 1986): 237-243.
This paper was prepared for presentation to the Vancouver, British Columbia
section of the Canadian Scientific and Christian Affiliation on October 18,
1985 at Trinity Western University, Langley, B.C.
The mind-brain problem is similar to problems in artificial intelligence (AI) since AI systems are analogous to the mind-brain. A mind-brain representation is presented which incorporates both monistic and dualistic aspects of mind-brain experience, with emphasis on the importance of distinguishing between ontological and epistemological categories. Alternative mind-brain representations are briefly discussed in view of AI notions of structure, behavior, causality, and function. Finally, the relevance of the mind-brain problem to theology is briefly suggested in two ways.
We are aware of our own existence in two ways:

1. existential awareness through introspection of our conscious existence.
2. empirical awareness through observation of our physical existence.
The first kind of self-awareness is what Descartes referred to in his "cogito ergo sum" ("I think, therefore I am") and is a cognitive kind of awareness. The second, in contrast, is a perceptual awareness; we learn early that the being we know ourselves to be by introspection is the same one that we physically sense as our body. Because these two kinds of knowing of ourselves are so different, it has been historically difficult to find a universally accepted relationship between them. A multi-dimensional spectrum of tentative solutions or positions has attempted to establish the relationship between these two kinds of self-knowledge, or else to deny the reality of one of them. Some of these positions have theological overtones in that they lead to propositions on matters about which theologians concern themselves.
AI and Knowledge Representation
One of the major activities in AI is knowledge representation, which is concerned with the computational representation of knowledge. The interest here goes beyond the archiving of textual material; by representing knowledge in the right computational forms, automated reasoning or machine inference can be applied to it with results that are normally attributed to human experts. The principles of AI needed to do this have led to a technology known as knowledge engineering, which is a kind of applied epistemology.
This study of knowledge from a computational viewpoint is similar in some ways to study of the relationship between mind and brain. Whatever "intelligence" AI systems demonstrate is manifested by a computer, which, like the brain, is a physical device. The brain is analogous to the computer, as hardware, and the mind to the programming or software. Thus, a problem analogous to the mind-brain problem is that of determining the relationship between hardware and software.
More significantly, the attempt to represent knowledge in a computationally compatible form is an attempt to express something of the essential nature of mind, since mind is the only instance we have of intelligence. Furthermore, these theories of mind are capable of "springing to life" in conjunction with a computer, giving us an unprecedented opportunity to examine the relationship between mental representations and physical mechanisms. Therefore, some major concepts of knowledge representation will be presented here, and in their context the mind-brain problem will be examined.
Knowledge Representation Concepts

A representational theory is one that produces a representation of some domain of knowledge. This theory is expressed in a given language. The particulars of the domain are often called "objects" and can be physical objects, concepts, relationships, or abstractions in general. Representations are abstractions of the objects they represent. Consequently, not all that is true of an object will be found in a representation of it. The more comprehensive a representational theory, the more complete the representation, and also the more complex it is. A theory which abstracts from the object only those attributes of interest is optimal, but contains simplifying assumptions leading to multiple, possibly conflicting, interpretations of the actual objects. To resolve contradictions among interpretations and select the correct interpretation, other kinds of theories are needed.
Thus, four aspects of knowledge representation may
be identified as follows:
1. objects (or a "domain") to be represented
2. a representational theory
3. a language in which to express the representational
theory
4. a resulting representation.
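These four aspects can be sketched in a few lines of Python; the toy circuit, the rule used as a "theory," and all names here are invented purely for illustration and come from no actual knowledge-engineering system:

```python
# A purely illustrative sketch of the four aspects of knowledge representation.

# 1. Objects (the "domain") to be represented: a trivial circuit.
domain = {
    "R1": {"type": "resistor", "nodes": ("A", "B")},
    "C1": {"type": "capacitor", "nodes": ("B", "ground")},
}

# 2. A representational theory: a rule that abstracts from the objects
#    only the attributes of interest (which component types touch each node).
def representational_theory(objects):
    representation = {}
    for part in objects.values():
        for node in part["nodes"]:
            representation.setdefault(node, []).append(part["type"])
    return representation

# 3. The representational language: here, plain Python dictionaries.
# 4. The resulting representation: an abstraction of the domain.
rep = representational_theory(domain)
print(rep)  # maps each node to the component types attached to it
```

Note that the representation omits much that is true of the domain (component values, orientation), illustrating why representations are abstractions and hence incomplete.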
An illustration of this is the work done on electronic circuit recognition by Johan de Kleer while at MIT.5,6 His work involved capturing the kind of knowledge an electronics engineer has which allows him or her to determine what an electronic circuit does, given the schematic diagram (which shows the interconnection of the electronic components). The diagram itself represents the circuit, but knowledge of electronics is required to understand what it is for. This domain of electronic circuit recognition knowledge illustrates some aspects of knowledge representation commonly found in other domains as well, including that of mind-brain representation.

Dennis Feucht is involved in the development of electronically-controlled motion systems at Synektron Corporation and thermodynamic instrumentation and robotics at Innovatia Laboratories. He previously researched knowledge-based systems for instruments at Tektronix Laboratories, and co-authored a book on expert systems in 1986. He holds a B.S.E.E. (Computer Science) from Oregon State University, 1972.
To pose the circuit recognition problem in more detail, we are first given, in the schematic diagram, a structural description of a circuit, which comprehensively describes what the circuit is. The goal is then to formalize the knowledge that allows an engineer to determine the functional or teleological description from the structural one. Not uncommonly, intermediate levels of abstraction are needed to proceed from structure to function because the conceptual leap involved is too large. Although engineers have memorized from past experience many circuits for which they immediately know the function, when confronted by a novel circuit they must employ some kind of rationalization to identify its function. An intermediate level of description is used, a behavioral level, in which the behavior of the circuit is deduced from its structure. A behavioral description of an object makes explicit what it does. This involves knowledge of the behavior of the individual components under different conditions. To relate behaviors among components, a theory of behavior, or causal theory, is needed. A causal theory defines logically necessary relationships between behaviors. Two properties of a causal theory are important here:
1. locality-the theory applies to behaviors on a local rather than global (or overall) level in the structure of the object.
2. directionality-the logic of cause and effect is unidirectional in that causes are logically necessary for their associated effects to occur.
These two properties of the causal theory used in circuit recognition (and elsewhere in physics) let us think in terms of the propagation or flow of causes through a circuit structure.
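A minimal sketch of such local, directional causal propagation, assuming a de Kleer-style qualitative representation; the rule set and node names are invented for illustration and do not reproduce de Kleer's actual program:

```python
# Local causal rules: (cause_node, effect_node, sign). A +1 sign means
# the effect moves with the cause; -1 means it moves against it.
# Locality: each rule relates only two adjacent nodes.
# Directionality: causes flow only from cause_node to effect_node.
RULES = [("A", "B", -1), ("B", "C", +1)]

def propagate(start_node, change):
    """Propagate a qualitative change (+1 = increase, -1 = decrease)
    through the local rules, node by node."""
    states = {start_node: change}
    frontier = [start_node]
    while frontier:
        node = frontier.pop()
        for cause, effect, sign in RULES:
            if cause == node and effect not in states:
                states[effect] = states[cause] * sign
                frontier.append(effect)
    return states

# "If the voltage at node A increases, the voltage at B must decrease,
# which causes the voltage at C to decrease also."
print(propagate("A", +1))  # {'A': 1, 'B': -1, 'C': -1}
```

The design choice here mirrors the two properties above: the rule table encodes locality, and the one-way traversal encodes directionality.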
A functional description of the circuit can be derived from a causal one. This description tells what the circuit is for. Unlike causal theory, a theory of function, or teleological theory, has these two important properties:
1. globality-the theory applies to the overall function of a group of related components as a whole rather than to the components individually.
2. relationality-individual components are described in terms of how they contribute to the overall behavior of the circuit.
We then have a situation that can be graphically illustrated as this:

Domain                         Representational Theory   Representational Language   Representation
electronic circuit structure   causal theory             behavior                    causal representation
causal representation          teleological theory       causes                      functional representation
These multiple representations increase in abstraction from structure to behavior to function. Because of abstraction, the resulting representations are simple enough to be usable, but at the cost of being ambiguous due to a lack of detail about the circuits being represented. This means that more than one explanation of how the circuit behaves is possible. By simplifying the theory of how circuits work, some critical aspects may be lost. For example, de Kleer simplified the representation of circuit behavior by allowing only increases or decreases in voltages at circuit nodes. Engineers often reason about how circuits work this way: "If the voltage at node A increases, then the voltage at node B must decrease. This causes the voltage at node C to decrease also, and so forth." But sometimes, it is not enough to reason about circuit behavior in terms of qualitative changes. Feedback loops and other global circuit structures may lead to multiple causes with different amounts of contribution to the combined effect. Thus, multiple-and sometimes contradictory-behavioral descriptions result. To determine which description is correct requires another approach to the circuit which is not just a better causal theory. Circuit modeling programs produce a comprehensive analysis of a circuit's behavior based on the laws of electronics, and give a unique, consistent result. However, this analysis becomes unwieldy when applied to circuits beyond a limited complexity. To handle greater complexity, a simplifying abstraction like de Kleer's qualitative causal theory is necessary. Such is the case for engineers as well, since numerically exhaustive analyses of circuits are not feasible.
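The ambiguity just described can be shown in miniature: when two causal paths reach the same node with opposite signs, as in a feedback structure, purely qualitative propagation yields contradictory descriptions. This sketch is invented for illustration (it is not de Kleer's program), with hypothetical rules and node names:

```python
# Node "C" is driven by two paths from "A" with opposite signs:
# directly (A -> C, +1) and through "B" (A -> B -> C, net -1).
RULES = [("A", "C", +1), ("A", "B", +1), ("B", "C", -1)]

def qualitative_effects(start, change):
    """Collect every qualitative influence (+1/-1) each node receives."""
    influences = {}
    frontier = [(start, change)]
    seen = set()
    while frontier:
        node, value = frontier.pop()
        for cause, effect, sign in RULES:
            if cause == node and (cause, effect) not in seen:
                seen.add((cause, effect))
                influences.setdefault(effect, []).append(value * sign)
                frontier.append((effect, value * sign))
    return influences

effects = qualitative_effects("A", +1)
print(effects["C"])  # [1, -1]: C both "increases" and "decreases".
# Only a quantitative analysis can say which contribution dominates.
```

This is precisely the point at which a better causal theory does not help; resolving the contradiction requires a different kind of theory.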
So it is that multiple representations, with the accompanying theories that produce them, form a hierarchy of conceptual abstractions derived from the actual circuits themselves. Each level of this hierarchy is "spaced" from the lower one by a "conceptual distance" manageable by the human mind, for it is AI researchers who determine where the levels of representation for a given domain should be.
In addition to this abstraction hierarchy, the descriptions themselves may be hierarchically organized to deal with their own complexity.
The Distinction between Ontology and Epistemology in Representations
Representations have dual contributing factors; both the representational theory, with its language, and the objects of the domain contribute to the representation. For example, a causal description of a circuit is the result of both the structural description of the circuit and of the causal theory used to produce the representation. For the mind-brain problem, the various solutions posed are different representations of the domain to be represented, the mind-brain. A significant question which arises in examining these various alternatives is: which aspects of a mind-brain representation are due to the mind-brain itself and which are due to the theory used to represent it? Those aspects of a representation that are due to the nature of the object itself are the ontological aspects, and those due to the representational theory are the epistemological aspects. For the circuit example, it is easy to see that the qualitative aspects of the behavioral description are due to the qualitative causal theory used to analyze it. And the particular chain of causal flow is due to the structure of the circuit itself. But for the mind-brain problem, it is not at all easy to see which of these two contributors account for particular aspects of a representation. For example, is the mind ontologically real or a result of the theory used to describe the mind-brain?
Furthermore, it is not clear at what levels of abstraction the mind and brain are to be found. Some representations place them at the same level. For example, Descartes placed both mind and brain at the same conceptual level, the physical or material level, so that both are the same kind of object and could be represented by the same representational theory in the same terms. This approach has not been too successful since the data we have to begin with about the mind is not normally described in material terms or categories. Descartes made an ontological distinction between mind and brain. When a distinction is made at the same level of representation, the resulting representation is dualistic. When no distinction is made, the representation is monistic. An example of a monistic mind-brain representation is identity theory (or central-state materialism). According to that theory, the mind is the brain. Whether the mind-brain is described in physical or mental terms, the descriptions are of the same object. In this approach, two different representations of the mind-brain, one physical and the other mental, result from applying two different representational theories to the same object, the mind-brain. To then consider the mind and brain as one and the same would imply that the theories used to represent them are also the same, but this is not the case. Although the identity theory asserts one object, the mind-brain, and affirms both mental and physical descriptions, it mistakes the descriptions themselves as being ontological rather than the result of distinct representational theories. Ontological and epistemological contributions to the mental and physical representations have not been adequately distinguished.
Another representation, epiphenomenalism, asserts that the brain causes the mind. But what kind of causality is meant? From the circuit example, a causal description of the brain, a physical structure, would be in terms of physical behavior. Since epiphenomenalism acknowledges existential experience of mind, a physical-to-mental causal theory is required, whereby mental effects are described in terms of physical causes. But this approach confuses the idea of causation with different levels of representation. Unless epiphenomenalists develop a whole new concept of causation (in which case it should be given a different name to avoid semantic confusion), the present theory will not suffice for mind-brain representation. If this approach were taken to AI systems, it would lead to talk such as "These logic circuits caused the program to branch to a different line of code." Although a description of program branching can be related to logic-circuit behavior, it requires another level of abstraction, namely, a functional level, with a teleological theory which relates physical behavior of the computer to its functional description, which is the program. (Donald M. MacKay has made this distinction in specifying that the relation between mental and physical events is one of correlates rather than translations.7) This is not to deny representation of mental behavior, but to emphasize that in order to relate it to the physical behavior of the brain, another theory is required to bridge the conceptual gap, and this would be a teleological theory. In summary, the fault in the epiphenomenal representation is the assumption that mental and neural behaviors are related by a common representational theory. Neuroscience and cognitive psychology, however, are different theories and represent the mind-brain in different languages. To consider them causally related apart from teleological theory would be like trying to relate a street map of Vancouver to a politician from Vancouver. The two representations are both of Vancouver but are the result of distinct representational theories, which express the resulting representations in different languages, one geographical and the other political.
A Mind-Brain Representation

To do justice to what is understood from knowledge representation in AI, which provides an empirical base for testing representational theories, it is necessary to preserve both the monistic ontology of the mind-brain (as identity theory does) and the necessary epistemological distinctions in the representational activity. What this suggests is the following kind of mind-brain representational scheme:
1. ontological monism: brain and mind are the same structurally just as a computer and the program it is running are.
2. epistemological dualism: mind and brain are different kinds of representations, resulting from different representational theories. Both are required to describe what the mind-brain is because both, being abstractions, are incomplete representations in themselves.
The first feature of this representational approach is to assume that the mind and brain both have as their object of representation what, on the physical level of representation, is the same object, the mind-brain. That is, the mind and brain are not assumed to be distinct structures with their own independent existences any more than a computer and its program are structurally distinct. The question of where the program fits into the computer hardware must be answered in terms of energy states of parts of the hardware. It is the particular sequence of physical configurations that the computer takes on that relates the hardware to the software. But do sequences of configurations of the same atoms have an existence in themselves? The approach taken here would deny such a separate existence.
Nevertheless, the program running on the computer is something to be reckoned with. It is a different representation of the computer's activity than an electronic one, but is not any less "real." It is as necessary to us in order to understand the computer as is the electronic description. Thus, the second feature of this approach is an affirmation of epistemological dualism-that both mental and physical representations of mind-brain are necessary. Again, the distinction has to do with representational theory and not with the ontology of the mind-brain.
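This epistemological dualism atop ontological monism can be illustrated with a small sketch: one and the same bit pattern, read under two different "representational theories," yields two different but equally real descriptions. The particular byte string is arbitrary and purely illustrative:

```python
import struct

# One ontological object: a single pattern of four bytes.
state = b"Hi!!"

# Theory 1: read the state as a 32-bit unsigned integer (little-endian).
as_number = struct.unpack("<I", state)[0]

# Theory 2: read the very same state as ASCII text.
as_text = state.decode("ascii")

# Two descriptions of one object; neither is less "real," and neither
# exhausts what can be said about the state.
print(as_number, as_text)
```

Neither reading "causes" the other; each results from applying a different representational theory to the same physical state, which is the relationship the text above proposes for mind and brain.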
An objection to this might be raised by the ontological dualist: If we make a distinction in our thinking between mind and brain, then, if that distinction is real, would it not also be true of the mind-brain itself? In a sense, the answer is yes. The distinction of epistemological dualism between mental and physical representations does correspond to a real distinction in the mind-brain itself. But it is not a physical distinction except in the sequence of states of magnetic dipoles, atomic orbital energies, et cetera. Returning to the computer analogy, different programs can run on the hardware, and the distinction between the two phenomena consists of differences in state. No account of the hardware would be adequate (or complete in any sense) without, in some way, taking state into account, yet the physical structure of the computer remains essentially
the same. The only substantial difference due to differences in state is in the thermodynamic quantity, entropy, which increases with time. But is entropy, a measure of the disorder of energy, materially real? Or is it real only as state? It certainly cannot be ignored if we want to understand the computer in any but a superficial way.
This issue of the reality of epistemological categories vis-à-vis ontological categories is, I believe, a root issue in the mind-brain problem. In terms of knowledge representation, the question is whether the contribution to a representation by the representational theory adds to the reality of the object as it is represented. Extreme objectivists (such as materialists or positivists) would completely deny the reality of the contribution to a representation by the representational theory, while extreme existentialists would experience it as the only contributor. The approach set forth here does not deny the realism of either ontological or epistemological contributions to a representation. (Perhaps this is why D. M. MacKay, who has a similar approach, calls it "Comprehensive Realism."8-10)
The Mind-Brain and Theology
Whether the ontology of the mind-brain is monistic or dualistic is an issue that naturally lends itself to theological discussion, since the church has long had a historic interest in ontology. The mind-brain problem is not addressed in either the Bible or subsequent theological development, but the same underlying issues were dealt with at length by the church fathers in the Patristic period (A.D. 300-500) and are still present.11-16
A second important aspect of biblical religion relevant to the mind-brain problem is the difference in ontological orientation between the concrete and wholistic Hebrew or biblical mind and the abstract and dualistic Neo-Platonist mind of the New Testament and Patristic era. Because Neoplatonism gave the church fathers philosophical categories or concepts with which to express Christian dogma, some of its concepts have remained an integral part of theological formulation. The interaction of the biblical "raw data" with a Platonic philosophical base has created a juxtaposition of ideas that are not easily harmonized. A prime example, close to the mind-brain problem, is the relationship of soul (or spirit) to body. Perhaps new ontological ideas, such as those being developed empirically in Al, will cast new light on the contrast between the Platonic belief in immortality of the soul and the biblical emphasis on resurrection of the body.
Conclusions
In conclusion, an approach to the mind-brain problem that combines ontological monism with epistemological dualism has been presented. It reflects central aspects of orthodox ontology while overcoming the faults inherent in identity theory and epiphenomenalism, and is consistent with what has been learned from the empirically oriented activity of knowledge representation in AI.
ACKNOWLEDGEMENTS
The refinement of this paper was aided by the useful critiques given by Dr. Loren Wilkenson of Regent College, Vancouver, B.C., and Dr. Judy Manley Toronchuck of Delta, B.C., at the CSCA meeting at which this paper was originally presented. Also, discussions with Dane Waterman and Dr. Michael J. Freiling were influential.
2. Richard Bellman, An Introduction to Artificial Intelligence: "Can Computers Think?", ch. 11, "Mathematical Models of the Mind," Boyd & Fraser, 1978, pp. 116ff.
3. Arthur C. Custance, The Mysterious Matter of Mind, Zondervan/Probe, 1980.
4. D. Gareth Jones, "The Relationship Between the Brain and the Mind," JASA, Vol. 33, No. 4, December 1981, pp. 193ff.
5. Johan de Kleer, "Causal and Teleological Reasoning in Circuit Recognition," Ph.D. thesis, MIT, EE/CS Dept., September 1979, Report No. AI-TR529.
6. D. G. Bobrow, ed., "Qualitative Reasoning about Physical Systems," Artificial Intelligence (North-Holland), Vol. 24, Nos. 1-3, December 1984, including J. de Kleer, "How Circuits Work," pp. 205ff.
7. Gordon Clarke, "Does Artificial Intelligence Threaten Genuine Faith," Faith and Thought, Vol. 109, No. 1, 1982, sect. 4: "The Mind-Brain Problem," pp. 43ff.
8. Donald M. MacKay, Brains, Machines, and Persons, Eerdmans, 1980, pp. 11-20, 81-84.
9. Donald M. MacKay, "Ourselves and Our Brains: Duality without Dualism," Psychoneuroendocrinology, Vol. 7, No. 4, 1982, pp. 285-294.
10. Donald M. MacKay, "Man as a Mechanism," Faith and Thought, Vol. 91, No. 4, Winter 1960, pp. 145-157.
11. George Eldon Ladd, "The Greek versus the Hebrew View of Man," The Pattern of New Testament Truth, Eerdmans, 1968, pp. 13-40.
12. Robert D. Brinsmead, "Man (Part 1)," ch. 2: "Man as Body and Soul," Verdict, Vol. 1, No. 1, August 1978, pp. 12-19.
13. John Nolland, "Christian Thought in the Greek World," Crux, Vol. 17, No. 4, December 1980, pp. 9-12.
14. Dennis L. Feucht, "The Influence of Greek Thought on Christian Theology: Part I," ACC Journal, Vol. 1, No. 2, Fall 1983, pp. 16-22.
15. Gerald Bray, Creeds, Councils, and Christ, IVP, 1984.