Those who have what Searle describes as the "Strong AI" agenda are typically functionalists. To put it in layman's terms, "consciousness is as consciousness does", where "does" means "exhibits empirically observable characteristics". So -by redefinition- (of "consciousness") a computer (or room with a person in it or....) that exhibits the relevant behavior, especially if combined with internal (but at least in principle observable) states that are mappable-onto/interpretable-as human-like cognitive states, simply IS conscious.
Key points about this approach to mind:
(1) It involves a redefinition of consciousness, away from (a) the vague but common-sense, fundamentally ostensive (pointing to ourselves), introspective/perceptive definition, toward (b) a more scientifically useful but also qualitatively different functional/engineering model of the mind. The justification given for this redefinition is either (i) for the sake of argument, (ii) we need to make it empirical/scientific or we can't even talk about it intelligently, (iii) we need a definition that we (pragmatic scientists) can USE, (iv) prove that this isn't the right definition, and/or (v) it's a free country -- we can define terms any way we want.
(2) Acceptance of (1)(b) is either very innocuous or extremely confusing, depending on whether or not one realizes that one has basically changed the subject from "consciousness as we ordinarily mean it" to "consciousness as we ordinarily mean it OR anything that is, from an engineering/operationalistic perspective, functionally roughly equivalent". This is why the Turing test is seen as so profound: consciousness is, on this redefinition, as consciousness does. If anything -performs as well as- a conscious person at some task for which we rely on consciousness, then said thing -is conscious-.
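A toy sketch may make the operational criterion vivid. (This is entirely my own illustration -- the names judge, human_reply, and machine_reply are made up -- just the bare logic of a Turing-style indistinguishability trial.)

    import random

    def turing_trial(judge, human_reply, machine_reply, questions):
        # Each round the judge sees one question answered by two hidden
        # respondents and guesses (0 or 1) which answer is the machine's.
        correct = 0
        for q in questions:
            answers = [("human", human_reply(q)), ("machine", machine_reply(q))]
            random.shuffle(answers)  # conceal which respondent is which
            guess = judge(q, answers[0][1], answers[1][1])
            if answers[guess][0] == "machine":
                correct += 1
        # A hit rate near 0.5 means behavioral indistinguishability --
        # and so, in sense (b), consciousness.
        return correct / len(questions)

Note that nothing in the test ever looks "inside" the machine: if the judge can't beat chance, the machine "passes", and on redefinition (b) that is ALL there is to being conscious.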
This is innocuous if one realizes there are two very distinct uses of the terms, and something could be conscious in the engineering sense (sense (b)) without being conscious in the common sense/philosophical sense (sense (a)), at least so far as the meanings of the terms go. (Maybe there's always a one-to-one correlation, but that's not something that can be determined a priori -- one needs a theory to correlate the two.)
E.g., HAL 9000 was clearly "conscious" in sense (b), but given that we (presumably) have a full, without-residue explanation of everything "he" does and is that makes no reference to consciousness in sense (a), then we can safely conclude that he was not conscious in sense (a). (Might one say that "he" still -may- possess a consciousness in sense (a) that was an epiphenomenon [a phenomenon caused by something, but with no causal powers of its own -- a causally inert byproduct]? This would be without any known physical basis [or known metaphysical basis, for that matter], and so IF we assume that HAL is COMPLETELY explainable by (say) solid-state physics, and if such physics makes no reference to, and offers no explanation of, consciousness in sense (a), then we can conclude that HAL is NOT conscious in sense (a), even epiphenomenally.)
(If anyone wants me to respond to the argument "Well, but how do you know PEOPLE are conscious? By their behavior. So if computers behave the same way, we should draw the same conclusions, unless we are (to use Hofstadter's delightful term) 'meat-based chauvinists'", let me know, but it may be obvious how I'd reply based on my HAL comments.)
(3) None of the justifications for adopting the redefinition are at all compelling if sense (b) is meant as a REPLACEMENT for sense (a). Several of them may be good enough if one is adopting the redefinition as a SCIENTIFIC/ENGINEERING SUPPLEMENT to sense (a).
Remember that science and engineering are very pragmatic, so better by far a useful falsehood than a useless truth. If science doesn't have a good grip on consciousness in sense (a) -- and it doesn't -- but does in the new sense (b), then it should proceed on sense (b), particularly since, at least wrt humans, there seems to be some causal, neurological linkage between the (a) and (b) senses, the resolution of which is a big part of the real philosophical mystery of consciousness.
Also, sense (b) has independent utility. Let's suppose, as I think we must, that HAL 9000 had exactly zero consciousness in sense (a). Does this mean the HAL project was a failure? Maybe from a narrowly philosophical perspective. But that's not much of a criticism AT ALL. HAL would still be a truly revolutionary breakthrough in computing, and would have an extraordinary impact on human life, completely without regard to "his" having or lacking consciousness in the ordinary sense (a). At any rate, I certainly would rather have an unconscious HAL running my spaceship (with certain software modifications!!), or doing almost anything else, than a conscious, say, salamander running it!
(4) Reductionists usually claim either to eliminate consciousness (if they're talking about sense (a)), or to accept it on a functionalist redefinition (sense (b)). In either case, they eliminate consciousness in sense (a). Some try to have it both ways by blurring the distinction (e.g., Hofstadter, who is ontologically and ultimately reductionistic and eliminative, but who practically and superficially [given that we need to deal with people on a macro-object level] says he believes in consciousness in the ordinary sense (a) -- i.e., consciousness in sense (a) is a useful, even practically necessary -fiction-, emphasis and even phrasing mine). But the distinction persists nonetheless.
How in the world could one eliminate consciousness in sense (a)? I mean, what could be more self-evident, more epistemically foundational than the fact that we're conscious? I hear ya, buddy -- and I agree with you.
But eliminative materialists -- a radical but courageously honest and intellectually consistent bunch -- counterargue that (i) we've abandoned all sorts of "folk concepts" as science has progressed. Folk psychological concepts -- beliefs, feelings, images, pains, pleasures, etc. etc. -- will fall by the wayside just as did the Olympian gods, evil spirits, witches, etc. etc., when confronted with scientific explanations and ontologies. So there's precedent. And (ii) we don't really lose anything by eliminating these things, since they don't really exist anyway. These terms arose for evolutionary reasons (our brains weren't designed to deal with micro-realities, so they invented macro-fictions as crude shorthand for the unimaginably complex underlying micro-realities), but we've outgrown them now and can deal with the underlying reality itself, at least when functioning as scientists. So we lose nothing -real-. So no big deal.
Not surprisingly, many (almost all?) eliminative materialists accept scientism (the attitude that nothing is true, or perhaps that nothing is rationally believed, without scientific justification): these folk psychology concepts are fine as far as they go, but there's no scientific evidence for them, and if anything there's conclusive scientific evidence against them (just what is a "pain", exactly, in terms of physics or microbiology? Or an "image", in the same terms? Or....). Meanwhile there IS compelling scientific evidence for the neurological basis of the FUNCTIONALITY (we CAN answer, at least very roughly, questions like "what is 'pain' FUNCTIONALLY?", "what is an 'image' FUNCTIONALLY?", etc.). So, the argument goes, we should accept the science and reject the folk wisdom, now that we've intellectually come of age.
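To show what a FUNCTIONAL answer looks like, here's another toy sketch of my own (the names FunctionalRole and PAIN_B are invented, and no eliminativist should be held to them): sense (b) specifies a mental state entirely by its typical causes and effects, never by how it feels.

    from dataclasses import dataclass

    @dataclass
    class FunctionalRole:
        # A sense-(b) mental state: defined by causal role alone, with
        # no reference to what the state is like "from the inside".
        typical_causes: list
        typical_effects: list

    PAIN_B = FunctionalRole(
        typical_causes=["tissue damage", "nociceptor activation"],
        typical_effects=["withdrawal reflex", "avoidance learning",
                         "verbal report: 'that hurts'"],
    )
    # Anything instantiating this causal role is "in pain" in sense (b),
    # whatever it's made of -- neurons, silicon, or a room with a person
    # in it -- and whatever, if anything, it feels.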
While I don't find these arguments even remotely persuasive, I probably would if I found scientism for some reason compelling. Why would someone find it so? Got me. I mean, there are good reasons to be skeptical of many sources of information and many mechanisms of belief formation; but I've seen nothing that makes scientism even a seriously live intellectual option, let alone mandatory.
(5) This is critical: most people working on and talking about this stuff don't do so very precisely FROM A DEEP-TRUTH-ORIENTED PHILOSOPHICAL PERSPECTIVE. They're often -very- precise from a pragmatic engineering or scientific perspective. Hence, they do great work in the lab, where that latter precision rules, but very sloppy work philosophically.
So they won't make the distinctions between consciousness in sense (a) and sense (b), typically either thinking (b) is all there is (confusing a sort of methodological naturalism with a sort of ontological naturalism), or sloppily conflating the two and making (b) dominant. Why sense (b)? Again, because (b) is the empirically accessible scientific/engineering definition. Philosophy is not usually utility-driven. Science and especially engineering are.
(6) Accepting a strong sense of consciousness in the ordinary sense, sense (a), does not -require- Cartesian/substance dualism, but I suspect it requires either -property- dualism (that is, in addition to physical properties as described by physics, there are also mental properties, not currently so described, and MAYBE in principle indescribable that way) or a new sort of monism that goes beyond current physics. (Some of the philosophically oriented physicists, e.g., Penrose, are working in this direction.) Right now there aren't any good, hard theories for consciousness in sense (a), so far as I know. Some think there can't be in principle, with -perhaps-, -at best-, a hard theory for things that CORRELATE, stochastically or deterministically, with consciousness in sense (a). Others disagree, and say that perhaps with a revolutionary new scientific paradigm, we could get a hard theory of consciousness itself, where things like "pains" and "images" and "beliefs" (or their NON-eliminative, more precise successor concepts -- this is vague, I realize) will be parts of the new physical ontology. Time will tell.
Boy -- this turned into a book. Sorry about that, and about the lack of precise organization in my presentation. But however poorly outlined, these are important points if one is to discuss this issue carefully.
THE END.
--John
----------
From: Bill Hamilton [SMTP:hamilton@predator.cs.gmr.com]
Sent: Friday, January 31, 1997 3:45 pm
To: evolution@ursa.calvin.edu
Subject: Turing test
Thanks for an interesting post, Brian. I believe Roger Penrose's book "The Emperor's New Mind" opens with a story about the demonstration of a computer that supposedly can pass the Turing test. In this particular demo the MC invites a small boy to ask the first question. The kid says, "How does it feel to be a computer?" which stumps the computer. (Of course there's an implicit assumption in this that the computer is not programmed to attempt deception.)
I tend to believe that a mind is something quite different from a computer, and that a brain is more like a computer than a mind. So the people who are saying thinking is algorithmic have to show how will, intent, initiative, etc. can be algorithmic.
Bill Hamilton
--------------------------------------------------------------------------
William E. Hamilton, Jr, Ph.D. | Staff Research Engineer
Chassis and Vehicle Systems | General Motors R&D Center | Warren, MI
810 986 1474 (voice) | 810 986 3003 (FAX) | whamilto@mich.com (home email)