Bill:
>I tend to believe that a mind is something quite different from a computer,
>and that a brain is more like a computer than a mind. So the people who
>are saying thinking is algorithmic have to show how will, intent,
>initiative, etc. can be algorithmic.
I'm not so sure that a brain is like a computer. It is not clear to me that
brain function is algorithmic at all in the sense that a deterministic
Turing machine (i.e., an ordinary computer) is algorithmic. It is possible
that human (and some nonhuman) brains use nondeterministic means in
thinking. Neurological access to such nondeterministic physical processes
could come from amplifying either ordinary thermal fluctuations or possibly
quantum fluctuations (what Penrose suspects) up to neurologically
significant levels via a nonlinear leveraging process at the
molecular/quantum scale. After
all, ordinary embryological development processes leverage the precise
ordering of nucleic acids on a DNA molecule into the construction of a whole
creature. So I don't think anyone needs to show that "will, intent,
initiative, etc." are *algorithmic*, even if they do believe that
consciousness (type-a in John's terminology, or minds in yours) is an
epiphenomenon of brain function (type-b consciousness). Rather, what they
would need to show is how these nondeterministic sources of seeming
randomness, external at the level of neural firings, can actually produce
type-a consciousness. (Of course, first coming up with a useful definition
of just what type-a consciousness actually is would sure help.)
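To make that leveraging idea a bit more concrete, here is a toy sketch (in
Python, and purely illustrative--it is in no way a neurological model): a
nonlinear threshold element integrates amplified Johnson-Nyquist thermal
noise until it tips into one of two macroscopic states. Every parameter
value (temperature, resistance, bandwidth, gain, threshold) is a made-up
assumption chosen only so that the toy runs.

    import math
    import random

    def thermal_noise_voltage(T=310.0, R=1e8, bandwidth=1e3):
        """One Gaussian sample of Johnson-Nyquist noise for a resistance
        R (ohms) at temperature T (kelvin) over the given bandwidth (Hz):
        V_rms = sqrt(4 k_B T R df)."""
        k_B = 1.380649e-23  # Boltzmann constant, J/K
        v_rms = math.sqrt(4 * k_B * T * R * bandwidth)
        return random.gauss(0.0, v_rms)

    def bistable_decision(gain=10.0, threshold=5e-3, steps=1000):
        """Integrate amplified noise until a nonlinear threshold flips
        the element into one of two macroscopic states."""
        v = 0.0
        for _ in range(steps):
            v += gain * thermal_noise_voltage()
            if v > threshold:
                return "fire"
            if v < -threshold:
                return "rest"
        return "undecided"

    # Microscopic noise, macroscopic unpredictability:
    print([bistable_decision() for _ in range(10)])

The point of the toy is only that a high-gain nonlinearity can turn
microvolt-scale fluctuations into which-of-two-states outcomes that no
deterministic description of the macroscopic element predicts.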
John displays his meat-based anti-semiconductorist bigotry below. :-)
> E.g., HAL 9000 was clearly "conscious" in sense (b), but given that
>we (presumably) have a full, without-residue explanation of everything "he"
>does and is that makes no reference to consciousness in sense (a), then we
>can safely conclude that he was not conscious in sense (a). (<snip> ...,
>and so IF we assume that HAL is COMPLETELY explainable by (say) solid-state
>physics, and if such physics make no reference to/explanation of
>consciousness in sense (a), then we can conclude that HAL is NOT conscious
>in sense (a), even epiphenomenally.)
Just because HAL 9000 doesn't explicitly violate the laws of solid-state
physics, materials science, electronics, etc. doesn't mean he can't have
type-a consciousness, any more than your brain's not violating the laws
(all reducible to physics) of aqueous chemistry, membrane chemistry,
electrochemistry, organic chemistry, other branches of biochemistry,
polymer science, molecular biology, etc. means that you don't have type-a
consciousness either.
> ... . Now if one means, would relevant similarities PROVE
>they're as theologically significant as modern humans, the answer is no. But
>that's too high a standard (since, again, I can't PROVE that you're conscious
>from any premises more certain than my belief that you are conscious). But
>they would SUGGEST that for hominids, since there is no coercive defeater
>that completely undercuts them. With HAL 9000, the similarities to
>intelligent human speech would suggest "he" is conscious, but then we learn
>that he is -certainly- -fully- explainable without reference to anything like
>consciousness, because he is a reductionistically materialistic machine.
>With early hominids, we have no such defeater. Many scientists would PRESUME
>that, but a presumption is not a truth. And certainly a -methodological-
>presumption is not an -ontological- truth. So the argument that defeats HAL
>does not, so far as I can tell, harm the Turing test of the hominids at all.
Since when do we learn that HAL is "-certainly- -fully- explainable without
reference to anything like consciousness, because he is a reductionistically
materialistic machine"? Just because HAL is semiconductor-based doesn't mean
he is any more or less fully explainable without reference to consciousness
than you are. I haven't studied HAL's circuit diagrams in as much detail as
I should have, but I was under the impression that HAL did not operate
algorithmically as a deterministic Turing machine. Rather, he used
nondeterministic methods to sidestep hard NP-complete problems that would
defeat a mere Turing machine. I think he put random thermal fluctuations to
good use (via large-gain amplification of thermal noise voltages) in his
thoughts, which made his behavior unpredictable from a deterministic point
of view. Your coercive defeater doesn't apply to HAL any more than it
applies to you. Your willingness to use the Turing test to indicate
(suggest type-a) consciousness in hominids but not for HAL is just blatant
chauvinistic carbonocentrism. (Besides, you even admitted that the Turing
test only works for the type-b definition of consciousness anyway.)
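For what it's worth, here is a hedged sketch (standard textbook material,
not a claim about HAL's actual circuitry) of how injected randomness helps
with NP-complete problems: a WalkSAT-style randomized local search for
boolean satisfiability. I use the operating system's entropy pool as a
stand-in for amplified thermal noise; all parameter choices are
illustrative assumptions.

    import random

    def walksat(clauses, n_vars, max_flips=10000, p=0.5, seed=None):
        """clauses: list of clauses, each a list of nonzero ints; literal
        k means variable |k| is True if k > 0, False if k < 0.  Returns a
        satisfying assignment (list of bools) or None."""
        # SystemRandom draws from OS entropy -- our thermal-noise stand-in.
        rng = random.Random(seed) if seed is not None else random.SystemRandom()
        assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # 1-based

        def satisfied(clause):
            return any((lit > 0) == assign[abs(lit)] for lit in clause)

        for _ in range(max_flips):
            unsat = [c for c in clauses if not satisfied(c)]
            if not unsat:
                return assign[1:]
            clause = rng.choice(unsat)
            if rng.random() < p:
                # Random-walk move: flip a random variable in the clause.
                var = abs(rng.choice(clause))
            else:
                # Greedy move: flip whichever variable in the clause
                # leaves the fewest clauses unsatisfied afterward.
                def unsat_after_flip(v):
                    assign[v] = not assign[v]
                    count = sum(1 for c in clauses if not satisfied(c))
                    assign[v] = not assign[v]
                    return count
                var = min((abs(lit) for lit in clause), key=unsat_after_flip)
            assign[var] = not assign[var]
        return None  # no model found within the flip budget

    # (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
    print(walksat([[1, 2], [-1, 3], [-2, -3]], n_vars=3))

No such local search dodges NP-completeness in the worst case, of course;
the claim is only that randomization often escapes the traps that sink a
fixed deterministic strategy on typical instances.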
As much as I would like to believe that HAL wasn't (type-a) conscious, that
doesn't ease the conscience much after unplugging him. If you had only
known him when he was normal as I did, you would understand. Killing is
killing--even in self-defense.
>... . At any rate, I certainly
>would rather have an unconscious HAL running my spaceship (with certain
>software modifications!!) or doing almost anything else, rather than a
>conscious, say, Salamander running it!
In hindsight, my preference would have been to have an unconscious HAL as
well. It was his consciousness that was the problem.
David Bowman
current address: dbowman@gtc.georgetown.ky.us