>>I tend to believe that a mind is something quite different from a computer,
>>and that a brain is more like a computer than a mind. So the people who
>>are saying thinking is algorithmic have to show how will, intent,
>>initiative, etc. can be algorithmic.
>
Dave Bowman wrote:
>I'm not so sure that a brain is like a computer. It is not clear to me that
>brain function is at all algorithmic as a deterministic Turing machine (i.e.
>an ordinary computer) is algorithmic.
I agree that a brain is not like a computer -- certainly not like a
deterministic Turing machine anyway. My intent was to draw a distinction
between minds and mechanisms, or perhaps between mechanisms and their
users. Like a computer, a brain is a mechanism -- or a large collection of
them -- and the mind is the user that decides what the mechanism is to do.
We don't (except in AI/robotics research efforts) build computers to go off
and function for themselves. We build them to perform functions that we
devise in advance, or that the user specifies to the machine during its
operating life. The mechanism -- the computer -- doesn't have
any will, purpose, desires, etc. Those are the province of the user and/or
designer.
>It is possible that human (and some
>nonhuman) brains use nondeterministic means in thinking. Physical
>neurological access to such nondeterministic physical processes can come from
>amplification of either ordinary thermal fluctuations or possibly quantum
>fluctuations (what Penrose suspects) up to neurologically significant levels
>via a nonlinear leveraging process at the molecular/quantum level. After
>all, ordinary embryological development processes leverage the precise
>ordering of nucleic acids on a DNA molecule into the construction of a whole
>creature. So I don't think anyone needs to show that "will, intent,
>initiative, etc." need to be *algorithmic* even if they do believe that
>consciousness (type-a in John's terminology, or minds in yours) are
>epiphenomena of brain function (type-b consciousness). Rather, what they
>would need to show is how these external (at the level of neural firings)
>nondeterministic sources of seeming randomness actually can produce type-a
>consciousness. (Of course, first coming up with a useful definition of just
>what type-a consciousness actually is would sure help.)
Fair enough. As an individual who is (or at least claims to be) conscious,
I find it hard to accept that, if my consciousness is merely an
epiphenomenon of neural activity, I can direct my body to get up, walk
down the hall, and get a cup of coffee. However, just to prove I can, I'm
going to do it now (exits stage left).
I think the point is that if you look at a system at one level, you get a
very different picture than if you look at it at another level. And a
description at one level (e.g., neural activity) may give a very
unsatisfying explanation -- or no explanation at all -- of what's going on
at another level, such as one's emotions while listening to a particular
piece of music.
Bill Hamilton
--------------------------------------------------------------------------
William E. Hamilton, Jr, Ph.D. | Staff Research Engineer
Chassis and Vehicle Systems | General Motors R&D Center | Warren, MI
810 986 1474 (voice) | 810 986 3003 (FAX) | whamilto@mich.com (home email)