Imagine a race of artificial creatures that live in a computer. These
creatures are quite advanced compared to the artificial life creatures
people play with today. They are sufficiently advanced that they have
something that passes for consciousness, and their creator, a chaos
theorist who likes to write C++ code, communicates with them. The
creatures have some ability to use the computer's facilities -- network
connections and attached video cameras (the chaos theorist works in an
industrial vision research lab) -- to find out about the world around
them, and they carry on dialogs with their creator in an effort to understand
their world. The creator tries to answer their questions, but no matter
how many times he answers certain of them, and regardless of the level of
detail, there seems to be little or no understanding on the part of the
electronic creatures. Take, for example, the question of why he created
them in the first place. He has answered quite honestly that he created them
because he thought there would be considerable value in having "smart
assistants" available which could perform various search and control tasks.
He has answered that question many times, and he has told them many times
why he needs to do the tasks in the first place. They seem somewhat
satisfied with the answer "to learn things," but he has also told them that
he needs to perform these tasks to put food on the table. That answer has
invariably led to more questions and ever more detailed answers, until the
creatures finally break off the questioning, indicating that they have given
up and are drifting off to other, more fruitful pursuits.
Could it be that we simply lack the sensory and/or mental capacity to
understand the answers?
Bill Hamilton | Vehicle Systems Research
GM R&D Center | Warren, MI 48090-9055
810 986 1474 (voice) | 810 986 3003 (FAX)
hamilton@gmr.com (office) | whamilto@mich.com (home)