"Richard Wein" writes
in message <001601bfb032$187d9540$732e70c3@richard.wein.virgin.net>:
<snip>
> > My reasoning is this, briefly. It seems more likely that an ID
> > would be a member of a race rather than the only one of its kind
> > simply because suggesting another form of evolution as the
> > explanation for its origin is superior to assuming that the ID
> > always existed (a god) or popped into existence from nowhere (divine
> > creation of an ID?).
>
> This is a reasonable assumption, but not a necessary one. Perhaps the ID
> which created life on Earth was itself a one-off creation of a species that
> evolved! Anyway, it ignores the possibility (at least from a theist's point
> of view) of an eternally existing God outside the universe. Since almost all
> IDers are theists, they're hardly likely to be persuaded by such an
> argument.
I'm actually very curious to see whether an IDer can make a
consistent argument without appealing to some kind of god-like
force at some point. I think they realize that when you
appeal to a god, you lose the scientific battle; I'm trying
to see if such an argument can actually work when pursued
to its full implications.
> > If we agree that an ID with intelligence must have some kind of
> > empathy, then we can conclude that the most logical basis for
> > applying empathy --that is, finding the attribute of any given
> > organism to deem worthy of empathic feelings-- should be
> > self-awareness. In humans, empathy without knowledge might
> > allow us to feel that humans of our own race are the only ones
> > worth empathizing with. However, empathy with knowledge tells
> > us that all humans -- or even all organisms capable of feeling
> > pain and pleasure the way we do -- are worth that.
>
> Empathy with knowledge may *tell* us that other organisms are worth
> empathizing with. That doesn't necessarily mean that we *will* empathize
> with them. There's a big difference between knowing something and feeling
> it. That's why we often do things that we *know* to be wrong and don't
> necessarily feel bad about it.
>
> The question is how well does empathy extend from our close circle of
> friends and relatives to other members of our species, and, more
> significantly, to other species? Judging by mankind's history of violent
> conflicts and inhumanity towards other species, I would have to say not very
> well. Our attitudes towards other races and species do seem to be improving
> with better education, but the old ingrained prejudices keep resurfacing.
Ah, but if attitudes are improving at all, that makes my point.
We observe a direction which cannot be easily explained by any
force other than knowledge and empathy. Outlawing slavery,
allowing women to vote, permitting same-sex marriages -- there
is clearly a direction here, one most consistent with an
increased perceived value of any individual solely because he
or she *feels*, don't you think? I've ruled out religion as
the source of this attitude, mainly because religions most often
seem to resist that sort of change.
> Empathy can easily disappear when it conflicts with other interests. For
> example, we may well empathize with a lamb playing in a field. But how many
> of us feel bad about eating meat? Then again, vegetarianism is on the
> increase (at least in the West, but maybe not in the developing world,
> where traditional diets are being replaced by western fast foods!).
>
> Perhaps the designer gains more in pleasure from watching the antics of the
> human race than he loses in empathic suffering!
But if we assume that attitudes are moving towards the highest
and purest value for the individual solely because of his, her
or its ability to experience reality, it would seem most consistent
that this designer would be as far advanced in this attitude as
in its intelligence.
> > I've never known a moral code which didn't have -- at its base --
> > the goals of maximizing pleasure and minimizing pain. Think about
> > any moral rule whatsoever. Ultimately, it reduces to just that.
>
> That might be the original impulse for morality, but, like our other natural
> impulses, it has been modified under the influence of our intelligent mind.
> As a result, we have such moral imperatives as duty to country, upholding of
> religious and political dogma, work ethic, etc.
How are these not based, though, in pain and pleasure principles?
For example, duty to country is based on the rational assumption
that one's country is the strongest determiner of one's success
and personal freedom (among other things). However, in a global
economy, we can very quickly see the rational basis for being
equally concerned with our duty to the world and with how our
country interacts with other countries. Duty to one's country
is typically abandoned when one's country works directly
against one's interests.
Isn't it most rational to ground all moral imperatives in
minimizing pain and suffering and maximizing success and
freedom for the greatest number (as two possible "pleasure"
indices)? Isn't it most *irrational* to follow a rule for any
other reason? (At least, I can't think of any good
reason to follow a rule that doesn't "make sense", as we usually
put it, on the assumption that morals must have goals that
bring about good or avert bad.)
> Who can say that the
> intelligent designer is not working under some other moral imperative?
I can't see moral imperatives surviving for any length of time
without a rational basis in minimizing suffering and maximizing
pleasure.
> Perhaps he has developed the power of his rational mind (or his technology)
> to the point where he can switch off his feelings of empathy.
The thing, though, is that switching off empathy is perhaps the
worst possible form of self-mutilation. With it goes the
ability to value anyone but oneself. In human terms,
that means turning off the ability to love. I can't imagine a
goal that could rationally be worth that.
> All in all, while your argument is reasonable, it's far from
> conclusive.
While obviously I agree it isn't conclusive, I have trouble
seeing why it is far from conclusive. I find that the main
criticism I have against my own argument is a vestige of a
religious belief in a being like Satan. And yet, the whole
concept of a super-intelligent being wholly intent on evil is
utterly self-contradictory!
> And I think we have much better arguments against
> ID. (At least against ID as a scientific theory.)
That's certainly true.