[comp.ai.digest] submission: Ethics of AI

JCOGGSHALL@HAMPVMS.BITNET (Jeffy) (04/05/88)

From:<ssc-vax!bcsaic!rwojcik@beaver.cs.washington.edu  (Rick Wojcik)>
  (I have deleted much text from in between the following lines)
>But the payoff can be tremendous.
>In the development stage, AI is expensive,
>but in the long term it is cost effective.
>the demand for AI is so great that we have no choice but to
>push on.

        I would question the doctrine that "what is most _cost-effective_
(in the long term, of course) is best." I think that, as Caroline Knight
said,
        "Whatever the far future uses of AI are
        we can try to make the current uses as
        humane and as ethical as possible."
        I mean, what are we developing it for, anyway? It often seems that
AI is being developed for a specific purpose, but nobody seems to want to
be explicit about what that purpose is. Technology is not neutral. If you develop AI
mainly as a war technology, then you will have a science that is most
easily suited for war (as far as I know, DARPA is _the_ main funder for AI
projects).
        Here is a quote from a book by Marcus Raskin and Herbert Bernstein:
        (they are talking about the Einstein-Bohr debate here, and how the
results of Quantum Mechanics show us an observer created universe):
        "Bohr's position puts man, or at least his machines, at the center
of scientific inquiry. If he is correct, science's style and purpose has to
change. The problem has been that the physicists have not wanted to make
any critical evaluation of their scientific work, an evaluation which their
research cried out for just because of their belief that human beings
remain at the center of inquiry, and man cannot know fundamental laws of
nature. They rejected Einstein's conception of a Kantian reality and
without saying it, his view of scientific purpose. Even though no
fundamental laws can be found independent of man's beliefs and machines he
constructs, scientists abjure making moral judgements as part of their
work, even though they know - and knew - that the very character of science
had changed.
        Standards for rational inquiry demand that moral judgements should
be added as an integral part of any particular experiment. Unless shown
otherwise, I do not see how transformations of social systems, or the
generation of a new consciousness can occur if we hold on to narrow
conceptions of rational inquiry. Inquiry must now focus on relationships.
The reason is that rational inquiry is not, cannot, and should not be
sealed from everyday life, institutional setting or the struggles which are
carried on throughout the world. How rational inquiry is carried on, who we
do it for, what we think about and _what we choose to see_ is never
insulated or antiseptic. Once we communicate it through the medium of
language, the symbols of mathematics, the metaphors and cliches of everyday
life, we call forth in the minds of readers or fellow analysts other issues
and considerations that may be outside of the four corners of the
experiment or inquiry. What they bring to what they see, read, or replicate
is related to their purpose or agenda or their unconscious interpretations"
        - from: "NEW WAYS OF KNOWING", page 115
        In any case,

        >But the payoff can be tremendous.
        >In the development stage, AI is expensive,
        >but in the long term it is cost effective.
        >the demand for AI is so great that we have no choice but to
        >push on.

        is the wrong way to view what one is doing when one is doing AI.

                                                - Jeff Coggshall
                                        (JCOGGSHALL@HAMPVMS.BITNET)


        (This article is also in response to:
                From: ARMAND%BCVMS.BITNET@MITVMA.MIT.EDU
                Subject: STATUS OF EXPERT SYSTEMS?)