[talk.philosophy.misc] Why discuss consciousness and free will?

ray@bcsaic.UUCP (Ray Allis) (12/22/88)

>From: huntley@nprdc.arpa (David Huntley)
>Subject: a small subset of AI
>
>Comp.ai suggests to me something very specific:  AI (as we understand the
>engineering discipline) on computers.  AI, as an engineering discipline,
>is in my mind an indisputably useful one.  AI research has resulted in
>formal languages better suited to certain kinds of practical problems.
>Neural networks, software/hardware creations, extend the applications to
>which computers can be put.  Process control, inventory management,
>forecasting, and industrial robots are all improved by AI techniques.  Also,
>work is being done on computer models of the brain in the general area of
>physiology.  If a computer model of the brain sufficiently complex to
>allow a mapping between "damage" and degraded performance can be
>developed, so much the better. With computers that already perform the
>written-to-spoken conversion, a tool for the study of aphasia seems to be
>at hand.
>
>Can anyone tell me how AI on computers entails consideration of free-will,
>self-awareness, consciousness, subconsciousness, unconsciousness or, in
>short, non-biological human replication?
>
>I understand that one can fantasize, create and share in science fiction.
>But why not do that in genetic.engineering.slave.colony ?
>
>David huntley@nprdc.arpa
>    "These two cents belong exclusively to me."

Here's your two cents, and I'll raise you two...

In almost any application, computer systems will continue to become more
sophisticated and complex.  It's already impossible to guarantee the entire
repertoire of performance of our most sophisticated systems.  It is clearly
foolish to put such systems in control of critical situations.

Even though we cannot *guarantee* the performance of humans, we are
somewhat comforted by the belief that, in general, they share our *values*,
and we can predict with some confidence what other people will do by
imagining what we would do.  Not so with machines.  How would an "expert
system" handle the decision as to exactly when to give up searching for
survivors in the Armenian earthquake, remove desperate fathers, grieving
mothers, husbands, wives, and children from the area and begin bulldozing?

Present forecasting systems are just accountants' worksheets which have
been mechanized; the significance of the input parameters is judged by the
human designer, not by the system.  Any planner (e.g. battle) or predictor
(e.g. economic) worth the name should be able to predict *my* behavior
should its plan, for example, threaten my daughter's life.
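
To see what I mean by a mechanized worksheet, here's a toy sketch (mine
alone, and only a toy; the parameter names and weights are invented for
illustration):

    # A toy "forecasting system": a worksheet that has been mechanized.
    # The weights -- the significance assigned to each input -- were
    # chosen by the human designer; the machine merely does the sums.
    DESIGNER_WEIGHTS = {
        "housing_starts": 0.5,       # designer's judgment, not the machine's
        "interest_rate": -2.0,
        "consumer_confidence": 1.2,
    }

    def forecast(inputs):
        """Return the weighted sum of whatever numbers we are handed."""
        return sum(DESIGNER_WEIGHTS[name] * value
                   for name, value in inputs.items())

    # The system has no idea what any of this *means*; give it nonsense
    # with the right labels and it computes just as happily.
    print(forecast({"housing_starts": 1.1,
                    "interest_rate": 0.08,
                    "consumer_confidence": 0.9}))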

Artificial Intelligence is NOT a specialty interest within Computer
Science; if anything, the inverse is true.  That attitude simply reflects a
dismal lack
of imagination.  And AI is most definitely not yet an engineering
discipline.  The discussion in comp.ai demonstrates that definitions are
still vague and most questions are open.  Put bluntly, AI's poor reputation
and embarrassing lack of results are largely due to a mistaken belief in
the supremacy of formal logic and premature attempts at engineering without
benefit of theory.  We're still not agreed on what intelligence IS, for
heaven's sake!

Considering the investment involved, we would like our Mars Explorer to
avoid stupid acts (like falling down some slope) which will compromise its
mission.  The way things are, we'll have to try to foresee every possible
situation the robot may encounter, try to provide explicit instructions
for every condition, and patch the gaps with delayed-action corrections or
additions.  We *cannot* tell an autonomous machine enough for it to handle
all the situations it will encounter.  There *will* be unexpected problems
and the machine will have to cope or fail in its mission.

On the other hand, how good it would have been to collect a couple thousand
airplane-manufacturing robots from the Boeing plants here in Seattle and
fly them to Leninakan and Spitak.  "O.K.," we'd say, "You little guys
explore down in the rubble for survivors.  Be careful not to cause it to
collapse and crush anybody.  When you find someone, tell these big guys,
the reachers and heavy lifters, so they can move the concrete and steel and
remove the trapped people."  This alone seems to me an excellent reason to
try to build smart machines.

I'd be delighted with a gadget to which I could say "Squeaky, you're
responsible for the family diet.  Make up menus, procure the groceries and
prepare the meals.  We have normal requirements, no one is diabetic or
anything like that, but I hate cooked carrots, and little Jimmie is
allergic to strawberries.  For now, you'll have two hundred dollars per
week.  Let me know if you have problems."

If we are to *communicate* with our machines we need to *understand* each
other (two more poorly defined words).  These "machines" will need (among
many other things) most of the "normal" complement of human sensory
equipment.  Otherwise, how can they understand the purpose of agriculture
(But WHY do you cultivate plants?) or culinary art (But WHY do you heat
bovine muscular tissue until it begins to decompose?) or rage ("Rambo" made
*how much* money?  WHY?), or sexual attraction, or just about ANYTHING
about people and the world in which we (and they) exist?

Really high-payoff autonomous systems (say, those running nuclear power
plants) will be useless (or worse, dangerous) as long as they don't
*think*.  We are
currently haggling over the meaning of notions such as *intelligent*,
*think*, *believe*, *emotion*, *self-consciousness* etc., etc. and etc. in
the expectation that such understanding is necessary to make machines
*think*.

I look forward to much more than dexterous robots.  I want a library which
can give me a useful answer to "What is a Muslim?" or "Why is there an
Arab-Israeli conflict?".  I want a "machine" (don't assume digital
computers) which can translate Goethe and Shakespeare without losing the
author's imagery and *intent*.  I want a machine which, in answering my
questions, draws on Arabic, Hebrew, Chinese, and Russian literature, and on
religious and philosophical material, in the original languages.  To do
this, the
machine must *understand* language.  Nothing even remotely close to that
exists today.  Maybe you could have a system which can answer *your*
question.

Why do I want to build thinking machines?  Well, for one thing, I believe
they can help us manage our world, assist in the terribly complex problems
of global society.  And damn' right I want machines smarter than the
average human.  Who's going to help eliminate the conflict between
Catholics and Protestants in Ireland?  Politicians?  Ha, ha.  Where are the
machines which will help us find a way for everyone on the African
continent to live a productive life?  Who's going to sort out food
production, distribution and political bullshit so nourishment is in the
right place at the right time and I don't have to look at pictures of
two-year-olds with distended bellies?

I imagine machines (artifacts constructed by humans) who will participate
as partners in the pursuit of the goals of a common society.  Of course
there will be variation from "normal" human intellect.  We should actively
explore such variations, but exploration should be deliberate, planned and
purposeful.  Suppose we build a thinker which is not bound by the "seven
plus or minus two" limit -- say we extend it to "7,000 plus or minus 100"?
Or seven hundred million?  Suppose the Library of Congress were contained
in one mind which could associate anything with anything else?  Sure, it's
science fiction.  My sixth grade teacher laughed at me for believing people
would go to the moon.  The payoffs for AI will be much larger!

Culture (cultural knowledge) is one of the major advantages humans have
over the other animals on this planet.  But culture is useless just sitting
in libraries.  Some *one* has to *use* it.  We want to make machines which
can *learn* this body of knowledge, add to it, and *do something* with it.
I want to build *minds*, and that's why I wrinkle my forehead and
wonder about consciousness and free will.

Disclaimer?  Redundant; my employer has already disclaimed me.
ray@atc.Boeing.com

dmocsny@uceng.UC.EDU (daniel mocsny) (12/23/88)

In article <9378@bcsaic.UUCP>, ray@bcsaic.UUCP (Ray Allis) writes:
>  And damn' right I want machines smarter than the
> average human.  Who's going to help eliminate the conflict between
> Catholics and Protestants in Ireland?  Politicians?  Ha, ha.

Let's hope that while they're smarter, they are also nobler.

"Hello, M-1, how's that Ireland situation going?"

"I eliminated the conflict, boss. It didn't run you much, since
I optimally allocated the warheads."

Cheers,

Dan Mocsny
dmocsny@uceng.uc.edu

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (12/24/88)

From article <9378@bcsaic.UUCP>, by ray@bcsaic.UUCP (Ray Allis):
" ...  The discussion in comp.ai demonstrates that definitions are
" still vague and most questions are open.

With the possible exception of those postings which have attempted a
taxonomy of intelligent behavior, surely no one thinks that this sort of
arguing over definitions could ever advance the discipline.  You can't
find out new things by constructing better definitions -- you may be
able to prevent discoveries by constraining the search.

" Put bluntly, AI's poor reputation
" and embarrassing lack of results are largely due to a mistaken belief in
" the supremacy of formal logic

How can attention to the nature of inference with due respect for
precision and rigor be mistaken?

" and premature attempts at engineering without benefit of theory....

Amen.

" Considering the investment involved, we would like our Mars Explorer to
" avoid stupid acts (like falling down some slope) which will compromise its
" mission. ...

This is close to a premise of a James Hogan SF novel.  In the future,
development of intelligent machines will be driven by necessity.
We will become so dependent on computers that their unintelligent
advice will prove disastrous for us.

			Greg, lee@uhccux.uhcc.hawaii.edu

bwk@mbunix.mitre.org (Barry W. Kort) (12/28/88)

In article <9378@bcsaic.UUCP> ray@bcsaic.UUCP (Ray Allis) offers
his vision of smart, helpful robots who assist rescue efforts
in Armenia, who prepare nutritious meals, who respond to questions
with insightful commentary, and who contribute toward the solution
of world hunger and political conflict.

I applaud Ray for his vision, courage, and commitment to direct
our technology toward worthwhile and challenging goals.  If Ray's
robots inherit his intelligence, values, and desire, I, for one,
would be comfortable in vesting them with responsibility now
reserved for our elected officials.

--Barry Kort

ray@bcsaic.UUCP (Ray Allis) (01/08/89)

>From: lee@uhccux.uhcc.hawaii.edu (Greg Lee)
>
>From article <9378@bcsaic.UUCP>, by ray@bcsaic.UUCP (Ray Allis):
>" ...  The discussion in comp.ai demonstrates that definitions are
>" still vague and most questions are open.
>
>With the possible exception of those postings which have attempted a
>taxonomy of intelligent behavior, surely no one thinks that this sort of
>arguing over definitions could ever advance the discipline.

I, too, suspect that no one (or hardly anyone) "thinks that this sort of
arguing over definitions could ever advance the discipline", but it *is*
important to agree on the objects of discussion and investigation, or there
is no discipline.  You assume that the definitions already exist, but the
"intelligence" I want to understand is (my definition) an inherent capability
of a thing; quantitatively and maybe qualitatively different in dogs,
chimpanzees and man.  Gilbert Cockton, Richard O'Keefe and Don Norman have
each asserted that intelligence is a social construct.  It looks to me like
we're talking about different things.  We can't have a discussion without
definitions.  Of course, we should have had that out of the way some
time ago :-).

>" Put bluntly, AI's poor reputation 
>" and embarrassing lack of results are largely due to a mistaken belief in 
>" the supremacy of formal logic 
>
>How can attention to the nature of inference with due respect for 
>precision and rigor be mistaken?  

Of course it's not, and of course I didn't say it was.  What I said was, the
[traditional AI] *belief* that formal logic is sufficient for intelligence is
mistaken.  Take human language.  It's an old wish, older than digital
computers, to reduce it to a "predicate calculus".  The underlying assumption,
which may never have been an explicit hypothesis, is that human language is a
symbol *system*.  True, language uses symbols, but language is not a symbol
*system*.  *Arithmetic* is a symbol system.  Algebra makes it explicit; the
inter-relationships and operations (the *form*) are paramount; the symbols
are placeholders and spacers.  That's *form*-al logic.
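
Here's a toy sketch of the point (my own illustration; the rewrite rule
and the symbol names are invented):

    # One *form*-al rewrite rule: anything of the shape (X + 0) or
    # (0 + X) becomes X.  The rule inspects only the *shape* of the
    # expression; the symbol in the X position is an anonymous placeholder.
    def simplify(expr):
        """Expressions are atoms or tuples ('+', left, right)."""
        if isinstance(expr, tuple) and expr[0] == '+':
            left, right = simplify(expr[1]), simplify(expr[2])
            if right == 0:
                return left
            if left == 0:
                return right
            return ('+', left, right)
        return expr

    # The manipulation is identical whether the placeholder is "x" or
    # "daughter"; the system neither knows nor cares what it stands for.
    print(simplify(('+', 'x', 0)))         # -> 'x'
    print(simplify(('+', 'daughter', 0)))  # -> 'daughter'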

Some of the more interesting "intelligent behaviors", e.g. language use,
induction and analogy, are not formal in the above sense.  In each of these,
the important components are the subjective experiences the symbols
"symbolize".  There are no (interesting) relationships among or operations on
the symbols alone.  There is no reason to suppose that people compare the
*words* "orange" and "red"; they compare the subjective experiences the
words symbolize.
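
A sketch of the difference (the RGB triples below are a crude, invented
stand-in for subjective experience, nothing more):

    # Comparing the bare *symbols* yields nothing useful:
    print("orange" == "red")    # False, and that is all the symbols offer

    # Comparing what the symbols *symbolize* -- here faked as (R, G, B)
    # triples standing in for color experience -- is another matter.
    PERCEPTS = {"red": (255, 0, 0),
                "orange": (255, 165, 0),
                "blue": (0, 0, 255)}

    def distance(word_a, word_b):
        """Euclidean distance between the symbolized 'experiences'."""
        a, b = PERCEPTS[word_a], PERCEPTS[word_b]
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    print(distance("orange", "red"))    # small: the experiences are close
    print(distance("orange", "blue"))   # large: the experiences are not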

A predicate calculus is a formal logic.  It is therefore *inappropriate* for
human "natural" language, analogy or induction insofar as these phenomena are
based on *experience*.  Intelligent behavior requires (in addition to formal
reasoning) perception, memory and association of *experience*.  That is the
promise of "biologically-inspired" AI.

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (01/10/89)

From article <9504@bcsaic.UUCP>, by ray@bcsaic.UUCP (Ray Allis):
"...
" computers, to construct a "predicate calculus".  The underlying assumption,
" which may never have been an explicit hypothesis, is that human language is a
" symbol *system*.  True, language uses symbols, but language is not a symbol
" *system*.  *Arithmetic* is a symbol system.  Algebra makes it explicit; the
" inter-relationships and operations (the *form*) are paramount, the symbols
" are placeholders and spacers.  That's *form*-al logic. 

Formal syntax, you mean.  Formal logic has syntax and semantics as
subdisciplines.

Without granting that there is any fundamental difference between
natural language and formal logic: even if there were one, that would not
mean it was inappropriate to use formal logic in theories, or as a theory,
of language.  You don't sneer at an accountant who uses arithmetic,
accusing him of mistaking dollars for integers.  Just because
theories have symbols doesn't mean they are theories of symbols.

		Greg, lee@uhccux.uhcc.hawaii.edu