[net.ai] Parallelism & Consciousness

RICKL%MIT-OZ@MIT-MC.ARPA (10/29/83)

This message is empty.

perlis%umcp-cs%CSNet-Relay@sri-unix.UUCP (10/30/83)

From:  Don Perlis <perlis%umcp-cs@CSNet-Relay>


             From: BUCKLEY@MIT-OZ
             --  of  what relevance is the issue of time-behavior of an
             algorithm to the phenomenon  of  intelligence,  i.e.,  can
             there  be  in  principle  such  a  beast  as    a    slow,
             super-intelligent program?

        From: RICKL%MIT-OZ@mit-mc
        gracious,  isn't  this  a bit chauvinistic?  suppose that ai is
        eventually    successful   in  creating  machine  intelligence,
        consciousness, etc.   on  nano-second  speed  machines  of  the
        future:    we  poor humans, operating only at rates measured in
        seconds and above, will seem incredibly slow  to  them.    will
        they engage in debate about the relevance of our  time-behavior
        to our intelligence?  if there cannot in principle  be  such  a
        thing  as a slow, super-intelligent program, how can they avoid
        concluding that we are not intelligent?  -=*=- rick

It seems to me that the issue isn't the 'appearance' of intelligence of
one being to another--after all, a very slow  thinker  may  nonetheless
think  very  effectively and solve a problem the rest of us get nowhere
with.  Rather I suggest that intelligence be regarded as effectiveness,
namely,  as coping with the environment.  Then real-time issues clearly
are significant.

A  supposedly brilliant algorithm that 'in principle' could decide what
to do about an impending disaster,  but  which  is  destroyed  by  that
disaster  long  before  it manages to grasp that there is a disaster, or
what its dimensions are, perhaps should not be called  intelligent  (at
least on the basis of *that* event).  And if all its potential behavior
is of this sort, so that it never really gets anything settled, then it
could  be  looked  at  as really out of touch with any grasp of things,
hence not intelligent.
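The deadline argument above can be put as a toy predicate (a sketch in Python; the names and numbers are invented illustrations, not anything from the discussion):

```python
def effectiveness(answer_quality, think_time, deadline):
    """A perfect answer that arrives after the disaster counts for
    nothing; effectiveness is answer quality gated by timeliness."""
    return answer_quality if think_time <= deadline else 0.0

# A brilliant-but-slow agent versus a mediocre-but-quick one:
slow_genius = effectiveness(answer_quality=1.0, think_time=100.0, deadline=5.0)
quick_coper = effectiveness(answer_quality=0.6, think_time=2.0, deadline=5.0)
# slow_genius == 0.0, quick_coper == 0.6
```

On this caricature, the quick coper is the "intelligent" one for that event, whatever the slow genius could have worked out in principle.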

Now  this  can be looked at in numerous contexts; if for instance it is
applied to the internal ruminations of the agent, eg  as  it  tries  to
settle  Fermat's  Last  Theorem, and if it still can't keep up with its
own physiology, ie,  its  ideas  form  and  pass  by  faster  than  its
'reasoning  mechanisms' can keep track of, then there too it will fail,
and I doubt we would want to say it 'really' was bright.  It can't even
be  said  to be trying to settle Fermat's Last theorem, for it will not
be able to keep that in mind.

This  is in a sense an internal issue, not one of relative speed to the
environment.  But considering that the internal and external events are
all  part  of  the  same  physical  world,  I  don't  see a significant
difference.  If the agent *can* keep track of  its  own  thinking,  and
thereby  stick  to the task, and eventually settle the theorem, I think
we would call it bright indeed,  at  least  in  that  domain,  although
perhaps  a moron in other matters (not even able to formulate questions
about them).

RICKL%MIT-OZ@MIT-MC.ARPA (10/31/83)

Another question we
have been chasing around is:  ``can intelligence be regarded as survivability,
(or more generally as coping with an external environment)?''.  In the strong
form this position equates the two, and this position seems to be too
strong.  Amoebas cope quite well and have survived for unimaginably longer
than we humans, but are generally acknowledged to be un-intelligent (if
anyone cares to dispute this, please do).  Survivability and coping with
the environment, alone, therefore fail to adequately capture our intuitions
of intelligence.
                        -=*=- rick

JAY%USC-ECLC@sri-unix.UUCP (10/31/83)

From:  Jay <JAY@USC-ECLC>

    From: RICKL%MIT-OZ@MIT-MC.ARPA

                ...
    the question we are really discussing seems to be: ``can an entity be
    said to be intelligent in and of itself, or can an entity only be said
    to be intelligent relative to some world?''.  I don't think I believe
    in "pure, abstract intelligence, divorced from the world".
                ...
    another question we have been chasing around is: ``can intelligence be
    regarded as survivability, (or more generally as coping with an
    external environment)?''.  [...]

  I believe intelligence to be the ability to cope with CHANGES in the
environment.  Take desert tortoises: although they are quite young
compared to amoebas, they have been living in the desert some
thousands, if not millions, of years.  Does this mean they are
intelligent?  NO!  Put a freeway through their desert and the tortoises
are soon dying.  Increase the rainfall and they may become unable to
compete with the rabbits (which will take full advantage of the
increase in vegetation and produce an increase in rabbit-ation).  The
ability to cope with a CHANGE in the environment marks intelligence.
All a tortoise need do is not cross a freeway, or kill baby rabbits,
and then it could begin to claim intelligence.  A similar argument
could be made against intelligent amoebas.

  A possible problem with this view is that biospheres can be counted
intelligent: in the desert, an increase in rainfall is handled by an
increase in vegetation, then in herbivores (rabbits), and then an
increase in carnivores (coyotes).  The end result is not the end of a
biosphere, but the change of a biosphere.  The biosphere has
successfully coped with a change in its environment.  Even more
ludicrous, an argument could be made for an intelligent planet, or
solar system, or even galaxy.

  Notice, an  organism  that  does  not  change  when  its  environment
changes,  perhaps  because  it  does  not  need  to,  has  not   shown
intelligence.  This is,  of course,  not to say  that that  particular
organism is  un-intelligent.   Were  the world  to  become  unable  to
produce rainbows, people would change little, if at all.

My behavioralism is showing,
j'

JBA%MIT-OZ@MIT-MC.ARPA (10/31/83)

 manages to convince an
alien that he is intelligent, so the aliens immediately begin a purge.
Who wants intelligent cockroaches?  -- KIL]

blenko@rochester.UUCP (Tom Blenko) (11/01/83)

Interesting to see this discussion taking place among people
(apparently) committed to an information-processing model for
intelligence.

I would be satisfied with the discovery of mechanisms that duplicate
the information-processing functions associated with intelligence.

The issue of real-time performance seems to be independent of
functional performance (not from an engineering point of view, of
course; ever tell one of your hardware friends to "just turn up the
clock"?).  The fact that evolutionary processes act on both the
information-processing and performance characteristics of a system may
argue for the (evolutionary) superiority of one mechanism over another;
it does not provide prescriptive information for developing functional
mechanisms, however, which is the task we are currently faced with.

	Tom

MINSKY%MIT-OZ@MIT-MC.ARPA (11/02/83)

What I meant is that defining intelligence seems as pointless as
defining "life" and then arguing whether viruses are alive instead of
asking how they work and solve the problems that appear to us to be
the interesting ones.  Instead of defining so hard, one should look to
see what there is.

For example, about the loop-detecting thing, it is clear that in full
generality one can't detect all Turing machine loops.  But we all know
intelligent people who appear to be caught, to some extent, in thought
patterns that appear rather looplike.  That paper of mine on jokes
proposes that the problem of being intelligent enough to keep out of
simple loops is solved by a variety of heuristic loop detectors, etc.
Of course, this will often deflect one from behaviors that aren't
loops and which might lead to something good if pursued.  That's life.
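One tractable instance of such a heuristic detector is cycle-finding on a deterministic state sequence with a step budget; here is a sketch using Floyd's tortoise-and-hare (the transition function and budget are illustrative, not from the paper):

```python
def looks_like_loop(step, start, max_steps=10_000):
    """Floyd's tortoise-and-hare cycle detection, cut off after
    max_steps.  True means a repeating state was found within the
    budget; False means only 'no loop detected yet', not a proof
    of halting -- which is exactly what makes it a heuristic."""
    tortoise, hare = step(start), step(step(start))
    for _ in range(max_steps):
        if tortoise == hare:
            return True
        tortoise = step(tortoise)   # advance one step
        hare = step(step(hare))     # advance two steps
    return False

f = lambda x: (x * x + 1) % 255          # finite state space: must cycle
print(looks_like_loop(f, 2))             # a loop is found
print(looks_like_loop(lambda x: x + 1, 0, max_steps=100))  # none found
```

The budget is the point: a full halting test is impossible, so the detector trades completeness for a bounded cost, and will sometimes cry "loop" too late or not at all.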


I guess my complaint is that I think it is unproductive to be so
concerned with defining "intelligence" to the point that you even
discuss whether "it" is time-scale invariant, rather than, say, how
many computrons it takes to solve some class of problems.  We want to
understand problem-solvers, all right.  But I think that the word
"intelligence" is a social one that accumulates all sorts of things
that one person admires when observed in others and doesn't understand
how to do.  No doubt, this can be narrowed down, with great effort,
e.g., by excluding physical skills (probably wrongly, in a sense) and
so forth.  But it seemed to me that the discussion here in AILIST was
going nowhere toward understanding intelligence, even in that sense.

In other words, it seems strange to me that there is no public
discussion of substantive issues in the field...

hakanson@orstcs.UUCP (11/03/83)

orstcs!hakanson    Nov  2 10:21:00 1983

No, no, no.  I understood the point as meaning that the faster intelligence
is merely MORE intelligent than the slower intelligence.  Who's to say that
an amoeba is not intelligent?  It might be.  But we certainly can agree that
most of us are more intelligent than an amoeba, probably because we are
"faster" and can react more quickly to our environment.  And some super-fast
intelligent machine coming along does NOT make us UNintelligent, it just
makes it more intelligent than we are.  (According to the previous view
that faster = more intelligent, which I don't necessarily subscribe to.)

Marion Hakanson		{hp-pcd,teklabs}!orstcs!hakanson	(Usenet)
			hakanson@{oregon-state,orstcs}		(CSnet)

ISAACSON%USC-ISI@sri-unix.UUCP (11/04/83)

  From Minsky:
  That's what you get for trying to define things too much.

Coming, as it does, out of the blue, your comment appears to
negate the merits of this discussion.  The net effect might
simply be to bring it to a halt.  I think that it is, inadvertent
though it might be, unkind to the discussants, and unfair to the
rest of us who are listening in.

I agree.  The level of confusion is not insignificant and
immediate insights are not around the corner.  However, in my
opinion, we do need serious discussion of these issues.  I.e.,
questions of subcognition vs.  cognition; parallelism,
"autonomy", and epiphenomena; algorithmic programability vs.
autonomy at the subcognitive and cognitive levels; etc.  etc.

Perhaps it would be helpful if you give us your views on some of
these issues, including your views on a good methodology for
discussing them.

-- JDI

rigney@uokvax.UUCP (11/09/83)

uokvax!rigney    Nov  3 08:45:00 1983

Perhaps something on the order of "Intelligence enhances survivability
through modification of the environment" is in order.  By modification
is meant something more than the mere changes brought about by living
(e.g., a rise in CO2 levels doesn't count).

Thus, if Turtles were intelligent, they would kill the baby rabbits, but
they would also attempt to modify the highway to present less of a hazard.

Problems with this viewpoint:

	1) It may be confusing Technology with Intelligence.  Still, tool
	making ability has always been a good sign.

	2) Making the distinction between Intelligent modifications and
	the effect of just being there.  Since "conscious modification"
	lands us in a bigger pit of worms than we're in now, perhaps a
	distinction should be drawn between reactive behavior (reacting
	and/or adapting to changes) and active behavior (initiating
	changes).  Initiative is therefore a factor.

	3) Monkeys make tools (ant sticks); Dolphins don't.  Is this an
	indication of intelligence, or just a side-effect of Monkeys
	having hands and Dolphins not?  In other words, does Intelligence
	go away if the organism doesn't have the means of modifying
	its environment?  Perhaps "potential" ability qualifies.  Or
	we shouldn't consider specific instances.  (Is a man trapped in
	a desert still intelligent, even if he has no way to modify
	his environment?)
	   Does this mean that if you had a computer with AI, and 
	stripped its peripherals, it would lose intelligence?  Are
	human autistics intelligent?  Or are we only considering 
	species, and not representatives of species?

In the hopes that this has added fuel to the discussion,

		Carl
		..!ctvax!uokvax!rigney
		..!duke!uok!uokvax!rigney

ISAACSON%USC-ISI@sri-unix.UUCP (11/09/83)

>From Minsky:

    ...I think that the word "intelligence" is a social one that
    accumulates all sorts of things that one person admires when
    observed in others and doesn't understand how to do...

    In other words, it seems strange to me that there
    is no public discussion of substantive issues in the
    field...


Exactly...  I agree on both counts.  My purpose is to help
crystallize a few basic topics, worthy of serious discussion, that
relate to those elusive epiphenomena that we tend to lump under
that loose characterization: "Intelligence".  I read both your LM
and Jokes papers and consider them seminal in that general
direction.  I think, though, that your ideas there need, and
certainly deserve, further elucidation.  In fact, I was hoping
that you would be willing to state some of your key points to
this audience.


More than this.  Recently I've been attracted to Doug
Hofstadter's ideas on subcognition and think that attention
should be paid to them as well.  As a matter of fact, I see
certain affinities between you two and would like to see a good
discussion that centers on LM, Jokes, and Subcognition as
Computation.  I think that, in combination, some of the most
promising ideas for AI are awaiting full germination in those
papers.

laura@utcsstat.UUCP (Laura Creighton) (11/12/83)

	The other problem with the "turtles should be killing baby
rabbits" definition of intelligence is that it seems to imply that
killing (or at least surviving) is an indication of intelligence.
I would rather not believe this, unless there is compelling evidence
that the two are related.  So far I have not seen the evidence.

Laura Creighton
utcsstat!laura

ags@pucc-k (Seaman) (11/15/83)

Faster = More Intelligent.  Now there's an interesting premise...

According to relativity theory, clocks (and bodily processes, and everything
else) run faster at the top of a mountain or on a plane than they do at sea
level.  This has been experimentally confirmed.

Thus it seems that one can become more intelligent merely by climbing a
mountain.  Of course the effect is temporary...
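The effect is easy to put a number on: to first order, a clock at height h runs fast relative to sea level by a fractional rate of about g*h/c^2 (a back-of-the-envelope sketch; the mountain height is illustrative):

```python
G = 9.81               # m/s^2, surface gravity
C = 299_792_458        # m/s, speed of light

def fractional_speedup(height_m):
    """First-order (weak-field) fractional rate gain of a clock
    raised height_m above sea level: g*h/c^2."""
    return G * height_m / C**2

everest = fractional_speedup(8848)
# roughly 1e-12: about 30 seconds gained per million years of
# mountaintop thinking, so don't expect much from one climb
```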

Maybe this is why we always see cartoons about people climbing mountains to
inquire about "the meaning of life" (?)

				Dave Seaman
				..!pur-ee!pucc-k!ags

karl@trsvax.UUCP (11/20/83)
