[comp.ai] What AI is exactly.

pnettlet@gara.une.oz.au (Philip Nettleton) (09/06/90)

In article <3797@se-sd.SanDiego.NCR.COM>, jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
> In article <38294@siemens.siemens.com> wood@jfred.siemens.edu (Jim Wood) writes:
> >    Artificial Intelligence is a computer science and engineering
> >    discipline which attempts to model human reasoning methods
> >    computationally.
> >
> 
> I think this is a pretty good definition, taken from the engineer's point
> of view.  A psychologist might take a different view of the definition/
> purpose of AI.
> 
> One thing I'd include is that it's a cognitive psychological as well as
> computer science and engineering discipline.  You have to know something
> about how people think in order to model human reasoning methods.

I think it is a terribly poor definition, actually, for the following reasons:

a)	Human Intelligence is NOT the only form of intelligence. This is an
	extremely one-eyed viewpoint. Dolphins are extremely intelligent and
	the only reason we cannot communicate with them to date is because of
	the extreme differences in our vocal ranges and auditory senses. There
	is also a huge cultural gap. What concerns do dolphins have? What form
	does their communication take? We need to know these BEFORE we can
	even look at syntax and semantics. Hence their intelligence is very
	alien to ours.

b)	People tend to assume that a machine cannot be intelligent. Human
	Intelligence is well documented, much research has been done into
	Animal Intelligence, but what of Machine Intelligence? Is there a
	specific type of intelligence that a machine can have? Is there any
	need to base this intelligence on Human or Animal Intelligence?

Saying that AI is modelling "Human Intelligence" is totally inadequate. It
may not even be possible because we have such a limited understanding of the
processes involved. Artificial Intelligence means:

	An intelligent system designed by mankind to run on a man-made
	artifact, i.e., a computer. The term Machine Intelligence is more
	succinct because it identifies the type of intelligence created.

Please no arguments about:

	What is intelligence?

This has been discussed ad nauseam, and obviously, we don't know. However,
it must exhibit intelligent behaviour. With regard to intelligent human
behaviour, we can test this with the Turing Test. As for intelligent animal
behaviour, there is no appropriate test. And what is intelligent behaviour
for a machine? It could be quite alien in appearance from the other two.

Let us produce a general requirement for intelligent behaviour:

a)	The system MUST be able to learn. This implies that the system MUST have
	a memory for learning to be maintained. Also learning comes in a
	number of varieties:

	i)	It MUST be able to learn from its own experiences. These can
		be broken down into further criteria:

		1)	Learning through trial and error.
		2)	Learning through observation.
		3)	Learning through active deduction (see reasoning).

	ii)	It SHOULD be able to learn by instruction, but this is not
		necessary. At the very least the system MUST have preprogrammed
		instincts. This is a boot strap for the developing intelligence.
		Without a starting point, the system cannot progress.

b)	The system MUST be autonomous. This can be dissected as:

	i)	The system MUST be able to affect its environment based on
		its own independent conclusions.

	ii)	The system MUST be its own master and therefore doesn't
		require operator intervention.

	iii)	The system MUST be motivated. It must have needs and
		requirements that can be satisfied by its own actions.

c)	The system MUST be able to reason. That is to say, it must use some
	form of deductive reasoning, based on known facts and capable of
	producing insights (deductions) which later become known facts.

d)	The system MUST be self aware. This is related to autonomy, reasoning
	and learning, but also embodies the need for external senses. Without
	external senses there is no way of appreciating the difference between
	"me" and "outside of me". Sensationations of pain and pleasure can
	provide motivation.

It is clear to see that a human easily satisfies these requirements and so is
an intelligent system. A cat also satisfies these requirements. So we now have
a common basis for known intelligent behaviour. An intelligent machine would
need to satisfy these requirements to be classed as an intelligent system.

One last point of clarification:

	The ENVIRONMENT in which the intelligent system operates need not
	be the physical environment of the world around us. It could be a
	computer environment.

I invite responses from those who would like to clarify any points made here
or those who would like to extend or advance further points into a
constructive debate. But please, if you are hung up on the divinity of the human
race or you want to bring the Searle debate into this, do us all a favour and
refrain.

		With Regards,

				Philip Nettleton,
				Tutor in Computer Science,
				Department of Maths, Stats, and Computing,
				The University of New England,
				Armidale,
				New South Wales,
				2351,
				AUSTRALIA.

lynch@aristotle.ils.nwu.edu (Richard Lynch) (09/06/90)

I liked most of the comments of Philip Nettleton, BUT...
Just how autonomous is a human?
I mean, "No man is an island unto himself." (Dunne ?  I have no idea really.)
Certainly an intelligent machine should be able to handle many things for
itself, but clearly at some point it must be capable of depending on others,
dealing and negotiating with others.  I think it's possible that this
requirement could be dropped as long as the awareness of self is there and
something about developing interdependent relationships that tend toward
survival... or something like that.
Q:  Are ethics and/or morals a requirement of intelligence?
Obviously, this is not something with a definitive answer, but INTELLIGENT
NON-FLAMING  NON-ABSURD-ASSUMPTIONS  discussion would be interesting to me.

"TANSTAAFL" Rich lynch@aristotle.ils.nwu.edu

wood@jfred.siemens.edu (Jim Wood) (09/06/90)

I originally wrote:

>>    Artificial Intelligence is a computer science and engineering
>>    discipline which attempts to model human reasoning methods
>>    computationally.

and pnettlet@gara.une.oz.au (Philip Nettleton) writes [and I edit]:

>I think it is a terribly poor definition, actually, for the following
>reasons:

>a)	Human intelligence is NOT the only form of intelligence.  This is an
>	extremely one-eyed viewpoint.  Dolphins are extremely intelligent, and
>	the only reason we cannot communicate with them to date is because of
>	the extreme differences in our vocal ranges and auditory senses.
>	There is also a huge cultural gap.  What concerns do dolphins have?
>	What form does their communication take?  We need to know these
>	BEFORE we can even look at syntax and semantics.  Hence their
>	intelligence is very alien to ours.

Agreed with (a), but I do not recall having implied human intelligence is
the only form of intelligence.  However, it is certainly the most
interesting to artificial intelligence scientists and engineers.  From the
practical perspective, it is the only type of intelligence which interests
industry, from which the purse flows.

My definition involves a model of human REASONING methods.  The strongest
areas of artificial intelligence, in my opinion, are expert systems (modeling
the knowledge of an expert), natural language systems (modeling languages
and how humans process them), robotics (modeling human sensory and motor
functions), and neural networks (modeling the cognitive processes of the
human brain).  Each of these involves human reasoning.

>b)	People tend to assume that a machine cannot be intelligent.  Human
>	intelligence is well documented, and much research has been done into
>	animal intelligence, but what of machine intelligence?  Is there a
>	specific type of intelligence that a machine can have?  Is there any
>	need to base this intelligence on human or animal intelligence?

Your reference to machine intelligence is a good one, but it is a mistake
to overshadow human intelligence with it in defining artificial intelligence.
A machine is no more than an extension of human computability.  There is
nothing which a machine does which is not a direct product of the exercise
of human intelligence.  Consequently, machine intelligence is a subset of
human intelligence.

>Saying that AI is modeling "human intelligence" is totally inadequate.  It
>may not even be possible because we have such a limited understanding of
>the processes involved.

I did not say AI models human intelligence.  I was very specific to say that
it models human reasoning methods.  I also believe our knowledge of human
reasoning is limited, but that does not stop AI scientists and engineers
from developing theories and applications.

>Artificial Intelligence means:
>	An intelligent system designed by mankind to run on a man-made
>	artifact, for example, a computer. The term Machine Intelligence
>	is more succinct because it identifies the type of intelligence
>	created.

Artificial intelligence is not a system, any more than computer science is
a system.  Intelligent systems are the product of artificial intelligence
METHODOLOGIES.  For example, an expert system is not "artificial
intelligence", rather it is the result of applying artificial intelligence
methodologies.
--
Jim Wood [wood@cadillac.siemens.com]
Siemens Corporate Research, 755 College Road East, Princeton, NJ  08540
(609) 734-3643

forbis@milton.u.washington.edu (Gary Forbis) (09/06/90)

In article <3543@gara.une.oz.au> pnettlet@gara.une.oz.au (Philip Nettleton) writes:
>b)	The system MUST be autonomous. This can be dissected as:
>
>	iii)	The system MUST be motivated. It must have needs and
>		requirements that can be satisfied by its own actions.
>
>d)	The system MUST be self aware. This is related to autonomy, reasoning
>	and learning, but also embodies the need for external senses. Without
>	external senses there is no way of appreciating the difference between
>	"me" and "outside of me". Sensations of pain and pleasure can
>	provide motivation.

I think that b)iii) is important but d) may not be required.  Self awareness
does not exist in very young children yet their intelligence seems apparent to
me.  Defining the limits of "me" is one of the first tasks an intelligence 
has to solve; these limits are fuzzy.  I think it is enough to learn how to
interact with one's environment to satisfy one's needs (even if one does not
know those needs or what one has done to satisfy them). I don't know how I
move my arms to grab an apple and shove it in my mouth, yet I can do so
whenever I desire.  Am I less intelligent because the linkage between my desire
for action and the action itself falls outside my awareness?

--gary forbis@milton.u.washington.edu

bdelan@apple.com (Brian Delaney) (09/07/90)

In article <3543@gara.une.oz.au> pnettlet@gara.une.oz.au (Philip 
Nettleton) writes:
> The system MUST be self aware. This is related to autonomy, reasoning
>  and learning, but also embodies the need for external senses. Without
> external senses there is no way of appreciating the difference between
> "me" and "outside of me". Sensationations of pain and pleasure can
>  provide motivation.

The question of self-awareness is one of the things that Searle's CR 
gedankenexperiment is supposed to address.  One interpretation of his 
basic claim is that a system can fulfill all of the rest of your 
requirements, and still not "know" anything, because it is not self-aware. 
Consider Searle's wording, "*I* don't understand Chinese."

These characteristics are also not binary.  A cat is *probably* 
self-aware, or at least, it behaves in a fashion whose simplest 
explanation is self-awareness.  People are also self-aware, but to a very 
different degree.  And amongst people, there are those whose 
self-awareness is quite developed. It includes an understanding of 
history, and personal psychology, and can consider questions like, "Who 
will I be in ten years?"  And there are those people whose self-awareness 
leans more toward the feline direction. ( I'm hungry, I'm horny, I'm 
scared, I'm tired, etc. )  What degree of self-awareness is necessary to 
qualify as intelligent?

Personally, I think that "intelligence" is an analog quantity. People, or 
machines, have it to greater or lesser degrees.  The example of expert 
systems was mentioned before: that they display the appearance of 
intelligence, but that the system becomes very brittle when it gets 
outside its familiar domain.  However, this is also true of humans.  The 
difference between human intelligence and machine intelligence is one of 
degree rather than one of kind.  This difference may be one of several 
dozen orders of magnitude, but it is still one of degree.

For that matter, some people ( not you ) apply standards to machine 
intelligence that many humans couldn't pass.  We set up a 5-way Turing 
test at school once, where the questioner was able to "talk" to 3 humans 
and an "Eliza" style program via keyboard.  The questioner correctly 
identified the program as being artificial.  However, the questioner also 
identified *me* as being artificial.  She claimed that she could tell that 
I was a machine because I did not display a strong "emotional" reaction to 
questions she thought I should.  She said that she could tell that I 
really didn't know what words like "love" meant, that I was just using the 
word syntactically rather than semantically.  ( This amused my girlfriend 
no end.  :-)  )  Just because  I discussed some emotional topics in a 
matter-of-fact way.

You require that an intelligent being learn from experiences.  How does 
this apply to a person who consistently screws up in the same way? It must 
reason about the universe.  Must it do so correctly?  Human reasoning 
about the universe is an inconsistent phenomenon at best.  It is still far 
better than we can code, but it is still far from perfect.  Even our 
simplest AI projects show occasional bursts of lucidity and "insight" that 
surprise the creators.  And those creators can show occasional moments of 
mechanical rote thinking.  Does that mean that, at that brief, fleeting 
instant, maybe the machine would qualify as intelligent, and the 
researcher would not?

***************************************************************************
Brian "High Tech Sex and Affordable Firepower" Delaney
Disclaimer: NOBODY, least of all Apple, thinks the way I do.
***************************************************************************

pnettlet@gara.une.oz.au (Philip Nettleton) (09/07/90)

In article <38801@siemens.siemens.com>, wood@jfred.siemens.edu (Jim Wood) writes:
> I originally wrote:
> 
> >>    Artificial Intelligence is a computer science and engineering
> >>    discipline which attempts to model human reasoning methods
> >>    computationally.
> 
> and pnettlet@gara.une.oz.au (Philip Nettleton) writes [and I edit]:
> 
> >I think it is a terribly poor definition, actually, for the following
> >reasons:
> 
> >a)	Human intelligence is NOT the only form of intelligence.  This is an
> >	extremely one-eyed viewpoint.  Dolphins are extremely intelligent, and
> >	the only reason we cannot communicate with them to date is because of
> >	the extreme differences in our vocal ranges and auditory senses.
> >	There is also a huge cultural gap.  What concerns do dolphins have?
> >	What form does their communication take?  We need to know these
> >	BEFORE we can even look at syntax and semantics.  Hence their
> >	intelligence is very alien to ours.
> 
> Agreed with (a), but I do not recall having implied human intelligence is
> the only form of intelligence.  However, it is certainly the most
> interesting to artificial intelligence scientists and engineers.  From the
> practical perspective, it is the only type of intelligence which interests
> industry, from which the purse flows.

Who said what? -> ignored as unconstructive.

As for there being more money in Human Intelligence "simulators", I don't
think DARPA (the US Defense Advanced Research Projects Agency) would agree.
Their director some years ago stated (something similar to):

	I would be glad to have a tank which could hunt down and kill
	other tanks as well as my cats hunt down birds.

Not that I agree even in part with Autonomous Weapons Systems, but the point
is clear. Defence spending is HUGE. Producing a tank based on Human
Intelligence is counter productive. Humans tend to moralise, desert or
defect. A certain amount of controlled animal wildness is critical in
military operations.

> Your reference to machine intelligence is a good one, but it is a mistake
> to overshadow human intelligence with it in defining artificial intelligence.

Human divinity? -> ignored as unconstructive.

> I did not say AI models human intelligence.  I was very specific to say that
> it models human reasoning methods.  I also believe our knowledge of human
> reasoning is limited, but that does not stop AI scientists and engineers
> from developing theories and applications.

Who said what? -> ignored as unconstructive.

No it does not, but it may put the blinkers on their thinking about other
possibilities, which is what this is really all about.

> >Artificial Intelligence means:
> >	An intelligent system designed by mankind to run on a man-made
> >	artifact, for example, a computer. The term Machine Intelligence
> >	is more succinct because it identifies the type of intelligence
> >	created.
> 
> Artificial intelligence is not a system, any more than computer science is
> a system.  Intelligent systems are the product of artificial intelligence
> METHODOLOGIES.  For example, an expert system is not "artificial
> intelligence", rather it is the result of applying artificial intelligence
> methodologies.

Granted. I should have said "An Artificial Intelligence System means:".
Sorry, but one can only edit one's own work so far.

Artificial Intelligence is the attribute that such a system would exhibit.
Artificial Intelligence is also a field of research aimed at bringing this
about.

		Regards,

				Philip Nettleton,
				Department of Maths, Stats and Computing,
				The University of New England,
				Armidale,
				New South Wales,
				2351,
				AUSTRALIA.

erich@eecs.cs.pdx.edu (Erich Boleyn) (09/07/90)

lynch@aristotle.ils.nwu.edu (Richard Lynch) writes:


>I liked most of the comments of Philip Nettleton, BUT...

   I did too.

>Just how autonomous is a human?
>I mean, "No man is an island unto himself." (Dunne ?  I have no idea really.)
[extra deleted]

   I think by "autonomous" he meant that it must be able to decide things
for itself without having absolute overriding queries to a user or some
similar situation.  (i.e. you can't direct its actions by hard-wired
controls, it has to follow by choice, so to speak).

>Q:  Are ethics and/or morals a requirement of intelligence?

   Hmmm...  I don't think so, but I think that to exist efficiently in a
society it is a requirement; otherwise there is no point in banding
together at all.  And we are very societal, so all of us use some kind of
moral system (referring to mammals, of course, the only examples of
intelligence we have).  If an intelligent being did not exist in contact
with other intelligent beings, would it need to have morals and/or ethics
at all?  We have them since it is a *survival* feature to develop them.

   Regards,
        Erich

   ___--Erich S. Boleyn--___  CSNET/INTERNET:  erich@cs.pdx.edu
  {Portland State University}     ARPANET:     erich%cs.pdx.edu@relay.cs.net
       "A year spent in           BITNET:      a0eb@psuorvm.bitnet
      artificial intelligence is enough to make one believe in God"

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (09/08/90)

In article <3543@gara.une.oz.au> pnettlet@gara.une.oz.au (Philip Nettleton) writes:
>I think it is a terribly poor definition, actually, for the following reasons:
>a)	Human Intelligence is NOT the only form of intelligence. This is an
>	extremely one-eyed viewpoint. Dolphins are extremely intelligent and
>	the only reason we cannot communicate with them to date is because of
>	the extreme differences in our vocal ranges and auditory senses. There
>	is also a huge cultural gap. What concerns do dolphins have? What form
>	does their communication take? We need to know these BEFORE we can
>	even look at syntax and semantics. Hence their intelligence is very
>	alien to ours.

Intuitively, I'd agree.  But let's keep this rigorous.  We have no indication
that human intelligence isn't the only form of intelligence.  Admittedly,
our definition (flimsy as it is) doesn't incorporate much beyond what we
observe in humans.  But so far, it's the only example we've got. 
There's also no solid indication that Dolphins are intelligent (I
believe they have some ability for it, but they haven't done much to
demonstrate it in human terms).  I don't think the reason we haven't
communicated with them is simply vocal ranges.  That's a problem easily 
remedied.  There seems to be little semantic common ground between our two
species, at least, if not a lack of ability to communicate on the part
of the Dolphins.
I wouldn't say that just because they surf in the wake of boats and
copulate multiple times daily that they're intelligent (on the other hand,
maybe they're lots MORE intelligent than humans...).


>Saying that AI is modelling "Human Intelligence" is totally inadequate. It
>may not even be possible because we have such a limited understanding of the
>processes involved. Artificial Intelligence means:

Human intelligence is the only example of intelligence we've identified.
The tasks we try to make computers do are tasks that humans are good at.
What other modelling could it be?

>Please no arguments about:
>	What is intelligence?
>This has been discussed ad nauseam, and obviously, we don't know. However,
>it must exhibit intelligent behaviour. With regard to intelligent human
>behaviour, we can test this with the Turing Test. As for intelligent animal
>behaviour, there is no appropriate test. And what is intelligent behaviour
>for a machine? It could be quite alien in appearance from the other two.

Then how can you say Dolphins are intelligent?
Just because a machine passes the Turing Test doesn't mean it's intelligent.
As Turing said, we must simply assume that it is intelligent, because we
can't tell the difference between its actions and the actions of an
entity we know to be intelligent.

>Let us produce a general requirement for intelligent behaviour:
>a)	The system MUST be able to learn. This implies that the system MUST have
>	a memory for learning to be maintained. Also learning comes in a
>	number of varieties:

I would reject this.  There are many ways to exhibit intelligence without
learning.  Learning is, in fact, a subset of the field called "AI".
Neural Nets, once they are trained, no longer learn.  Their training doesn't
come from within them either.  It's imposed by support software on the
outside.  Similarly with expert systems.
Information retrieval is the same story.  The search heuristics are
written by a programmer.  Some systems have the ability to adapt to
users' needs, but most are, at heart, deterministic.
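To make the point about training being imposed from outside concrete, here is a
minimal sketch (hypothetical code, not any particular package) of a single-unit
perceptron: a separate training routine adjusts the weights, and the "net" that
finally gets used is just a frozen weighted sum with no learning left in it.

def predict(weights, bias, inputs):
    # the deployed net: a fixed weighted sum; nothing in here ever changes
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def train(examples, epochs=20, rate=0.1):
    # the "support software on the outside": the perceptron rule nudging weights
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - predict(weights, bias, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# learn logical AND, then deploy the frozen result
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])   # the weights never change again

Whether the frozen function that comes out of this counts as the net itself
having "learned" anything is exactly the question at issue.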

>b)	The system MUST be autonomous. This can be dissected as:
>	i)	The system MUST be able to affect its environment based on
>		its own independent conclusions.
>	ii)	The system MUST be its own master and therefore doesn't
>		require operator intervention.
>	iii)	The system MUST be motivated. It must have needs and
>		requirements that can be satisfied by its own actions.

I would reject this too.  Intelligent systems can exist that are
supported and maintained by others, and are unable to affect their environment
(e.g., systems that only give advice or conclusions).

>c)	The system MUST be able to reason. That is to say, it must use some
>	form of deductive reasoning, based on known facts and capable of
>	producing insights (deductions) which later become known facts.

Reasoning is a cognitive tool developed by civilized Man.  Before Man was
civilized, he was intelligent.  It took intelligence to develop reason.

>d)	The system MUST be self aware. This is related to autonomy, reasoning
>	and learning, but also embodies the need for external senses. Without

This is the common confusion between "being", "soul", or "self" and 
intelligence.  Many think these two issues can be separated.

>It is clear to see that a human easily satisfies these requirements and so is
>an intelligent system. A cat also satisfies these requirements. So we now have

Umm, a cat can't reason, or learn in any human sense.  You can train it
(not very well - dogs are easier), but the kind of cognition required by
the animal for this I think is different from "learning".


- Jim Ruehlin

dave@tygra.UUCP (David Conrad) (09/08/90)

In article <11770@accuvax.nwu.edu>, lynch@aristotle.ils.nwu.edu
 (Richard Lynch) writes:
} Q:  Are ethics and/or morals a requirement of intelligence?

Well, IMHO ethics and morals (surely as difficult to define as "learning"
and "intelligence") are probably emergent qualities of intelligence as
opposed to prerequisites. 
 
And I defy anyone to deny that the kitten who has figured out how to get
my attention by attacking my legs is 'learning'.  It has also been
working on the "cat flap" problem and greatly improving its stratagems
for play-fighting with the resident adult cat.
(This isn't a response to Richard Lynch, but to someone else on
the net who denied that cats actually learn, as such.)
Cats acquire data, remember past situations, and heuristically improve
on their responses to similar situations.  Or appear to.  Everything
they do certainly isn't hardcoded in their DNA.  They respond adaptively,
or at least differently, to repeated 'inputs' (or previously encountered
situations).  IMHO.
--
David R. Conrad
dave@tygra.ddmi.com
Disclaimer: This article has no disclaimer.
Errata: a) The disclaimer is incorrect and should be deleted.
        b) For "Errata" read "Erratum".
The local system just *loves* to add this:
-- 
=  CAT-TALK Conferencing Network, Prototype Computer Conferencing System  =
-  1-800-825-3069, 300/1200/2400/9600 baud, 8/N/1. New users use 'new'    - 
=  as a login id.    <<Redistribution to GEnie PROHIBITED!!!>>>           =
E-MAIL Address: dave@ThunderCat.COM

dmark@acsu.buffalo.edu (David Mark) (09/08/90)

In article <3815@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:

  [90 lines deleted]
>
>Umm, a cat can't reason, or learn in any human sense.  ...
                                      ^^^
Hope you are not offended, Jim, but I think this claim is just plain silly.
Cats, and other mammals, and birds, and indeed even many invertebrates,
DO learn things!  I remember an article in SCIENCE a few years back that
showed that the time required for a butterfly to insert its proboscis into
the nectaries of a flower decreases with number of trials.  That
is "learning", isn't it?   And it is A type of learning that humans
undoubtedly exhibit.  Thus the "any" in the above quote seems inappropriate.
(Anyone test a human on time needed to, say, thread a needle?)  Yet I don't
think I would want to claim that butterflies are "intelligent" in a realistic
sense.  

But, by my everyday definition of "intelligence", cats and crows and many
other birds and mammals certainly have it.  Their "intelligence" does not
seem to be as elaborate or as developed as ours.  But they do "learn", and 
"remember" (experiments with food caching and re-finding in birds; I
can find references if you want), and "solve problems" (parrot pulling string
"foot over beak" to raise food to its perch), and even "form generalizations".
For the latter, I was told of an apartment-raised cat whose owner moved to
a house with a front door and a back door.  Initially, the cat would "ask" to 
go out one of the doors, and if it was raining, it would retreat and then "ask"
at the other door.  But within a few days, the cat, when seeing rain at one
door, would NOT attempt the other.  It seems obvious that the cat
had "generalized" that rain out one door meant rain out the other, 
or had "learned" that the two doors connect to the "same real world."
And as for communication, many animal species have fairly elaborate
vocal and behavioral methods for "communicating".  And the experiments with
signing apes, even if interpreted rather enthusiastically by the authors,
seem to indicate abilities at fairly complex communication for these
creatures.

It seems to me that human "intelligence" differs from the "intelligence"
of other vertebrates in degree rather than kind.  (I agree that the
degree is VERY large in most cases.)  Is there any "EVIDENCE"
that humans have "kinds" of "intelligence" that no other species
exhibits even to a primitive degree?  (By the usual standards of science,
I would guess that solid "evidence" either way would be pretty hard to 
come by.)

And finally, is the domain or goal of "Artificial Intelligence" really
"Artificial HUMAN Intelligence" ?  Or do folks mostly want to claim that
"Artificial Human Intelligence" is redundant, that "intelligence" is
a strictly-human trait?  And if so, is it strictly-human BY DEFINITION? 
And if so, what do we want to call the collective set of cognitive
abilities to "learn", "communicate", "solve problems", etc., that many
"higher" vertebrates seem to possess?

David Mark
dmark@acsu.buffalo.edu

geb@dsl.pitt.edu (Gordon E. Banks) (09/10/90)

In article <3815@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
>
>Umm, a cat can't reason, or learn in any human sense.  You can train it
>(not very well - dogs are easier), but the kind of cognition required by
>the animal for this I think is different from "learning".

If by reason you mean use of formal logic, you are probably correct.
But your definition of learning would seem to be idiosyncratic, perhaps
confined to a population of 1 (yourself).  I can't think of any
animal that has been well studied that does not demonstrate some
ability to learn, even simple worms.  Learning simply means that
the animal is able to modify its behavior according to its past
experience with the environment.  Anyone who has observed cats
recognizes that they do this quite readily.  The ability to train them
to perform tricks is not necessarily a good gauge of learning ability.
The main difference between humans and other animals is the number of
neurons in the neocortex, which is the programmable part of the brain.

In addition, what gives you the idea that cats and dolphins don't
communicate?  Of course they do.  Even ants communicate.  Maybe
you meant they don't talk or they don't use language.

Much of human behavior that we consider quite intelligent does not
involve the use of "reasoning", including language.

pnettlet@gara.une.oz.au (Philip Nettleton) (09/11/90)

In article <4123@servax0.essex.ac.uk>, dewhn@Sol24.essex.ac.uk (Dewhurst N E J) writes:
> Reading all this raises a question: I'd be very grateful if one of the
> AI types reading this could answer it. When you talk about "intelligence"
> in the context of AI, what are you looking for? When dolphins are said to be
> "extremely intelligent", I'd take it to mean that their brains work
> similarly to (if less well than) our own. But what's written above
> suggests the poster had something different in mind. What? 

The problem in all the debates so far is that people have been personalising
intelligence, saying that it is a characteristic of human beings. The test for
an "Intelligent System" that we are working on attempts to depersonalise
our appraisal of intelligence by reducing it into components, i.e., learning,
autonomy, reasoning and self-awareness. When the test is applied to humans
we can clearly say: "Yes, we are intelligent". When applied to other animals
we can start to say things like: "Yes, a cat is intelligent", unless you've
never had a kitten, in which case you may say some of the stupid things some
people choose to post as news.

> Similarly: when you talk about an "intelligent machine", you're
> presumably talking about a system that behaves in a certain way. 
> But how can you detect its "intelligence", other than by 
> observing that behaviour, and squaring it with what you know about 
> the workings of your own head? And given that, how does the
> idea of a "non-humanlike intelligence" make sense?

If we now have a machine, a man-made artifact, which is claimed to be
intelligent, you can start making observations and doing tests to
determine whether it learns, is autonomous, reasons and is self-aware. You
may not understand anything of "how it works" but, after a while, you
should be able to say, "yes" or "no" to whether it is intelligent.

The ONLY way to detect intelligence is through observation and testing with
specific criteria in mind. You may say, "But I'm intelligent", but in
all honesty, from my point of view, it's all hearsay :-).

> Sorry if this is a naive or stupid question. Feel free to
> ignore. :-)

Never be afraid to ask a sensible question.

			Regards,

						Philip Nettleton,
						Tutor in Computer Science,
						University of New England,
						Armidale,
						New South Wales,
						2351,
						AUSTRALIA.

atterlep@vela.acs.oakland.edu (Alan T. Terlep) (09/13/90)

In article <3815@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:

>We have no indication
>that human intelligence isn't the only form of intelligence.  Admittedly,
>our definition (flimsy as it is) doesn't incorporate much beyond what we
>observe in humans.  But so far, it's the only example we've got. 

  As a side point, I'd like to say that this is untrue.  In fact, there are
examples of intelligent behavior in many animals.  The example of the primates
that use sign language has been borne out: one of the researchers walked in
to begin teaching a new chimp sign language, only to find that the chimp had
already learned the signs.  The reason these aren't seen as indications of 
intelligence is that humans aren't going to give up their special status in the
world without a fight.
 (If you want another example, I heard secondhand of a report that claimed that
pigeons could identify a cup of water with the ocean, signifying abstract 
thinking.)

>Human intelligence is the only example of intelligence we've identified.
>The tasks we try to make computers do are tasks that humans are good at.
>What other modelling could it be?
  
  This is true.  Still, from a theoretical point of view, it's important to
realize that modelling non-human intelligence is possible--if only for non-
humans.
 -- 
Alan Terlep     			"Violence is the last refuge of the 
Oakland University, Rochester, MI	   incompetent."
atterlep@vela.acs.oakland.edu				     --Isaac Asimov

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (09/14/90)

In article <387@tygra.UUCP> dave@tygra.UUCP (David Conrad) writes:
>And I defy anyone to deny that the kitten who has figured out how to get
>my attention by attacking my legs isn't 'learning'.  It has also been
>working on the "cat flap" problem and improving greatly its strategems
>for play-fighting with the resident adult cat.
>(This isn't a response to Richard Lynch, but to someone else on
>the net who denied that cats actually learn, as such.)

Gee, I wrote about a number of points in my previous posting, but everyone
seems to jump on my statement about cats!  OK, I'll give it a shot...

My basis for saying cats (that particular pet is just an example - I don't
have it in for cats!) can't learn is that just because a cat exhibits
behaviour that LOOKS like learning, that doesn't NECESSARILY mean that it
is learning.  There are a lot of anti-behaviourist people in the AI field,
but lots of them will say that as long as the behaviour exists, then the
phenomenon exists.

There's lots we don't know about how humans think.  Even less about how
cats think.  We can say humans learn, but there are other explanations
for the behaviour that cats display, such as a modified response to a stimulus
based on the hope/expectation of being fed.

>Cats acquire data, remember past situations, and heuristically improve
>on their responses to similar situations.  Or appear to.  Everything
>they do certainly isn't hardcoded in their DNA.  They respond adaptively,
>or at least differently, to repeated 'inputs' (or previously encountered
>situations).  IMHO.

I've seen examples of this, but also counter examples.  We used to have
a cat we called the "Artichoke Cat", because her level of cognition was
roughly equivalent to that very vegetable.  This thing couldn't modify
her behaviour if her life depended on it!

- Jim Ruehlin

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (09/14/90)

In article <35282@eerie.acsu.Buffalo.EDU> dmark@acsu.buffalo.edu (David Mark) writes:
>  [90 lines deleted]
>>
>>Umm, a cat can't reason, or learn in any human sense.  ...

90 lines of info, and they STILL want to talk about cats!  Guess y'all
agree with the rest of it ... :-)

>Hope you are not offended, Jim, but I think this claim is just plain silly.
   Naw, it takes more than that to offend me.  Call my Fender Telecaster
   silly, THEN I'll be offended!

>Cats, and other mammals, and birds, and indeed even many invertebrates,
>DO learn things!  I remember an article in SCIENCE a few years back that
>showed that the time required for a butterfly to insert its proboscis into
>the nectaries of a flower decreases with number of trials.  That
>is "learning", isn't it?   

Good question!  Looking inside the "black box" called "learning organism",
are there low-level cognitive similarities?  Or even high-level ones?
I doubt it - humans and butterflies are very different.  

Perhaps the crux of this problem is the definition of "learning" as
a purely behavioural one.  IMO, learning is more than just displaying
certain behaviour.

>Thus the "any" in the above quote seems inappropriate.

Agreed, if you look merely at the behavioural aspects of learning.  Otherwise,
maybe there's little similarity between the exhibited behaviour in humans
and cats.

>But, by my everyday definition of "intelligence", cats and crows and many
>other birds and mammals certainly have it.  

How do you tell?  You indicate that there is similar behaviour between
the butterfly and mammals, but say the butterfly doesn't have intelligence
while the mammals do.  You may be right, but the question is:  beyond
behaviour, what differentiates intelligence (learning) from
non-intelligence?

>Their "intelligence" does not
>seem to be as elaborate or as developed as ours.  But they do "learn", and 
>"remember" (experiments with food caching and re-finding in birds; I
>can find references if you want), and "solve problems" (parrot pulling string
>"foot over beak" to raise food to its perch), and even "form generalizations".

Is this learning or behaviour designed to acquire food?

>And as for communication, many animal species have fairly elaborate
>vocal and behavioral methods for "communicating".  And the experiments with
>signing apes, even if interpreted rather enthusiastically by the authors,
>seem to indicate abilities at fairly complex communication for these
>creatures.

Agreed.  My intention here was to ask if they display "intelligent" 
communication.  Since we haven't detected them talking about 
epistemology and metaphysics we can't know for sure that these communications
are much more than evolved actions.

>And finally, is the domain or goal of "Artificial Intelligence" really
>"Artificial HUMAN Intelligence" ?  

We haven't positively located any other species that is intelligent, so
we have only ourselves to base creating intelligent systems on.  I'm not
saying there aren't other intelligent species (to a greater or lesser
degree than us), just that we haven't identified them yet.

- Jim Ruehlin

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (09/14/90)

In article <1990Sep10.140437.19913@cadre.dsl.pitt.edu> geb@dsl.pitt.edu (Gordon E. Banks) writes:
>If by reason you mean use of formal logic, you are probably correct.

Yes, that's exactly what I mean.

>But your definition of learning would seem to be idiosyncratic, perhaps
>confined to a population of 1 (yourself).  I can't think of any

Yes, flames _are_ easier than thinking...

>animal that has been well studied that does not demonstrate some
>ability to learn, even simple worms.  Learning simply means that
>the animal is able to modify its behavior according to its past
>experience with the environment.  

If that's the definition you're using, I agree with you.  But as I've
explained in other postings, I think that to be rigorous with this
question (in terms of AI) requires looking beyond the behaviour displayed.
The "Eliza" program exhibits some intelligent behaviuor - even to the point
of one receptionist telling her boss he couldn't be in the room while
in session with the program - but it's not intelligent.
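For anyone who hasn't looked inside it, Eliza-style behaviour really is nothing
but keyword matching and substitution. A toy sketch of the idea (not
Weizenbaum's actual script; these patterns are made up) shows why it can look
intelligent while clearly not being so:

import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
    (r".*", "Please go on."),
]

def reflect(text):
    # swap first-person words for second-person ones, purely syntactically
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(utterance):
    # first matching pattern wins; no meaning is represented anywhere
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel nobody ever listens to me"))
# -> Why do you feel nobody ever listens to you?

The receptionist's reaction says more about how readily we read intelligence
into behaviour than it does about the program.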

>Anyone who has observed cats
>recognizes that they do this quite readily.  The abilty to train them
>to perform tricks is not necessarily a good gauge of learning ability.
>The main difference between humans and other animals is the number of
>neurons in the neocortex, which is the programmable part of the brain.

I think that's the area we need to look at now if we talk about learning.

>In addition, what gives you the idea that cats and dolphins don't
>communicate?  Of course they do.  Even ants communicate.  Maybe
>you meant they don't talk or they don't use language.

Yes, that more accurately describes what I meant.  Thanks for the 
clarification.

>Much of human behavior that we consider quite intelligent does not
>involve the use of "reasoning", including language.

I agree, but "reasoning" is a cognitive tool that required intelligence
to develop.  Cats have never developed a cognitive tool.

- Jim Ruehlin

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (09/14/90)

In article <3640@gara.une.oz.au> pnettlet@gara.une.oz.au (Philip Nettleton) writes:
>In article <4123@servax0.essex.ac.uk>, dewhn@Sol24.essex.ac.uk (Dewhurst N E J) writes:
>> Reading all this raises a question: I'd be very grateful if one of the
>> AI types reading this could answer it. When you talk about "intelligence"
>> in the context of AI, what are you looking for? When dolphins are said to be
>> "extremely intelligent", I'd take it to mean that their brains work
>> similarly to (if less well than) our own. But what's written above
>> suggests the poster had something different in mind. What? 

If you're referring to my posting, my point about dolphins is that while
they have a large brain-to-body-mass ratio, we haven't seen them exhibit
any intelligence.  It may be an intelligence we don't recognize, but as in
all sciences there are probably some basic axioms that describe intelligent
systems.  Hopefully, we can someday figure these out and see if dolphins
are indeed intelligent creatures.

>The problem in all the debates so far is that people have been personalising
>intelligence, saying that it is a characteristic of human beings. The test for
>an "Intelligent System" that we are working on attempts to depersonalise
>our appraisal of intelligence by reducing it into components, i.e., learning,
>autonomy, reasoning and self-awareness. 

All of which are phenomena of being human.

>When the test is applied to humans
>we can clearly say: "Yes, we are intelligent". When applied to other animals
>we can start to say things like: "Yes, a cat is intelligent", unless you've
>never had a kitten, in which case you may say some of the stupid things some
>people choose to post as news.

There must be a very large intersection between the groups of cat lovers
and AI hobbyists.  Look folks, I like cats and I've raised kittens.  As
for the opinion that my comment was "stupid", I've addressed the cat
issues in previous postings, so I won't reiterate here.  If after reading
my clarifications on the issue you still think it's stupid, well...
that's life.

>> Similarly: when you talk about an "intelligent machine", you're
>> presumably talking about a system that behaves in a certain way. 
>> But how can you detect its "intelligence", other than by 
>> observing that behaviour, and squaring it with what you know about 
>> the workings of your own head? And given that, how does the
>> idea of a "non-humanlike intelligence" make sense?

Wow, great question!  Here's someone who's actually been thinking about
the issues!
Detecting intelligence might be done by observing neuronal
activity, performing research on how neural nets work, and basically by
trying to get inside our heads and see HOW things work, not just WHAT.

>If we now have a machine, a man-made artifact, which is claimed to be
>intelligent, you can start making observations and doing tests to
>determine whether it learns, is autonomous, reasons and is self-aware. You
>may not understand anything of "how it works" but, after a while, you
>should be able to say, "yes" or "no" to whether it is intelligent.

If you don't know how it works, how can you say it's intelligent?  Applying
the Turing test, which is just observation of behaviour, results in the
conclusion that you must treat the system as intelligent because you
don't KNOW whether or not it's intelligent.  That's why we need to get
inside the system and understand how it works as well.

>The ONLY way to detect intelligence is through observation and testing with
>specific criteria in mind. You may say, "But I'm intelligent", but in
>all honesty, from my point of view, it's all hearsay :-).

As I pointed out above, this will only get you as far as knowing how to
relate to the "intelligent" system, not whether it's actually intelligent
or not.


-Jim Ruehlin

geb@dsl.pitt.edu (Gordon E. Banks) (09/15/90)

In article <3850@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
>
>I've seen examples of this, but also counter examples.  We used to have
>a cat we called the "Artichoke Cat", because her level of cognition was
>roughly equivalent to that very vegetable.  This thing couldn't modify
>her behaviour if her life depended on it!
>
Gee, I've known people like that too.

geb@dsl.pitt.edu (Gordon E. Banks) (09/15/90)

In article <3851@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:

>
>Good question!  Looking inside the "black box" called "learning organism",
>are there low-level cognitive similarities?  Or even high-level ones?
>I doubt it - humans and butterflies are very different.  
>
But humans and monkeys aren't.  Same basic hardware, you know.
Cats aren't that different either.
>
>Agreed.  My intention here was to ask if they display "intelligent" 
>communication.  Since we haven't detected them talking about 
>epistemology and metaphysics we can't know for sure that these communications
>are much more than evolved actions.
>
My goodness, by your definition 99% of all humans aren't intelligent!
How many of them talk about epistemology or metaphysics?

geb@dsl.pitt.edu (Gordon E. Banks) (09/15/90)

In article <3852@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
>In article <1990Sep10.140437.19913@cadre.dsl.pitt.edu> geb@dsl.pitt.edu (Gordon E. Banks) writes:
>>If by reason you mean use of formal logic, you are probably correct.
>
>Yes, that's exactly what I mean.
>
Then you should recognize that there are many humans that have no idea
of logic and thus under your definition can't reason.

>>But your definition of learning would seem to be idiosyncratic, perhaps
>>confined to a population of 1 (yourself).  I can't think of any
>
>Yes, flames _are_ easier than thinking...
>
That wasn't a flame.  I think you are using non-standard definitions
for terms that have a well-defined meaning in psychology and
cognitive science.  When you do that, you are bound to get people
arguing with you because they don't understand what you are really
talking about since they assume the standard definitions.

>>Much of human behavior that we consider quite intelligent does not
>>involve the use of "reasoning", including language.
>
>I agree, but "reasoning" is a cognitive tool that required intelligence
>to develop.  Cats have never developed a cognitive tool.
>

Formal logic is a cultural construct that only humans have *sufficient*
intelligence to have developed.  Some humans never can fathom such concepts,
actually.  It is a matter of degree not of kind, in my view.  The
total number of neurons that humans have available for such tasks 
exceeds mightily those available to other species (on this planet, at
least).  Just as some humans are more intelligent than others, some
species are more intelligent than others.  That doesn't mean that
the dumber members of the human race or of the animal kingdom lack
*all* intelligence, does it?  Just so, some AI programs are more
intelligent than others.  None are yet as intelligent (at least in
a general way) as even a dog, let alone a human.  But that isn't
to say with better hardware it won't happen.

augs@cray.com (Paul Algren) (09/15/90)

AI is a term used to describe a quest to solve increasingly complex problems
with a computer.

Let's face it, no program today is doing more than following the instructions 
some programmer has given it.  Somehow I can never assign intelligence to 
following directions tediously (don't try to be philosophical here!).
What some programs are able to do is solve very complex problems consistently
enough to leave us in awe!  It can be argued that the computer can solve 
some problems better than a human can just because it is more rigorous.
But, I don't think to be rigorous has anything to do with intelligence either.
 
The problem is that as the problems we want to solve become more complex, the 
time and effort needed to solve them become too great, given traditional
languages.

AI tools provide the added functionality needed to encode a solution
clearly enough, and with enough economy, to produce results where a traditional 
language would become incomprehensible.
 
If it solves a real problem don't knock it!! 
Corollary: Who cares if people think it's A`I'?

If someday we do create a computer which captures the essence of intelligence,
I hope it desires to follow directions tediously without pay. ????????

 

dmark@acsu.buffalo.edu (David Mark) (09/15/90)

In article <3851@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
>
>Perhaps the crux of this problem is the definition of "learning" as
>a purely behavioural one.  IMO, learning is more than just displaying
>certain behaviour.
>
>>Thus the "any" in the above quote seems inappropriate.
>
>Agreed, if you look merely at the behavioural aspects of learning.  Otherwise,
>maybe there's little similarity between the exhibited behaviour in humans
>and cats.

Jim, it is difficult to discuss issues such as these if people are
using the key terms to mean sharply different things.  Would you please
provide us with the definition of "learning" that you are using,
either by making up your own or by quoting some source?  I presume that
we are not disagreeing much about the facts of animal behavior and
human behavior, but are disagreeing about what definitions of "intelligence"
and "learn" are appropriate.  And since "intelligence" is such a slippery
one, let's start with "learn" or "learning".  In particular, could you detail
what the "non-behavioral" aspects of learning are?

David Mark
dmark@acsu.buffalo.edu

dnk@frankland-river.aaii.oz.au (David Kinny) (09/15/90)

In article <3853@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
>
>Detecting intelligence might be done by observing neuronal
>activity, performing research on how neural nets work, and basically by
>trying to get inside our heads and see HOW things work, not just WHAT.
>
>	[Quotation omitted]
>
>If you don't know how it works, how can you say it's intelligent?  Applying
>the Turing test, which is just observation of behaviour, results in the
>conclusion that you must treat the system as intelligent because you
>don't KNOW whether or not it's intelligent.  That's why we need to get
>inside the system and understand how it works as well.
>

Enough of this homocentric stupidity.  Intelligence is not something you
can "detect" by observing neuronal activity, it is an emergent property
of extremely complex systems, it derives from their structure, it is
manifested in their behaviour, and it comes on a sliding scale.  Some cats
are more intelligent than others.  Cats are more intelligent than slugs,
and less intelligent than most humans.  (Slug lovers, no flames please!)

Do not make the mistake of defining intelligence to be "What humans do".
Firstly, it begs the question.  Secondly, it degenerates rapidly into
"What X believes humans do", where X is you, me, or some other know-all
who probably has an extremely shallow understanding of *what* it is that
humans do, let alone *how*.  People who, for whatever reason, insist on
equating intelligence with "What (and how) humans do" should at least have
the decency to speak about "human intelligence", leaving the unqualified
word free to describe a wider range of phenomena.

Have you considered the possibility that it may not be possible for a
system to be intelligent enough to understand its own workings?
Certainly, if you insist on knowing *how* a system works before you ascribe
intelligence to it, then you cannot claim yourself to be intelligent.
How does your memory work?  How do you recognise objects in the real world
so easily?  How does your ability to make abstractions arise?  You do not
know the answers to these questions.  Understanding how complex systems
work is *very* difficult.  If you know *how* a system works, then you can
conclude that you're probably much more intelligent than it is, and hence
that it's not very intelligent.

We must content ourselves with behavioural definitions of intelligence, such
as the Turing Test, at least until such time as we have a far more profound
understanding of how intelligence arises.  Claims about a given system
passing the Turing Test, while clearly not being intelligent, are specious
unless it is clear that such a system is (in principle) constructible, and
would in fact pass the test.  Remember that we currently do not know how
to construct *any* system that passes such a test, except by unskilled
labour.  If and when such a system is achieved or encountered, the chances
are that those not blinded by prejudice will admit its intelligence, but
few if any of us will begin to understand how it works.

-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
David Kinny                       Australian Artificial Intelligence Institute
dnk@aaii.oz.AU                                  1 Grattan Street
Phone: +61 3 663 7922                  CARLTON, VICTORIA 3053, AUSTRALIA

cam@aipna.ed.ac.uk (Chris Malcolm) (09/16/90)

In article <3853@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
>If you don't know how it works, how can you say it's intelligent?

Well, I don't know how I work, so I'm not intelligent. Nor is Jim
Ruehlin, or anyone else, unless there's some recent breakthrough in
cognitive psychology or neurobiology I don't know about :-)

That's the strong form of the intelligence-is-method argument. It is
more commonly found in its weaker form, which takes humans as being
intelligent by definition. Then all one has to do to demonstrate that the
latest AI toy is not intelligent is to show that no matter how well it
performs, it doesn't do it quite the same way that we do.

Now we know that technology gives us many ways of doing the same thing.
One can fly like a bird, or like an aeroplane. One can tell the time
with a clockwork or digital watch. One can search for problem solutions
forwards or backwards. So let us suppose, for the sake of argument, that
my complete mental capabilities have been implemented in some other
technology than biological, using other methods. This has given rise to
certain minor differences in performance, for example, my simulacrum is
faster at mental arithmetic than me but a bit wobblier on a bike, but
these differences are within the normal variability of human
performance. But, since how it's done is crucial, I am intelligent, but
my simulacrum is not, and the research effort (and success) of building
the simulacrum has not advanced our understanding of intelligence at
all.

Is this a useful position to adopt?

I suspect that the popularity of the how-it-works argument comes from
knowing that intelligence is not easily recognised. Even if intelligence
can be defined completely in terms of behaviour, in practice it would be
impossible to observe enough behaviour to be really sure, just as in
practice one can never test a complex program completely. So in practice
the attribution of intelligence depends on lots of presumptions about
unobserved behavioural capabilities. But there is another way of finding
out how something would behave: prediction from knowing how it works.
That's very useful in any complex system which is understood; but it
doesn't mean that how it works is more important than what it does: how
it works is a way of getting a handle on what is important -- what it
does.
-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

muttiah@rye.ecn.purdue.edu (Ranjan S Muttiah) (09/16/90)

In article <2495@frankland-river.aaii.oz.au> dnk@frankland-river.aaii.oz.au (David Kinny) writes:
>Enough of this homocentric stupidity.  Intelligence is not something you
>can "detect" by observing neuronal activity, it is an emergent property
>of extremely complex systems, it derives from their structure, it is

Here is a goody:

	Intelligence is what intelligence tests measure, much like temperature
	is what a thermometer measures, pressure is what a pressure gauge
	measures, etc. etc. etc.

	Unfortunately, I think, the more we know what "intelligence" really is,
	the less likely we will find a decent definition, since it is NOT a
	static phenomenon.

sticklen@cps.msu.edu (Jon Sticklen) (09/16/90)

From article <3030@aipna.ed.ac.uk>, by cam@aipna.ed.ac.uk (Chris Malcolm):
> In article <3853@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
...
> unobserved behavioural capabilities. But there is another way of finding
> out how something would behave: prediction from knowing how it works.
...


But this "how it works" is ambiguous because there are many ways of
knowing "how it works." Eg, one way to know how something works is
describing the underlying implemetnation - for human intelligence,
I then have to describe things in terms of neurophysiology. But another,
perhaps more illuminating way of describing "how it works" is in terms of
the information processing that has to go on to support the activity that
is called intelligent. What is sought then is an implementation-free
description of "how it works." You might call it a "Knowledge
Level Architecture." This would be a level of intelligent system
description above the symbol level, but below the KL (as described by
Newell). In fact, I *did* call it that in

        Sticklen, J. (1989). Problem Solving Architectures at the
        Knowledge Level. Journal of Experimental and Theoretical
        Artificial Intelligence. 1 (pp. 1-52).


  ---jon---

person@plains.NoDak.edu (Brett G. Person) (09/16/90)

Ok, I've thought about this for a while.  Doesn't the term intelligent
connote a sense of "understanding" in terms of interest?

I mean, wouldn't a program have to be aware of its surroundings and
interactions with its own environment to be considered intelligent?  We
consider most live things (animals, birds, etc.) to be intelligent because
they are actively involved with their environments and make adjustments
accordingly.
Curiosity would also factor in here. And just how the heck would you make a
program curious?  Could you give it a desire to learn? Could you make it
wonder about the world around it?

   Our thoughts are essentially the independent stringing together of random
pieces of information to a coherent conclusion.
  For example, I took what little knowledge I have about AI, some knowledge
about life, some knowledge gained by a couple of years of philosophy
classes, and came up with this article.  I've strung pieces of inane trivia
together to form my own opinion.  And that is most probably what
intelligence and AI are all about.
-- 
Brett G. Person
North Dakota State University
uunet!plains!person | person@plains.bitnet | person@plains.nodak.edu

pmm@acsu.buffalo.edu (patrick m mullhaupt) (09/17/90)

>a) The system MUST be able to learn.
>b)	The system MUST be autonomous.
>c)	The system MUST be able to reason.
>d)	The system MUST be self aware.
>It is clear to see that a human easily satisfies these requirements and so is
>an intelligent system. A cat also satisfies these requirements. So we now have
>a common basis for known intelligent behaviour. An intelligent machine would
>need to satisfy these requirements to be classed as an intelligent system.
>
>		With Regards,
>
>				Philip Nettleton,
>				AUSTRALIA.






	I don't have any problems with these constraints.  I do have a
question though.

	Would a group of individuals, say the congress of the USA,
qualify as an "intelligent system"? :-)  More generally, do you allow
collective intelligences?  I would guess that you might not, but your
definition seems to allow it.

	G'day,
		Patrick Mullhaupt

sen@cl.bull.fr (sen) (09/17/90)

-----------------------------------------

Would somebody tell me the meaning of the expression "STRONG AI", used
by almost everybody with different meanings?

						 - siddhartha


--
--------------------------------------sen@saphir.cl.bull.fr---
Siddhartha Sen, F7 1 D 5, BULL S.A.  ##    Office (33) (1) 34.62.70.00 ext 3911
78340 Les Clayes sous Bois, FRANCE   ##    Res    (33) (1) 34.60.47.52
**** COGITO ERGO SUM - JE PENSE DONC JE SUIS - I THINK THEREFORE I AM ****

BKort@bbn.com (Barry Kort) (09/17/90)

In article <5907@plains.NoDak.edu> person@plains.NoDak.edu (Brett G. 
Person) writes:

> Ok, I've thought about this for a while.  Doesn't the term intelligent
> connote a sense of "understanding" in terms of interest?

To my mind, an intelligent system must not only be able to think and solve 
problems, it must also be able to learn and evolve over time.  The 
frontiers of learning are the focus of one's interests.  The internal 
representations of the acquired knowledge (corresponding to our mental 
models) reflect one's understanding or comprehension.  (The etymology of 
"comprehend" is instructive:  it means "to capture with".  We capture 
knowledge with models and other symbolic representations.)

> I mean wouldn't a program have to be aware of it's surroundings and
> interactions with it's own environment to be considered intelligent?  We
> consider most live things (animals, birds etc) to be intelligent because
> they are actively involved with with their environments and make 
> adjustments accordingly.

Awareness of surroundings gives rise to consciousness.  First, a system 
needs sensors to gather raw data.  Then it needs to interpret sensory data 
and integrate it into a structured representation of the external state of 
affairs.  These representations could be models or frames, or other forms 
of knowledge representation.
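
For what it's worth, here is a rough sketch (in Python; every name in it is
my own invention, purely illustrative) of the sort of frame-style
representation alluded to above: slots with inherited defaults that get
filled in as interpreted sensory data arrives.

# A minimal frame-style knowledge representation: slots with default
# values, inherited from a parent frame, overridden as interpreted
# sensory data arrives.  Illustrative only - not any particular system.

class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent              # frames can inherit slot defaults
        self.slots = dict(slots)

    def get(self, slot):
        # Look up a slot, falling back on the parent frame's defaults.
        if slot in self.slots:
            return self.slots[slot]
        return self.parent.get(slot) if self.parent else None

    def fill(self, slot, value):
        # Record an observation, overriding any inherited default.
        self.slots[slot] = value

# Generic knowledge: what a room is usually like.
room = Frame("room", walls=4, lit=True, occupied=False)

# A structured representation of the current external state of affairs,
# built by filling slots of the generic frame with sensor readings.
this_room = Frame("room-17", parent=room)
this_room.fill("lit", False)              # light sensor reads dark
this_room.fill("occupied", True)          # motion sensor fired

print(this_room.get("walls"), this_room.get("lit"), this_room.get("occupied"))
# prints: 4 False True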

> Curiosity would also factor in here. And just how the heck would you 
> make a program curious?  Could you give it a desire to learn? Could you 
> make it wonder about the world around it?

Curiosity is a key emotion of a learning system.  It goes along with 
related emotions such as interest, fascination, boredom, anxiety, 
satisfaction, and confidence.

>    Our thoughts are essentially the independent stringing together of 
> random pieces of information to a coherent conclusion.

Thoughts are sentences we say to ourself.  When the pieces of information 
are thrown together haphazardly, we move from disciplined thought to 
dreaming and flights of fancy and fantasy.


Barry Kort
Visiting Scientist
BBN Labs
Cambridge, MA

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (09/18/90)

In article <2992@vela.acs.oakland.edu> atterlep@vela.acs.oakland.edu (Alan T. Terlep) writes:
>In article <3815@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
>>We have no indication
>>that human intelligence isn't the only form of intelligence.  Admittedly,
>>our definition (flimsy as it is) doesn't incorporate much beyond what we
>>observe in humans.  But so far, it's the only example we've got. 
>  As a side point, I'd like to say that this is untrue.  In fact, there are
>examples of intelligent behavior in many animals.  The example of the primates
>that speak sign language has been proven since one of the researchers walked in
>to begin teaching a new chimp sign language, only to find that the chimp had
>already learned the signs.  The reason these aren't seen as indications of 
>intelligence is that humans aren't going to give up their special status in the
>world without a fight.

I'll say again, these chimps are passing the Turing test.  They're
displaying intelligent _behaviour_.  Whether they are intelligent or not is
a matter for definition (of what intelligence is) and research of the
mechanism, not the observed behaviour.

While I've got your attention, I'll clarify something else.  It may be
that some of the people responding to my postings on this issue believe
that I don't think animals (specifically mammals) have intelligence - or
more specifically, higher level cognitive capabilities.  I do think that
they might, in fact probably do.  My arguments here are meant to convey
that I think we need to be more rigorous in our definitions about such 
things.  An insect or slug may look like it learns something, but its
lack of much of a nervous system makes it unlikely.  Obviously, mammals
have much more developed nervous systems, so it's more likely their
behaviour is actually representative of intelligence.  But without a
deeper understanding of how intelligence is implemented (on any system,
not just humans), we might not be able to say for sure.

> (If you want another example, I heard secondhand of a report that claimed that
>pigeons could identify a cup of water with the ocean, signifying abstract 
>thinking.)

I'm not sure I'd ascribe abstract thinking to that.  The two are sufficiently
different to just be separate objects to a bird.


- Jim Ruehlin

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (09/18/90)

In article <1990Sep14.172527.16601@cadre.dsl.pitt.edu> geb@dsl.pitt.edu (Gordon E. Banks) writes:
>In article <3852@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
>>In article <1990Sep10.140437.19913@cadre.dsl.pitt.edu> geb@dsl.pitt.edu (Gordon E. Banks) writes:
>>>If by reason you mean use of formal logic, you are probably correct.
>>Yes, that's exactly what I mean.
>Then you should recognize that there are many humans that have no idea
>of logic and thus under your definition can't reason.

You hardly need to read the newspaper to see the truth of that.  Even those
who've learned to reason usually don't do it as a natural act, even when
not under pressure or strain.
If people were rational, we wouldn't be in much of the mess we as humans
get into.  So I think that lots of people don't (not can't) reason.

>>>But your definition of learning would seem to be idiosyncratic, perhaps
>>>confined to a population of 1 (yourself).  I can't think of any
>>Yes, flames _are_ easier than thinking...
>That wasn't a flame.  I think you are using non-standard definitions
>for terms that have a well-defined meaning in psychology and
>cognitive science.  When you do that, you are bound to get people
>arguing with you because they don't understand what you are really
>talking about since they assume the standard definitions.

Good point.  I apologize for my comment.
As far as being well-defined, they may be in psychology, but all my
experience with cognitive science indicates otherwise.  The field is
too new, and the research varied enough, that you usually have to give
definitions a little leeway.
So here's a quick, informal summary:
	Learning:  An ability to acquire information and apply it
		to a variety of situations and circumstances.  This
		includes applying the information to areas that are
		unrelated to the original domain.
	Intelligence:  The ability to create, modify, and obtain
		cognitive structures/abilities that enhance this same
		ability.
Like all definitions of these words, I know lots of people won't like
them.  I don't debate definitions anymore because it never seems to
get anywhere.  But I'd be happy to talk about what's possible in
your definition of intelligence.

>>>Much of human behavior that we consider quite intelligent does not
>>>involve the use of "reasoning", including language.

>>I agree, but "reasoning" is a cognitive tool that required intelligence
>>to develop.  Cats have never developed a cognitive tool.

>Formal logic is a cultural construct that only humans have *sufficient*
>intelligence to have developed.  Some humans never can fathom such concepts,

I disagree.  While some humans don't fathom such concepts, I'd say all are
capable of doing so by virtue of being intelligent in the way humans are.

>Just as some humans are more intelligent than others, some
>species are more intelligent than others.  That doesn't mean that
>the dumber members of the human race or of the animal kingdom lack
>*all* intelligence, does it?  Just so, some AI programs are more
>intelligent than others.  None are yet as intelligent (at least in
>a general way) as even a dog, let alone a human.  But that isn't
>to say with better hardware it won't happen.

I agree.  As I said in another posting, I wouldn't rule out that there are
animals out there with some degree of intelligence.  They certainly
have some kind of cognition going on (in mammals, at least).  But it's
pretty much opinion and conjecture until we can generate more quantitative
measurements of intelligence (which would include a solid definition).

- Jim Ruehlin

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (09/18/90)

In article <2495@frankland-river.aaii.oz.au> dnk@frankland-river.aaii.oz.au (David Kinny) writes:
>Enough of this homocentric stupidity.  Intelligence is not something you
>can "detect" by observing neuronal activity, it is an emergent property
>of extremely complex systems, it derives from their structure, it is
>manifested in their behaviour, and it comes on a sliding scale.  Some cats
>are more intelligent than others.  Cats are more intelligent than slugs,
>and less intelligent than most humans.  (Slug lovers, no flames please!)

Oh, come on David.  Don't get so emotional.  This is supposed to be FUN,
remember?
I'm not being homocentric, I'm trying to be rigorous in what we're saying.
No one has "proved" intelligence in any other species or software.  We
presume we're intelligent, which is fair enough because we're the ones
trying to define it and we don't have anyone else coming along saying
"Hey you humans, you don't know nothing about intelligence!"
We also presume some species have some level of intelligence, and some
software has some kind of intelligence.  These may be convenient
presumptions, but we can't "know" until we've got a solid benchmark
for what intelligence is.
Which brings us to emergence.  I honestly don't mean this as a flame
David, but whenever I hear someone bring this up I want to puke.
Emergence doesn't explain anything, it only explains away what cognitive
science is trying to uncover.  In order to accept emergence, you have to
be willing to ignore the scientific method and reductive discovery.  I might
do that if a paradigm shift seems in order, but so far reduction has served
us quite well, and it doesn't seem like it's time to dump it quite yet.
As far as the other three aspects of intelligence are concerned, can you
back it up with some empirical data?  Can you show intelligence is derived
from structure?  That it's ALWAYS manifested in behaviour (I might be
having an intelligent thought without telling anyone about it) (Maybe
that's what lots of people think I've been doing all along... :-))?
I'll concede the third point - I too think it's a sliding scale.

>Do not make the mistake of defining intelligence to be "What humans do".
>Firstly, it begs the question.  Secondly, it degenerates rapidly into
>"What X believes humans do", where X is you, me, or some other know-all
>who probably has an extremely shallow understanding of *what* it is that
>humans do, let alone *how*.  People who, for whatever reason, insist on
>equating intelligence with "What (and how) humans do" should at least have
>the decency to speak about "human intelligence", leaving the unqualified
>word free to describe a wider range of phenomena.

OK.  I think definitionally it's redundant, but I'll try to remember to
specify "human" intelligence when arguing about the evidence of "x"
intelligence.

>Have you considered the possibility that it may not be possible for a
>system to be intelligent enough to understand its own workings?

Oh my yes!  Interesting question.  I think it's highly likely that,
because you need a very complex platform upon which to implement
intelligence, the intelligence can't comprehend that complexity.  I haven't
seen any of the "experts" talk about this (in books, etc.), but I think
it deserves some thought.

>Certainly, if you insist on knowing *how* a system works before you ascribe
>intelligence to it, then you cannot claim yourself to be intelligent.

Since I'm framing the question, I'll take the liberty.  I know it seems
a bit contradictory, but hey, I'm only human... :-)

>We must content ourselves with behavioural definitions of intelligence, such
>as the Turing Test, at least until such time as we have a far more profound
>understanding of how intelligence arises.  

I think we know barely enough to begin to try to define intelligence in
terms of implementation or processing, rather than behaviour. Only looking
at behaviour doesn't move us towards the root of the problem.


- Jim Ruehlin

cowan@marob.masa.com (John Cowan) (09/18/90)

In article <59525@bbn.BBN.COM> BKort@bbn.com (Barry Kort) writes:
>Thoughts are sentences we say to ourself.

Oh yes?  And what sentences were passing through Beethoven's mind when
he was thinking about his Ninth Symphony, please?

I'll never understand how verbal people can believe that >all< thinking
is verbal.  Reminds me of Quine's claim that when a mouse fears a cat,
he fears that a certain sentence is true!

-- 
cowan@marob.masa.com			(aka ...!hombre!marob!cowan)
			e'osai ko sarji la lojban

geb@dsl.pitt.edu (Gordon E. Banks) (09/18/90)

In article <3873@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:

>they might, in fact probably do.  My arguments here are ment to convey
>that I think we need to be more rigorous in our definitions about such 
>things.  An insect or slug may look like it learns something, but it's
>lack of much of a nervous system makes it unlikely.  

Despite what you think unlikely, it can be rigorously proved that such
animals learn.  This is done by comparing their behavior over many
trials against chance.  So even simple nervous systems learn.

sarima@tdatirv.UUCP (Stanley Friesen) (09/19/90)

In article <3851@se-sd.SanDiego.NCR.COM> jim@se-sd (Jim Ruehlin) writes:
>In article <35282@eerie.acsu.Buffalo.EDU> dmark@acsu (David Mark) writes:
>>Cats, and other mammals, and birds, and indeed even many invertebrates,
>>DO learn things! ...	
>
>Good question!  Looking inside the "black box" called "learning organism",
>are there low-level cognitive similarities?  Or even high-level ones?
>I doubt it - humans and butterflys are very different.  

At a high level this is true, but at a basic neurological level it is not.
Neurons operate the same in humans and in butterflies, and at this level
learning most certainly does take place in almost all known animals. 

>Perhaps the crux of this problem is the definition of "learning" as
>a purely behavioural one.  IMO, learning is more than just displaying
>certain behaviour.

Yes indeed, you seem to be using a very different definition of learning
than most biologists.  As a trained biologist, I would say that the standard
definition of learning runs something like this:
	A change in neurological responses due to repeated stimulus
	that tends to alter behavior in a way that is responsive to
	changing environmental situations.
[Note this is only an approximation - I do not have the formal definition handy]
By this definition the example with the butterflies is a *clear* and
*definitive* example of learning.

>Agreed, if you look merely at the behavioural aspects of learning.  Otherwise,
>maybe there's little similarities between the exhibited behaviour in humans
>and cats.

Not just the behavioral similarities, but also the identity of basic
neurological mechanism.  Even if the *cognitive* processes are different
in humans, cats and butterflies, the neuronal mechanisms are still the same.

>>But, by my everyday definition of "intelligence", cats and crows and many
>>other birds and mammals certainly have it.  
>
>How do you tell?  You indicate that there is similar behaviour between
>the butterfly and mammals, but say the butterfly doesn't have intelligence
>while the mammals do.  You may be right, but the question is:  beyond
>behaviour, what differentiates between the intellegence (learning) and
>non-intelligence?

I would distinguish between learning, which is shown by all forms with a
nervous system, and intelligence which involves creative behaviors - the
initiation of new behavior by mechanisms other than simple trial and error.
This includes anticipation, modeling, improvisation, recombination of
behavioral primitives, etc.  Cats show the latter, as well as learning.
I have never observed un-programmed behavior in any insect, so I doubt
that any insect is particularly intelligent.

>>Their "intelligence" does not
>>seem to be as elaborate or as developed as ours.  But they do "learn", and 
>>"remember" (experiments with food caching and re-finding in birds; I
>>can find references if you want), and "solve problems" (parrot pulling
>>string"foot over beak" to raise food to its perch), and even "form
>>generalizations".
>
>Is this learning or behaviour designed to acquire food?

It is both!  Just what do you think 'learning' is?  It is a `software' design
to increase adaptability, and thus survival, by allowing behavior to be
modified by past experience.  Learning is *primarily* a behavior designed
to acquire food (and escape predators, and ...).
Indeed in these examples not only do the birds show learning, they show
intelligence.

>
>Agreed.  My intention here was to ask if they display "intelligent" 
>communication.  Since we haven't detected them talking about 
>epistimology and metaphysics we can't know for sure that these communications
>are much more than evolved actions.

A good point.  I do not think anyone is claiming *equivalence* in learning
or intelligence, just that it exists to various degrees in many animals.
Intelligence is not an all or none thing, it is a measure of tendency.
Some entities have more of this tendency, some less.  Humans appear to
have an order of magnitude more of it than anything else, but cats still
have more of it than butterflies (and probably more of it than dogs).

>>And finally, is the domain or goal of "Artificial Intelligence" really
>>"Artificial HUMAN Intelligence" ?  
 
>We haven't positively located any other species that is intelligent, so
>we have only ourselves to base creating intelligent systems on.  I'm not
>saying there isn't other intelligent species (to a greater or lesser
>degree than us), just that we haven't identified them yet.

Perhaps by your definitions. But most biologists would disagree.  We know
of many relatively intelligent species, some more so, some less so.  The
porpoise appears to be second only to humans with the chimpanzee in the
same general vicinity.

If you wish to use different definitions of intelligence and learning than
biologists and psychologists, feel free to do so.  But then you should
try to give us a clear specification of *your* definition, so we can talk
the same language.

sarima@tdatirv.UUCP (Stanley Friesen) (09/19/90)

In article <59525@bbn.BBN.COM> BKort@bbn.com (Barry Kort) writes:
>To my mind, an intelligent system must not only be able to think and solve 
>problems, it must also be able to learn and evolve over time.  The 
>frontiers of learning are the focus of one's interests.  The internal 
>representations of the acquired knowledge (corresponding to our mental 
>models) reflect one's understanding or comprehension.  

	Hmm, this is very interesting.  This may actually be a clue to our
current difficulty in making real progress in AI.  We are trying to add
learning capacity to existing reasoning systems (called expert systems).
Evolution appears to have done it the other way around!  In evolution
*learning* came *first*, and it was only after this was well established
that anything resembling intelligence developed.  Even a planarian shows
the persistent changes of behavior due to prior experience that we define
as learning.

	Perhaps we should scrap all our nifty, complicated reasoning
engines and concentrate on designing a program that does nothing *but*
learn.
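
For concreteness, here is one toy reading of "a program that does nothing
*but* learn", modelled very loosely on the habituation even a planarian
shows.  All the names and numbers are mine and purely illustrative; this is
a sketch of the idea, not a proposal for a real system.

# A system that does nothing *but* learn: its response to a stimulus is
# persistently weakened by repetition (habituation), while responses to
# other, unrepeated stimuli slowly recover.  Parameters are arbitrary.

class Habituator:
    def __init__(self, decay=0.7, recovery=0.1):
        self.strength = {}        # current response strength per stimulus
        self.decay = decay        # how quickly repetition dulls a response
        self.recovery = recovery  # how quickly unused responses recover

    def respond(self, stimulus):
        s = self.strength.get(stimulus, 1.0)
        self.strength[stimulus] = s * self.decay     # learn: dull this response
        for other in self.strength:                  # others recover a little
            if other != stimulus:
                self.strength[other] = min(1.0, self.strength[other] + self.recovery)
        return s                                     # respond at the old strength

worm = Habituator()
for _ in range(5):
    print("poke :", round(worm.respond("poke"), 3))   # 1.0, 0.7, 0.49, ...
print("light:", round(worm.respond("light"), 3))      # novel stimulus: 1.0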
>
>Awareness of surroundings gives rise to consciousness.  First, a system 
>needs sensors to gather raw data.  Then it needs to interpret sensory data 
>and integrate it into a structured representation of the external state of 
>affairs.  These representations could be models or frames, or other forms 
>of knowledge representation.

I would say that consciousness requires even more than this.  Most 'higher'
primates, and perhaps many carnivores (like cats) show this kind of
intelligence (awareness of external state using internal mental models, which
can be used to reason).  Consciousness also requires *internal* sensors
which incorporate the entity's own state into the internal world models.

>Curiosity is a key emotion of a learning system.  It goes along with 
>related emotions such as interest, fascination, boredom, anxiety, 
>satisfaction, and confidence.

Quite likely, and solving the problem of making a computer curious might well
be a major breakthrough in AI.

davis@barbes.ilog.fr (Harley Davis) (09/19/90)

In article <3873@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:

   I'll say again, these chimps are passing the Turing test.  They're
   displaying intelligent _behaviour_.  Whether they are intelligent or not is
   a matter for definition (of what intelligence is) and research of the
   mechanism, not the observed behaviour.

Whatever intelligent behaviour the chimps may display, I doubt they
could pass the Turing test.

-- Harley

 I think that I shall never see
 Poetry written by a chimpanzee.

--
------------------------------------------------------------------------------
Harley Davis			internet: davis@ilog.fr
ILOG S.A.			uucp:  ..!mcvax!inria!davis
2 Avenue Gallie'ni, BP 85	tel:  (33 1) 46 63 66 66	
94253 Gentilly Cedex		
France

BKort@bbn.com (Barry Kort) (09/19/90)

In article <26F62D1A.94F@marob.masa.com> cowan@marob.masa.com (John Cowan) 
writes:

> Oh yes?  And what sentences were passing through Beethoven's mind when
> he was thinking about his Ninth Symphony, please?

My friend, not this dischordant note.  All mankind shall be as brothers.


Barry Kort
Visiting Scientist
BBN Labs
Cambridge, MA

BKort@bbn.com (Barry Kort) (09/19/90)

In article <147@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
> ... solving the problem of making a computer curious might well
> be a major breakthrough in AI.

As I see it, emotions are an emergent property of any learning system.  To 
put it poetically, emotions are the expression of vanishing ignorance.  Or 
to put it more mathematically, if K(t) denotes accumulated knowledge over 
time, then emotions correspond to the time derivative, dK(t)/dt.

Thus any learning system, be it made of silicon or made of meat, will 
exhibit emotions indicative of its progress or lack of progress in 
acquiring significant new knowledge.
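
To make that concrete, here is a toy sketch (mine, and everything in it is
an assumption made for illustration) in which K(t) is taken to be a
learner's running accuracy on some task, and the "emotion" at each step is
just a label attached to the recent change in K:

# Toy reading of "emotion = dK/dt": track accumulated knowledge K(t)
# (here, by assumption, an accuracy score over time) and label its rate
# of change.  The thresholds and labels are invented for illustration.

def emotion(dK):
    if dK > 0.05:
        return "curiosity/satisfaction"   # knowledge growing quickly
    if dK < -0.05:
        return "anxiety"                  # apparent loss of competence
    return "boredom"                      # learning has plateaued

K = [0.10, 0.30, 0.55, 0.70, 0.78, 0.80, 0.80, 0.79]   # K(t) over time
for t in range(1, len(K)):
    dK = K[t] - K[t - 1]                  # discrete stand-in for dK(t)/dt
    print("t=%d  K=%.2f  dK=%+.2f  ->  %s" % (t, K[t], dK, emotion(dK)))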


Barry Kort
Visiting Scientist
BBN Labs
Cambridge, MA

dmark@acsu.buffalo.edu (David Mark) (09/20/90)

In article <3875@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:

>I'm not being homocentric, I'm trying to be rigorous in what we're saying.
>No one has "proved" intelligence in any other species or software. 
             ^^^^^^                  ^^^^^^^^^

Jim, can you point me at a PROOF that humans ARE intelligent?

David Mark, dmark@acsu.buffalo.edu

cerebus@corona.bu.edu (Timothy Miller) (09/20/90)

In article <36796@eerie.acsu.Buffalo.EDU>, dmark@acsu.buffalo.edu (David Mark) writes:
|> In article <3875@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
|> 
|> >I'm not being homocentric, I'm trying to be rigorous in what we're saying.
|> >No one has "proved" intelligence in any other species or software. 
|>              ^^^^^^                  ^^^^^^^^^
|> 
|> Jim, can you point me at a PROOF that humans ARE intelligent?
|> 
|> David Mark, dmark@acsu.buffalo.edu

	That particular question is the bane of many a philosophy student:

	"Prove to me that I think."

	Can't be done.  I may *act* like I think, but what *proof* is that?

	For that matter, prove to me that you *exist*.  Even Descartes got stuck in that one.

				Just trying to make your existence a 
				little less secure,

					Timothy J. Miller
					cerebus@bu-pub.bu.edu

e343ca@tamuts.tamu.edu (Colin Allen) (09/20/90)

In article <147@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>In article <59525@bbn.BBN.COM> BKort@bbn.com (Barry Kort) writes:
>>To my mind, an intelligent system must not only be able to think and solve 
>>problems, it must also be able to learn and evolve over time.  The 
>	Hmm, this is very interesting.  This may actually be a clue to our
>current difficulty in making real progress in AI.  We are trying to add
>learning capacity to existing reasoning systems (called expert systems).
>Evolution appears to have done it the other way around!
> {stuff deleted}
>	Perhaps we should scrap all our nifty, complicated reasoning
>engines and concentrate on designing a program that does nothing *but*
>learn.

The trouble with this proposal is that we don't have a few eons to sit
around and wait for the results while the systems evolve.  The
suggestion might be useful if we knew how to take simple learning
systems modeled on organisms like Aplysia and transform them into
devices capable of conversing in a natural language like English, but
we don't.  Neither is it clear that focusing on simple learning
devices will tell us how to get to the more complicated things.  We
just have to jump right in with the hard stuff.

Colin Allen				e343ca@tamuts.tamu.edu
Department of Philosophy
Texas A&M University			(409) 845-3606
College Station, TX 77843-4237

sarima@tdatirv.UUCP (Stanley Friesen) (09/20/90)

In article <1990Sep18.144452.9530@cadre.dsl.pitt.edu> geb@dsl.pitt.edu (Gordon E. Banks) writes:
>In article <3873@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
 
>>  My arguments here are ment to convey
>>that I think we need to be more rigorous in our definitions about such 
>>things.  An insect or slug may look like it learns something, but it's
>>lack of much of a nervous system makes it unlikely.  
 
>Despite what you think unlikely, it can be rigorously proved that such
>animals learn.  THis is done by comparing their behavior over many
>trials with random.  So even simple nervous systems learn.

Absolutely, using the standard biological definition of learning even slugs
can *learn*.  (They are almost entirely devoid of *intelligence*, that is
they only possess the first of the 4 characteristics proposed as defining
intelligence - learning).

In fact I would even go further than you did.  Even a *single* *neuron* is
capable of learning, at least in a very simple-minded way.  In fact this
is probably a large part of the biological purpose of neurons.  Except 
where the stimulus and response are in different parts of an organism,
using a neuron is far less efficient than simply having the effector respond
directly to the stimulus.
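
To put the single-neuron claim in concrete terms, here is a one-unit
sketch (illustrative code of my own, not a biological model): a lone
threshold unit adjusting its connection weights from repeated
stimulus/feedback pairs until it responds correctly.

# A single "neuron" learning in a very simple-minded way: a perceptron
# unit adjusting its weights from repeated stimulus/feedback pairs.
# This illustrates one-unit learning; it is not a model of real neurons.

def step(x):
    return 1 if x > 0 else 0

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

# Stimulus pairs and the "correct" response (logical AND).
training = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for epoch in range(20):                    # repeated exposure to the stimuli
    for x, target in training:
        out = step(weights[0] * x[0] + weights[1] * x[1] + bias)
        error = target - out
        weights[0] += rate * error * x[0]  # strengthen/weaken connections
        weights[1] += rate * error * x[1]
        bias += rate * error

for x, target in training:
    out = step(weights[0] * x[0] + weights[1] * x[1] + bias)
    print(x, "->", out, "(wanted", target, ")")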

tarquin@athena.mit.edu (Robert P Poole) (09/20/90)

>I'll never understand how verbal people can believe that >all< thinking
>is verbal.  Reminds me of Quine's claim that when a mouse fears a cat,
>he fears that a certain sentence is true!

I was wondering when somebody was going to bring this up.  I agree, nonverbal
thought is at least as important as verbal thought.  In fact, Einstein wrote
that most of his thinking in formulating General Relativity was entirely
nonverbal -- he played with three and four dimensional contours in his head.
(Yes, folks, he had an incredible geometric intuition which most of us can't
match, but I think this was not a special case.)

--
Robert P. Poole                       tarquin@athena.mit.edu
46 Massachusetts Avenue               MIT Course VIII
311B Bexley Hall                      "I love the smell of napalm early in the
Cambridge, MA  02139                   morning.  Smells like... victory!"

jwi@cbnewsj.att.com (Jim Winer @ AT&T, Middletown, NJ) (09/20/90)

Harley Davis writes:

>  I think that I shall never see
>  Poetry written by a chimpanzee.

> ------------------------------------------------------------------------------
> Harley Davis			internet: davis@ilog.fr
> ILOG S.A.			uucp:  ..!mcvax!inria!davis
> 2 Avenue Gallie'ni, BP 85	tel:  (33 1) 46 63 66 66	
> 94253 Gentilly Cedex		
> France

I have never heard of any experiments in which it was shown that a chimp
appreciated human poetry. (Of course I haven't looked very hard, but it
seems an unlikely subject for research funding.)

It is entirely possible that you may see poetry written by a chimp, but
neither recognize it as poetry nor appreciate it -- the same can be said
for the scribblings of many immature humans -- unless, of course,  you 
wish to extend this discussion to include *what is poetry?*

Jim Winer -- jwi@mtfme.att.com -- Opinions not represent employer.
------------------------------------------------------------------
"No, no: the purpose of language is to cast spells on other people ..."
								Lisa S Chabot
								

cowan@marob.masa.com (John Cowan) (09/20/90)

In article <59555@bbn.BBN.COM> BKort@bbn.com (Barry Kort) writes:
>In article <26F62D1A.94F@marob.masa.com> cowan@marob.masa.com (John Cowan) 
>writes:
>
>> Oh yes?  And what sentences were passing through Beethoven's mind when
>> he was thinking about his Ninth Symphony, please?
>
>My friend, not this dischordant note.  All mankind shall be as brothers.

Gotcha!  That's only the 4th movement!

The first three movements have no words at all y'know.
-- 
cowan@marob.masa.com			(aka ...!hombre!marob!cowan)
			e'osai ko sarji la lojban

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (09/20/90)

In article <146@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>In article <3851@se-sd.SanDiego.NCR.COM> jim@se-sd (Jim Ruehlin) writes:
>>Good question!  Looking inside the "black box" called "learning organism",
>>are there low-level cognitive similarities?  Or even high-level ones?
>>I doubt it - humans and butterflys are very different.  
 
>At a high level this is true, but at a basic neurological level it is not.
>Neurons operate the same in humans and in buterflies, and at this level
>learnng most certainly does take place in almost all known animals. 

I'm glad we've got a biologist in on this conversation.  We need more of
them to involve themselves in this area.

Neurons do, but what about cognitive structures?  No doubt we have different
and more powerful neural computational abilities (call it mid-level
cognition), probably due to some of the specialized neurons in the
cerebral cortex (I hope I'm naming it correctly - I ain't no biologist!).
We can make up all sorts of cognitive structures that they can't (as
hypothesized by frames, scripts, etc. etc.).

>yes indeed, you seem to be using a very different definition of learning
>than most biologists.  As a trained bologist, I would sya that the standard
>definition of learning runs something like this:
>	A change in neurological responses due to repeated stimulus
>	that tends to alter behavior in a way that is responsive to
>	changing environmental situations.
>[Note this is only an aproximation - I do not have the formal definition handy]
>By this definition the example with the buterflies is a *clear* and
>*definitive* example of learning.

That seems to me to be a fair definition for what biologists are trying
to study.  I certainly don't know enough to discuss it in that realm
anyway.  But to apply that definition to cognitive science leaves out
the capability to just sit back and think, to have thoughts or engage
in cognitive activity with NO change in behaviour.  I can sit back and
enjoy the memory of the date I had last night, or draw conclusions
about S&L presidents who didn't run the banks properly.  But these don't
necessarily change my behaviour.  This is learning, as I'm arriving at
new data (e.g., "Those S&L guys are crooks!"), but I'm not changing
my behaviour (I don't bank at S&L's anyway).

>>Agreed, if you look merely at the behavioural aspects of learning.  Otherwise,
>>maybe there's little similarities between the exhibited behaviour in humans
>>and cats.
>Not just the behavioral similarities, but also the identity of basic
>neurological mechanism.  Even if the *cognitive* processes are different
>in humans, cats and buterflies, the neuronal mechanisms are still the same.

Yes, but we have some different neurons (e.g., cerebral cortex vs.
hippocampus), and more of them.  The "hardware" is important, but
what we can do on top of it is what makes learning, or intelligence,
what it is.

>I would distinguish between learning, which is shown by all forms with a
>nervous system, and intelligence which involves creative behaviors - the
>initiation of new behavior by mechanisms other than simple trail and error.
>This includes anticipation, modeling, improvization, recombination of
>behavioral primitives & c.  Cats show the latter, as well as learning.
>I have never observed un-programmed behavior in any insect, so I doubt
>that any insect is particualrly intelligent.

You may be right.  But in some of the examples you cite (such as modeling)
we currently don't have a way to see if cats model internally.  The only
way we can with humans so far (as far as I know, anyway) is to query
them as to what cognitive process is occurring.  So while cats might be
doing just that, we don't know if they really are, and won't until we
have a more accurate and language-free method of determining if this is
true.

>>Is this learning or behaviour designed to acquire food?
 
>It is both!  Just what do you think 'learning' is?  It is a `software' design
>to increase adaptability, and thus survival, by allowing behavior to be
>modified by past experience.  Learning is *primarily* a behavior designed
>to acquire food (and escape predators, and ...).
>Indeed in these examples not only do the birds show learning, they show
>intelligence.

While Man goes beyond this in civilized society, I'll concede this as
at least the original purpose or function of the ability to learn.

>A good point.  I do not think anyone is claiming *equivalence* in learning
>or intelligence, just that it exists to various degrees in many animals.
>Intelligence is not an all or none thing, it is a measure of tendency.

I agree.  The reason I go on about learning and intelligence in other
animals is because we aren't very rigorous about what these things are
and how to study them.  We often rely solely on behaviour without
regard to the internal activity going on.  I don't think we should
be surprised if we find out that while some behaviours look the same
between humans and animals, the motivations or internal mechanisms that
cause them are very different.  In other words, we're naturally prone
to anthropomorphism.

>>We haven't positively located any other species that is intelligent, so
>>we have only ourselves to base creating intelligent systems on.  I'm not
>>saying there isn't other intelligent species (to a greater or lesser
>>degree than us), just that we haven't identified them yet.
 
>Perhaps by your definitions. But most biologists would disagree.  We know
>of many relatively intelligent species, some more so, some less so.  The
>porpoise appears to be second only to humans with the chimpanzee in the
>same general vicinity.

>If you wish to use different definitions of intelligence and learning than
>biologists and psychologists, feel free to do so.  But then you should
>try to give us a clear specification of *your* definition, so we can talk
>the same language.

I won't argue with the biologist definition of intelligence - if the
distinction works well for them, that's fine.  As a trained cognitive
scientist, I take the cognitive approach to the definition.

As I'm sure everyone in this conversation knows, EVERYONE has a different
definition of intelligence, some of them wildly different.  I think when
discussing this issue, we need to be aware that we'll always be talking
from different definitions, looking to see where the weak and strong
points are in them.

I've already posted what my definitions of intelligence and learning are,
so I won't waste time and space here again.

- Jim Ruehlin

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (09/20/90)

In article <59556@bbn.BBN.COM> BKort@bbn.com (Barry Kort) writes:
>As I see it, emotions are an emergent property of any learning system.  To 
>put it poetically, emotions are the expression of vanishing ignorance.  Or 
>to put it more mathematically, if K(t) denotes accumulated knowledge over 
>time, then emotions correspond to the time derivative, dK(t)/dt.

I disagree.  Emotions are physical sensations coupled with memories or
a particular thought.  I don't think they're necessary for "being", 
"intelligence", or "learning".
Not to flame, but I wish people would stop using the term "emergence".
I know they won't, but could you state what you mean?  My assessment
of the idea of emergence is that it's not possible within our current
paradigm of science and rationality.  The impression I get is that
people throw the term around when they need a hand-waving explanation
of some mental phenomenon.

>Thus any learning system, be it made of silicon or made of meat, will 
>exhibit emotions indicative of its progress or lack of progress in 
>acquiring significant new knowledge.

I can take a multitude of drugs that will allow me to feel any emotion
at all.  I wouldn't consider this a measurement of progress in learning
anything.

- Jim Ruehlin

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (09/20/90)

In article <36796@eerie.acsu.Buffalo.EDU> dmark@acsu.buffalo.edu (David Mark) writes:
>In article <3875@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
>
>>I'm not being homocentric, I'm trying to be rigorous in what we're saying.
>>No one has "proved" intelligence in any other species or software. 
              ^^^^^^                  ^^^^^^^^^
>Jim, can you point me at a PROOF that humans ARE intelligent?

Only definitionally.  Since we can form the question "what is intelligence?",
and we appear to be the only species around (so far) that can, for the
purposes of pursuing the question we must be intelligent.

In other words, since we ask the question, we're intelligent.  You can
argue that we may not be intelligent in the "absolute" sense, but if not,
who cares? If we aren't, we'll never know it anyway, so why worry about it?

- Jim Ruehlin

sarima@tdatirv.UUCP (Stanley Friesen) (09/21/90)

In article <3874@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
>Good point.  I appologize for my comment.
{stuff deleted}
>So here's a quick, informal summary:
>	Learning:  An ability to acquire information and apply it
>		to a variety of situations and circumstances.  This
>		includes applying the information to areas that are
>		unrelated to the original domain.

Well, I now understand where you're coming from.  By *this* definition I must
agree there are very few, if any, animals other than humans which display this
sort of learning.  Perhaps some of the great apes do, but even the porpoise
(otherwise more intelligent than a chimp) does not seem capable of cross-over
learning.

However, I think this may be too restrictive to be useful.  It seems to require
a very sophisticated conceptual framework to operate.  Indeed it seems to be
very specialized, since other animals that *can* reason, cannot apply lessons
across problem domain boundaries.  I would thus include this type of behavior
as a special case of reasoning, rather than linking it to learning.  [Or I
might place it in a separate category of its own, perhaps called 'creativity'].
In short, I do not see it as useful to make learning depend on reasoning by
definition - I would rather keep them clearly and fully separated [i.e. they
should be atomic concepts].
[BTW the simpler definition is also more easily measured]

sarima@tdatirv.UUCP (Stanley Friesen) (09/21/90)

In article <3893@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
>In article <146@tdatirv.UUCP> sarima@tdatirv.UUCP (I) write:
>>At a high level this is true, but at a basic neurological level it is not.
>>Neurons operate the same in humans and in buterflies, and at this level
>>learnng most certainly does take place in almost all known animals. 

>Neurons do, but what about cognitive structures?  No doubt we have different
>and more powerful neurally computational (call it mid-level cognition)
>abilities, probably due to some of the specialized neurons in the 
>cerbral cortex 
>We can make up all sorts of cognitive structures that they can't (as
>hypothesized by frames, scripts, etc. etc.).

In the brain concepts &c. are essentially patterns of activity in (very)
large sets of neurons.  In short the mind is a hierarchical structure, the
capabilities at each level are based on composition of lower level functions.
Conceptual learning involves many individual neurons learning to cooperate
in particular ways.  [Actually there are several levels between the neuron
and anything that we would call 'concepts']
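
As a purely illustrative picture of that composition (nothing below is a
claim about real cortex; the units and weights are invented), think of
low-level units responding to raw features and a higher-level "concept"
unit responding only to a particular pattern of activity across them:

# Illustrative only: a "concept" as a pattern of activity over lower-level
# units, with each level built by composing the outputs of the level below.

def unit(weights, bias):
    # A simple threshold unit over a vector of inputs.
    def fire(inputs):
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 if total > 0 else 0
    return fire

# Level 1: feature detectors over the raw input (x1, x2).
has_fur = unit([1, 0], -0.5)
barks   = unit([0, 1], -0.5)

# Level 2: a "concept" defined only by the pattern of level-1 activity.
dog_concept = unit([1, 1], -1.5)          # fires only if both features fire

def perceives_dog(raw):
    level1 = [has_fur(raw), barks(raw)]   # pattern of lower-level activity
    return dog_concept(level1)

print(perceives_dog([1, 1]))   # 1: both features active -> concept active
print(perceives_dog([1, 0]))   # 0: pattern incomplete  -> concept silent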
 
>>[Tentative definition of learning]
>>	A change in neurological responses due to repeated stimulus
>>	that tends to alter behavior in a way that is responsive to
>>	changing environmental situations.
 
>That's seems to me to be a fair definition for what biologists are trying
>to study.  I certainly don't know enough to discuss it in that realm
>anyway.  But to apply that definition to cognitive science leaves out
>the capability to just sit back and think, to have thoughts or engage
>in cognitive activity with NO change in behaviour.

Ah, but I would call this kind of process abstract reasoning, *not* learning.
In short, it is useful to clearly separate the concept of learning per se from
the other components of intelligence, like reasoning.  This allows each piece
to be studied and characterized on its own merits.

I do admit that I may have made a mistake in including 'neurological' in the
above definition.  Basically what I have in mind is that any system that modifies
its behavior on the basis of prior experience shows learning.  This concept
can certainly be applied to cognitive science, since it is level independent.
Changes in conceptual structures due to experience, at least in living things,
invariably lead to changes in behavior.

> I can sit back and
>enjoy the memory of the date I had last night, or draw conclusions
>about S&L presidents who didn't run the banks properly.  But these don't
>necessarily change my behaviour.  This is learning, as I'm arriving at
>new data (e.g., "Those S&L guys are crooks!"), but I'm not changing
>my behaviour (I don't bank at S&L's anyway).

I rather think that your behavior is changed more than you might think by this.
The changes are likely to be subtle, and hard to link with your opinion of
S&L operators - but they will still be there.  [For instance it might lead
you to say 'Those S&L guys are crooks!', which you would not otherwise have
said]

>Yes, but we have some different neurons (e.g., cerebral cortex vs.
>hypocamus (sp?), and more of them.  The "hardware" is important, but
>what we can do on top of it is what makes learning, or intelligence,
>what it is.

Well, the different types of neurons are not really all that different.
They differ mainly in the type and pattern of connections they make.  They
may also differ in the type of signal they send.  However all of these types
of variation exist in butterflies and even worms - it is only the exceedingly
primitive forms like hydra and planaria that lack internal differentiation
of nerve cell types.
[BTW the hippocampal neurons are essentially identical to the cortical ones,
there is actually more variation *within* the cortex or hippocampus than
between them.  A pyramidal cell is far more different from a stellate cell
than a cortical pyramidal cell is different from a hippocampal pyramidal cell]

>>I would distinguish between learning, which is shown by all forms with a
>>nervous system, and intelligence which involves creative behaviors - the
>>initiation of new behavior by mechanisms other than simple trail and error.
>>This includes anticipation, modeling, improvization, recombination of
>>behavioral primitives & c.
 
>You may be right.  But in some of the examples you cite (such as modeling)
>we currently don't have a way to see if cats model internally.  The only
>way we can with humans so far (as far as I know, anyway) is to query
>them as to what cognitive process is occuring.  So while cats might be
>doing just that, we don't know if they really are, and won't until we
>have a more accurate and language-free method of determining if this is
>true.

There are some behaviors which appear to require internal modelling which
can be observed without needing language.  The classic example that I know
of involved an ape rather than a cat.  In this experiment with the ape it
was placed in a tall cage with a bunch of bananas hung on a string from the
top, out of the ape's reach.  Also in the cage were a crate and a stick.
After jumping up and grabbing at the fruit for a while, the ape went and sat
down and thought for a while.  Then it got up, placed the crate under the fruit,
picked up the stick and knocked the bananas down.  This was done *without*
any significant amount of trial and error - the ape clearly had some idea
of what it was doing.  I maintain that this is proof of some sort of internal
modelling, in which the ape did the trial and error in its head.

>While Man goes beyond this in civilized society, I'll concede this as
>at least the original purpose or function of the ability to learn.

Quite.  This is an example of what biologists call 'pre-adaptation' - a
feature or capability of an organism that evolved for one purpose that, by
accident, is useful for something entirely different.

>I agree.  The reason I go on about learning and intelligence in other
>animals is because we arn't very rigorous about what these things are
>and how to study them.  We often rely solely on behaviour without
>regard to the internal activity going on.  I don't think we should
>be suprised if we find out that while some behaviours look the same
>between humans and animals, the motivations or internal mechanisms that
>cause them are very different.  In other words, we're naturally prone
>to anthropomorphism.

I do agree with this.  I try to be more rigorous than many people here.
I find many of the sign-language experiments with chimpanzees
particularly wanting here.  However, I am not sure that a difference in
mechanism should necessarily rule out the use of terms like intelligence
or learning.  Certainly, I doubt that the ape above was accompanying his
thought with the kind of running monologue humans tend to use in that
situation.  What I would say is that behavior which involves novelty
or flexibility is evidence of some degree of intelligence, whatever the
internal mechanism.

Certain mechanisms are likely to be self-limiting, and thus unable to
give rise to more sophisticated levels of intelligence.  Thus it may be
that human level intelligence may be achieved by only one mechanism, and
that is why, say, porpoises have not achieved that level of intelligence.
[That is porpoise intelligence may be based on a mechanism that precludes
the level of abstraction humans are capable of].

>I won't argue with the biologist definition of intelligence - if the
>distinction works will for them, that's fine.  As a trained cognitive
>scientist, I take the cognitive approach to the definition.

The main advantage of the biological definitions is that they provide a
cleaner separation of the concepts into more basic, more easily studied
components.  This allows for greater 'modularity' in working with them.
[You can study learning without also studying conceptualization, or you
can study concept formation without dealing with learning and so on].
I think that cognitive science would be well served by applying this
level of reductionism.

bhanafee@ADS.COM (Brian Hanafee) (09/22/90)

Could we please redirect this discussion to comp.ai.philosophy, which
was created expressly for this debate.  Perhaps we could at least
cross-post to that group to get it started.

Thanks,

Brian Hanafee

BKort@bbn.com (Barry Kort) (09/22/90)

In article <3894@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim 
Ruehlin, Cognitologist domesticus) writes:

> I can take a multitude of drugs that will allow me to feel any emotion
> at all.  I wouldn't consider this a measurement of progress in learning
> anything.

The release of neurotransmitters is an epiphenomenon of emotions.  In 
silicon-based systems, one would expect a flood of state changes to 
manifest itself through corresponding solid-state epiphenomena.  But
mimicking the epiphenomena is not the same thing as experiencing genuine
learning.  Perhaps we need to coin a new term to distinguish emotions
mediated by the learning process from chemically induced pharmacological
effects.


Barry Kort
Visiting Scientist
BBN Labs
Cambridge, MA

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (09/22/90)

In article <3893@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
>That's seems to me to be a fair definition for what biologists are trying
>to study.  I certainly don't know enough to discuss it in that realm
>anyway.  But to apply that definition to cognitive science leaves out
>the capability to just sit back and think, to have thoughts or engage
>in cognitive activity with NO change in behaviour.  

Well, of course you do have a behavior, namely changes in your
neural state (action potentials, neurotransmitters going across
synapses, etc).  Single-unit and multiple-unit recording should be valid
tools for Cognitive Science/Physiological Psychology.  Of course, there
are limits to what it can tell us.

>Yes, but we have some different neurons (e.g., cerebral cortex vs.
>hippocampus), and more of them.  The "hardware" is important, but
>what we can do on top of it is what makes learning, or intelligence,
>what it is.

I wouldn't say we have different neurons.  We definitely have
extra brain areas, and some of our areas differ in shape and size
from those of other animals.

>You may be right.  But in some of the examples you cite (such as modeling)
>we currently don't have a way to see if cats model internally.  The only
>way we can with humans so far (as far as I know, anyway) is to query
>them as to what cognitive process is occurring.  So while cats might be
>doing just that, we don't know if they really are, and won't until we
>have a more accurate and language-free method of determining if this is
>true.

Introspection (that is, asking people to explain what they are thinking
about) is not a dependable method.  The data are corrupted, because
people may think one way when they don't have to explain it, but have to
come up with another account when they actually describe what they are
doing.  I personally doubt that most people's thoughts about how they
think are anything like what they are actually doing.  Otherwise,
we'd have all this cognitive science stuff solved.  Neuronal recording
and examining behavior from lesioned and non-lesioned people/animals
are the best tools of cognitive science.  And this can be done for
cats as well as people.  Granted, it is difficult to get a cat to do
anything he or she doesn't really want to.

>I agree.  The reason I go on about learning and intelligence in other
>animals is because we aren't very rigorous about what these things are
>and how to study them.  We often rely solely on behaviour without
>regard to the internal activity going on.  I don't think we should
>be surprised if we find out that while some behaviours look the same
>between humans and animals, the motivations or internal mechanisms that
>cause them are very different.  In other words, we're naturally prone
>to anthropomorphism.

That is exactly why cognitive science depends on lesion analysis to
legitimize theories which predict behavior.  In other words, one
may theorize a certain box diagram of a cognitive action with
sensory input coming in, motor activity going out, and lots of
parallel and/or serial paths of cognition in the middle.  If a part
of a person's brain is injured, we examine the behavior of that patient
and compare it with "normal" people.  If we can reasonably explain the
loss of cognitive ability with one box (or more) being incapacitated,
then there is some reason to believe that there is some truth to the
theory.  Of course, it is difficult to find lesions which affect every
box you want to check, and it is possible for one lesion to wipe
out more than one box, or only halfway wipe it out.  But surprisingly,
there are all kinds of people who have weird behavior after brain
damage.  For instance, there are people who cannot pronounce
non-words they read, but can pronounce real words.  And there are people
who can pronounce non-words, but have difficulty understanding real
words they read.  Cognitive scientists consider these results to
indicate two parallel paths to spoken word pronunciation: one
based on a grapheme-to-phoneme conversion, which allows non-words
(such as "sokutad") to be pronounced, and another which is a
grapheme-to-semantics converter that picks up the meaning of words,
followed in series by a semantics-to-phoneme converter.
One or the other path can be damaged by lesions to different
areas of the brain.
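
To make the box-diagram idea concrete, here is a minimal sketch in Python.
It is purely my own toy illustration, not anything from the posts or the
literature: the function names, the tiny lexicon, and the "lesion" flags are
all invented for this example.  It models the two hypothesized routes as
separate functions and shows how knocking out one route predicts the double
dissociation described above: cut the grapheme-to-phoneme route and non-words
fail while known words survive; cut the semantic route and non-words are
still pronounceable but word meaning is lost.

# Hypothetical dual-route reading model (illustration only).
# Route 1: grapheme -> phoneme rules (works for any letter string).
# Route 2: grapheme -> semantics -> phoneme (works only for known words).

LEXICON = {                      # toy grapheme -> (semantics, phoneme) store
    "cat": ("a small feline", "/kat/"),
    "dog": ("a domestic canine", "/dog/"),
}

def grapheme_to_phoneme(word):
    """Rule-based route: crude stand-in for letter-to-sound conversion."""
    return "/" + word + "/"

def semantic_route(word):
    """Lexical route: look up the meaning, then the stored pronunciation."""
    entry = LEXICON.get(word)
    return entry[1] if entry else None

def pronounce(word, lesions=()):
    """Try both routes, skipping any route listed as 'lesioned'."""
    if "semantic" not in lesions:
        result = semantic_route(word)
        if result:
            return result
    if "g2p" not in lesions:
        return grapheme_to_phoneme(word)
    return None                  # neither route available -> no output

print(pronounce("cat"))                          # /kat/   (intact system)
print(pronounce("sokutad"))                      # /sokutad/ via the rule route
print(pronounce("sokutad", lesions=("g2p",)))    # None -- non-words lost
print(pronounce("cat", lesions=("semantic",)))   # /cat/ -- pronounced, meaning lost

The only point of the sketch is that a box diagram plus selective "lesions"
yields concrete behavioral predictions, which is exactly what the patient
studies are being used to test.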
  Anyway, for those of you who are interested, there is a book which
I believe is called "Cognitive Science" from MIT Press (I used it last
year in a course, so I am a little sketchy on the details of who wrote
it and such).  Cognitive Science accepts a computational model of the
brain, and asks the question "How can we truly prove that a theory
of computation in the brain is what is really happening?"

-Thomas Edwards

geb@dsl.pitt.edu (Gordon E. Banks) (09/24/90)

In article <3893@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
>
>Neurons do, but what about cognitive structures?  No doubt we have different
>and more powerful neurally computational (call it mid-level cognition)
>abilities, probably due to some of the specialized neurons in the 
>cerebral cortex (I hope I'm naming it correctly - I ain't no biologist!).

No.  There are no types of neurons found in humans that aren't
found in other animals as well.  It is in the numbers and organization
of the neurons that we need to look for the reasons for our
cognitive superiority, not in the structure of the neurons.

geb@dsl.pitt.edu (Gordon E. Banks) (09/24/90)

>  I think that I shall never see
>  Poetry written by a chimpanzee.

No, but you may have seen modern art painted by one.  A few years
ago someone as a prank had a chimp paint some pictures which 
subsequently got good reviews at an art show, including praise
from Picasso.  When the prank was unveiled, a reporter asked
Picasso if he wasn't embarrassed over his mistake.  He went
into the other room, then came back in shrieking and jumping
about chimp-like, hopped over to the reporter and bit him!

feedback (Bryan Bankhead) (09/25/90)

> I was wondering when somebody was going to bring this up.  I agree, nonverbal
> thought is at least as important as verbal thought.  In fact, Einstein wrote
> that most of his thinking in formulating General Relativity was entirely
> nonverbal -- he played with three and four dimensional contours in his head.
> (Yes, folks, he had an incredible geometric intuition which most of us can't
> match, but I think this was not a special case.)
> 
> --

As an artist who is also a minor computer jock, I am interested in the
relationship between verbal and non-verbal thinking.  In reading the
notebooks of Leonardo Da Vinci I am employing both.  I am processing verbal
messages and turning them into data for use in creating paintings, but at
the same time I cannot learn to paint just by reading books!  Indeed, being
an artist, even a part-time Sunday painter, will let a reader extract far
more information from LDV's notes than someone who isn't, even though much
of the learning in painting is non-verbal.

The fact that these ideas cannot be reduced to a simple dichotomy of
verbal/non-verbal thinking is caused by the inherently massively parallel
processing going on in the brain.  All types of processing are going on at
the same time and interacting with each other.