[comp.ai] What AI is Exactly - Another Update.

pnettlet@gara.une.oz.au (Philip Nettleton) (09/18/90)

Some new people have recently entered this debate so I thought it was
time to repost the definition of an "Intelligent System" that we have
developed so far. Pinning this debate back to its origins, we would
be interested in hearing from anyone with a CONSTRUCTIVE criticism of
any part of the definition or any additions they feel are necessary.
Remember, the underlying assumption is that to be human is not a
necessary condition for being intelligent; this point has been flogged
to death in recent postings.

Let us produce a slightly more refined set of "general requirements"
for the behaviour of an "intelligent system".

----------------------------------------------------------------------
			DEFINITION:
	GENERAL REQUIREMENTS OF AN INTELLIGENT SYSTEM.

a)	The system MUST be able to learn. This implies that the
	system MUST have a memory for learning to be maintained.
	Also learning comes in a number of varieties:

	i)	It MUST be able to learn from its own experiences.
		These can be broken down into further criteria:

		1)	Learning through trial and error.
		2)	Learning through observation.
		3)	Learning through active deduction (see
			reasoning).

	ii)	It SHOULD be able to learn by instruction, but this
		is not necessary. At the very least the system MUST
		have preprogrammed instincts. This is a boot strap
		for the developing intelligence.  Without a starting
		point, the system cannot progress.

b)	The system MUST be autonomous. That is to say, it MUST be
	able to do things by itself (though it may choose to accept
	aid).  This can be dissected as:

	i)	The system MUST be able to affect its environment
		based on its own independent conclusions.

	ii)	The system MUST be its own master and therefore
		doesn't require operator intervention.

	iii)	The system MUST be motivated. It must have needs and
		requirements that can be satisfied by its own
		actions.

c)	The system MUST be able to reason. That is to say, it must
	use some form of deductive reasoning, based on known facts
	and capable of producing insights (deductions) which later
	become known facts.

d)	The system MUST be able to develop self awareness. This is
	related to autonomy, reasoning and learning, but also
	embodies the need for external senses. Without external
	senses there is no way of appreciating the difference between
	"me" and "outside of me". Sensations of pain and
	pleasure can provide motivation.
----------------------------------------------------------------------
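
[Aside: requirement a) can be made concrete with a small sketch. This
is a toy in a modern notation, not part of the definition; the names
(TrialErrorLearner, "lever", the reward scheme) are invented for
illustration. The point is only that behaviour changes because
remembered experience changes, which is why memory is a MUST.]

```python
class TrialErrorLearner:
    """Toy agent that learns by trial and error (requirement a.i.1)."""

    def __init__(self, actions):
        # Memory is required: learned values persist between trials.
        self.memory = {a: 0.0 for a in actions}

    def choose(self):
        # Behave according to what experience has taught so far.
        return max(self.memory, key=self.memory.get)

    def learn(self, action, reward):
        # Trial and error: nudge the remembered value toward the reward.
        self.memory[action] += 0.5 * (reward - self.memory[action])

# Invented environment: "lever" is rewarded, "wait" is not.
agent = TrialErrorLearner(["wait", "lever"])
for _ in range(5):
    for a in list(agent.memory):          # systematically try everything
        agent.learn(a, 1.0 if a == "lever" else 0.0)

print(agent.choose())  # prints: lever
```
----------------------------------------------------------------------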
			DEFINITION OF TERMS.

1)	A "system" CAN be composed of multiple subsystems, each of
	which could be a system in its own right (systems theory).

2)	The "environment" in which the system exists MUST be external
	to the system, but that is as far as the definition of the
	environment goes (it could be computer generated).

3)	The terms "learning", "reasoning" and "autonomy" are
	BEHAVIOURAL characteristics, further supported by our
	understanding (to date) of how they MIGHT work.

4)	The term "self awareness" is based on learning, reasoning
	and autonomy, and is the state where the system is aware
	(has knowledge) of its own existence as separate from its
	environment.

5)	"Intelligence" is a BEHAVIOURAL phenomenon displayed by
	intelligent systems.
----------------------------------------------------------------------

NOTE:	If you step OUTSIDE the boundaries of the "definition of
	terms", your comments will simply be ignored, but feel free to
	add definitions or modify them if it will help clarify the
	"general requirements for an intelligent system".

		With Regards,

				Philip Nettleton,
				Tutor in Computer Science,
				Department of Maths, Stats, and Computing,
				The University of New England,
				Armidale,
				New South Wales,
				2351,
				AUSTRALIA.

forbis@milton.u.washington.edu (Gary Forbis) (09/19/90)

I've continued to think about the current attempt to define "intelligent 
system".  I feel like I am nit-picking.  I have taken a stance in another
conference which makes this minor point important to me right now.

In article <3734@gara.une.oz.au> pnettlet@gara.une.oz.au (Philip Nettleton) writes:
>----------------------------------------------------------------------
>			DEFINITION:
>	GENERAL REQUIREMENTS OF AN INTELLIGENT SYSTEM.
>
>a)	The system MUST be able to learn. This implies that the
>	system MUST have a memory for learning to be maintained.

Provided that the system is in a sufficiently rich environment I would
concede this point.  The problem arises when the system exists in a very
poor environment.  Is intelligence contextual?  That is, could a system
be said to be intelligent in one environment and not in another?

The second complaint is this: once the system HAS learned, can it still
be said to be able to learn, and is this necessary?  After learning has
taken place, would the same response to the same stimulus take the
system out of the realm of intelligence?

Is intelligence the process or the results?

--gary forbis@cac.washington.edu

sarima@tdatirv.UUCP (Stanley Friesen) (09/20/90)

In article <3734@gara.une.oz.au> pnettlet@gara.une.oz.au (Philip Nettleton) writes:
>[it is] time to repost the definition of an "Intelligent System" that we have
>developed so far. Pinning this debate back to its origins, we would
>be interested in hearing from anyone with a CONSTRUCTIVE criticism of
>any part of the definition or any additions they feel are necessary.
...
>Let us produce a slightly more refined "general requirements" for the
>behaviour of an "intelligent system".

O.K., here goes: my general comments on the definition.

In general I think it is very good.  It seems to capture in a fairly clear
and concise way most of my intuitive definition of intelligence.

Now for some detail comments:

>			DEFINITION:
 
>a)	The system MUST be able to learn. This implies that the
>	system MUST have a memory for learning to be maintained.

>	ii)	It SHOULD be able to learn by instruction, but this
>		is not necessary. At the very least the system MUST
>		have preprogrammed instincts. This is a boot strap
>		for the developing intelligence.  Without a starting
>		point, the system cannot progress.

I rather suspect that instruction will turn out to be a special case of
learning from observation.  Or it may be a composite of all three of the
other modes of learning.  I certainly doubt it is actually a distinct mode
in its own right.  However, mentioning it here is probably appropriate, since
this is neither certain, nor entirely obvious.

I think some sort of definition of 'instinct' might be in order below.  It
should probably be fairly general, so as to allow for a wide variety of
implementations. [For instance, a chess program with prewired rules and
basic moves (including opening books), could be said to have an instinctive
knowledge of chess - if it could then expand its repertoire, and improve
its game over time, I would say it was learning]
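
That bracketed chess example might be sketched like so (a toy with
invented names, not a real engine): the prewired book and the fallback
rule play the part of instinct, and expanding the book plays the part
of learning.

```python
class ToyChessPlayer:
    """Caricature of the chess example: instinct plus learning."""

    def __init__(self):
        # "Instinct": knowledge the system has before any experience.
        self.opening_book = {"start": "e4"}

    def reply(self, position):
        # The fixed fallback is also instinct: a prewired general rule.
        return self.opening_book.get(position, "develop a piece")

    def learn_line(self, position, move):
        # "Learning": the repertoire expands with experience.
        self.opening_book[position] = move

player = ToyChessPlayer()
print(player.reply("start"))          # instinctive reply: e4
player.learn_line("sicilian", "Nf3")  # a line picked up from play
print(player.reply("sicilian"))       # learned reply: Nf3
```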

>b)	The system MUST be autonomous.  ...
>	This can be dissected as:
 
>	i)	The system MUST be able to affect its environment
>		based on its own independent conclusions.

Perhaps 'internal processes' might be better than 'independent conclusions'.
As written, this assumes a particular class of models of decision making,
which may well be too restrictive.

>
>	ii)	The system MUST be its own master and therefore
>		doesn't require operator intervention.

Maybe a little more precision about what it means to 'be its own master'.
This could be construed so as to rule out any form of subordinate
relationship, such as employees, servants, slaves, etc.  To say that human
slaves are/were
unintelligent is certainly not what you intended.  [I suspect I agree with
what you really mean, but it is not really clear here]

>	iii)	The system MUST be motivated. It must have needs and
>		requirements that can be satisfied by its own
>		actions.

This is one that may not really be necessary.  It could either be an emergent
result of all the others, or it could be irrelevant to intelligence.
I am somewhat uncertain here - does anyone have any other comments?

>c)	The system MUST be able to reason. That is to say, it must
>	use some form of deductive reasoning, based on known facts
>	and capable of producing insights (deductions) which later
>	become known facts.

Limiting reasoning to deductive reasoning is almost certainly too restrictive.
Most existing intelligent animals, and humans in particular, use analogical
reasoning much of the time (based on similarities and patterns rather than
principles and derivations). Almost any sort of extrapolation should be
included.  Also, internal 'manipulation' of mental models of reality to
estimate the effect of various actions is one of humanity's most powerful
forms of reasoning.  We call it "imagination", and "mental rehearsal" and
"planning" and many other things.

I do agree that reasoning is a critical component of intelligence, but the
definition of reasoning needs to be general enough to cover most types used
by humans in day-to-day activities.
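
On the narrow deductive reading that c) currently gives, the requirement
can be sketched as a toy forward-chainer (invented names and facts; a
sketch of one reasoning style, not a claim that this is how reasoning
must work): each deduction becomes a known fact that later deductions
can build on.

```python
def forward_chain(facts, rules):
    """Apply (premises -> conclusion) rules until nothing new follows."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)  # the insight becomes a known fact
                changed = True
    return facts

# Invented toy knowledge base.
rules = [
    (("rains",), "ground_wet"),
    (("ground_wet", "cold"), "ground_icy"),
]
print(sorted(forward_chain({"rains", "cold"}, rules)))
# prints: ['cold', 'ground_icy', 'ground_wet', 'rains']
```

Note that "ground_icy" is only reachable through the intermediate
deduction "ground_wet" - exactly the "deductions later become known
facts" behaviour the definition asks for.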

>d)	The system MUST be able to develop self awareness. This is
>	related to autonomy, reasoning and learning, but also
>	embodies the need for external senses. Without external
>	senses there is no way of appreciating the difference between
>	"me" and "outside of me". Sensations of pain and
>	pleasure can provide motivation.

I have mentioned this before, but I believe self awareness comes from
self-monitoring.  That is, from 'sensory' input about internal state as
well as the external environment.  In humans, and probably most animals,
this includes
a kinesthetic sense, pain, hunger (and other bodily need sensitivities).
Self-awareness would then develop from the observation that the internal
senses follow different rules than the external ones.
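
That observation could be caricatured in code (invented names and data;
a sketch of the idea, not a theory of consciousness): a sense channel
whose readings are fully determined by the system's own actions follows
different rules than one driven by the environment, and that difference
is a possible basis for the me/not-me boundary.

```python
def classify_senses(log):
    """Split sense channels by whether the system's own action fixes them."""
    internal, external = [], []
    for ch in log[0]["senses"]:
        seen = {}
        deterministic = True
        for entry in log:
            action, value = entry["action"], entry["senses"][ch]
            if action in seen and seen[action] != value:
                deterministic = False  # environment, not self, drives it
                break
            seen[action] = value
        (internal if deterministic else external).append(ch)
    return internal, external

# Invented log: 'effort' tracks the action exactly; 'light' varies freely.
log = [
    {"action": "move", "senses": {"effort": 1, "light": 3}},
    {"action": "rest", "senses": {"effort": 0, "light": 7}},
    {"action": "move", "senses": {"effort": 1, "light": 2}},
    {"action": "rest", "senses": {"effort": 0, "light": 9}},
]
print(classify_senses(log))  # prints: (['effort'], ['light'])
```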

-----------------------------
uunet!tdatirv!sarima				(Stanley Friesen)