[comp.ai] Adaptive vs. intelligent

mbb@cbnewsh.ATT.COM (martin.b.brilliant) (06/10/89)

From article <6605@sdcsvax.UCSD.Edu>, by pluto@beowulf.ucsd.edu (Mark E. P. Plutowski):
> To my question->
>>>	To what degree do you agree that learning is
>>>	a necessary condition of intelligence? 
>
> Frans van Otten writes:
>>What do you mean by a "necessary condition" ?  ...
>>...  I think that intelligence is in itself
>>static: it doesn't require learning ....
>>...  But by learning the intelligent being
>>can become more intelligent, whatever that might mean....
>
> All good points.  Let me clarify my position:
> Intelligence is dynamic.  A system that is unable
> to learn from its mistakes is not intelligent.  
....
> Now, the question becomes:  What is the minimal amount
> of simultaneous knowledge and learning necessary to qualify
> for "intelligent behaviour?"
>
> Both are necessary since an adaptive system could "learn"
> very quickly without displaying intelligent behaviour,
> for instance, by forgetting simple basics.

Many years ago, I saw a definition of intelligence that said simply
that intelligence is the ability to learn.

If you have no prior (hardwired, built-in, etc.) knowledge you can't
learn.  But if you have no prior (hard-wired, built-in, etc.) mechanism
for learning, you can't learn either.  So to design a learning system
you have to build in both, but to test it you only have to test the
ability to learn.
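
A toy rendering of that point (an added illustration, not from the post):
the system below has both a built-in starting estimate (prior knowledge)
and a built-in update rule (a learning mechanism), but the test at the end
only checks the ability to learn, i.e. that error shrinks with experience.
The numbers are arbitrary assumptions.

    # Both ingredients are built in: a prior estimate (hardwired knowledge)
    # and an update rule (hardwired learning mechanism).  The "test" checks
    # only the ability to learn, i.e. that error shrinks with experience.
    prior_estimate = 0.0      # built-in knowledge: a starting guess
    learning_rate = 0.2       # built-in mechanism: how to revise the guess
    truth = 5.0               # illustrative environment, not part of the system

    estimate = prior_estimate
    error_before = abs(truth - estimate)
    for _ in range(50):
        estimate += learning_rate * (truth - estimate)   # learn from experience
    error_after = abs(truth - estimate)
    assert error_after < error_before                    # test: did it learn?
    print(round(estimate, 3))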

In reference to "adaptive systems," I wonder what the difference is
between an intelligent system and an adaptive system.  In one sense of
the term "adaptive," they are the same.  But "adaptive" can mean
"designed to be adaptive," rather than "naturally adaptive."  Systems
we design to be adaptive have strictly limited capacity for learning.
They adapt to certain conditions in the environment, and then stop.
They learn nothing further until there is a change in the conditions
they were designed to adapt to, and even then they don't learn anything
new, they just unlearn and relearn the same old thing.  People, on the
other hand, "never stop learning" (old saw).  A system that always
learns more, building generalization on generalization, or innovation
on innovation, is intelligent.

Let me offer an example.  I ask whether the system of evolution, that
is, the origin of species by variation and natural selection, is
intelligent.  I claim that there is a sense in which it is.  It builds
generalization on generalization, and innovation on innovation.  It
does not forget useful things, but it rejects mistakes.  On the other
side, it does not remember mistakes so as to avoid making them again;
would that disqualify its claim to intelligence?  Does that make it
"adaptive" as distinct from "intelligent?"

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858
Holmdel, NJ 07733	att!hounx!marty1 or marty1@hounx.ATT.COM

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

jwi@lzfme.att.com (J.WINER) (06/12/89)

Mark E. P. Plutowski writes:

| |||	To what degree do you agree that learning is
| |||	a necessary condition of intelligence? 

| | Now, the question becomes:  What is the minimal amount
| | of simultaneous knowledge and learning necessary to qualify
| | for "intelligent behaviour?"
| |
| | Both are necessary since an adaptive system could "learn"
| | very quickly without displaying intelligent behaviour,
| | for instance, by forgetting simple basics.

martin.b.brilliant writes:

| Many years ago, I saw a definition of intelligence that said simply
| that intelligence is the ability to learn.
...
| In reference to "adaptive systems," I wonder what the difference is
| between an intelligent system and an adaptive system.  In one sense of
| the term "adaptive," they are the same.  But "adaptive" can mean
| "designed to be adaptive," rather than "naturally adaptive."  Systems
| we design to be adaptive have strictly limited capacity for learning.
| They adapt to certain conditions in the environment, and then stop.
| They learn nothing further until there is a change in the conditions
| they were designed to adapt to, and even then they don't learn anything
| new, they just unlearn and relearn the same old thing.  People, on the
| other hand, "never stop learning" (old saw).  A system that always
| learns more, building generalization on generalization, or innovation
| on innovation, is intelligent.

Apparently, you think that a system that learns unnecessary (and
possibly incorrect or un-useful) things is intelligent? What about
systems (like people on the radical left, right or center) who
"never stop learning" *incorrect* things? A better definition of an
intelligent system might be one that can cope with unanticipated
(or even random) situations.

Jim Winer ..!lzfme!jwi 

I believe in absolute freedom of the press.
        Pax Probiscus!  Sturgeon's Law (Revised): 98.89%
        of everything is drek (1.11% is peanut butter).
        Rarely able to send an email reply successfully.
        The opinions expressed here are not necessarily  
Those persons who advocate censorship offend my religion.

pluto@beowulf.ucsd.edu (Mark E. P. Plutowski) (06/13/89)

To my question:

>> What is the minimal amount of simultaneous knowledge 
>> and learning necessary to qualify
>> for "intelligent behaviour?"


martin.b.brilliant writes:

>I ask whether the system of evolution, that is, 
>the origin of species by variation and natural selection, is
>intelligent.  I claim that there is a sense in which it is.  

   I agree.  Certainly this is among the many wondrous things
   which persuade people to consider the possibility of intelligence 
   greater than our own.  
 
>It builds
>generalization on generalization, and innovation on innovation.  It
>does not forget useful things, but it rejects mistakes.  On the other
>side, it does not remember mistakes so as to avoid making them again;
>would that disqualify its claim to intelligence?  Does that make it
>"adaptive" as distinct from "intelligent?"


   Evolution may be intelligent, by some definitions of
   the term.  But, it is not a lifeform, it is not embodied 
   within a living thing, 'it' is a process larger than all of us.

   The changes due to evolution may only be called
   "adaptation" since the plasticity of such changes
   is over generations, not within one lifetime.
   I would limit "intelligence" in living things to those
   which can learn within the span of a lifetime.

   If we are not so egotistical (or bound by religious belief)
   as to believe we humans are the only intelligent life on earth,
   and if we agree that evolution is the reason for life on earth,
   then merging your question with mine (by recombinant splicing)
   results in:

   When is it possible for evolution to form structures 
   which are plastic enough to allow adaptation within
   one lifetime?  And then, when does such adaptation qualify
   for "intelligence?"   

pluto@beowulf.ucsd.edu (Mark E. P. Plutowski) (06/13/89)

In article <1319@cbnewsh.ATT.COM> mbb@cbnewsh.ATT.COM (martin.b.brilliant) writes:
>In reference to "adaptive systems," I wonder what the difference is
>between an intelligent system and an adaptive system.  

I conjecture that the only difference is resources - an
adaptive system need not have a knowledge store, while an
intelligent system must.
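
A toy contrast for that conjecture (an added illustration, not the
poster's): both systems below adapt their internal state to incoming
events, but only the second keeps a knowledge store it can consult
afterwards, e.g. to recognise an event it has met before.  The classes and
events are illustrative assumptions.

    # Both adapt; only one has a knowledge store, sketching the conjectured
    # distinction between "adaptive" and "intelligent".
    class Adaptive:
        def __init__(self):
            self.level = 0.0
        def observe(self, x):
            self.level += 0.3 * (x - self.level)     # adapts, stores nothing

    class WithKnowledgeStore:
        def __init__(self):
            self.level = 0.0
            self.seen = set()                        # the knowledge store
        def observe(self, x):
            self.level += 0.3 * (x - self.level)
            self.seen.add(x)
        def recognises(self, x):
            return x in self.seen                    # uses stored knowledge

    a, b = Adaptive(), WithKnowledgeStore()
    for event in (1, 2, 3, 2):
        a.observe(event)
        b.observe(event)
    print(b.recognises(2), b.recognises(7))          # True False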

mv10801@msi-s6 (Jonathan Marshall [Learning Center]) (06/13/89)

Intelligence is not just the ability to learn or adapt.  I would like
to claim that intelligent organisms all share 3 fundamental properties:
	1. they can adapt (or learn),
	2. they self-organize, and
	3. they have initiative.

A programmable calculator can learn, but it probably doesn't
self-organize, and it doesn't have initiative -- so we wouldn't call
it intelligent.  Even if a robot were somehow programmed to have
initiative, say to visually seek electric power outlets and recharge,
we wouldn't ennoble it with the term "intelligence" -- unless it could
learn and unless its learning were accomplished through a process of
self-organization.

A pigeon can learn, self-organizes what it has learned, and has some
degree of initiative.  So, we would say that a pigeon has some
intelligence.

pluto@beowulf.ucsd.edu (Mark E. P. Plutowski) (06/14/89)

In article <13493@umn-cs.CS.UMN.EDU> mv10801@uc.msc.umn.edu (Jonathan Marshall) writes:
>Intelligence is not just the ability to learn or adapt.  I would like
>to claim that intelligent organisms all share 3 fundamental 
>properties:
>	1. they can adapt (or learn),
>	2. they self-organize, and
>	3. they have initiative.

I can live with this list, although I think 
there is some overlap among the 3 items.
But whether or not it is minimal (or complete),
it provides bounding criteria for this 
discussion. 

OK, then, by the criteria stated above, 
(the Marshall Law ?) what is the smallest, 
or, most primitively evolved species which meets 
these criteria?  

Does a honeybee meet criterion #2 ?  

pluto@beowulf.ucsd.edu (Mark E. P. Plutowski) (06/14/89)

In article <13493@umn-cs.CS.UMN.EDU> mv10801@uc.msc.umn.edu (Jonathan Marshall) writes:
>Intelligence is not just the ability to learn or adapt.  I would like
>to claim that intelligent organisms all share 3 fundamental properties:
>	1. they can adapt (or learn),
>	2. they self-organize, and
>	3. they have initiative.

I would propose a fourth (on a trial basis):

4. They all model their environments.

Caveats: It may be a feature coincidental 
with intelligence, and not a necessary condition 
of intelligence.  It may not apply to all life, 
such as non-vertebrate  species. Perhaps it would 
be better stated as a distinguishing feature among 
organisms which meet the other 3 criteria.  

mkamer@cs.columbia.edu (Matthew Kamerman) (06/14/89)

It is intriguing to observe evolved intelligences discussing (albeit 
sometimes tangentially) what should be considered intelligent behavior 
in their tools.  With regard to tools one would hope that definitions would
have something to do with the goals that "artificially intelligent" systems
are supposed to fulfill for their creators.  Perhaps before arguing the
merits of the Turing Test we might work towards a consensus on different 
groups of design goals (evolved intelligence simulators, friendly
interfaces, data base searchers, goal-constraint resolvers, etc.).  If such
a consensus can be reached then fruitful discussions of HOW WELL system "A"
satisfies (can theoretically satisfy, etc.) the requirements of a class "B"
AI system might emerge.

With regard to evolved intelligence one might again consider goals (albeit 
this time as a cognitive "crutch").  The sole "goal" of an evolved system
is increasing its "inclusive fitness": the sum, over space, time, and
systems, of each system weighted by its degree of similarity to the
evolved system.  To this end evolved systems exhibit locally counter-
entropic behaviors which will be called actions.  A possible definition of 
evolved intelligence might be the degree to which energy invested in
internal rearrangements which have no significant direct effect on the 
system's gross physical structure increases inclusive fitness.

More directly, might evolved intelligence be defined as the ability to 
decide amongst actions more effectively (hence forth effectiveness will be 
understood to refer to inclusive fitness) than by unbiased random
selection?  Note that processing time is often a relevant factor.  A rapid 
approximation can be far more effective than a tedious accuracy.  Also, 
intelligence in terms of the marginal value of energy invested in thinking
may vary from one class of problems to another.  Essential to measuring an
evolved system's intelligence is an understanding of the types of problems 
relevant to its survival.
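
One crude way to operationalize "better than unbiased random selection" (a
made-up task added here, not Kamerman's): each round one of three actions
pays off and a noisy cue hints at which; a system that uses the cue beats
the unbiased-random baseline, and the margin is a rough measure of its
effectiveness.  The cue, noise level, and payoff are assumptions.

    import random

    # Made-up task illustrating the "better than unbiased random selection"
    # criterion: decide among three actions, with or without using the cue.
    random.seed(3)

    def success_rate(policy, rounds=10000):
        wins = 0
        for _ in range(rounds):
            good = random.randrange(3)                    # which action pays off
            cue = good if random.random() < 0.9 else random.randrange(3)
            wins += policy(cue) == good
        return wins / rounds

    random_policy = lambda cue: random.randrange(3)       # unbiased baseline
    cue_policy    = lambda cue: cue                       # uses available information
    print(success_rate(random_policy), success_rate(cue_policy))   # ~0.33 vs ~0.93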

In addition to defining and measuring intelligence it might also be useful
to discover contributing factors and quantify their relationship to a given
system's intelligence.  Some factors might be speed, freedom from noise,
breadth in terms of the number of concurrent threads at the point at which
the marginal value of an additional thread drops to zero (Yes, I'm still 
referring to evolved intelligences!), depth in terms of the maximum number
of steps in a given thread before the marginal value of an additional step
drops to zero, and experience.

I'm at a loss for a good, functional definition of experience (any ideas?),
but here are some possible contributing factors.  Experience supposes
either a memory (a low noise mechanism, functionally different from the main
processor, from which threads can be resumed or rapidly, in terms of
original creation times, recreated), or a very low noise processor able to
hold a large number of concurrent threads.  In either, the ability to
amortize thinking costs by sharing information amongst threads past,
present, and future is a significant contributing factor.  Thinking
invested in such "meta"-tasks as indexing, developing indexing heuristics
(learning how to learn), processor resource allocation (attention), and
developing processor resource allocation heuristics (learning to
prioritize), might be the most effective thinking although it results in 
no direct overt activity whatsoever.  Further, if a memory is used, speed,
freedom from noise, parallel access, capacity, modifiability, and other
device related considerations might be added to the contributing factors.  

This group seems dedicated to issues more philosophical than searching
for quantifiable functional definitions related to a system's reason for
being.  So, I'd welcome the opportunity for an E-Mail dialogue with any
moved to enlarge upon, detract from, modify or otherwise discuss  ;-)
these (ideas?)..  E-Mail posted after June 21 should be directed to 
mkamer@ibm.com as it may be weeks before I log on to this account again.

philo@pnet51.cts.com (Scott Burke) (06/14/89)

>>I ask whether the system of evolution, that is, 
>>the origin of species by variation and natural selection, is
>>intelligent.  I claim that there is a sense in which it is.  
>
>   Evolution may be intelligent, by some definitions of
>   the term.  But, it is not a lifeform, it is not embodied 
>   within a living thing, 'it' is a process larger than all of us.
 
   Must intelligence be embodied in a "lifeform"?  This objection seems
aimed strictly at denying the term to any but organic systems which
display intelligence.  Why is it any more valid to apply the term to an
organic system than to an inorganic one, or to one composed of discrete
organic entities?  There is no real reason, other than that it provides
an artificial cutoff for political purposes.  In Douglas Hofstadter's
book there is mention of a gedanken experiment which exposes this
particular bias:
   Suppose that we have the capability to duplicate the structure and exact
function of individual neurons with a mechanical/silicon device, and we go
about replacing the neurons in a man's brain one by one.  At what point does it
cease to possess the necessary qualities for thinking?  (Or in this case
"intelligence")
   It would seem logical to conclude that the system of devices would be every
bit as "intelligent" as the original "brain" which was in the lifeform -- and
if the silicon-replaced brain could be extracted from the organism, it would
function as an independently intelligent non-lifeform.
   Denying a system -- any system -- the description of "intelligent" on the
basis of its form (organic, discrete organic units, or silicon) has little or
no *logical* basis.  Without knowing what it is ABOUT ORGANIC LIFEFORMS that
makes their "mind behavior" intelligent, it is senseless to presume that those
qualities are not inherent in other systems simply because we are not
accustomed to ascribing the trait to them (i.e. evolution).
   What this objection ultimately reduces to is saying that a machine can't be
intelligent "by virtue of it being a machine", and not because it doesn't
display the requisite characteristics.  It is the requisite characteristics
which matter -- not the form.
 
>   The changes due to evolution may only be called
>   "adaptation" since the plasticity of such changes
>   is over generations, not within one lifetime.
>   I would limit "intelligence" in living things to those
 
   Again, the question arises: is it a logical necessity that intelligence
perform according to some arbitrary specification (i.e. "as fast as humans
can") in order for it to be intelligence?  If we were to slow a man's brain
down such that its neurons interacted at half their normal speed (but in all
other respects functioned the same), would there be any reason to deny that he
was still thinking/intelligent?  What if you halved that speed again?  And
again?  At what point does he cease to be intelligent?  In short, the function
of the system is still the same; if we are to deny intelligence on the basis
of a non-functional criterion, we must show some logical necessity between the
criterion and the function (intelligence/thinking).
   Perhaps there is a logically necessary reason why the functional quality
intelligence is displayed by systems having the specific characteristics of
being lifeforms, or being "as fast as lifeforms", but if there is, it is not
obvious.  It is a part of the question of AI and philosophy/psychology to
determine the real answers to these questions, rather than simply declaring
them to be self evident truths and be done with it.

UUCP: {amdahl!bungia, uunet!rosevax, chinet, killer}!orbit!pnet51!philo
ARPA: crash!orbit!pnet51!philo@nosc.mil
INET: philo@pnet51.cts.com

mbb@cbnewsh.ATT.COM (martin.b.brilliant) (06/14/89)

From article <13493@umn-cs.CS.UMN.EDU>, by mv10801@msi-s6 (Jonathan Marshall [Learning Center]):
> Intelligence is not just the ability to learn or adapt.  I would like
> to claim that intelligent organisms all share 3 fundamental properties:
> 	1. they can adapt (or learn),
> 	2. they self-organize, and
> 	3. they have initiative.

I would like at this point to hold fast to what I think is progress,
namely, a general agreement that "intelligent" systems form a subset of
"adaptive" systems.  That is, "adaptive" is a necessary, but not
sufficient, condition for "intelligent."

> A programmable calculator can learn, but it probably doesn't
> self-organize, and it doesn't have initiative -- so we wouldn't call
> it intelligent.....

I have always had trouble with the use of the word "learn" in referring
to programmable calculators.  A calculator in "learn" mode does not
learn, in the sense that a rat in one of B. F. Skinner's mazes is
learning.  It is only being programmed.  And I think Jonathan Marshall
has shown why.  I think "initiative" is the key.  Skinner's rats learn
by trial and experience, which presupposes initiative, but the
calculator does not have initiative.

But I don't know exactly what "self-organize" means.  May I try to
guess by filling in blanks?  I suggested earlier that an intelligent
system is different from a merely adaptive one in that it never stops
learning.  What I meant was that after it has learned all there is to
know at a certain level of organization of knowledge, it asks questions
at a higher level and proceeds to answer them.  Even if it does not
overtly ask questions, it builds ever higher levels of generalization
in its knowledge base.  That is something a power-seeking robot
(electric, not political, power) does not do.  Is that what
self-organizing means?

Of course, all these terms are hard to define.  Initiative is just
another word for free will, which has been the subject of endless
discussion here.  Even "adaptive" is going to be hard to define.
But we can agree that if we knew the definitions, the words would
have the following relationship:

If a system is intelligent, it must be adaptive and self-organizing and
have free will.  And if it is adaptive and self-organizing and has free
will, we would probably call it intelligent.

Now let's try this out on the evolution system, Darwin's old protocol
of variation and natural selection.   Variation gives it initiative.
Natural selection makes it adaptive.  And I would venture to suggest
that the leap from single cells to multicelled organisms, as well as
the leap in the other direction from simple cells to eukaryote
cells-within-cells, demonstrate self-organization, in the sense of
building generalization upon generalization.  So evolution might be an
intelligent system.  But though it has a knowledge base, it doesn't use
it to model its environment, which is an additional criterion that Mark
Plutowski suggested.  So maybe only people are intelligent, after all.

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858
Holmdel, NJ 07733	att!hounx!marty1 or marty1@hounx.ATT.COM

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

cam@edai.ed.ac.uk (Chris Malcolm cam@uk.ac.ed.edai 031 667 1011 x2550) (06/15/89)

In article <6626@sdcsvax.UCSD.Edu> pluto@beowulf.UCSD.EDU (Mark E. P. Plutowski) writes:
>   Evolution may be intelligent, by some definitions of
>   the term.  But, it is not a lifeform, it is not embodied 
>   within a living thing, ...
>

Are you sure about that? It certainly doesn't look like a lifeform, or
even a thing, if you happen to be around 2 meters tall, with a lifespan
of threescore and ten years, and a specious present of a largish
fraction of a second. But then, if a lifeform lived for millions of
years, minimally occupied a whole planet, and had a specious present of
thousands of years, for example, it would be pretty hard for us
ephemeral hot-heads even to notice it, let alone regard it as a thing of
some kind.
-- 
Chris Malcolm    cam@uk.ac.ed.edai   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK		

pluto@beowulf.ucsd.edu (Mark E. P. Plutowski) (06/16/89)

In article <1430@cbnewsh.ATT.COM> mbb@cbnewsh.ATT.COM (martin.b.brilliant) writes:
>From article <13493@umn-cs.CS.UMN.EDU>, by mv10801@msi-s6 (Jonathan Marshall [Learning Center]):
>> Intelligence is not just the ability to learn or adapt.  I would like
>> ...claim ...intelligent organisms all share 3 fundamental properties:
>> 	1. they can adapt (or learn),
>> 	2. they self-organize, and
>> 	3. they have initiative.
		.
		.
		.
>I would like at this point to hold fast to what I think is progress,
>namely, a general agreement that "intelligent" systems form a subset of
>"adaptive" systems.  That is, "adaptive" is a necessary, but not
>sufficient, condition for "intelligent."
>Of course, all these terms are hard to define.  Initiative is just
>another word for free will, which has been the subject of endless
>discussion here.  Even "adaptive" is going to be hard to define.

	We may be able to avoid the "free will tarpit" by
	sticking to "initiative" or, by substituting 
	another word that more correctly defines the notion
	of having impetus, a purpose, an agenda.  This
	avoids the "free will" question in that something
	could have such a strong motivation to act, that it 
	really has no choice but to perform a narrow range
	of actions -- practically no free will.   Or, it
	might have a strong motivation, yet is able to
	pick and choose, rationally or not, among the choices
	such that it meets the criterion of "free will." 
	In both cases, motivation exists, while free will may
	or may not.  Noting the vigor (and lack of progress)
	in the free will discussion, it is best to avoid using
	the term in our definition.  So long as we agree on
	the need for "a motivation to satisfy goals, needs, 
	and tendencies"  initiative should suffice - what do you think?

	Self-organization was another term M.B.Brilliant wished
	to discuss.  The most concise definition I can think of
	is to describe what it is not: supervised learning.
	That is, purely autonomous learning without need for
	positive or negative examples for training purposes.
	However, it is also commonly used to describe systems which 
	dynamically allocate internal resources in modelling
	the external world - the best example of which is the
	mapping of tactile sensory input onto somatotopographic
	maps - where the mapping of resources is performed 
	purely according to the distribution of occurrences of 
	environmental events, without need for supervision.  
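
A minimal sketch of that kind of self-organization (an added toy, not
Plutowski's; the 1-D map and the synthetic input distribution are
assumptions): units compete for each input event, and the winner plus its
neighbours move toward it, so the units end up allocated according to the
distribution of events with no supervision or labelled examples.

    import random

    # Competitive learning as a toy model of self-organization: no labels, no
    # supervision; resources (units) end up allocated according to the
    # distribution of environmental events.
    random.seed(0)
    n_units = 10
    weights = [random.random() for _ in range(n_units)]      # the initial "map"

    for _ in range(5000):
        # events fall mostly in [0.0, 0.5): a non-uniform environment
        x = random.random() * (0.5 if random.random() < 0.8 else 1.0)
        winner = min(range(n_units), key=lambda i: abs(weights[i] - x))
        for i in range(max(0, winner - 1), min(n_units, winner + 2)):
            weights[i] += 0.1 * (x - weights[i])              # pull winner and neighbours
    print(sorted(round(w, 2) for w in weights))
    # most units settle in the densely sampled half of the range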

pluto@beowulf.ucsd.edu (Mark E. P. Plutowski) (06/17/89)

  In article Chris Malcolm writes:
  >In article Mark E. P. Plutowski writes:
  >>   Evolution may be intelligent, by some definitions of
  >>   the term.  But, it is not a lifeform, it is not embodied 
  >>   within a living thing, ...
  >Are you sure about that? It certainly doesn't look like a lifeform,...
  >... But then, if a lifeform lived for millions of
  >years, minimally occupied a whole planet, and had a specious present of
  >thousands of years, for example, it would be pretty hard for us
  >ephemeral hot-heads even to notice it, let alone regard it as a thing of
  >some kind.


I would enjoy a discussion of the Gaia theory (elsewhere),
but the pertinent topic here is a question: Is computation = cognition?

This particular discussion, "Subject: Re: Adaptive vs. intelligent"  
focusses on the role of adaptation.  So far, we have a couple 
constructive suggestions, such as:  

A.	Discuss the scope of the following criteria
	w.r.t. definitions of intelligence:
	1) adaptation (within its lifetime)  
	2) self-organization 
	3) initiative 

B.	Consider the role of modelling the external environment:
	o Is it a necessary condition of intelligence?
	o If so, does this exclude adaptive processes 
	  which (so far as we know) do not internally model 
	  their external environment (such as evolution) 
	  from meeting the proposed criteria?

dsr@stl.stc.co.uk (David Riches) (06/19/89)

I belonged to an Alvey project called Adaptive Intelligent Dialogues
in which we investigated Adaptation.  During the course of this 4 year
project we produced a taxonomy of adaptive systems.  This is
reproduced below.  The levels refer to the 'level of adaptivity' with
corresponding meanings in the computer world and w.r.t. evolution.

Much of this work was done by Pete Totterdell (pc1pat@ibm.sheffield.ac.uk)
see; Totterdell, P.A.; Norman, M.A and Browne, D.P.; "Levels of
adaptivity in interface design"; HCI Interact'87, 1987, pp 715-722.

Also the project (which finished October 1988) is producing a book on
this subject and further information can be obtained from Prof. Mike Norman
Hull University, Hull, England. (mike@isdg.cs.hull.ac.uk)

---------------------------------------------------------------------
Levels	Computers	Evolution	Features

0.5	Tailorable/	-		Deferred selection
	Adaptable

1	Adaptive	Tropism/	Apparent Learning
			reflexes	(i.e. fully determined by design)
					Discrimination

2	Self-Regulating	Operant		Learning; Varied responses
			conditioning	selected for different situations;
					Evaluation by trial and error

3	Self-Mediating	Internal	Planning, Problem Solving;
			evaluation	rule-mediated representation;
					Initial evaluation internal to
					system

4	Self-Modifying	Abstraction	Evaluating the evaluation;
					Generalisation; Meta-knowledge
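
For convenience, here is the same taxonomy transcribed as a small lookup
table (the code and field names are added here; the content simply restates
the table above):

    # The taxonomy above as a lookup table: level -> (computer term,
    # evolution analogue, features).
    ADAPTIVE_LEVELS = {
        0.5: ("Tailorable/Adaptable", None,
              "Deferred selection"),
        1:   ("Adaptive", "Tropism/reflexes",
              "Apparent learning (fully determined by design); discrimination"),
        2:   ("Self-Regulating", "Operant conditioning",
              "Learning; varied responses selected for different situations; "
              "evaluation by trial and error"),
        3:   ("Self-Mediating", "Internal evaluation",
              "Planning, problem solving; rule-mediated representation; "
              "initial evaluation internal to system"),
        4:   ("Self-Modifying", "Abstraction",
              "Evaluating the evaluation; generalisation; meta-knowledge"),
    }

    for level, (computer, evolution, features) in sorted(ADAPTIVE_LEVELS.items()):
        print(level, computer, evolution, features, sep=" | ")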


If you need any more info. then let me know.

   Dave Riches
   PSS:    dsr@stl.stc.co.uk
   ARPA:   dsr%stl.stc.co.uk@earn-relay.ac.uk
   Smail:  Software Design Centre, (Dept. 103, T2 West), 
	   STC Technology Ltd., London Road,
	   Harlow, Essex. CM17 9NA.  England
   Phone:  +44 (0)279-29531 x2496

mbb@cbnewsh.ATT.COM (martin.b.brilliant) (06/19/89)

From article <1516@stl.stc.co.uk>, by dsr@stl.stc.co.uk (David Riches):
| I belonged to an Alvey project called Adaptive Intelligent Dialogues
| in which we investigated Adaptation.  During the course of this 4 year
| project we produced a taxonomy of adaptive systems.....
| ---------------------------------------------------------------------
| Levels  Computers	Evolution	Features
| 
| 0.5	Tailorable/	-		Deferred selection
| 	Adaptable
| 
| 1	Adaptive	Tropism/	Apparent Learning
| 			reflexes	(i.e. fully determined by design)
| 					Discrimination
| 
| 2	Self-Regulating	Operant		Learning; Varied responses
| 			conditioning	selected for different situations;
| 					Evaluation by trial and error
| 
| 3	Self-Mediating	Internal	Planning, Problem Solving;
| 			evaluation	rule-mediated representation;
| 					Initial evaluation internal to
| 					system
| 
| 4	Self-Modifying	Abstraction	Evaluating the evaluation;
| 					Generalisation; Meta-knowledge

I had to sit on this for a while before answering.  I don't understand 
all the terms, but the basic idea seems at least interesting.  Coming
out of a 4-year AI project gives it a lot of credibility.

In terms of this taxonomy, where would the system of evolution fit in?
It's at least a 2, since it does trial and error in a big way.  I'm not
sure it goes any higher, because it doesn't do any forward planning. 
But on the other hand, it has in the past modified its own memory
structure, so it might qualify as a 4.  Any ideas?

M. B. Brilliant					Marty
AT&T-BL HO 3D-520	(201) 949-1858
Holmdel, NJ 07733	att!hounx!marty1 or marty1@hounx.ATT.COM

Disclaimer: Opinions stated herein are mine unless and until my employer
	    explicitly claims them; then I lose all rights to them.

briand@infmx.UUCP (brian donat) (06/21/89)

> David Riches

>I belonged to an Alvey project called Adaptive Intelligent Dialogues
>in which we investigated Adaptation.  During the course of this 4 year
>project we produced a taxonomy of adaptive systems.  This is
>reproduced below.  The levels refer to the 'level of adaptivity' with
>corresponding meanings in the computer world and w.r.t. evolution.

>---------------------------------------------------------------------
>Levels	Computers	Evolution	Features

>0.5	Tailorable/	-		Deferred selection
>	Adaptable

OK, so here we have a system which (by the term tailorable), is assumed
to solve problems based upon what functionally is a rigid switch, for a
decision maker.  Each activation of the switch gives a defined output, 
whether the switch is activated by a single or multiple inputs.

>1	Adaptive	Tropism/	Apparent Learning
>			reflexes	(i.e. fully determined by design)
>					Discrimination

By the term 'Apparent Learning' it is assumed that it really doesn't fit
the definition although it appears to.    So perhaps we have here a 
system which is only slightly more evolved than the rudimentary switch.
In such a system, it might be assumed that it alters its composition so
that as it experiences an event, it remembers the event by storing something
or by altering something so that the next time the equivalent event occurs,
its response is 'tuned' and it need not waste time analyzing and just does
the appropriate thing.

FINE.  OK.    Again it appears that the programmer controls outcomes
although this time, the inputs and sequencing of responses need not be
rigid.   Inputs (environment) begin to influence the system's character.


>2	Self-Regulating	Operant		Learning; Varied responses
>			conditioning	selected for different situations;
>					Evaluation by trial and error

Another evolution?    What could this be?     Real learning?   

I'm sorry, but I'm lost here.   This seems to be something similar to
level 1, except that you're saying, that it actually 'tries' multiple
inputs sequentially and selects one for subsequent processing based 
upon some programmer defined criteria.

I don't see learning here.   Learning would mean that the system 
auto-generates new code (its own response) to go beyond the 
programmer's built in restrictions, tests the new code and then, 
judges which code handled the input situation better and keeps it for
subsequent 'reference'.   I say reference here, because this leaves
the door open for the system to reject implementation of the same response,
if causal values should weigh its decision to do so.   Learning then 
becomes the machine's confidence in the same solution.   
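
A toy rendering of that generate-and-test idea (added here, not Donat's,
with "new code" reduced to new candidate responses): candidates beyond the
single built-in given are generated, tested, and whichever handles the
situation best is kept, together with a confidence that grows each time the
kept solution survives a challenge.  The task and scoring are assumptions.

    import random

    # Generate-and-test beyond the built-in givens: new candidate responses
    # are invented, tested, the best is kept, and a "confidence" counts how
    # often the kept solution survives a challenge.
    random.seed(4)
    target = 0.73                                      # the unknown situation

    def score(response):
        return -abs(response - target)                 # higher is better

    best, confidence = 0.0, 0                          # the single built-in given
    for _ in range(500):
        candidate = random.random()                    # auto-generated response
        if score(candidate) > score(best):             # test against the incumbent
            best, confidence = candidate, 1
        else:
            confidence += 1                            # incumbent survived a challenge
    print(round(best, 3), confidence)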

I believe what we have so far, is three examples of tailored adaptation
with varying complexities.


>3	Self-Mediating	Internal	Planning, Problem Solving;
>			evaluation	rule-mediated representation;
>					Initial evaluation internal to
>					system


What is planning without the ability to recognize a problem?   What is
a problem to a system which can not 'generate' problems?   When you 
say 'problem solving', each of your earlier levels seem to have as
much to do with this same type of problem solving as this level.    
The system is solving your problem, ultimately, based on your rules!

The problem is given by you.  You expect it to give a solution which you
have already defined.   This system is no less 'programmer rigid' than 
those you cite in the first two levels of adaptability.

Intelligence must be autonomous to the point that it observes less formal
rules than 'tailored rules'.   If you really had a system which was
at an adaptive level such that it could 'plan' and 'solve problems' based
on planning, you'd have a system that should be able to 'define and solve
its OWN problems'. 

Perhaps my perspective is wrong, but it really does seem like you're 
just building a more complex machine which is really still operating on
the same adaptive principles (limitation to programmer definition) which
you have already defined in level 0.5 as 'tailored'.

>4	Self-Modifying	Abstraction	Evaluating the evaluation;
>					Generalisation; Meta-knowledge

This doesn't help much.   Looking back on results of programmer defined
tests is just another conditional in your switch which amounts to another
programmer defined test and thus is also tailored.


Perhaps these suffice as 'primitives' for a single adaptive level
which is defined by variations which are tailored.


Your level 2 'primitives' might better allow introduction of a device for
recognizing the 'need' for a response such that the recognition of the 
need and the response to it are both auto-generated, unless the response
is already conditioned; in other words, two part problem resolution where
problems are auto-defined as a response in themselves and actions are
not only taken from a known selection of givens, but are created as 
required and the givens become malleable over time.

The machine should create its own problems and resolutions and it should 
do this for all situations which first threaten its survival, prevent 
its own self-demise and for those situations whereby it enhances its 
own survival and then, for general and logical resolutions.


If you evolve an adaptive system which can guarantee its own survival from
its own recognition of problems and its own resolutions to those problems,
you'll have obtained a level two adaptive system.


Level 3 would include problem solving of the type which goes beyond 
mere survival and defines and resolves problems in the realm common 
to us humans alone (in terms of life on this planet).


Everything must start somewhere, but I believe your aim's a bit low.


-- brian

/=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\
| Brian L. Donat		Informix Software, Inc.  Menlo Park, CA        |
|					... infmx!briand		       |
|        								       |
\=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-/

dsr@stl.stc.co.uk (David Riches) (06/27/89)

This contains the reply from Pete Totterdell who introduced the
Taxonomy of Adaptive Levels into the Project.

Here are some comments for Brian:

I am not sure how much information Dave gave you. If it was just the
taxonomy without any explanation then I can understand why you had
problems with it! Unfortunately I can't reproduce the whole argument
here. If I did I think you would probably see where all these levels
have come from, although you wouldn't necessarily agree with them.
Here are some specific points.

In article <1597@infmx.UUCP> briand@infmx.UUCP (brian donat) writes:
|
||---------------------------------------------------------------------
||Levels	Computers	Evolution	Features
|
||0.5	Tailorable/	-		Deferred selection
||	Adaptable
|
|OK, so here we have a system which (by the term tailorable), is assumed
|to solve problems based upon what functionally is a rigid switch, for a
|decision maker.  Each activation of the switch gives a defined output, 
|whether the switch is activated by a single or multiple inputs.
|
||1	Adaptive	Tropism/	Apparent Learning
||			reflexes	(i.e. fully determined by design)
||					Discrimination
|
|By the term 'Apparent Learning' it is assumed that it really doesn't fit
|the definition although it appears to.    So perhaps we have here a 
|system which is only slightly more evolved than the rudimentary switch.
|In such a system, it might be assumed that it alters its composition so
|that as it experiences an event, it remembers the event by storing something
|or by altering something so that the next time the equivalent event occurs,
|its response is 'tuned' and it need not waste time analyzing and just does
|the appropriate thing.
|
|FINE.  OK.    Again it appears that the programmer controls outcomes
|although this time, the inputs and sequencing of responses need not be
|rigid.   Inputs (environment) begin to influence the system's character.
|

Apparent Learning. There is no remembering going on here. Hill climbing
(either biological or AI style) is a good example. The system finds the
best solution to a problem (and gives the illusion of learning) simply
by having fixed responses which are controlled by the environment. A
plant growing towards light is an example.
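
A minimal hill-climbing sketch of this apparent learning (an added toy in
the spirit of the plant-and-light example, not from the project): the
response rule is fixed by design, yet the system ends up at the brightest
spot because the environment controls the fixed responses.  The light field
is an illustrative assumption.

    # Apparent learning via hill climbing: the response rule is fixed by
    # design ("step toward more light"), yet the system ends up at the
    # brightest spot without remembering anything.
    def light(x):
        return -(x - 7.0) ** 2          # brightest at x = 7

    position = 0.0
    for _ in range(100):
        # fixed reflex: sample the light slightly to either side, step uphill
        position += 0.5 if light(position + 0.1) > light(position - 0.1) else -0.5
    print(round(position, 1))           # ends at or near 7.0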

|
||2	Self-Regulating	Operant		Learning; Varied responses
||			conditioning	selected for different situations;
||					Evaluation by trial and error
|
|Another evolution?    What could this be?     Real learning?   
|
|I'm sorry, but I'm lost here.   This seems to be something similar to
|level 1, except that you're saying, that it actually 'tries' multiple
|inputs sequentially and selects one for subsequent processing based 
|upon some programmer defined criteria.
|
|I don't see learning here.   Learning would mean that the system 
|auto-generates new code (its own response) to go beyond the 
|programmer's built in restrictions, tests the new code and then, 
|judges which code handled the input situation better and keeps it for
|subsequent 'reference'.   I say reference here, because this leaves
|the door open for the system to reject implementation of the same response,
|if causal values should weigh its decision to do so.   Learning then 
|becomes the machine's confidence in the same solution.   
|
|I believe what we have so far, is three examples of tailored adaptation
|with varying complexities.
|

At this level, where we have genuine learning, the system has a
number of different responses to a given stimulus and it learns (and
sometimes remembers) which is the best. But the best response may
change as circumstances change.
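
A toy version of this level (added here, not the project's): several
responses to one stimulus, a running payoff estimate for each, occasional
trial and error, and re-learning when circumstances change which response
is best.  The payoffs, rates, and change point are assumptions.

    import random

    # Several responses to one stimulus; the system learns which pays best by
    # trial and error, and re-learns when circumstances change the best response.
    random.seed(5)
    estimates = [0.0, 0.0, 0.0]          # learned value of each response

    def payoff(response, t):
        best = 0 if t < 1000 else 2      # circumstances change at t = 1000
        return 1.0 if response == best else 0.0

    for t in range(2000):
        if random.random() < 0.1:        # occasional trial and error
            r = random.randrange(3)
        else:                            # otherwise exploit what has been learned
            r = max(range(3), key=lambda i: estimates[i])
        estimates[r] += 0.05 * (payoff(r, t) - estimates[r])
    print([round(e, 2) for e in estimates])   # response 2 ends up valued highest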

|
||3	Self-Mediating	Internal	Planning, Problem Solving;
||			evaluation	rule-mediated representation;
||					Initial evaluation internal to
||					system
|
|
|What is planning without the ability to recognize a problem?   What is
|a problem to a system which can not 'generate' problems?   When you 
|say 'problem solving', each of your earlier levels seem to have as
|much to do with this same type of problem solving as this level.    
|The system is solving your problem, ultimately, based on your rules!
|
|The problem is given by you.  You expect it to give a solution which you
|have already defined.   This system is no less 'programmer rigid' than 
|those you cite in the first two levels of adaptability.
|
|Intelligence must be autonomous to the point that it observes less formal
|rules than 'tailored rules'.   If you really had a system which was
|at an adaptive level such that it could 'plan' and 'solve problems' based
|on planning, you'd have a system that should be able to 'define and solve
|its OWN problems'. 
|
|Perhaps my perspective is wrong, but it really does seem like you're 
|just building a more complex machine which is really still operating on
|the same adaptive principles (limitation to programmer definition) which
|you have already defined in level 0.5 as 'tailored'.

The planning referred to at this level is meant to cover the sort
of insightful behaviour that you refer to when you talk of generating
new problems and solutions. An example is the ape which learns that
to reach food it needs to reconstruct the problem and use the short
stick to reach the big stick which is out of reach and then use the
big stick to reach the food. However, I take the point about
problem solving. All levels are problem solving, although I think that
AI uses problem solving in the more restricted sense of this particular
level.

||4	Self-Modifying	Abstraction	Evaluating the evaluation;
||					Generalisation; Meta-knowledge
|
|This doesn't help much.   Looking back on results of programmer defined
|tests is just another conditional in your switch which amounts to another
|programmer defined test and thus is also tailored.
|
|
|Perhaps these suffice as 'primitives' for a single adaptive level
|which is defined by variations which are tailored.
|
|
|Your level 2 'primitives' might better allow introduction of a device for
|recognizing the 'need' for a response such that the recognition of the 
|need and the response to it are both auto-generated, unless the response
|is already conditioned; in other words, two part problem resolution where
|problems are auto-defined as a response in themselves and actions are
|not only taken from a known selection of givens, but are created as 
|required and the givens become malleable over time.
|
|The machine should create its own problems and resolutions and it should 
|do this for all situations which first threaten its survival, prevent 
|its own self-demise and for those situations whereby it enhances its 
|own survival and then, for general and logical resolutions.
|
|
|If you evolve an adaptive system which can guarantee its own survival from
|its own recognition of problems and its own resolutions to those problems,
|you'll have obtained a level two adaptive system.
|
|
|Level 3 would include problem solving of the type which goes beyond 
|mere survival and defines and resolves problems in the realm common 
|to us humans alone (in terms of life on this planet).
|
|
|Everything must start somewhere, but I believe your aim's a bit low.
|
|

My main departure from your argument is that I get very worried when I
see terms like "auto-generate", which often suggest that people are
looking for something from nothing in order to give a sense of free
will, and to get away from pre-determination. Biologists seem happy
with the idea that rats are learning when they learn to run a maze
and yet we do not need to invoke explanations involving auto generation
in order to explain this type of behaviour. And of course even in the
situation where you are generating new problems and new solutions, the
antecedents will have been pre-determined by the programmer. What the
programmer doesn't control however is the course of the interaction
between system and environment.

I also felt that your third level, about going beyond survival, would
shift the ground somewhat because it describes the level in terms of
a type of problem which the other levels don't. And in many ways it
is simply a by-product of the fact that all the levels are about
trying to bring the environment under control. However, it might
be the basis for another equally valid taxonomy which looks at problem
types.

Hope this makes sense .. Peter (Peter Totterdell pc1pat@uk.ac.shef.ibm)


   Dave Riches
   PSS:    dsr@stl.stc.co.uk
   ARPA:   dsr%stl.stc.co.uk@earn-relay.ac.uk
   Smail:  Software Design Centre, (Dept. 103, T2 West), 
	   STC Technology Ltd., London Road,
	   Harlow, Essex. CM17 9NA.  England
   Phone:  +44 (0)279-29531 x2496