[comp.ai.philosophy] What AI can learn from ALife

reh@wam.umd.edu (Richard E. Huddleston) (03/21/91)

It's starting to look as though the phenomenon of emergent behavior can
explain the generally mixed results from traditional AI.
 
Perhaps 'intelligence' isn't a 'thing' at all -- it's more a side effect that
arises within sufficiently complex systems.  It also seems that the individual
components of such systems don't have to be all that complex themselves.
 
Getting intelligent behavior out of computer systems might then become less a
matter of programming that behavior directly and more a matter of defining
simple agents whose interactions produce the desired behavior 'by accident'.
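 
To illustrate what I mean, here's a toy sketch (my own illustration, not taken
from any published model -- the agent count, neighborhood size, and step count
are arbitrary): a ring of agents, each following one local "copy your
neighbors" rule, ends up globally aligned even though no individual agent was
told to align.

    # Toy emergent-behavior demo: global consensus from a purely local rule.
    import random

    N = 100          # number of agents on a ring (arbitrary)
    NEIGHBORS = 3    # how far each agent looks to either side
    STEPS = 50

    # Each agent starts with a random heading: +1 or -1.
    headings = [random.choice([-1, 1]) for _ in range(N)]

    def step(headings):
        new = []
        for i in range(N):
            # Sum the headings of the agents around agent i (ring topology).
            total = sum(headings[(i + d) % N]
                        for d in range(-NEIGHBORS, NEIGHBORS + 1) if d != 0)
            # Local rule: adopt the majority heading; keep your own on a tie.
            new.append(1 if total > 0 else -1 if total < 0 else headings[i])
        return new

    for t in range(STEPS):
        headings = step(headings)

    # Alignment near 1.0 means the population reached a global consensus
    # that no single agent was programmed to produce.
    print("alignment:", abs(sum(headings)) / N)

The point isn't the particular rule -- it's that the "interesting" behavior
lives at the level of the population, not in any one agent's definition.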
 
(1) Does anyone know of any papers/etc. describing efforts at building
models of intelligence from observed emergent behaviors?
 
(2) How about any systems built around _The Society of Mind_ concepts?
 
email to reh@epsl.umd.edu or post it
 
Thanks in advance.