[comp.ai.philosophy] emergence

mt@media-lab.MEDIA.MIT.EDU (Michael Travers) (10/03/90)

It's interesting to note that some of the better work done on emergent
properties comes from the group at Los Alamos that is interested
specifically in NON-linear systems, that for one reason or another do
not obey the superposition principle.  These people (such as Chris
Langton, chair of the Artificial Life workshops) are very much NOT
asserting that emergent properties are nonphysical or inherently
inexplicable.  In fact, they rather make a fetish of insisting that
complex properties like life or intelligence be modelled bottom-up in
terms of simpler processes.  

In non-linear systems, you can't find analytic solutions to systems
involving interactions between many components, so you turn to
simulation.  Emergence then is more of a methodological than a
theoretical point: to model a complex system, you need to start one
level lower (ie, if you care about intelligence, start with neurons;
if you care about life, start with chemistry) and then wait and hope
that the thing you really care about will happen of its own accord.
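The failure of superposition, and the retreat to simulation, can be seen in miniature with the logistic map, a standard toy nonlinear system (my illustration, not one from the posting):

```python
# Logistic map x' = r*x*(1-x): a minimal nonlinear system.
# Superposition fails -- f(x+y) != f(x)+f(y) -- so instead of
# solving analytically, we iterate (i.e., simulate).

def logistic(r, x):
    return r * x * (1.0 - x)

r = 3.9            # a parameter value in the chaotic regime
x, y = 0.20, 0.30

# Demonstrate the failure of the superposition principle:
lhs = logistic(r, x + y)
rhs = logistic(r, x) + logistic(r, y)
assert abs(lhs - rhs) > 1e-6    # not equal: the system is nonlinear

# No general closed form for the orbit; iterate and watch instead.
orbit = [x]
for _ in range(10):
    orbit.append(logistic(r, orbit[-1]))
```

Even this one-variable case has to be watched rather than solved; with many interacting components the methodological point only sharpens.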

So, if you want to criticize the emergent folks, you should attack
them for using methods that are weak relative to the phenomenon they
want to understand, rather than for being mystics or non-materialists.

-- 

Michael Travers / MIT Media Lab / mt@media-lab.media.mit.edu

n025fc@tamuts.tamu.edu (Kevin Weller) (10/03/90)

In article <3531@media-lab.MEDIA.MIT.EDU> mt@media-lab.MEDIA.MIT.EDU (Michael Travers) writes:
>It's interesting to note that some of the better work done on emergent
>properties comes from the group at Los Alamos that is interested
>specifically in NON-linear systems, that for one reason or another do
>not obey the superposition principle.  These people (such as Chris
>Langton, chair of the Artificial Life workshops) are very much NOT
>asserting that emergent properties are nonphysical or inherently
>inexplicable.  In fact, they rather make a fetish of insisting that
>complex properties like life or intelligence be modelled bottom-up in
>terms of simpler processes.  

I have come to agree with your definition of emergence as
methodological instead of metaphysical, at least in a practical sense.
See my response to Jonathan Buss for a more complete description.

>Michael Travers / MIT Media Lab / mt@media-lab.media.mit.edu

-- Kev

jsp@milton.u.washington.edu (Jeff Prothero) (10/04/90)

Perhaps the key characteristic of an 'emergent phenomenon' is that it
has interesting characteristics which it possesses *independently* of
the underlying (implementation) system? We avoid analysing computer
programs in terms of electron diffusion not just because such an
analysis would be awkward, opaque and difficult, but because it is, in
a fundamental sense, *irrelevant*.  The same computer program could be
run on a VLSI-based machine, a vacuum-tube based machine, an
optical-based machine, a Tinker-Toy(R)-based machine, or
hand-interpreted by a human.  Barring implementation defects, the
behavior of the program will be the same in every case.

Understanding a computer program which implements Euclid's GCD just
does not depend in any interesting fashion on the physics of
TinkerToys, even if the program is destined to be run on a computer
constructed from TinkerToys.
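For what it's worth, the substrate-independence is visible in any rendering of the algorithm; here is a minimal Python sketch of Euclid's GCD (my example):

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    by (b, a mod b) until the second operand reaches zero."""
    while b != 0:
        a, b = b, a % b
    return a

# Understanding this loop requires arithmetic, not physics: the
# behaviour is the same whether the machine underneath is VLSI,
# vacuum tubes, optics, or TinkerToys.
print(gcd(48, 36))   # → 12
```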

Perhaps "emergent systems" generally may be characterised by a
similar resilient self-integrity:  They possess interesting properties
which are independent of the underlying system, and which in fact
may be based on quite different underlying systems.  

Intelligent systems may possess properties which are quite independent
of the specific characteristics of neurons, and may (potentially?) be
manifested on systems with radically different low-level
architectures.  Studying neurons may tell us as much about
intelligence as studying TinkerToys does about Euclid's GCD algorithm.

hawley@icot32.icot.or.jp (David John Hawley) (10/04/90)

In article <JSP.90Oct3123040@milton.u.washington.edu> jsp@milton.u.washington.edu (Jeff Prothero) writes:
>Perhaps the key characteristic of an 'emergent phenomenon' is that it
>has interesting characteristics which it possesses *independently* of
>the underlying (implementation) system?

An implementation is a system that obeys some specification. 
Practically, specifications are incomplete, and in fact it may be impossible
to simultaneously give a complete specification and verify compliance.
If the emergent properties of a system are not predictable from the behaviour
of its components, then it is impossible to predict which aspects of the
components' behaviour might be relevant to any hoped-for emergent property.

For example, the apocryphal story of disk drives gyrating around the floor
when presented with a certain pattern of disk accesses seems to be an example
of a phenomenon (gross physical movement) emerging from the interaction of
a program, an OS, and a storage device. The fact that the storage device was
not implemented as solid-state is crucial.

In other words, the implementation is the interesting thing, and not the
specification.

---------------------------
David Hawley, ICOT, 4th Lab
csnet: hawley%icot.jp@relay.cs.net uucp:{enea,inria,mit-eddie,ukc}!icot!hawley
ICOT, 1-4-28 Mita, Minato-ku, Tokyo 108 JAPAN. TEL/FAX {81-3-456-}2514/1618

cpshelley@violet.uwaterloo.ca (cameron shelley) (10/04/90)

In article <JSP.90Oct3123040@milton.u.washington.edu> jsp@milton.u.washington.edu (Jeff Prothero) writes:
>Perhaps the key characteristic of an 'emergent phenomenon' is that it
>has interesting characteristics which it possesses *independently* of
>the underlying (implementation) system? We avoid analysing computer
>programs in terms of electron diffusion not just because such an
>analysis would be awkward, opaque and difficult, but because it is, in
>a fundamental sense, *irrelevant*.  The same computer program could be
>run on a VLSI-based machine, a vacuum-tube based machine, an
>optical-based machine, a Tinker-Toy(R)-based machine, or
>hand-interpreted by a human.  Barring implementation defects, the
>behavior of the program will be the same in every case.
>
  I wish to differ with this, if only in a small detail.  A 'program'
is a conceptual entity only; what you are talking about here seems
to be a process.  The behaviour of the program+machine (the process) will
differ quite a bit across various machines, and possibly even on the same
machine when run at different times.  It is the *interpretation* of
the results which is the same.  This is perhaps a minor point for your
discussion, but I think one has to be careful in maintaining the use/
mention distinction when talking about philosophy or principles.  A
program exists only in our minds; what exists in 'reality' is an
electronic state (insofar as that is definable) and changes of that
state.

>Understanding a computer program which implements Euclid's GCD just
>does not depend in any interesting fashion on the physics of
>TinkerToys, even if the program is destined to be run on a computer
>constructed from TinkerToys.
>
>Perhaps "emergent systems" generally may be characterised by a
>similar resilient self-integrity:  They possess interesting properties
>which are independent of the underlying system, and which in fact
>may be based on quite different underlying systems.  
>
Hmmm.  Let me see if I understand you correctly.  You are proposing
that an emergent is a meta-property observed only when a certain
conceptual entity is imposed on a physical system, and not observed
otherwise.  Of course, we are always imposing some conceptual entity
on 'reality', meaning we are conscious.

>Intelligent systems may possess properties which are quite independent
>of the specific characteristics of neurons, and may (potentially?) be
>manifested on systems with radically different low-level
>architectures.  Studying neurons may tell us as much about
>intelligence as studying TinkerToys does about Euclid's GCD algorithm.

I think that this is slightly beside the point, although it is very
true.  The question of emergence comes in at the point when we interpret
the TinkerToy's state(s) as being the solution to the GCD problem, but
have trouble interpreting its actions as being Euclid's algorithm.  If
we understand its 'answer', but cannot comprehend how it performs the
computation, then the "production of an answer to the GCD problem" must
be considered an emergent property of the TinkerToy, at least for the
time being.  By 'cannot' I mean "unable by any means, even in principle".
In this sense, your definition of emergent (as I have it :) is good
but incomplete; the "GCD solving" property of the TinkerToy is a meta-
property existing in the mind of the perceiver (whom I'll leave alone
here :), but with the proviso that this meta-property is not reducible,
for that perceiver, into more elementary meta-properties.  How this 
meta-irreducibility maps from the perceiver's mind to the real physical
world ('reality' again!) is another (and I think ultimately unanswerable)
question.

Please tell me if I've misinterpreted you! :>

--
      Cameron Shelley        | "Armor, n.  The kind of clothing worn by a man
cpshelley@violet.waterloo.edu|  whose tailor is a blacksmith."
    Davis Centre Rm 2136     |
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

reinke@aisunk.uucp (Robert Reinke) (10/05/90)

>In article <JSP.90Oct3123040@milton.u.washington.edu> jsp@milton.u.washington.edu (Jeff Prothero) writes:
>>Perhaps the key characteristic of an 'emergent phenomenon' is that it
>>has interesting characteristics which it possesses *independently* of
>>the underlying (implementation) system? ...

and cpshelley@violet.uwaterloo.ca (cameron shelley) responds:
>  I wish to differ with this, if only in a small detail.  A 'program'
>is a conceptual entity only, what you are talking about here seems
>to be a process.  The behaviour of the program+machine (process) will
>differ quite a bit over various machines and possibly even on same
>machine when run at different times...

I have followed the discussion about emergent properties with interest, and
I think things are getting off track.  Though we are naturally interested
in emergent properties in computer programs/systems, I believe there
is a definition of emergence that need not refer to computers per se, namely:
 
     An emergent property is a property of a system as a whole that is
     not possessed by any of its components.

An example of this (from an introductory Neurobiology course) is a system of 
neurons in the crayfish (lobster? -- some crustacean in any case) which has 
an output that is cyclic over time.  None of the neurons in the system have a
cyclic output behavior independently, nor can the cyclicity be attributed to
any individual relationships; only the system as a whole has cyclic behavior.
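A toy illustration of how cyclicity can live only at the system level (my sketch, with invented parameters, not the actual crayfish circuit): two units with constant drive and reciprocal inhibition, where each inhibitory synapse weakens (depresses) while its source unit fires. Either unit alone settles immediately to steady, tonic firing; only the coupled pair produces a time-varying, alternating output.

```python
W0, DRIVE, REC, DEP = 2.0, 1.0, 0.05, 0.2   # invented parameters

def run(coupled=True, steps=40):
    r1 = r2 = 0.0            # firing rates of the two units
    w1 = w2 = W0             # strengths of inhibitory synapses 1->2, 2->1
    trace = []
    for _ in range(steps):
        r1 = max(0.0, DRIVE - (w2 * r2 if coupled else 0.0))
        r2 = max(0.0, DRIVE - (w1 * r1 if coupled else 0.0))
        # synaptic depression: a synapse weakens while its source fires,
        # and recovers toward W0 while it is quiet
        w1 += REC * (W0 - w1) - DEP * r1 * w1
        w2 += REC * (W0 - w2) - DEP * r2 * w2
        trace.append((r1, r2))
    return trace

solo = run(coupled=False)    # each unit alone: constant output, no rhythm
pair = run(coupled=True)     # together: the units trade dominance over time
```

Neither unit possesses the rhythm; the pair does, which is exactly the shape of the definition above.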

Under this definition, "mind" or "intelligence" may be called an emergent 
property of the brain, not because it has no relationship to the brain or 
its components, but because it is a property of the brain (and body?) as a 
whole.

I don't doubt that there are problems with this definition, but I can't think
of any offhand.  Anyone?

Bob Reinke (r-reinke@uiuc.edu)
Department of Computer Science
University of Illinois at Urbana-Champaign
    

sarima@tdatirv.UUCP (Stanley Friesen) (10/05/90)

In article <1990Oct4.173933.7319@ux1.cso.uiuc.edu> r-reinke@uiuc.edu (Robert Reinke) writes:
>...  Though we are naturally interested
>in emergent properties in computer programs/systems, I believe there
>is a definition of emergence that need not refer to computers per se, namely:
> 
>     An emergent property is a property of a system as a whole that is
>     not possessed by any of its components.

An excellent definition.  It is broadly applicable, and does not imply dualism.

>An example of this (from an introductory Neurobiology course) is a system of 
>neurons in the crayfish (lobster? -- some crustacean in any case) which has 
>an output that is cyclic over time.  None of the neurons in the system have a
>cyclic output behavior independently, nor can the cyclicity be attributed to
>any individual relationships; only the system as a whole has cyclic behavior.

An excellent example.  I would like to add a note of emphasis here.  In this
case I suspect that the neurobiologists involved in the research have a fairly
good, essentially complete model of how the system of neurons in question
generates cyclic output.   That is, the behavior of the system is *not*
inexplicable; it is merely not inherent in the individual neurons.  And even
though it is explained, it is *still* emergent.

BTW, I suspect that the emergent property here is also somewhat independent
of the lower level implementation.  It is likely that a finite state machine
(or set of them) could be designed that was cyclic by essentially the same
mechanism.

>Under this definition, "mind" or "intelligence" may be called an emergent 
>property of the brain, not becaused it has no relationship to the brain or 
>its components, but because it is a property of the brain (and body?) as a 
>whole.

I would agree.  And in fact it is probably emergent in almost the same way
as the cyclicity mentioned above.  That is, it is based on interactions among
populations of neurons, each of which performs some basic data transformation.

------------------
uunet!tdatirv!sarima				(Stanley Friesen)

cpshelley@violet.uwaterloo.ca (cameron shelley) (10/06/90)

In article <1990Oct4.173933.7319@ux1.cso.uiuc.edu> r-reinke@uiuc.edu (Robert Reinke) writes:
>>In article <JSP.90Oct3123040@milton.u.washington.edu> jsp@milton.u.washington.edu (Jeff Prothero) writes:
>>>Perhaps the key characteristic of an 'emergent phenomenon' is that it
>>>has interesting characteristics which it possesses *independently* of
>>>the underlying (implementation) system? ...
>
>and cpshelley@violet.uwaterloo.ca (cameron shelley) responds:
>>  I wish to differ with this, if only in a small detail.  A 'program'
>>is a conceptual entity only, what you are talking about here seems
>>to be a process.  The behaviour of the program+machine (process) will
>>differ quite a bit over various machines and possibly even on same
>>machine when run at different times...
>
>I have followed the discussion about emergent properties with interest, and
>I think things are getting off track.  Though we are naturally interested
>in emergent properties in computer programs/systems, I believe there
>is a definition of emergence that need not refer to computers per se, namely:
> 
>     An emergent property is a property of a system as a whole that is
>     not possessed by any of its components.
>

  Yes, I guess the discussion is wandering.  Your definition here is, I
believe, open to a lot of abuse, i.e. trivial observations.  For instance,
the keys I am hitting to produce this wondrous followup are part of the
computer system as a whole (or just my terminal - whatever).  The fact
that the computer does not just behave like a big key, or switch, etc.
is trivial, but it then implies that the computer's capability of adding
and so forth is emergent.  Of course we would want to exclude such
"observations" from consideration, so the idea of emergent should be
strengthened so that the property of the system as a whole is also not
an *obvious* consequence of the properties of its components.

>An example of this (from an introductory Neurobiology course) is a system of 
>neurons in the crayfish (lobster? -- some crustacean in any case) which has 
>an output that is cyclic over time.  None of the neurons in the system have a
>cyclic output behavior independently, nor can the cyclicity be attributed to
>any individual relationships; only the system as a whole has cyclic behavior.
>
>Under this definition, "mind" or "intelligence" may be called an emergent 
>property of the brain, not because it has no relationship to the brain or 
>its components, but because it is a property of the brain (and body?) as a 
>whole.
>
Well, I can't speak for the crustaceans of the world, but I think 
"intelligence" would qualify as emergent - for now.

>I don't doubt that there are problems with this definition, but I can't think
>of any offhand.  Anyone?
>

Well... refinement perhaps? :>

--
      Cameron Shelley        | "Armor, n.  The kind of clothing worn by a man
cpshelley@violet.waterloo.edu|  whose tailor is a blacksmith."
    Davis Centre Rm 2136     |
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

abbott@aerospace.aero.org (Russell J. Abbott) (10/06/90)

John McCarthy writes:
| I'm suspicious that "emergent" is just a fancy term for the fact
| that any system has some properties that are not properties of
| the components.  Let's take a trivial example.  Suppose we make
| an EXOR circuit out of AND gates and inverters.  3 AND gates and
| 3 inverters will do it.  Does the fact that the circuit computes
| EXOR count as an emergent property, since none of the components
| computes EXOR?  I suspect the users of "emergent" want to suggest
| something fancier.  But what?
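McCarthy's EXOR example is easy to make concrete. Below is a sketch in Python using the De Morgan factoring XOR(a,b) = (a OR b) AND NOT(a AND b); this particular factoring happens to use three AND gates and four inverters, and other factorings give other gate counts.

```python
# Build EXOR from AND gates and inverters only.

def AND(x, y): return x & y
def NOT(x):    return 1 - x

def XOR(a, b):
    either   = NOT(AND(NOT(a), NOT(b)))   # a OR b, by De Morgan's law
    not_both = NOT(AND(a, b))             # NOT (a AND b)
    return AND(either, not_both)

# No individual gate computes XOR, yet the circuit does:
for a in (0, 1):
    for b in (0, 1):
        assert XOR(a, b) == (a ^ b)
```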

and Marvin Minsky writes: 
| What is fascinating is the extent to which, in science, it has so
| often sufficed merely to know the dyadic relationships of the objects,
| just two at a time.  This is the case in Newtonian mechanics; all the
| forces are simply dyadic, and one has only to sum them to find the
| accelerations that determine all the trajectories.

It seems to me that an important distinction is between designed systems,
in which the components interact in pre-specified ways, and other sorts
of interactions, which we tend to understand best if there are only two
elements interacting.  (As an aside, as a society we seem to make this a
legal distinction allowing patents on designed objects but not on laws
of nature, which tend to be binary.)

A second important distinction is between intentionally and accidentally
designed systems.  I would call the "design" that is encoded in the
genetic codes of various species accidental design.  The "information"
in the genetic code certainly determines how organisms will develop and
"operate."  But the design so encoded is totally ad hoc and
"accidental."  (Sorry about all the quotes.  I can't think of better
words.)  This contrasts with designs that are encoded in software,
presumably intentionally.

My hypothesis is that we tend to think of a property as emergent if it
is a result of an accidentally designed system.  So, EXOR circuits are not
normally thought of as having emergent properties since those properties
are seen as intentionally designed in.  Other properties that are not
thought of as intentionally designed into a system but that exist
because of the way the system is designed are seen as emergent.

-- Russ Abbott@itro3.aero.org

aboulang@bbn.com (Albert Boulanger) (10/06/90)

In article <3531@media-lab.MEDIA.MIT.EDU> mt@media-lab.MEDIA.MIT.EDU (Michael Travers) writes:


   It's interesting to note that some of the better work done on emergent
   properties comes from the group at Los Alamos that is interested
   specifically in NON-linear systems, that for one reason or another do
   not obey the superposition principle.  These people (such as Chris
   Langton, chair of the Artificial Life workshops) are very much NOT
   asserting that emergent properties are nonphysical or inherently
   inexplicable.  In fact, they rather make a fetish of insisting that
   complex properties like life or intelligence be modelled bottom-up in
   terms of simpler processes.  

Right on! Good to emphasize the NON-linear. I believe that our
beginning to tackle the analysis of non-linear systems is a big part
of what makes the current connectionist movement different from the
last cycle, and not just old wine in new bottles.

Here are some common ingredients in systems with emergent properties:

The system has nonlinear dynamics.

The system has something called "frustration". This represents two (or
more) processes in competition; the classical example is the
frustration of spins in simple models of disordered magnets, called spin
glasses. This frustration, via the nonlinearity of the system, acts to
amplify novelty and, by a process of self-organization, can create
amazing (and robust) structure.  (A similar competition between effects,
with nonlinearity balancing dispersion, underlies solitons.)
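The spin frustration mentioned above can be made concrete in a few lines (my illustration): put three spins on a triangle with every pair coupled antiferromagnetically, so a bond is "satisfied" only when its two spins point opposite ways. An exhaustive check shows that no configuration can satisfy all three bonds at once.

```python
# Frustration in miniature: three antiferromagnetic bonds on a
# triangle.  Whatever the spins do, at least one bond loses.

from itertools import product

bonds = [(0, 1), (1, 2), (0, 2)]

best = 0
for spins in product((-1, +1), repeat=3):
    satisfied = sum(1 for i, j in bonds if spins[i] != spins[j])
    best = max(best, satisfied)

print(best)   # → 2: at most two of the three bonds can ever be satisfied
```

The competition cannot be resolved, only traded off, and that unresolvable trade-off is what drives the rich structure of spin glasses.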


Coming from the empirical side of the tracks, I thought I would share
with you two examples of physical systems that are often said to have
emergent properties (which are often closely related to
self-organization). I think these can serve as useful checks on any
discussion of emergent properties of systems.


Example 1: Video Feedback. This is something one can do oneself:

"Space-Time Dynamics in Video Feedback"
James. P. Crutchfield
Physica 10D(1984) 229-245
(This is in one of the cellular automata conference books)

"Spatio-Temporal Complexity in Nonlinear Image Processing"
James P. Crutchfield
IEEE Trans Circuits and Systems Vol 35, No 7, July 1988, 770-780

To do this, simply point the camera at the TV and zoom in. More
interesting behavior can be had with a rotation between the camera and
the screen. Play with the color and contrast settings. Place things
like your hands in front of the screen. 


Example 2: 4-Wave-Mixing with Photorefractive Crystals

"Photorefractive Nonlinear Optics"
Jack Feinberg
Physics Today, October 1988, 46-52

"Optical Systems That Imitate Human Memory"
Dana Z. Anderson
Computers in Physics March/April 1989

One interesting emergent behavior in these crystals is the self-organizing
beams caused by the interaction of scattering processes and
reflection. Beams will jump out of the crystal and track any reflective
object that happens to wander by. Read the first article for a host of
other interesting behaviors. Here is a quote from the first article:

"When pressed with a working demonstration of these photorefractive
circus acts, even hardened physicists, weary from decades of proposal
writing, instinctively begin to play with the incident optical beams
to see how the crystal will respond. Experimenting with a
photorefractive crystal makes one feel like a young child examining a
caterpillar: If I poke it here, what will it do?"

For me, the emergent properties of nonlinear systems are an affirmation
that there is, indeed, plenty of room at the bottom.

Regards,
Albert Boulanger
aboulanger@bbn.com

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) (10/06/90)

I like the observations made by abbott@antares.UUCP (Russell J.
Abbott) in article 58.  I had had the feeling that "emergent" involved
a 3-part relation between a system's structure, its behavior, and
the observer -- but I didn't have a good example.  Abbott's posting suggests
that an important element is the observer's knowledge or belief about
whether the system was "designed" to have that behavior.  Nice.
It also verges on a 4-part relation.

wcalvin@milton.u.washington.edu (William Calvin) (10/07/90)

The emergent properties of the pattern generation circuits of the
lobster stomatogastric ganglion may be found in:
	Daniel K. Hartline, "Pattern generation in the lobster
(Panulirus) stomatogastric ganglion. II.  Pyloric network
simulation."  Biological Cybernetics 33:223-236 (1979).
	Daniel K. Hartline, "Simulation of restricted neural networks
with reprogrammable neurons."  IEEE Trans. Circuits and Systems
36:653-660 (1989).

Briefly, the stomatogastric ganglion has only 30 neurons (all known
as identifiable individuals) and generates two independent rhythms, a
gastric mill rhythm (lobsters and crabs have teeth in their stomachs)
and a pyloric rhythm (for squeezing stomach contents into the gut). 
About 15 cells generate the gastric mill rhythm, the other 15 the
pyloric rhythm (which is a complicated three-phase rhythm, not your
usual rhythmic alternation).  Furthermore, the circuits are
functionally "rewired" by neuromodulators.

  William H. Calvin, University of Washington NJ-15, Seattle WA 98195 USA
  wcalvin@u.washington.edu         Favorite books on brains and evolution
  (probably because I wrote them myself):  _The Cerebral Symphony_
  (Bantam 1989); _The River that Flows Uphill_ (Sierra Club Books 1987).

wcalvin@milton.u.washington.edu (William Calvin) (10/08/90)

Emergent properties include all sorts of nonbiological examples,
such as the pattern of linked back eddies downstream from a
rapid; there is a list of the ones seen from a float trip through
the bottom of the Grand Canyon at Mile 166 in my 1987 book on
brains and evolution, _The River That Flows Uphill_ (Sierra Club
Books).
     One of the more interesting emergents is anatomy:  patterns
of development that produce patterns of body parts.  It has
always been hard to imagine quite how a designerless system
gradually shapes up anatomy towards some unwritten-in-the-genes
optimum.  But form tends to follow function for some interesting
reasons that began to be recognized in the aftermath of Darwin. 
I thought that, in the spirit of Minsky's postings a few years
ago from his book, I would excerpt two pages from Chapter 2
of my forthcoming (12/90 from Bantam in hardcover) book,
The Ascent of Mind:
Ice Age Climates and the Evolution of Intelligence.

-----------------------------------------------------------------
                 copyright 1990 by W. H. Calvin
GENES NEED ONLY be approximately correct, as a little behavioral
versatility can do the rest.  While this versatility during life
may not alter the genes passed on to offspring, it does serve to
shape up those genes:  behavior can drag along anatomy.  This was
recognized by three scientists in 1894; though often called the
Baldwin Effect, it probably ought to be called the Morgan-
Baldwin-Osborn Effect.  Perhaps we would understand it more
intuitively were it called the Old-Family-Recipe Effect.
     Anyone who has ever asked for a copy of "that wonderful
recipe" knows that the recipe card is always faded, flour-
encrusted, written in a style of handwriting favored by some
first-grade teacher of long ago, and smeared by several ancient
droplets of an unknown fluid.  And so when you transcribe it onto
a new card to carry home with you, some copying errors are
likely.
     What's worse, the donor of this recipe has long since
stopped consulting the recipe card:  she just bakes from memory
and, over the years, has improved the cake (or whatever)
considerably beyond what would result from faithfully following
her written recipe.  Indeed, she has no idea how much her
"handful of flour" departs from the half-cup that the recipe
calls for, or how inaccurate the temperature setting on her oven
has become.  Still, she has found the winning combination (you
did, after all, ask for the recipe) and so her point-of-departure
version of the recipe comes to be copied with an unintentional
mutation or two.
     This commonplace situation suggests a simplified scheme for
how cake-baking contests at county fairs could "cause" better
cakes to evolve.  Pretend for a moment that success in baking
cakes obeys the following rules:

1.  Each participant inherits a randomly altered copy of her
     "parent's" recipe for a cake.  Perhaps a teaspoon of baking
     soda is changed into a tablespoon's worth.  Or the 385-degree
     baking temperature into 335.  Or some other such alteration
     in the mix of ingredients, amounts, times, and temperatures.

2.  The cook can modify the recipe during her lifetime, but only
     by memory, not by amending the recipe card.  Indeed, since
     the recipe card is merely the point of departure for
     experimentation, it need never be consulted again (until
     finally copied).

3.  There are contests to select the better cakes, and the
     winners and runner-ups are the ones most likely to have
     offspring attracted by the cake-baking contests in some
     future decade.  Note that winners don't train offspring at
     cooking (in this simplified scheme):  they only pass on
     their point-of-departure version of the recipe.  The only
     thing that experience, i.e., the recognition of good
     variations, does in the long run is to make the winners'
     offspring more likely to become contest-minded cake bakers.

4.  The judging doesn't change criteria over the years ("good
     taste is eternal").  

The recipe's mutations are usually worse than the original.  In
any generation, of course, an off-on-the-wrong-foot cook who is,
nonetheless, skillful at fiddling the recipe may hit upon the
combination that constitutes the optimal recipe; inheritance is
not fate (but she cannot pass on this winning combination as
such, just the degraded recipe card).  Yet on the average, the
copying errors that move away from the optimal make it less
likely that unwritten variations in the recipe ("a lifetime of
experience") will hit on the optimum.
     Because losers tend not to have offspring that participate
in such contests (the losers don't get asked for a copy of their
recipe), diverging recipes are more likely to die out.  And so
there will be a slow convergence in copying errors toward the
optimal combination, just by carving away the other combinations. 
The optimal recipe may never be written down, but the population
of written recipes in use gets closer and closer to the
combination of ingredients, amounts, times, temperatures, and
assembly procedures that will satisfy the expert tasters of
cakes.
     Allowing a son or daughter to learn the parent's hard-earned
variations on the recipe would represent Lamarckism:  inheritance
of acquired-during-life characteristics.  This "Training Effect,"
of course, happens with real cooks and their offspring; we
encourage this mode of transmission with schools and books.  But
we theorists may temporarily leave such influences out of
explanations, just to demonstrate that the whole population of
written recipes (or whatever) can nonetheless shift closer and
closer to the unwritten optimal even without the additional
Lamarckism (in the case of biological inheritance, we also leave
instruction out because there is little evidence for it).
     Adding some version of Lamarckian shaping has two
interesting effects:  cakes converge on the optimal even more
quickly, but the written recipes converge more slowly than they
would otherwise.  (In the terminology of evolutionary biology: 
With Lamarckism, the phenotypes evolve faster but, paradoxically,
the genotypes evolve slower!)  Should there be a "lost
generation" that never learns to cook from their expert parents,
the grandchildren will have to start over from instruction cards
that haven't been shaped up anywhere near as far as they might
otherwise have been.  While shaping up the "written version" may
be safer in the long run, one has to first survive the short run
-- and climates often shift so rapidly that survival depends on
changing food-finding strategies just as quickly (in the cake
analogy, suppose that next year's judges went sour on sugar, all
trying to lose weight because of a new preventive medicine
campaign against obesity).  And so both the Old-Family-Recipe
Effect and the Training Effect may prove essential in the short
run because the judging criteria have changed.
     In the analogy, the individual ingredients-and-procedures
are the genes, the recipe is the sperm-or-ovum, and the whole
population of cake recipes is the genome.  And, of course, the
cake is only the recipe's way of getting a copy made of itself. 
The Selfish Recipe has struck again.
                               ###

                    William H. Calvin
                    University of Washington -- Biology NJ-15
                    Seattle WA 98195       ph. 1-206-328-1192
                    wcalvin@u.washington.edu

minsky@media-lab.MEDIA.MIT.EDU (Marvin Minsky) (10/08/90)

In article <8724@milton.u.washington.edu> wcalvin@milton.u.washington.edu (William Calvin) writes:

>GENES NEED ONLY be approximately correct, as a little behavioral
>versatility can do the rest. 

Your essay looks profound, and I can't wait to see the book.  

A point somewhat like this was made by Geoff Hinton, not long ago: he
pointed out that a small genetic "hint", together with some learning
mechanisms, can speed up evolution enormously. 

I think his argument went something like this: Suppose some 40 binary
accidents were needed to accomplish some amazing performance, and you
happened to possess 20 of them, by genetic chance.  This could easily
happen in a population substantially greater than a million.  Now,
suppose each such creature lives long enough to try 2^20 behavioral
variants -- and if the miraculous 1/2^40 event occurs, it gets a
selective reproductive advantage.  Then !!!!! we have selection for a
one-in-a-trillion combination.  

Then, quite rapidly, perhaps, that sub-population will accumulate
genes that further predispose them to now need only 2^19, then 2^18,
etc., learning trials.  When that gets down to, say, 2^10, we have
something that might occur regularly, in the first few hours or days
of infancy -- and then there might be no selective advantage
in further genetic fixation.

Anyway, that's what I think Hinton was suggesting.  Sounds like you
also noticed something of that sort.
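[The argument Minsky describes (it appears in Hinton and Nowlan's 1987
paper "How learning can guide evolution") scales down readily to a
runnable toy.  The sizes below are invented stand-ins chosen so the
simulation runs quickly -- 10 bits instead of 40, 32 learning trials
instead of 2^20 -- and all function names are this sketch's own:]

```python
import random

TARGET_LEN = 10   # scaled-down stand-in for Minsky's 40 binary accidents
TRIALS = 32       # behavioral variants tried per lifetime (stand-in for 2^20)

def discovers(n_hint_genes, rng):
    """One creature's lifetime: n_hint_genes bits are already correct by
    genetic chance; each learning trial guesses the remaining bits at random."""
    free_bits = TARGET_LEN - n_hint_genes
    return any(rng.randrange(2 ** free_bits) == 0 for _ in range(TRIALS))

def discovery_rate(n_hint_genes, population=2000, seed=0):
    """Fraction of a population that hits the target within one lifetime."""
    rng = random.Random(seed)
    hits = sum(discovers(n_hint_genes, rng) for _ in range(population))
    return hits / population

if __name__ == "__main__":
    # A half-way genetic hint makes the lucky combination common enough
    # for selection to act on, instead of a fluke no creature exhibits.
    print("no hint   :", discovery_rate(0))
    print("5-bit hint:", discovery_rate(5))
```

[With no hint, a trial succeeds with probability 2^-10, so almost no
creature finds the trick; with half the bits fixed by genetics, most
creatures find it within their 32 trials, and selection can now "see"
the combination.]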

mct@praxis.co.uk (Martyn Thomas) (10/08/90)

I've been following this stream for a while, and I have a question.

If you cannot agree what you mean by a property being "emergent", and you
cannot agree whether a given property of a system should be described as
"emergent" then would it not be more helpful to identify the important
concept which you want to be able to label, and then simply agree to use the
term "emergent" (or any other, if that one is irretrievably damaged) to
label it?


-- 
Martyn Thomas, Praxis plc, 20 Manvers Street, Bath BA1 1PX UK.
Tel:	+44-225-444700.   Email:   mct@praxis.co.uk

jhess@orion.oac.uci.edu (James Hess) (10/09/90)

Help!  We now have at least three definitions of emergence running around, and 
considerable verbiage generated by the confusion.

1) My preferred definition:  Those properties which are not possessed by the 
   components of a system but emerge as properties of the system as a whole.

2) Those unanticipated or unpredictable properties which emerge only in the
   system as a whole, which are not properties of its components or 
   subsystems.

   This is a special case of the definition above; the qualifiers speak to our
   state of knowledge rather than some aspect of emergence or the system.

3) Those functions which are not dependent on the implementation of the 
   system (for computers, in a specific set of hardware or software).

   In this case, the two systems (complete systems, not hardware or software
   alone) might be thought of as formal analogues or models of each other.  
   This redefinition needs some qualification, in that they may only be models 
   in the sense of black boxes which, given the same input, produce the same
   output, rather than processual models or structural models.  (For a good
   discussion of this subject, see the discussion of models and simulations in
   James G. Miller's "Living Systems".)

cpshelley@violet.uwaterloo.ca (cameron shelley) (10/10/90)

In article <271172B0.12370@orion.oac.uci.edu> jhess@orion.oac.uci.edu (James Hess) writes:
>Help!  We now have at least three definitions of emergence running around, and 
>considerable verbiage generated by the confusion.
>
>1) My prefered definition:  Those properties which are not possessed by the 
>   components of a system but emerge as properties of the system as a whole.
>
>2) Those unanticipated or unpredictable properties which emerge only in the
>   system as a whole, which are not properties of its components or 
>   subsystems.
>
>   This is a special case of the definition above; the qualifiers speak to our
>   state of knowledge rather than some aspect of emergence or the system.
>
>3) Those functions which are not dependent on the implementation of the 
>   system (for computers, in a specific set of hardware or software).
>
   Quite!  I think they're all valid concepts.  1) and 3) are of practical
interest, while 2) is mostly of philosophical interest.  We might refer to
2) as "strong emergence" and 1) as "weak emergence" using the terms 'strong'
and 'weak' in their more formal senses.  Now we can divide into camps and
squabble over which is better! :>  

  I would say that 3) is a more general concept, since it really says
that a 'function' observed is a member of a set of 'functions' which are
too similar (for whatever purposes) to be profitably distinguished.  That
does not require that the 'function' be emergent in the senses of 1) or
2) however.  A better label for 3) might be "emic", as used in the 
formal sense of "phonemic", "tagmemic", etc.  Does anyone find validity
in these suggested terms?



--
      Cameron Shelley        | "Saw, n.  A trite popular saying, or proverb. 
cpshelley@violet.waterloo.edu|  So called because it makes its way into a
    Davis Centre Rm 2136     |  wooden head."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

burke@russell.Stanford.EDU (Tom Burke) (10/10/90)

In <271172B0.12370@orion.oac.uci.edu> jhess@orion.oac.uci.edu (James Hess) writes:

>Help!  We now have at least three definitions of emergence running
>around, and considerable verbiage generated by the confusion.
>1) My prefered definition:  Those properties which are not possessed by the 
>   components of a system but emerge as properties of the system as a whole.
>2) Those unanticipated or unpredictable properties which emerge only in the
>   system as a whole, which are not properties of its components or 
>   subsystems.
>   This is a special case of the definition above; the qualifiers speak to our
>   state of knowledge rather than some aspect of emergence or the system.
>3) Those functions which are not dependent on the implementation of the 
>   system (for computers, in a specific set of hardware or software).

...etc.

Notice that definitions (1) and (2) are circular or else rely on an
ambiguity in the term `emergence' -- i.e., they use the very term they
are meant to define.  How about the following revision:

(1') The emergent properties of a system are those which are not
     possessed by the components of the system but are products of the
     functioning of the system as a whole.

...or something like that.  I think this is probably what was meant in
the first place.

But then, this sounds more like a definition of "epiphenomenon" than
of "emergent property".  It is at least partially right, but it can't
be the whole story if you want the term `emergent' to be something
more than merely synonymous with `epiphenomenal'.  It seems like you
want a conception of emergence that requires emergent properties to
have some kind of causal efficacy in their own right.

This may have something to do with (3), but I'm not sure what.

vinsci@soft.fi (Leonard Norrgard) (10/19/90)

You wrote:
>>     An emergent property is a property of a system as a whole that is
>>     not possessed by any of its components.
>>
>
>  Yes, I guess the discussion is wandering.  Your definition here is, I
>believe open to alot of abuse, ie. trivial observations.  For instance,
>the keys I am hitting to produce this wondrous followup are part of the
>computer system as a whole (or just my terminal - whatever).  The fact
>that the computer does not just behave like a big key, or switch etc...
>is trivial, but then implies that its capability of adding and so forth
>is emergent.  Of course we would want to exclude such "observations"
>from consideration, so the idea of emergent should be strengthened so
>that the property of a system as a whole is also not an *obvious* 
>consequence of the properties of its components.

Actually, the computer system is emergent, given it is powered on and
all parts working as they should. Remove one part and it is no longer
a working computer, ie. the emergence is gone.
  A simpler example: For a fire to start (and continue) you need three
things: 1) material to burn 2) oxygen 3) heat. Combine them and fire
emerges. Remove any one and it ends. The fire is not a property of any
of the components.

cpshelley@violet.uwaterloo.ca (cameron shelley) (10/20/90)

In article <VINSCI.90Oct18233739@nic.soft.fi> vinsci@soft.fi (Leonard Norrgard) writes:
>You wrote:
>>>     An emergent property is a property of a system as a whole that is
>>>     not possessed by any of its components.
>>>
>>
>>  Yes, I guess the discussion is wandering.  Your definition here is, I
>>believe open to alot of abuse, ie. trivial observations.  For instance,
>>the keys I am hitting to produce this wondrous followup are part of the
>>computer system as a whole (or just my terminal - whatever).  The fact
>>that the computer does not just behave like a big key, or switch etc...
>>is trivial, but then implies that its capability of adding and so forth
>>is emergent.  Of course we would want to exclude such "observations"
>>from consideration, so the idea of emergent should be strengthened so
>>that the property of a system as a whole is also not an *obvious* 
>>consequence of the properties of its components.
>
>Actually, the computer system is emergent, given it is powered on and
>all parts working as they should. Remove one part and it is no longer
>a working computer, ie. the emergence is gone.
>  A simpler example: For a fire to start (and continue) you need three
>things: 1) material to burn 2) oxygen 3) heat. Combine them and fire
>emerges. Remove any one and it ends. The fire is not a property of any
>of the components.

Ouch!  Your observation is an example of the very thing I was trying
to avoid!  The properties of a computer system are (for the most part)
a direct result of design, so that most of its properties are simple
compositions of the properties of its parts, and thus uninteresting
in a discussion of emergence.  The fact that removing a part causes
the system to cease functioning (in the case of most parts) addresses
the redundancy of the design and/or its robustness - but these are
also the results of design and not unforeseeable features of the whole!

Indeed, fire is not a property of anything since it is a physical
entity.  If combustion is an inevitable result of the right material,
oxygen, and heat, and flames are an inevitable result of combustion,
then there is nothing 'emergent' here to discuss...  I think perhaps
you are using the term "emergent" in a colloquial sense that isn't
generally meant when the subject of philosophy comes up.

--
      Cameron Shelley        | "Saw, n.  A trite popular saying, or proverb. 
cpshelley@violet.waterloo.edu|  So called because it makes its way into a
    Davis Centre Rm 2136     |  wooden head."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

vinsci@soft.fi (Leonard Norrgard) (10/20/90)

>>[computer system example deleted, see the referenced messages]
>>  A simpler example: For a fire to start (and continue) you need three
>>things: 1) material to burn 2) oxygen 3) heat. Combine them and fire
>>emerges. Remove any one and it ends. The fire is not a property of any
>>of the components.
>
>Ouch!  Your observation is an example of the very thing I was trying
>to avoid!  The properties of a computer system are (for the most part)
>a direct result of design, so that most of its properties are simple
>compositions of the properties of its parts, and thus uninteresting
>in a discussion of emergence.  The fact that removing a part causes
>the system to cease functioning (in the case of most parts) addresses
>the redundancy of the design and/or its robustness - but these are
>also the results of design and not unforseeable features of the whole!
>
>Indeed, fire is not a property of anything since it is a physical
>entity.  If combustion is an inevitable result of the right material,
>oxygen, and heat, and flames are an inevitable result of combustion,
>then there is nothing 'emergent' here to discuss...  I think perhaps
>you are using the term "emergent" in a colloquial sense that isn't
>generally meant when the subject of philosophy comes up.

I think the problem here is the usage of "designed".  We design a
computer to behave predictably: it is at all times a predictable
system, given that we know both its internal states and what
external signals are fed to it.  We call a computer that doesn't work
according to its specifications broken; it doesn't obey the design
anymore.
  A broken computer has of course not changed the rules for how the
computer is to work (its "design"), but is a result of some external
force or a part that goes bad because of purely physical reasons. No
matter what, the computer system can't change its design nor the rules
that it must follow. "It" has to live with the logic design we give
it.

  In the same way the above fire can't change its design, ie. no
matter what, it must start burning given the three components are
present.  We could view this as a consequence of the design of natural
laws.  And as with the computer, neither the fire nor we can
(presently, at least) change the laws of nature.  "It" has to burn by
following the laws of nature.

  If consciousness now is an emergent property of the brain (or some
other physical entity), must it not obey the same laws of nature? This,
of course, is the divider of dualism and materialism. If we somehow
conclude that the answer is "no", then we use "emergent" to describe
something we do not understand; we make it a synonym for either
"magical" or "unknown physics". This effectively makes the phrase
meaningless even if it turns out to be "unknown physics": we cannot
understand what we do not know.
  I think that the same argument would hold for emergent properties of
complex systems. Some thinkers might want to differ between
"emergence" in simple systems that we understand well and "emergence"
in complex systems that we understand only vaguely, if at all. I don't
think this can be done without giving the same word different
meanings, and if we do so, we lose our grip on possible connections
between the two meanings.

  So are the natural laws designed? I do not think it matters what the
answer is (and maybe there is no answer), we can't change them and it
seems hard to disregard them. I hold that I'm conscious in either
case, just as the fire will start in either case. (This is not to ask
for flames, I hope ;-).

(Note: I use "dualism" as a catch-all for "presently unknown physics"
       and "non-existent physics, whether we presently know that or not".)

cpshelley@violet.uwaterloo.ca (cameron shelley) (10/21/90)

In article <VINSCI.90Oct20152530@nic.soft.fi> vinsci@soft.fi (Leonard Norrgard) writes:
>In-Reply-To: cpshelley@violet.uwaterloo.ca's message of 20 Oct 90 00:55:19 GMT
>
[much deleted]

>I think the problem here is the usage of "designed". We design a
>computer to behave predictably, it is at all times a predictable
>system given that we know besides the internal states also what
>external signals are fed to it. We call a computer that doesn't work
>according to its specifications broken, it doesn't obey the design
>anymore.
>  A broken computer has of course not changed the rules for how the
>computer is to work (its "design"), but is a result of some external
>force or a part that goes bad because of purely physical reasons. No
>matter what, the computer system can't change its design nor the rules
>that it must follow. "It" has to live with the logic design we give
>it.
>
>  In the same way the above fire can't change its design, ie. no
>matter what, it must start burning given the three components are
>present. We could view this as a consequnce of the design of natural
>laws. And as above for the computer, the fire nor we can't (presently,
>at least) change the laws of nature. "It" has to burn by following the
>laws of nature.
>
>  If consciousness now is an emergent property of the brain (or some
>other physical entity), must it not obey the same laws of nature? 

  I would like to point out that this is a tautology: you are asking
me if I doubt that physical entities obey physical laws.  The use of
the term "emergent" is superfluous since you are referring to any
property of anything.  On the whole, this appears to be a non-question!

>This
>of course, is the divider of dualism and materialism. If we somehow
>concludes that the answer is "no" then we use "emergent" to describe
>something we do not understand, we make it a synonym to either
>"magical" or "unknown pysics". This effectively makes the phrase
>meaningless even if it turns out to be "unknown physics": we can not
>understand what we do not know.

  You are using a rhetorical device as opposed to argument to establish
a point (don't debaters call it a "planted axiom"?).  "Of course"? 
"Somehow"?  If I disagree, am I just being stupid?  The term emergent
does not attempt to contradict physical laws (not in the way I'm applying
it, I hope) but point out where human perception and human world models
do not connect.  This only produces a contradiction if you take the 
dualist position and assume that such things as physical laws have an
existence outside of any observer - which I think is debatable at best.
Then, if I insisted on emergence in the presence of a perfect knowledge
of those immutable laws, I would indeed be talking about magic.  Is
your knowledge of the universe perfect?  Is anyone's ever likely to be?
Sorry for the heavy-handedness of my remarks, but I really think the
tenor of this thread has concentrated on looking at emergence as an
effect of our perspective and state of knowledge and not with 
reference to some ideal Weltanschauung.  Is it your contention that
if a certain body of knowledge cannot be relegated to physics, that
we cannot claim to "understand" it?  I have a model of language
syntax which only refers to 'trees' for explanation: does that mean
I understand nothing of syntax?

>  I think that the same argument would hold for emergent properties of
>complex systems. Some thinkers might want to differ between
>"emergence" in simple systems that we understand well and "emergence"
>in complex systems that we understand only vaguely, if at all. I don't
>think this can be done without giving the same word different
>meanings, and if we do so, we loose our grip on possible connections
>between the two meanings.

  Please refrain from setting up anonymous groups of "thinkers" as
the fall guys.  It only leads to the vagueness of communication which
you deplore.  To address your point, for better or worse the same
word will often be used to refer to divergent concepts, and this 
certainly can clutter thinking.  What difference of meanings are you
implying?

>
>  So are the natural laws designed? I do not think it matters what the
>answer is (and maybe there is no answer), we can't change them and it
>seems hard to disregard them. I hold that I'm conscious in either
>case, just as the fire will start in either case. (This is not to ask
>for flames, I hope ;-).

  The juxtaposition of fire and consciousness is very poetic, but I 
still maintain that your premise is out of synch with what this thread
has by and large addressed so far.  How would the 'design' of natural
laws affect the idea of emergence?  If you wish to say that the 
proposition of a relative emergence is not tenable, just say so and
support your statement with some argument, not appeals to oratorical
devices or states of knowledge that don't exist.

--
      Cameron Shelley        | "Saw, n.  A trite popular saying, or proverb. 
cpshelley@violet.waterloo.edu|  So called because it makes its way into a
    Davis Centre Rm 2136     |  wooden head."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

sarima@tdatirv.UUCP (Stanley Friesen) (10/23/90)

In article <1990Oct20.005519.16055@watdragon.waterloo.edu> cpshelley@violet.uwaterloo.ca (cameron shelley) writes:
>In article <VINSCI.90Oct18233739@nic.soft.fi> vinsci@soft.fi (Leonard Norrgard) writes:
>>Actually, the computer system is emergent, given it is powered on and
>>all parts working as they should.

>Ouch!  Your observation is an example of the very thing I was trying
>to avoid!  The properties of a computer system are (for the most part)
>a direct result of design, so that most of its properties are simple
>compositions of the properties of its parts, and thus uninteresting
>in a discussion of emergence.

Oh, so design denies emergence!?!  That's the first I ever heard of that!
In fact I would say just the opposite, *most* designed entities show emergent
properties.

Or do you think a random pile of transistors and wires would have the same
properties as a computer?  Of course not! It is the *organization* of the
transistors and wires that gives the computer its properties, not the things
themselves.  Indeed the same components, organized differently, may have the
properties of a radio, or a CD player, or an amplifier, or an audio mixer,
or (many other things).  Thus the properties of a computer are *not* simply
the sum of the properties of its parts, and you cannot directly predict the
properties of a computer from the properties of its parts (since the parts
may be used in many other things).

It is this 'independence' of system level properties from component level
properties that constitutes emergence.  This is why I say most designed
systems are emergent - they generally rely on organizational,
system-level structures to produce the desired properties.  If they
didn't, they would not need to be designed, they could just be built.
(That is, they would be obvious and easy to make.)
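
[The organization point can be made concrete with the smallest
possible example: the same component type -- a NAND gate -- wired two
different ways yields two different system-level functions.  This is a
minimal sketch with invented names, not anything from the posts above:]

```python
def nand(a, b):
    """The only component type used below (inputs and outputs are 0/1)."""
    return 1 - (a & b)

def xor_gate(a, b):
    """Four NANDs in one arrangement compute exclusive-or."""
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def or_gate(a, b):
    """Three of the *same* component, wired differently, compute
    inclusive-or.  Nothing about a single NAND predicts which function
    a pile of them will compute; the wiring decides."""
    return nand(nand(a, a), nand(b, b))
```

[Identical parts, different truth tables: the functional difference
between the two circuits lives entirely in the organization, which is
Friesen's sense of emergence.]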
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)