[comp.ai.philosophy] emergence and rigour

CFoster@cogsci.ed.ac.uk (Carol Foster) (10/09/90)

I recently submitted a Ph.D. thesis which defines a notion of
strong equivalence of systems in terms of the states that they go
through.  In other words, rather than just saying whether two systems
(under their respective descriptions) have the same input and output,
we can say whether they go through the same states, or whether they
have a common abstraction.  The framework is intended to be
applicable to systems described in terms of various languages,
architectures or hardware; characterising classical v. connectionist
'algorithms' was my starting point.

The point is that well-defined notions of emergent representations and
levels of abstraction arise quite naturally from this approach.

Briefly, algorithms are defined to be sets of sequences of states, where
states are sets of label-value pairs taken to be measurements of some
real or hypothetical dynamic system.  Algorithms can be thought of as
defining all possible paths for all possible inputs -- at a particular
level of description.  Abstractions (and their inverses, implementations)
are well defined, too, so that for two algorithms A and B it is provable
whether A is an abstraction of B, B is an abstraction of A, both, or
neither.  Examples of valid abstraction operations include combining
two label-value pairs into one, combining two adjacent states into one,
and uniformly applying a function to all the values for a given label
across all sequences of an algorithm.
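
To make this concrete, here is a toy rendering of the three operations
in Python (the representation and names are mine, for illustration only;
a state is a dict of label-value pairs, a path is a list of states, and
an algorithm would be a set of such paths):

def merge_labels(state, a, b, new_label, f):
    # Combine two label-value pairs into a single pair via f.
    merged = {k: v for k, v in state.items() if k not in (a, b)}
    merged[new_label] = f(state[a], state[b])
    return merged

def merge_adjacent_states(path, i, f):
    # Combine states i and i+1 of a path into one, label by label.
    combined = {k: f(path[i][k], path[i + 1][k]) for k in path[i]}
    return path[:i] + [combined] + path[i + 2:]

def map_label(path, label, f):
    # Uniformly apply f to every value of the given label along a path.
    return [{k: (f(v) if k == label else v) for k, v in s.items()}
            for s in path]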

A well-worn example from Dewan (1976) described by Hooker (1981) and
repeated by P.S. Churchland (1986, 'Neurophilosophy'):

     'Consider a set of electrical generators G, each of which
      produces alternating current electrical power at 60 Hz but
      with fluctuations in frequency of 10% around some average 
      value.  Taken singly, the frequency variability of the 
      generators is 10%.  Taken joined together in a suitable 
      network, their collective frequency variability is only 
      a fraction of that figure because, statistically, generators
      momentarily fluctuating behind the average output in
      phase are compensated for by the remaining generators, and
      conversely, generators momentarily ahead in phase have
      their energy absorbed by the remainder.  The entire system
      functions, from an input/output point of view, as a 
      single generator with a greatly increased frequency 
      reliability, or, as control engineers express it, with a
      single, more powerful, 'virtual governor'.  The property
      'has a virtual governor of reliability f' is a property
      of the system as a whole, but of none of its components.'
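
The coupling mechanism itself is easy to simulate.  Here is a toy
Kuramoto-style model of four coupled oscillators (my own illustration,
not Dewan's; the parameters are arbitrary).  Worked in the frame
rotating at 60 Hz, with the coupling on, the effective frequencies
pull together toward the common mean:

import math, random

N, K, dt, steps = 4, 100.0, 0.001, 20000
base = 2 * math.pi * 60.0                       # 60 Hz in rad/s
# natural-frequency deviations from 60 Hz, up to +/-10%
dev = [base * random.uniform(-0.1, 0.1) for _ in range(N)]
theta = [random.uniform(0.0, 2 * math.pi) for _ in range(N)]

for _ in range(steps):
    rate = [dev[i] + (K / N) * sum(math.sin(theta[j] - theta[i])
                                   for j in range(N))
            for i in range(N)]
    theta = [theta[i] + dt * rate[i] for i in range(N)]

# Effective frequencies in Hz: uncoupled they would span roughly
# 54-66; coupled, all four sit near the mean of the four.
print([60.0 + r / (2 * math.pi) for r in rate])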

A much simplified example based on the quoted passage can be given as
follows (n is the average frequency value, g1-g4 are the frequency values
of the 'real' generators, and g is the frequency value of the emergent
'virtual' generator of greater reliability):

Just looking at one possible sequence at one possible level of
description for a 4-generator system, an algorithm might include the sequence:

g1: n+9%        g1: n+8%        g1: n+4.5%      g1: n+8%
g2: n+3%        g2: n+1%        g2: n+2%        g2: n-3%
g3: n-2%        g3: n-2.5%      g3: n-3%        g3: n-5%
g4: n-8%        g4: n-6%        g4: n+1%        g4: n+1%


(The above states include values for g1-g4 and are intended to
be read across, giving 4 states through time.  This is not meant to
be realistic, only to give the flavour of the theory.)
A possible valid abstraction of this sequence results from combining
the four label-value pairs of each state into one and taking a function
of their values (here, the sum of the percentage deviations), giving
rise to the following description in terms of the virtual
generator g:

 
g: n+2%         g: n+.5%         g: n+4.5%       g: n+1%
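
In code, this abstraction step is just the following (a toy sketch in
the same spirit as above; the representation is mine):

sequence = [  # one state per time step, deviations from n in percent
    {"g1": +9.0, "g2": +3.0, "g3": -2.0, "g4": -8.0},
    {"g1": +8.0, "g2": +1.0, "g3": -2.5, "g4": -6.0},
    {"g1": +4.5, "g2": +2.0, "g3": -3.0, "g4": +1.0},
    {"g1": +8.0, "g2": -3.0, "g3": -5.0, "g4": +1.0},
]

# Combine all four label-value pairs, summing the deviations.
abstract = [{"g": sum(state.values())} for state in sequence]
print(abstract)  # [{'g': 2.0}, {'g': 0.5}, {'g': 4.5}, {'g': 1.0}]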

I realise this is a bit sketchy; please contact me directly if
you want more information.  The thesis ('Algorithms, Abstraction
and Implementation: A Massively Multilevel Theory of Strong Equivalence
of Complex Systems') will be available after the exam (19 Oct.) and
any required modifications...


CFoster@uk.ac.ed.cogsci

Centre for Cognitive Science
University of Edinburgh
2 Buccleuch Place
Edinburgh EH8 9LW
SCOTLAND 

wcalvin@milton.u.washington.edu (William Calvin) (10/10/90)

CFoster@cogsci.ed.ac.uk (Carol Foster) writes:
>A well-worn example from Dewan (1976) described by Hooker (1981)
>and repeated by P.S. Churchland (1986, 'Neurophilosophy'):
>
>   'Consider a set of electrical generators G, each of which
>    produces alternating current electrical power at 60 Hz but
>    with fluctuations in frequency of 10% around some average 
>    value.  Taken singly, the frequency variability of the 
>    generators is 10%.  Taken joined together in a suitable 
>    network, their collective frequency variability is only 
>    a fraction of that figure because, statistically, generators
>    momentarily fluctuating behind the average output in
>    phase are compensated for by the remaining generators, and
>    conversely, generators momentarily ahead in phase have
>    their energy absorbed by the remainder.  The entire system
>    functions, from an input/output point of view, as a 
>    single generator with a greatly increased frequency 
>    reliability, or, as control engineers express it, with a
>    single, more powerful, 'virtual governor'.  The property
>    'has a virtual governor of reliability f' is a property
>    of the system as a whole, but of none of its components.'

     That's a nice example (the original version BTW is E. M.
Dewan, "Consciousness as an emergent causal agent in the context
of control system theory," pp. 181-198 in _Consciousness and the
Brain_, edited by G. Globus et al., Plenum, 1976).
     I did something similar on the emergence of precision timing
from noisy neurons.  Hitting a target twice as far away requires
reducing timing jitter eight-fold; since averaging N independent
timing estimates reduces jitter by a factor of sqrt(N), you can do
that by averaging together the timing recommendations of 64 times
as many timing neurons as sufficed at the closer target distance.
While at some distance and target size (what in baseball country is
known as a "side of the barn" throw), the jitter of a "command
neuron" might suffice, the known throwing abilities of even children
require that projectile release be timed to a precision orders of
magnitude finer than the best single neurons can manage.
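
A toy check of that scaling (my own snippet; the numbers are standard
statistics, not the 1983 model itself):

import random, statistics

def jitter_of_mean(n_neurons, trials=10000, sigma=1.0):
    # Std dev of a release time that averages n_neurons noisy estimates.
    means = [statistics.fmean(random.gauss(0.0, sigma)
                              for _ in range(n_neurons))
             for _ in range(trials)]
    return statistics.stdev(means)

print(jitter_of_mean(1))    # ~1.0
print(jitter_of_mean(16))   # ~0.25  (4-fold better; the depth case below)
print(jitter_of_mean(64))   # ~0.125 (8-fold better: throw twice as far)
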
     So precision timing is an emergent property of neuron
networks.  And while precision timing isn't so interesting in
itself, some of the secondary uses of movement sequencers are. 
See:
 
Calvin, W. H. (1983).  A stone's throw and its launch window: 
     timing precision and its implications for language and
     hominid brains.
     Journal of Theoretical Biology 104:121-135.

     I'd appreciate hearing of other examples of emergent
precision.  I have modeled precision differential depth
discrimination (it takes a 16-fold increase in neurons to double
the distance, rather than the 64-fold for throwing) and suspect
that the principle applies to all difficult jobs, i.e., that the
more neurons you can assign to the task as you "get set", the
better the precision performance.  The neuropsychologist Marcel
Kinsbourne noted in 1988 that:
     When wide areas of the [cortex] are involved in one
     mental operation...  [they] can be used either for a
     wide-ranging but shallow encoding, or for a single but
     difficult mental operation.
The "virtual governor" of the AC generators (these are all really
just applications of the Law of Large Numbers) may help explain
some of the more interesting phenomena of human-style parallel
computing used for language and our scenario-oriented
consciousness.  A short version of this is in:

Calvin, W. H. (1987).  The brain as a Darwin machine.  Nature
     330:33-34 (5 November).

It is discussed in more detail in my books, especially _The
Cerebral Symphony:  Seashore Reflections on the Structure of
Consciousness_ (Bantam 1989), and the forthcoming _The Ascent of
Mind:  Ice Age Climates and the Evolution of Intelligence_
(Bantam, xmas '90 in the US).

                                   William H. Calvin
                                   wcalvin@u.washington.edu